| repo (string) | instance_id (string) | base_commit (string) | fixed_commit (string) | patch (string) | test_patch (string) | problem_statement (string) | hints_text (string) | created_at (int64) | labels (list) | category (string) | edit_functions (list) | added_functions (list) | edit_functions_length (int64) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
flet-dev/flet | flet-dev__flet-4425 | 97f42c4602c7ee63b571c29af555d4eca6203659 | null | diff --git a/sdk/python/packages/flet/src/flet/core/date_picker.py b/sdk/python/packages/flet/src/flet/core/date_picker.py
index 4abb8122d..09d735f86 100644
--- a/sdk/python/packages/flet/src/flet/core/date_picker.py
+++ b/sdk/python/packages/flet/src/flet/core/date_picker.py
@@ -173,24 +173,6 @@ def __init__(
def... | Opening DatePicker returns AssertionError
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
When running example from docs, clicking on Pick Date button, which opens DatePicker, returns an error:
Future exception... | 1,732,622,526,000 | null | Bug Report | [
"sdk/python/packages/flet/src/flet/core/date_picker.py:DatePicker.before_update"
] | [] | 1 | 588 | ||
flet-dev/flet | flet-dev__flet-4388 | 5fb877b3a3f886f3475cd8ebca1cee52472d0ef7 | null | diff --git a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
index a61657801..a097bb454 100644
--- a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
+++ b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
@@ -14,6 +14,12 @@
... | Add an option to skip the running `flutter doctor` when `flet build` fails
### Discussed in https://github.com/flet-dev/flet/discussions/4359
<div type='discussions-op-text'>
<sup>Originally posted by **DFNJKD-98** November 13, 2024</sup>
### Question
Due to some reasons, my country cannot freely access GitHu... | 1,731,860,800,000 | null | Feature Request | [
"sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.__init__",
"sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.handle",
"sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.add_arguments",
"sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Comman... | [] | 4 | 589 | ||
flet-dev/flet | flet-dev__flet-4384 | 5fb877b3a3f886f3475cd8ebca1cee52472d0ef7 | null | diff --git a/sdk/python/packages/flet/src/flet/core/icon.py b/sdk/python/packages/flet/src/flet/core/icon.py
index 8944f28bf..5af67aa7b 100644
--- a/sdk/python/packages/flet/src/flet/core/icon.py
+++ b/sdk/python/packages/flet/src/flet/core/icon.py
@@ -130,6 +130,7 @@ def _get_control_name(self):
return "icon"... | Icon rotation doesn't work with flet-0.25.0.dev3711
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
Icon rotation doesn't work anymore with flet 0.25
IconButton and other controls are OK
### Code sample
<detail... | There is a lot of issues we are facing after 0.24.1 | 1,731,816,481,000 | null | Bug Report | [
"sdk/python/packages/flet/src/flet/core/icon.py:Icon.before_update"
] | [] | 1 | 590 | |
flet-dev/flet | flet-dev__flet-4373 | 5fb877b3a3f886f3475cd8ebca1cee52472d0ef7 | null | diff --git a/sdk/python/packages/flet/src/flet/core/markdown.py b/sdk/python/packages/flet/src/flet/core/markdown.py
index 118e4e1c4..77a761c88 100644
--- a/sdk/python/packages/flet/src/flet/core/markdown.py
+++ b/sdk/python/packages/flet/src/flet/core/markdown.py
@@ -400,7 +400,12 @@ def before_update(self):
... | Regression in `Markdown.code_theme` when using `MarkdownCodeTheme` enum
A custom theme works great although the only issue I faced was setting `code_theme` with `ft.MarkdownCodeTheme.ATOM_ONE_DARK` or any other value but **only** using `ft.MarkdownTheme` class the error it throws is:
# Code
```python
import flet ... | 1,731,605,637,000 | null | Bug Report | [
"sdk/python/packages/flet/src/flet/core/markdown.py:Markdown.before_update",
"sdk/python/packages/flet/src/flet/core/markdown.py:Markdown.code_theme"
] | [] | 2 | 591 | ||
flet-dev/flet | flet-dev__flet-4340 | 0f7b14b787eb4249b93e2abd5da23cc953b1e091 | null | diff --git a/sdk/python/packages/flet/src/flet/core/colors.py b/sdk/python/packages/flet/src/flet/core/colors.py
index 0fb695878..66b4293e8 100644
--- a/sdk/python/packages/flet/src/flet/core/colors.py
+++ b/sdk/python/packages/flet/src/flet/core/colors.py
@@ -37,9 +37,12 @@
import random
from enum import Enum, Enu... | Using `ft.colors.with_opacity` returns exception, should be warning
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
This code used to work before:
```
tooltip_bgcolor=ft.Colors.with_opacity(0.5, ft.Colors... | 1,731,103,482,000 | null | Bug Report | [
"sdk/python/packages/flet/src/flet/core/colors.py:colors.with_opacity",
"sdk/python/packages/flet/src/flet/core/colors.py:Colors.with_opacity"
] | [] | 2 | 592 | ||
flet-dev/flet | flet-dev__flet-4314 | 3b7241e3f5024ee47f3bcba3092a9e71e56bfe42 | null | diff --git a/sdk/python/packages/flet/src/flet/core/segmented_button.py b/sdk/python/packages/flet/src/flet/core/segmented_button.py
index 01f303af6..7c1934834 100644
--- a/sdk/python/packages/flet/src/flet/core/segmented_button.py
+++ b/sdk/python/packages/flet/src/flet/core/segmented_button.py
@@ -203,12 +203,11 @@ d... | user customized style for SegmentedButton not wrapped
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
user cutomized style is not working in SegmentButton
 and related resources and couldn't find an answer to my question.
**Your Question**
After the sentence is split, i think the `.` should no longer be used as a split condition. Many other ... | 1,730,804,031,000 | null | Feature Request | [
"src/ragas/metrics/_noise_sensitivity.py:NoiseSensitivity._decompose_answer_into_statements"
] | [] | 1 | 594 | ||
scikit-learn/scikit-learn | scikit-learn__scikit-learn-30241 | 551d56c254197c4b6ad63974d749824ed2c7bc58 | null | diff --git a/sklearn/utils/estimator_checks.py b/sklearn/utils/estimator_checks.py
index f5d542d9a59fc..da1300817b148 100644
--- a/sklearn/utils/estimator_checks.py
+++ b/sklearn/utils/estimator_checks.py
@@ -2076,11 +2076,11 @@ def check_regressor_multioutput(name, estimator):
assert y_pred.dtype == np.dtype("f... | Missing format string arguments
This assertion error string is not properly formatted as the 2 format arguments `y_pred.shape` and `y.shape` are missing:
https://github.com/scikit-learn/scikit-learn/blob/551d56c254197c4b6ad63974d749824ed2c7bc58/sklearn/utils/estimator_checks.py#L2139
```python
assert y_pred.shap... | Please feel free to directly submit a PR with the fix in the future in such cases :) | 1,731,059,196,000 | null | Bug Report | [
"sklearn/utils/estimator_checks.py:check_regressor_multioutput"
] | [] | 1 | 595 | |
Lightning-AI/pytorch-lightning | Lightning-AI__pytorch-lightning-20484 | 601c0608059ed33ac617a57bb122e17b88c35c9a | null | diff --git a/src/lightning/pytorch/loops/prediction_loop.py b/src/lightning/pytorch/loops/prediction_loop.py
index 7044ccea87a7f..dcfd873a28b4b 100644
--- a/src/lightning/pytorch/loops/prediction_loop.py
+++ b/src/lightning/pytorch/loops/prediction_loop.py
@@ -233,8 +233,9 @@ def _predict_step(
self.batch_pr... | UnboundLocalError: local variable 'any_on_epoch' referenced before assignment in prediction loop
### Bug description
`UnboundLocalError` raises when using the predict method with `return_predictions=False`.
This is due to `any_on_epoch` [not being defined](https://github.com/Lightning-AI/pytorch-lightning/blob/be... | nice catch
> This is due to any_on_epoch [not being defined](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b9b0727df2f5476f0a1d90bc59/src/lightning/pytorch/loops/prediction_loop.py#L236) if data_fetcher is [not an instance](https://github.com/Lightning-AI/pytorch-lightning/blob/be608fa355b835b... | 1,733,763,932,000 | null | Bug Report | [
"src/lightning/pytorch/loops/prediction_loop.py:_PredictionLoop._predict_step"
] | [] | 1 | 596 | |
Lightning-AI/pytorch-lightning | Lightning-AI__pytorch-lightning-20420 | 20d19d2f5728f7049272f2db77a9748ff4cf5ccd | null | diff --git a/examples/fabric/build_your_own_trainer/run.py b/examples/fabric/build_your_own_trainer/run.py
index 01044f5d94fa8..c0c2ff28ddc41 100644
--- a/examples/fabric/build_your_own_trainer/run.py
+++ b/examples/fabric/build_your_own_trainer/run.py
@@ -41,7 +41,8 @@ def training_step(self, batch, batch_idx: int):
... | OptimizerLRScheduler typing does not fit examples
### Bug description
The return type of `LightningModule.configure_optimizers()` is `OptimizerLRScheduler`, see the [source code](https://github.com/Lightning-AI/pytorch-lightning/blob/e6c26d2d22fc4678b2cf47d57697ebae68b09529/src/lightning/pytorch/core/module.py#L954)... | Hey @MalteEbner
The optimizer should be inside the dict in that example. Thank you for noticing it. Would you like to send a PR with this quick fix? Would be much appreciated π
> Hey @MalteEbner The optimizer should be inside the dict in that example. Thank you for noticing it. Would you like to send a PR with this... | 1,731,575,568,000 | null | Bug Report | [
"examples/fabric/build_your_own_trainer/run.py:MNISTModule.configure_optimizers"
] | [] | 1 | 597 | |
Lightning-AI/pytorch-lightning | Lightning-AI__pytorch-lightning-20401 | c110f4f3f60c643740f5e3573546abfcb5355315 | null | diff --git a/src/lightning/pytorch/cli.py b/src/lightning/pytorch/cli.py
index 26af335f7be93..e0de8a24b38f5 100644
--- a/src/lightning/pytorch/cli.py
+++ b/src/lightning/pytorch/cli.py
@@ -389,6 +389,7 @@ def __init__(
self._add_instantiators()
self.before_instantiate_classes()
self.instantia... | Proposal(CLI): after_instantiate_classes hook
### Description & Motivation
Adds a `after_instantiate_classes` hook to the Lightning CLI, called after `self.instantiate_classes()` during the initalization of `LightningCLI`.
### Pitch
While having the Lightning CLI is great, it is not perfect for each use case out-of-... | 1,730,892,823,000 | null | Feature Request | [
"src/lightning/pytorch/cli.py:LightningCLI.__init__"
] | [
"src/lightning/pytorch/cli.py:LightningCLI.after_instantiate_classes"
] | 1 | 598 | ||
kornia/kornia | kornia__kornia-3084 | b230615f08bb0fff1b3044fc8ccb38f21bd9e817 | null | diff --git a/kornia/augmentation/container/augment.py b/kornia/augmentation/container/augment.py
index a9cad91924..ebd892bf17 100644
--- a/kornia/augmentation/container/augment.py
+++ b/kornia/augmentation/container/augment.py
@@ -507,11 +507,9 @@ def __call__(
if output_type == "tensor":
... | AugmentationSequential explicitly moves the output to the CPU if data_keys is given
### Describe the bug
With the 0.7.4 release, augmentations on the GPU are not possible anymore because the output of the input tensor is always explicitly moved to the CPU.
The problem is that `_detach_tensor_to_cpu` is called exp... | i can see that this was touched in https://github.com/kornia/kornia/pull/2979 @ashnair1 @shijianjian @johnnv1 do you recall why we need here `_detach_tensor_to_cpu` instead of something like `_detach_tensor_to_device` ?
It was mainly for passing the tests, due to the randomness handling for CUDA and CPU are different.... | 1,733,310,216,000 | null | Bug Report | [
"kornia/augmentation/container/augment.py:AugmentationSequential.__call__"
] | [] | 1 | 599 | |
wandb/wandb | wandb__wandb-9011 | 87070417bcde22e45fc3a662b2dfd73d79981ad9 | null | diff --git a/wandb/apis/public/runs.py b/wandb/apis/public/runs.py
index c605630e079..b29b5109f91 100644
--- a/wandb/apis/public/runs.py
+++ b/wandb/apis/public/runs.py
@@ -236,7 +236,6 @@ def histories(
if not histories:
return pd.DataFrame()
combined_df = pd.concat(histories... | [Bug]: Incorrect order in `runs.histories()`
### Describe the bug
`Runs` have a `order` attribute. However in the `Runs.histories()` method the order is not respected and the `runid` is used to sort instead (see [Relevant code](https://github.com/wandb/wandb/blob/v0.18.7/wandb/apis/public/runs.py#L236). I don't think ... | Hi @GuillaumeOpenAI! Thank you for writing in!
I have tried out something like this on my end:
```
import wandb
api = wandb.Api()
runs = api.runs("acyrtest/support-team-external-storage")
hist = runs.histories()
for hists in hist:
print(hists)
```
and got this as the result, which is now sorted by run_id, and ... | 1,733,315,969,000 | null | Bug Report | [
"wandb/apis/public/runs.py:Runs.histories"
] | [] | 1 | 600 | |
wandb/wandb | wandb__wandb-8931 | 70058a9e7bf09249d546226192ad3f8b0de04cb7 | null | diff --git a/wandb/sdk/data_types/video.py b/wandb/sdk/data_types/video.py
index 54e338ef2a5..41310640d6f 100644
--- a/wandb/sdk/data_types/video.py
+++ b/wandb/sdk/data_types/video.py
@@ -138,10 +138,21 @@ def __init__(
self.encode(fps=fps)
def encode(self, fps: int = 4) -> None:
- mpy = uti... | [Bug]: "wandb.Video requires moviepy when passing raw data" Error due to new moviepy version
### Describe the bug
The moviepy package was updated to 2.x and removed the `moviepy.editor` namespace (see [here](https://zulko.github.io/moviepy/getting_started/updating_to_v2.html)), breaking the `Video.encode` method.
Fix... | 1,732,214,869,000 | null | Bug Report | [
"wandb/sdk/data_types/video.py:Video.encode"
] | [] | 1 | 601 | ||
speechbrain/speechbrain | speechbrain__speechbrain-2760 | 16b6420d4ff23210cfca2e888be8853264e0cb17 | null | diff --git a/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py b/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py
index 1507f85093..2dad9e1e46 100644
--- a/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py
+++ b/speechbrain/lobes/models/huggingface_transformers/weighted_... | Weighted SSL model not unfreezable
### Describe the bug
In our HF Weighted SSL model implementation, we `detach()` the hidden states, meaning weights are not updated.
Relevant code:
https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py#L81
```
... | Also if `layernorm=True` then the hidden states are converted to a list which causes a program crash. They should be re-stacked into a tensor. | 1,732,134,555,000 | null | Bug Report | [
"speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py:WeightedSSLModel.__init__",
"speechbrain/lobes/models/huggingface_transformers/weighted_ssl.py:WeightedSSLModel.forward"
] | [] | 2 | 602 | |
speechbrain/speechbrain | speechbrain__speechbrain-2742 | c4a424306a58a08dbdf3f86f4c9a32eecf7c94f3 | null | diff --git a/recipes/LibriSpeech/ASR/transformer/train_with_whisper.py b/recipes/LibriSpeech/ASR/transformer/train_with_whisper.py
index 5c65a49682..90e0d118d3 100644
--- a/recipes/LibriSpeech/ASR/transformer/train_with_whisper.py
+++ b/recipes/LibriSpeech/ASR/transformer/train_with_whisper.py
@@ -107,7 +107,6 @@ def c... | Syntax Bug in Librispeech Whisper Recipe
### Describe the bug
These bugs listed below are related to whisper specifically following this [recipe](recipes/LibriSpeech/ASR/transformer/train_with_whisper.py)
1) line 228 in dataio_prepare, hparams is a dictionary so `hasattr(hparams, "normalized_transcripts")` does not w... | Hi @matthewkperez, thanks for opening this issue! Would you like to open a PR to fix it? It would be a very welcome contribution :)
Just submitted PR #2737 for this. Cheers! | 1,730,394,415,000 | null | Bug Report | [
"recipes/LibriSpeech/ASR/transformer/train_with_whisper.py:dataio_prepare"
] | [] | 1 | 603 | |
mesonbuild/meson | mesonbuild__meson-13881 | f0851c9e4b1760c552f7921e6b6a379b006ba014 | null | diff --git a/mesonbuild/backend/ninjabackend.py b/mesonbuild/backend/ninjabackend.py
index cb3552d7f0c1..7b573e4e4d8a 100644
--- a/mesonbuild/backend/ninjabackend.py
+++ b/mesonbuild/backend/ninjabackend.py
@@ -2369,7 +2369,7 @@ def generate_dynamic_link_rules(self) -> None:
options = self._rsp_optio... | Building meson-python fails in AIX
When I tried to meson-python master branch in AIX using meson, I get the below error
```
Traceback (most recent call last):
File "/meson/install/opt/freeware/lib/python3.9/site-packages/mesonbuild/mesonmain.py", line 193, in run
return options.run_func(options)
File "/mes... | cc: @eli-schwartz
I can confirm this bug exists and the fix solves the issue.
Ah hmm, right. This happens because we only iterate over all configured project languages that also support using a linker, and then create AIX_LINKER for the last one basically. This fails for projects that don't have any configured lan... | 1,730,961,926,000 | null | Bug Report | [
"mesonbuild/backend/ninjabackend.py:NinjaBackend.generate_dynamic_link_rules"
] | [] | 1 | 604 | |
ultralytics/ultralytics | ultralytics__ultralytics-18212 | 626e42ef253b5c20fa83412e7daf9b713484a866 | null | diff --git a/ultralytics/engine/model.py b/ultralytics/engine/model.py
index db8d87ebc2..8affd958f2 100644
--- a/ultralytics/engine/model.py
+++ b/ultralytics/engine/model.py
@@ -115,7 +115,7 @@ def __init__(
self.predictor = None # reuse predictor
self.model = None # model object
self.trai... | Saving yolov6n crashes
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
Crash on "yolov6n.yaml" save.
```
New https://pypi.org/project/ultralytics/8.3.4... | π Hello @EmmanuelMess, thank you for your interest in Ultralytics π! We appreciate you taking the time to report this issue.
To help us investigate further, could you please confirm the reproducibility of this issue using the latest version of Ultralytics? You can upgrade with the command below:
```bash
pip instal... | 1,734,061,207,000 | null | Bug Report | [
"ultralytics/engine/model.py:Model.__init__",
"ultralytics/engine/model.py:Model.train"
] | [] | 2 | 605 | |
ultralytics/ultralytics | ultralytics__ultralytics-17872 | 21162bd870444550286983a601afbfb142f4c198 | null | diff --git a/ultralytics/engine/predictor.py b/ultralytics/engine/predictor.py
index c28e1895d07..c5250166e9e 100644
--- a/ultralytics/engine/predictor.py
+++ b/ultralytics/engine/predictor.py
@@ -155,7 +155,7 @@ def pre_transform(self, im):
same_shapes = len({x.shape for x in im}) == 1
letterbox = Le... | Imx500 usage example error
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
I encountered an error when running the example code from the [Sony IMX500 usage... | π Hello @Magitoneu, thank you for your interest in Ultralytics π! We appreciate you taking the time to report this issue. Hereβs a quick guide to help us investigate this further:
It seems like youβre experiencing an error related to image resizing when running the Sony IMX500 example. If this is indeed a π Bug Rep... | 1,732,862,312,000 | null | Bug Report | [
"ultralytics/engine/predictor.py:BasePredictor.pre_transform"
] | [] | 1 | 606 | |
ultralytics/ultralytics | ultralytics__ultralytics-17728 | 426879d80d49d0180b525c4fc2484772f9f6f8cc | null | diff --git a/ultralytics/data/augment.py b/ultralytics/data/augment.py
index d092e3c3703..bd821de28de 100644
--- a/ultralytics/data/augment.py
+++ b/ultralytics/data/augment.py
@@ -1591,7 +1591,7 @@ def __call__(self, labels=None, image=None):
labels["ratio_pad"] = (labels["ratio_pad"], (left, top)) # for... | Significant mAP Drop When Using Bottom-Right Padding Instead of Center Padding in YOLOv8 Training
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no simi... | π Hello @Gebbap, thank you for bringing your findings to the Ultralytics community's attention π!
We recommend checking out our [Docs](https://docs.ultralytics.com), where you can find comprehensive information on [Python](https://docs.ultralytics.com/usage/python/) and [CLI](https://docs.ultralytics.com/usage/cli/)... | 1,732,360,098,000 | null | Performance Issue | [
"ultralytics/data/augment.py:LetterBox.__call__"
] | [] | 1 | 607 | |
ultralytics/ultralytics | ultralytics__ultralytics-17544 | a132920476b2d38bdd58c7a232888f425f476977 | null | diff --git a/ultralytics/utils/callbacks/wb.py b/ultralytics/utils/callbacks/wb.py
index b82b8d85ec3..22bbc347566 100644
--- a/ultralytics/utils/callbacks/wb.py
+++ b/ultralytics/utils/callbacks/wb.py
@@ -138,7 +138,7 @@ def on_train_end(trainer):
art.add_file(trainer.best)
wb.run.log_artifact(art, al... | wandb callback reporting fails if no positive examples in validator
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
When using the `wandb` callback, the f... | 1,731,625,609,000 | null | Bug Report | [
"ultralytics/utils/callbacks/wb.py:on_train_end"
] | [] | 1 | 608 | ||
ultralytics/ultralytics | ultralytics__ultralytics-17499 | 496e6a3b8680e4ccd4f190e30841748aee2cb89c | null | diff --git a/ultralytics/engine/results.py b/ultralytics/engine/results.py
index 029e4471e04..8de0a2e6a1c 100644
--- a/ultralytics/engine/results.py
+++ b/ultralytics/engine/results.py
@@ -750,7 +750,7 @@ def save_crop(self, save_dir, file_name=Path("im.jpg")):
save_one_box(
d.xyxy,
... | Save_crop method from Results with default params results in double file extension
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
Save_crop me... | π Hello @M3nxudo, thank you for bringing this to our attention! We're excited to assist you π and appreciate your proactive approach to contribute with a PR.
For anyone facing similar issues, we highly recommend checking out our [Docs](https://docs.ultralytics.com) for guidance on both [Python](https://docs.ultralyt... | 1,731,417,643,000 | null | Bug Report | [
"ultralytics/engine/results.py:Results.save_crop"
] | [] | 1 | 609 | |
huggingface/diffusers | huggingface__diffusers-10269 | 2739241ad189aef9372394a185b864cbbb9ab5a8 | null | diff --git a/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py b/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
index 6ddd9ac23009..c7474d56c708 100644
--- a/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
+++ b/src/diffusers/schedulers/scheduling_flow_match_euler_discr... | Allow configuring `shift=` for SD3 dynamically
**Is your feature request related to a problem? Please describe.**
Allow passing `shift=` per inference call (like timesteps) on the pipeline, for flow matching scheduler, or allow `set_shift()` etc. on the scheduler. This seems to be the key to getting good results with ... | Hi, you can do it like this:
```python
from diffusers import FlowMatchEulerDiscreteScheduler
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config, shift=3.0)
```
yep! but the same format is applicable for timesteps and was wondering if we can get around without re-instating the sc... | 1,734,456,767,000 | null | Feature Request | [
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler.__init__",
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler.set_timesteps"
] | [
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler.shift",
"src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py:FlowMatchEulerDiscreteScheduler.set_shift"
] | 2 | 610 | |
huggingface/diffusers | huggingface__diffusers-10262 | f9d5a9324d77169d486a60f3b4b267c74149b982 | null | diff --git a/src/diffusers/models/unets/unet_2d.py b/src/diffusers/models/unets/unet_2d.py
index 5972505f2897..d05af686dede 100644
--- a/src/diffusers/models/unets/unet_2d.py
+++ b/src/diffusers/models/unets/unet_2d.py
@@ -97,6 +97,7 @@ def __init__(
out_channels: int = 3,
center_input_sample: bool = ... | Make `time_embed_dim` of `UNet2DModel` changeable
**Is your feature request related to a problem? Please describe.**
I want to change the `time_embed_dim` of `UNet2DModel`, but it is hard coded as `time_embed_dim = block_out_channels[0] * 4` in the `__init__` function.
**Describe the solution you'd like.**
Make `t... | 1,734,429,874,000 | null | Feature Request | [
"src/diffusers/models/unets/unet_2d.py:UNet2DModel.__init__"
] | [] | 1 | 611 | ||
huggingface/diffusers | huggingface__diffusers-10185 | 43534a8d1fd405fd0d1e74f991ab97f743bd3e59 | null | diff --git a/src/diffusers/schedulers/scheduling_repaint.py b/src/diffusers/schedulers/scheduling_repaint.py
index 97665bb5277b..ae953cfb966b 100644
--- a/src/diffusers/schedulers/scheduling_repaint.py
+++ b/src/diffusers/schedulers/scheduling_repaint.py
@@ -319,7 +319,7 @@ def step(
prev_unknown_part = alpha_... | Potential bug in repaint?
https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/src/diffusers/schedulers/scheduling_repaint.py#L322
According to line5 of algorithm 1 in the paper, the second part in line 322 should remove the `**0.5`?
thanks!
| I also think that should be removed as mentioned in algorithm 1 Line 5 from the [paper](https://arxiv.org/pdf/2201.09865)
```math
x_{t-1}^{known} \ =\ \sqrt{\overline{\alpha }_{t}} x_0 \ +( 1-\ \overline{\alpha }_{t}) \epsilon
```
Corrected
```python
prev_known_part = (alpha_prod_t_prev**0.5) * original_image + (... | 1,733,904,201,000 | null | Bug Report | [
"src/diffusers/schedulers/scheduling_repaint.py:RePaintScheduler.step"
] | [] | 1 | 612 | |
huggingface/diffusers | huggingface__diffusers-10182 | 43534a8d1fd405fd0d1e74f991ab97f743bd3e59 | null | diff --git a/src/diffusers/loaders/lora_pipeline.py b/src/diffusers/loaders/lora_pipeline.py
index eb9b42c5fbb7..1445394b8784 100644
--- a/src/diffusers/loaders/lora_pipeline.py
+++ b/src/diffusers/loaders/lora_pipeline.py
@@ -2313,7 +2313,7 @@ def _maybe_expand_transformer_param_shape_or_error_(
for name, mod... | Can't load multiple loras when using Flux Control LoRA
### Describe the bug
I was trying out the FluxControlPipeline with the Control LoRA introduced in #9999 , but had issues loading in multiple loras.
For example, if I load the depth lora first and then the 8-step lora, it errors on the 8-step lora, and if I lo... | Oh, we should have anticipated this use case. I think the correct check should be `module_bias = module.bias.data if module.bias is not None else None` instead.
Even with the above fix, I don't think the weights would load as expected because the depth control lora would expand the input features of `x_embedder` to ... | 1,733,878,565,000 | null | Bug Report | [
"src/diffusers/loaders/lora_pipeline.py:FluxLoraLoaderMixin._maybe_expand_transformer_param_shape_or_error_"
] | [] | 1 | 613 | |
huggingface/diffusers | huggingface__diffusers-10176 | 09675934006cefb1eb3e58c41fca9ec372a7c797 | null | diff --git a/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py b/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py
index c6748ad418fe..6c36ec173539 100644
--- a/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffu... | Raise an error when `len(gligen_images )` is not equal to `len(gligen_phrases)` in `StableDiffusionGLIGENTextImagePipeline`
To whom it may concern,
I found that when using `StableDiffusionGLIGENTextImagePipeline`, there is no error raised when `len(gligen_images )` is not equal to `len(gligen_phrases)`. And when I d... | Hi @abcdefg133hi. Thanks for finding this. Your understanding is correct, the longer of `gligen_phrases` and `gligen_images` will be clipped:
```python
for phrase, image in zip(["text", "text1", "text2"], ["image", "image1"]):
print(phrase, image)
text image
text1 image1
```
We should add this to `check_... | 1,733,854,068,000 | null | Bug Report | [
"src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py:StableDiffusionGLIGENTextImagePipeline.check_inputs",
"src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py:StableDiffusionGLIGENTextImagePipeline.__call__"
] | [] | 2 | 614 | |
huggingface/diffusers | huggingface__diffusers-10170 | 0e50401e34242dbd4b94a8a3cf0ee24afc25ea65 | null | diff --git a/src/diffusers/image_processor.py b/src/diffusers/image_processor.py
index 00d8588d5a2a..d6913f045ad2 100644
--- a/src/diffusers/image_processor.py
+++ b/src/diffusers/image_processor.py
@@ -236,7 +236,7 @@ def denormalize(images: Union[np.ndarray, torch.Tensor]) -> Union[np.ndarray, to
`np.nda... | Post processing performance can be improved
## Problem
Images generated in batches pay a performance penalty in the post-processing step of the diffusion pipeline.
A lot of calls to image_processor.denormalize are made instead of batching the computation.
### Suggested Improvements
#### Using multiplication... | 1,733,826,374,000 | null | Performance Issue | [
"src/diffusers/image_processor.py:VaeImageProcessor.denormalize",
"src/diffusers/image_processor.py:VaeImageProcessor.postprocess",
"src/diffusers/image_processor.py:VaeImageProcessorLDM3D.postprocess"
] | [
"src/diffusers/image_processor.py:VaeImageProcessor._denormalize_conditionally"
] | 3 | 615 | ||
huggingface/diffusers | huggingface__diffusers-10115 | 65ab1052b8b38687bcf37afe746a7cf20dedc045 | null | diff --git a/src/diffusers/models/embeddings.py b/src/diffusers/models/embeddings.py
index 91451fa9aac2..8f8f1073da74 100644
--- a/src/diffusers/models/embeddings.py
+++ b/src/diffusers/models/embeddings.py
@@ -959,7 +959,12 @@ def forward(self, ids: torch.Tensor) -> torch.Tensor:
freqs_dtype = torch.float32 i... | Some bugs in FLUX pipeline
### Describe the bug
1. missing self.theta in get_1d_rotary_pos_embed:
https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py#L961-L963
2. if prompt_embeds is None, pooled_prompt_embeds will never be computed:
https://github.com/huggingface/diffusers/blob/ma... | `pooled_prompt_embeds` has to be passed when `prompt_embeds` is used so that's ok
https://github.com/huggingface/diffusers/blob/8421c1461bf4ab7801070d04d6ec1e6b28ee5b59/src/diffusers/pipelines/flux/pipeline_flux.py#L422-L425
Would you like to open a PR that passes `self.theta` to `get_1d_rotary_pos_embed`? | 1,733,314,229,000 | null | Bug Report | [
"src/diffusers/models/embeddings.py:FluxPosEmbed.forward"
] | [] | 1 | 616 | |
huggingface/diffusers | huggingface__diffusers-10086 | 827b6c25f9b78a297345f356a7d152fd6faf27d8 | null | diff --git a/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py b/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
index a77231cdc02d..aee1ad8c75f5 100644
--- a/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
+++ b/src/diffusers/pipelines/stable_... | RuntimeError with LyCORIS, Batch Inference and skip_guidance_layers
### Describe the bug
A `RuntimeError` occurs when using the following combination:
* SD3
* Batch inference (`num_images_per_prompt > 1`)
* LyCORIS
* `skip_guidance_layers` is set
The error message is: `"RuntimeError: The size of tensor a ... | This is not a fully reproducible snippet. Please provide one.
Cc: @asomoza and @yiyixuxu for skip layer guidance.
Reproducible with:
```bash
pip install lycoris_lora
```
```python
from diffusers import StableDiffusion3Pipeline, FlowMatchEulerDiscreteScheduler
import torch
from huggingface_hub import hf_... | 1,733,160,636,000 | null | Bug Report | [
"src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py:StableDiffusion3Pipeline.__call__"
] | [] | 1 | 617 | |
huggingface/diffusers | huggingface__diffusers-10067 | 827b6c25f9b78a297345f356a7d152fd6faf27d8 | null | diff --git a/src/diffusers/models/upsampling.py b/src/diffusers/models/upsampling.py
index cf07e45b0c5c..af04ae4b93cf 100644
--- a/src/diffusers/models/upsampling.py
+++ b/src/diffusers/models/upsampling.py
@@ -165,6 +165,14 @@ def forward(self, hidden_states: torch.Tensor, output_size: Optional[int] = None
# ... | [BUG - STABLE DIFFUSION 3] Grey images generated
### Describe the bug
I'm running the SD3 model [stabilityai/stable-diffusion-3-medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium) with the following settings:
- Height: 1024
- Width: 1024
- Inference steps: 50
- Guidance scale: 7
- Prompts leng... | Does this happen when you switch to torch.bfloat16? Also, was this working ever as expected and suddenly stopped working as expected?
@sayakpaul Hi, the error persists when using BF16.
Out of the 32 generated images, the first 16 are fine, but the last 16 are all in grayscale.
Weird.
Does it happen on other prompt... | 1,733,055,063,000 | null | Bug Report | [
"src/diffusers/models/upsampling.py:Upsample2D.forward"
] | [] | 1 | 618 | |
huggingface/diffusers | huggingface__diffusers-9978 | 64b3e0f5390728f62887be7820a5e2724d0fb419 | null | diff --git a/src/diffusers/loaders/single_file_utils.py b/src/diffusers/loaders/single_file_utils.py
index d1bad8b5a7cd..9a460cb5d1ef 100644
--- a/src/diffusers/loaders/single_file_utils.py
+++ b/src/diffusers/loaders/single_file_utils.py
@@ -62,7 +62,14 @@
"xl_base": "conditioner.embedders.1.model.transformer.res... | ControlNet broken from_single_file
### Describe the bug
controlnet loader from_single_file was originally added via #4084
and method `ControlNet.from_single_file()` works for non-converted controlnets.
but for controlnets in safetensors format that contain already converted state_dict, it errors out.
its not ... | > even worse, some of the newer controlnets are distributed as single-file-only and are already in diffusers format
which makes them impossible to load in difufsers.
for example: https://huggingface.co/Laxhar/noob_openpose/tree/main
Isn't this an actual error as it is partially in the diffusers format by not inclu... | 1,732,125,328,000 | null | Bug Report | [
"src/diffusers/loaders/single_file_utils.py:infer_diffusers_model_type",
"src/diffusers/loaders/single_file_utils.py:convert_controlnet_checkpoint"
] | [] | 2 | 619 | |
huggingface/diffusers | huggingface__diffusers-9885 | 5588725e8e7be497839432e5328c596169385f16 | null | diff --git a/src/diffusers/utils/dynamic_modules_utils.py b/src/diffusers/utils/dynamic_modules_utils.py
index f0cf953924ad..50d9bbaac57c 100644
--- a/src/diffusers/utils/dynamic_modules_utils.py
+++ b/src/diffusers/utils/dynamic_modules_utils.py
@@ -325,7 +325,7 @@ def get_cached_module_file(
# We always copy... | Replace shutil.copy with shutil.copyfile
shutil.copy copies permission bits which fails when the user who's running the script is trying to use a common cache that was generated by another user, even though the first user has read & write permissions over the cache (through Group permission for example). A real case sc... | Maybe related to https://github.com/huggingface/huggingface_hub/pull/1220, https://github.com/huggingface/diffusers/issues/1517, https://github.com/huggingface/huggingface_hub/issues/1141.
For info I'm using v0.23.0.
@Wauplin WDYT?
Agree with using `shutil.copyfile` yes! I didn't thought about permission issues back ... | 1,731,005,873,000 | null | Bug Report | [
"src/diffusers/utils/dynamic_modules_utils.py:get_cached_module_file"
] | [] | 1 | 620 | |
fortra/impacket | fortra__impacket-1860 | e9a47ffc2b56755908b4a0e73348c650cf5c723f | null | diff --git a/impacket/examples/secretsdump.py b/impacket/examples/secretsdump.py
index 43b776218..537c45dab 100644
--- a/impacket/examples/secretsdump.py
+++ b/impacket/examples/secretsdump.py
@@ -1432,7 +1432,7 @@ def dump(self):
userName = V[userAccount['NameOffset']:userAccount['NameOffset']+userAccount... | SAM Dump for accounts without secrets
I realised that some defaults Windows accounts, like for example WDAGUtilityAccount, throw the following error:

However there is no error here. WDAGUtilisatyAccount does not have a NT ha... | Hi @Dfte,
Which configuration are you running on? I tried here with a Windows Server 2019 in azure and the account `WDAGUtilityAccount` has a hash that is printed when running secretsdump
Also rechecked that the account was disabled (according to #802), and it was
:
# size = self.calcUnpackSize(format, options[i+1:])
size = options[i+1]
# print i, name, format, s... | dhcp.py: decode error "object has no attribute 'encode'"
### Configuration
impacket version: HEAD
Python version: 3.11
Target OS: Linux
### Debug Output With Command String
````
dhcp = DhcpPacket(buffer)
print(dhcp)
``````
```
Traceback (most recent call last):
File "/home/mdt/Source/... | it seems the line 381 in `impacket/structure.py` should read:
```
if (isinstance(data, bytes) or isinstance(data, bytearray)) and dataClassOrCode is b:
```
because the buffer is not bytes but bytearray. fixing this leads to the next error:
```
Traceback (most recent call last):
File "/home/mdt/Source/emd... | 1,733,378,609,000 | null | Bug Report | [
"impacket/dhcp.py:DhcpPacket.unpackOptions"
] | [] | 1 | 622 | |
modin-project/modin | modin-project__modin-7400 | 78674005577efea7aa7c5e3e7c6fb53bd0365fe5 | null | diff --git a/modin/pandas/dataframe.py b/modin/pandas/dataframe.py
index de96ea0ab26..2ce83913ebb 100644
--- a/modin/pandas/dataframe.py
+++ b/modin/pandas/dataframe.py
@@ -2074,12 +2074,12 @@ def squeeze(
Squeeze 1 dimensional axis objects into scalars.
"""
axis = self._get_axis_number(axis)... | Avoid unnecessary length checks in `df.squeeze`
It is possible that when `axis=1` in squeeze we still check `len(self.index)`, which is never necessary when `axis=1`. Link to code here: https://github.com/modin-project/modin/blob/eac3c77baf456c7bd7e1e5fde81790a4ed3ebb27/modin/pandas/dataframe.py#L2074-L2084
This is ... | 1,726,780,817,000 | null | Performance Issue | [
"modin/pandas/dataframe.py:DataFrame.squeeze"
] | [] | 1 | 623 | ||
ccxt/ccxt | ccxt__ccxt-24388 | f6119ba226704f2907e48c94caa13a767510fcd4 | null | diff --git a/python/ccxt/base/exchange.py b/python/ccxt/base/exchange.py
index 9b79354f89c5..66f8170154a4 100644
--- a/python/ccxt/base/exchange.py
+++ b/python/ccxt/base/exchange.py
@@ -382,6 +382,7 @@ def __init__(self, config={}):
self.transactions = dict() if self.transactions is None else self.transaction... | binance myLiquidations uninitialized before accessed
### Operating System
Ubuntu
### Programming Languages
Python
### CCXT Version
4.4.27
### Description
Got following error while watching binance websocket.
### Code
```
2024-11-26 05:56:31,267 - 875 - exchanges.binance - ERROR - 'NoneType' object does no... | Hello @sytranvn,
Thanks for reporting it, we will fix it asap
@sytranvn Btw, what's the best way of reproducing the issue?
I was just listening for balance events only. Maybe place a future position and wait for it to be liquidated. | 1,732,671,237,000 | null | Bug Report | [
"python/ccxt/base/exchange.py:Exchange.__init__"
] | [] | 1 | 624 | |
Qiskit/qiskit | Qiskit__qiskit-13554 | b7b26e000cd4baf3dcd28ca2f4607404bf736e2b | null | diff --git a/qiskit/circuit/parameterexpression.py b/qiskit/circuit/parameterexpression.py
index fe786762c09..feaa0b772c7 100644
--- a/qiskit/circuit/parameterexpression.py
+++ b/qiskit/circuit/parameterexpression.py
@@ -340,7 +340,7 @@ def _apply_operation(
either a constant or a second ParameterExpression.
... | Doc string of `operation` in ParameterExpression._apply_operation
It says
```
operation: One of operator.{add,sub,mul,truediv}.
```
But the function is already called also with other operations, for example `pow` in `ParameterExpression.__pow__`.
| Good point, and while this is not user facing it would be nice to have the dev-facing docs correct. Would you like to open a small PR to fix it? | 1,733,917,262,000 | null | Bug Report | [
"qiskit/circuit/parameterexpression.py:ParameterExpression._apply_operation"
] | [] | 1 | 625 | |
Qiskit/qiskit | Qiskit__qiskit-13552 | 17648ebb030c90fa7a595333b61823735275f68f | null | diff --git a/qiskit/circuit/parameterexpression.py b/qiskit/circuit/parameterexpression.py
index fe786762c09..feaa0b772c7 100644
--- a/qiskit/circuit/parameterexpression.py
+++ b/qiskit/circuit/parameterexpression.py
@@ -340,7 +340,7 @@ def _apply_operation(
either a constant or a second ParameterExpression.
... | Doc string of `operation` in ParameterExpression._apply_operation
It says
```
operation: One of operator.{add,sub,mul,truediv}.
```
But the function is already called also with other operations, for example `pow` in `ParameterExpression.__pow__`.
| Good point, and while this is not user facing it would be nice to have the dev-facing docs correct. Would you like to open a small PR to fix it? | 1,733,909,546,000 | null | Bug Report | [
"qiskit/circuit/parameterexpression.py:ParameterExpression._apply_operation"
] | [] | 1 | 626 | |
aio-libs/aiohttp | aio-libs__aiohttp-9767 | 51145aad138d03fc9f462e59b9c9398a75905899 | null | diff --git a/aiohttp/payload.py b/aiohttp/payload.py
index 27636977774..151f9dd497b 100644
--- a/aiohttp/payload.py
+++ b/aiohttp/payload.py
@@ -101,6 +101,7 @@ def __init__(self) -> None:
self._first: List[_PayloadRegistryItem] = []
self._normal: List[_PayloadRegistryItem] = []
self._last: L... | Payload registry has to do a linear search to find payloads
https://github.com/aio-libs/aiohttp/blob/21f5f92a755dc5ac7225b5e76f561553cf86565e/aiohttp/payload.py#L97
There is a note that its inefficent.
We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict loo... | > We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
Doesn't really work, because it works with isinstance checks, which could be a subclass or an abc. I thought about this previously and didn't think of any improvements. Not sure what zope.int... | 1,731,233,876,000 | null | Performance Issue | [
"aiohttp/payload.py:PayloadRegistry.__init__",
"aiohttp/payload.py:PayloadRegistry.get",
"aiohttp/payload.py:PayloadRegistry.register",
"aiohttp/payload.py:Payload.__init__",
"aiohttp/payload.py:BytesPayload.__init__"
] | [] | 5 | 627 | |
aio-libs/aiohttp | aio-libs__aiohttp-9766 | cc9a14aa3a29e54e2da3045083cca865654e3ff9 | null | diff --git a/aiohttp/payload.py b/aiohttp/payload.py
index 27636977774..151f9dd497b 100644
--- a/aiohttp/payload.py
+++ b/aiohttp/payload.py
@@ -101,6 +101,7 @@ def __init__(self) -> None:
self._first: List[_PayloadRegistryItem] = []
self._normal: List[_PayloadRegistryItem] = []
self._last: L... | Payload registry has to do a linear search to find payloads
https://github.com/aio-libs/aiohttp/blob/21f5f92a755dc5ac7225b5e76f561553cf86565e/aiohttp/payload.py#L97
There is a note that its inefficent.
We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict loo... | > We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
Doesn't really work, because it works with isinstance checks, which could be a subclass or an abc. I thought about this previously and didn't think of any improvements. Not sure what zope.int... | 1,731,233,868,000 | null | Performance Issue | [
"aiohttp/payload.py:PayloadRegistry.__init__",
"aiohttp/payload.py:PayloadRegistry.get",
"aiohttp/payload.py:PayloadRegistry.register",
"aiohttp/payload.py:Payload.__init__",
"aiohttp/payload.py:BytesPayload.__init__"
] | [] | 5 | 628 | |
aio-libs/aiohttp | aio-libs__aiohttp-9762 | 50cccb3823e53e187723f5dd713e2f1299405d1e | null | diff --git a/aiohttp/payload.py b/aiohttp/payload.py
index ea50b6a38cb..9979ed269b6 100644
--- a/aiohttp/payload.py
+++ b/aiohttp/payload.py
@@ -101,6 +101,7 @@ def __init__(self) -> None:
self._first: List[_PayloadRegistryItem] = []
self._normal: List[_PayloadRegistryItem] = []
self._last: L... | Payload registry has to do a linear search to find payloads
https://github.com/aio-libs/aiohttp/blob/21f5f92a755dc5ac7225b5e76f561553cf86565e/aiohttp/payload.py#L97
There is a note that its inefficent.
We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict loo... | > We might be able to clean it up by giving each payload a key as part of the base class and then doing a dict lookup instead
Doesn't really work, because it works with isinstance checks, which could be a subclass or an abc. I thought about this previously and didn't think of any improvements. Not sure what zope.int... | 1,731,230,347,000 | null | Performance Issue | [
"aiohttp/payload.py:PayloadRegistry.__init__",
"aiohttp/payload.py:PayloadRegistry.get",
"aiohttp/payload.py:PayloadRegistry.register",
"aiohttp/payload.py:Payload.__init__",
"aiohttp/payload.py:BytesPayload.__init__"
] | [] | 5 | 629 | |
langchain-ai/langgraph | langchain-ai__langgraph-2735 | 083a14c2c5bc90d597dd162219d1006a723abdf0 | null | diff --git a/libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py b/libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py
index f8d280b96..ff5a91f5d 100644
--- a/libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py
+++ b/libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py
@@ -438,28 +438,36 @@ def _msgpack_defa... | msgpack deserialization with strictmap_key=False
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/m... | 1,734,019,110,000 | null | Bug Report | [
"libs/checkpoint/langgraph/checkpoint/serde/jsonplus.py:_msgpack_ext_hook"
] | [] | 1 | 630 | ||
langchain-ai/langgraph | langchain-ai__langgraph-2724 | ff3bc2f9821d9dffe5d1a8fcf6eb1758f3715da8 | null | diff --git a/libs/langgraph/langgraph/prebuilt/tool_node.py b/libs/langgraph/langgraph/prebuilt/tool_node.py
index d3d0751e2..e2ac50b8e 100644
--- a/libs/langgraph/langgraph/prebuilt/tool_node.py
+++ b/libs/langgraph/langgraph/prebuilt/tool_node.py
@@ -297,7 +297,7 @@ def _run_one(
try:
input = ... | Langgraph 0.2.58 resulted in empty config passed to tools in langgraph
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://st... | How are you invoking the tools?
We bind the tools to a gpt-4-o agent
```python
llm = AzureChatOpenAI(
azure_deployment="gpt4o",
api_version="2024-06-01",
temperature=0,
timeout=120,
)
self.runnable = prompt | llm.bind_tools(tools)
runnable_input = {
**stat... | 1,733,952,975,000 | null | Bug Report | [
"libs/langgraph/langgraph/prebuilt/tool_node.py:ToolNode._run_one",
"libs/langgraph/langgraph/prebuilt/tool_node.py:ToolNode._arun_one"
] | [] | 2 | 631 | |
langchain-ai/langgraph | langchain-ai__langgraph-2571 | c6fe26510e814e1cf165bc957b42bf4d5adf789b | null | diff --git a/libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py b/libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py
index 440cb452e..4c0f5295c 100644
--- a/libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py
+++ b/libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py
@@ -5,7 +5,... | langgraph-checkpoint-postgres: Calls to postgres async checkpointer setup() fail on new postgres db
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I u... | 1,732,788,364,000 | null | Bug Report | [
"libs/checkpoint-postgres/langgraph/checkpoint/postgres/aio.py:AsyncPostgresSaver.setup"
] | [] | 1 | 632 | ||
sktime/sktime | sktime__sktime-7417 | d7f582335197b9c1382d33e40c4dbe1dbae14137 | null | diff --git a/sktime/forecasting/base/adapters/_statsmodels.py b/sktime/forecasting/base/adapters/_statsmodels.py
index b7efd883810..675d45bb05a 100644
--- a/sktime/forecasting/base/adapters/_statsmodels.py
+++ b/sktime/forecasting/base/adapters/_statsmodels.py
@@ -53,6 +53,11 @@ def _fit(self, y, X, fh):
-----... | [BUG] Unable to use _StatsModelsAdapter.predict if config remember_data=False
**Describe the bug**
If the config `remember_data=False` is set then `_StatsModelsAdapter._predict` will throw an error when trying to use `_y` and `_X`.
**To Reproduce**
```python
import numpy as np
from sktime.forecasting.sarim... | 1,732,046,055,000 | null | Bug Report | [
"sktime/forecasting/base/adapters/_statsmodels.py:_StatsModelsAdapter._fit",
"sktime/forecasting/base/adapters/_statsmodels.py:_StatsModelsAdapter._predict",
"sktime/forecasting/base/adapters/_statsmodels.py:_StatsModelsAdapter._predict_interval"
] | [] | 3 | 633 | ||
numpy/numpy | numpy__numpy-27598 | a905925ef40a7551d16d78d81c7e6d08b59559e4 | null | diff --git a/numpy/ctypeslib.py b/numpy/ctypeslib.py
index 370cdf224cdc..d11b9dcb43d3 100644
--- a/numpy/ctypeslib.py
+++ b/numpy/ctypeslib.py
@@ -527,6 +527,26 @@ def as_array(obj, shape=None):
The shape parameter must be given if converting from a ctypes POINTER.
The shape parameter is ignored if ... | DOC: Examples in docstrings β tracking issue
"Examples, more examples, and more detailed examples" is the recurrent theme in the feedback about the NumPy documentation we received via the 2020 and 2021 NumPy user surveys.
If you come across a docstring where a function is missing an example or more/better examples ... | how can I help in this issue ?
Take a look at the functions you use from NumPy, and see if the docstrings have examples similar to your non-trivial use cases. If not, comment below asking if it seems an example would help newer users figure out how to replicate what you do.
This could be a good activity for a sprint wi... | 1,729,353,573,000 | null | Feature Request | [
"numpy/ctypeslib.py:as_array",
"numpy/ctypeslib.py:as_ctypes"
] | [] | 2 | 634 | |
numpy/numpy | numpy__numpy-27595 | a905925ef40a7551d16d78d81c7e6d08b59559e4 | null | diff --git a/numpy/lib/_function_base_impl.py b/numpy/lib/_function_base_impl.py
index 477c6a4f39a8..7a2c69bad0e6 100644
--- a/numpy/lib/_function_base_impl.py
+++ b/numpy/lib/_function_base_impl.py
@@ -5198,7 +5198,7 @@ def delete(arr, obj, axis=None):
----------
arr : array_like
Input array.
- o... | DOC: types for numpy.delete's obj argument don't cover all possibilities
### Issue with current documentation:
The docstring for `numpy.delete` specifues the type of the `obj` parameter as:
> **obj : _slice, int or array of ints_**
It seem it can also be anything that can be cast as an array of ints, including e.g... | 1,729,307,703,000 | null | Feature Request | [
"numpy/lib/_function_base_impl.py:delete"
] | [] | 1 | 635 | ||
vllm-project/vllm | vllm-project__vllm-11275 | 60508ffda91c22e4cde3b18f149d222211db8886 | null | diff --git a/vllm/executor/ray_gpu_executor.py b/vllm/executor/ray_gpu_executor.py
index 4bf5cbbd18ffe..e2c549cbd5331 100644
--- a/vllm/executor/ray_gpu_executor.py
+++ b/vllm/executor/ray_gpu_executor.py
@@ -123,6 +123,7 @@ def _init_workers_ray(self, placement_group: "PlacementGroup",
# Create the workers.... | [Bug]: LLM initialization time increases significantly with larger tensor parallel size and Ray
### Your current environment
vllm 0.5.2
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used ... | someone correct me if im wrong but the way the workers are initialized are done sequentially on the main process. which can be seen in the function I linked below
https://github.com/vllm-project/vllm/blob/bbd3e86926f15e59e4c62246b4b3185e71fe7ff2/vllm/executor/ray_gpu_executor.py#L109
ray add additional overhead b... | 1,734,483,554,000 | null | Performance Issue | [
"vllm/executor/ray_gpu_executor.py:RayGPUExecutor._init_workers_ray"
] | [] | 1 | 636 | |
vllm-project/vllm | vllm-project__vllm-9617 | e7116c017c86cb547f4d1888edaf13a9be2a4562 | null | diff --git a/vllm/engine/llm_engine.py b/vllm/engine/llm_engine.py
index 3a29e6a9ae094..51a0d10db8f38 100644
--- a/vllm/engine/llm_engine.py
+++ b/vllm/engine/llm_engine.py
@@ -1612,7 +1612,7 @@ def _get_stats(self,
# KV Cache Usage in %
num_total_gpu = self.cache_config.num_gpu_blocks
gpu_ca... | [Bug]: Support Falcon Mamba
### Your current environment
Does VLLM support Falcon Mamba models? if not, when it will be supported
### π Describe the bug
Does VLLM support Falcon Mamba models? if not, when it will be supported
| cc @tlrmchlsmth
Unsubscribe
On Wed, 14 Aug, 2024, 1:37 am Robert Shaw, ***@***.***> wrote:
> cc @tlrmchlsmth <https://github.com/tlrmchlsmth>
>
> β
> Reply to this email directly, view it on GitHub
> <https://github.com/vllm-project/vllm/issues/7478#issuecomment-2287037438>,
> or unsubscribe
> <https://git... | 1,729,693,073,000 | null | Feature Request | [
"vllm/engine/llm_engine.py:LLMEngine._get_stats"
] | [] | 1 | 637 | |
mlflow/mlflow | mlflow__mlflow-13390 | 49e038235f64cee0d6985293b9e5a24d2718abab | null | diff --git a/mlflow/openai/_openai_autolog.py b/mlflow/openai/_openai_autolog.py
index d67da788da443..5ddbf87dc4379 100644
--- a/mlflow/openai/_openai_autolog.py
+++ b/mlflow/openai/_openai_autolog.py
@@ -159,7 +159,6 @@ def _stream_output_logging_hook(stream: Iterator) -> Iterator:
yield chunk
... | Remove useless `chunk_dicts`
### Summary
```diff
diff --git a/mlflow/openai/_openai_autolog.py b/mlflow/openai/_openai_autolog.py
index 149e92793..45e486808 100644
--- a/mlflow/openai/_openai_autolog.py
+++ b/mlflow/openai/_openai_autolog.py
@@ -158,7 +158,6 @@ def patched_call(original, self, *args, **kwargs):
... | @harupy ill work on this issue | 1,728,658,506,000 | null | Performance Issue | [
"mlflow/openai/_openai_autolog.py:patched_call"
] | [] | 1 | 648 | |
huggingface/transformers | huggingface__transformers-34507 | dadb286f061f156d01b80e12594321e890b53088 | null | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
index 1603a4ec215557..80f8a60a34b622 100755
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1671,21 +1671,21 @@ def num_examples(self, dataloader: DataLoader) -> int:
except (NameError, AttributeError, TypeError): #... | Move Trainer's tokens per second metric into the inner training loop
### Feature request
Right now `include_tokens_per_second=True` in `Trainer` only reports the tokens per second metric at [the end of training](https://github.com/huggingface/transformers/blob/c1753436dbb8bcbcee183cdd6eba9f08a90d602a/src/transformers/... | cc @muellerzr @SunMarc | 1,730,291,622,000 | null | Feature Request | [
"src/transformers/trainer.py:Trainer.num_tokens",
"src/transformers/trainer.py:Trainer._inner_training_loop",
"src/transformers/trainer.py:Trainer._maybe_log_save_evaluate",
"src/transformers/trainer.py:Trainer.log"
] | [] | 4 | 649 | |
huggingface/transformers | huggingface__transformers-34279 | 93352e81f5019abaa52f7bdc2e3284779e864367 | null | diff --git a/src/transformers/integrations/integration_utils.py b/src/transformers/integrations/integration_utils.py
index 4f7cf3632fe549..a09116552c8e34 100755
--- a/src/transformers/integrations/integration_utils.py
+++ b/src/transformers/integrations/integration_utils.py
@@ -1218,6 +1218,8 @@ def setup(self, args, s... | Limit number of parametes logged with `MLflowCallback`
### Feature request
Add a new environment variable, such as `MLFLOW_MAX_LOG_PARAMS`, which can limit the number of parameters logged by the `MLflowCallback`.
### Motivation
When using mlflow in Azure ML, there is a limit of 200 parameters that can be logge... | 1,729,505,760,000 | null | Feature Request | [
"src/transformers/integrations/integration_utils.py:MLflowCallback.setup"
] | [] | 1 | 650 | ||
huggingface/transformers | huggingface__transformers-34208 | 343c8cb86f2ab6a51e7363ee11f69afb1c9e839e | null | diff --git a/src/transformers/agents/tools.py b/src/transformers/agents/tools.py
index cfb1e4cf95ced9..a425ffc8f106b2 100644
--- a/src/transformers/agents/tools.py
+++ b/src/transformers/agents/tools.py
@@ -138,7 +138,7 @@ def validate_arguments(self):
"inputs": Dict,
"output_type": str,
... | Boolean as tool input
### Feature request
It would be great if `boolean` was authorized as input to a `Tool`
### Motivation
I am willing to use my own tools with transformers CodeAgent ; using the method `tool`
I have a proper function `func` with typing and doc-strings as required. One of the input of the fu... | cc @aymeric-roucher
Please assign this to me | 1,729,135,245,000 | null | Feature Request | [
"src/transformers/agents/tools.py:Tool.validate_arguments"
] | [] | 1 | 651 | |
django/django | django__django-18654 | c334c1a8ff4579cdb1dd77cce8da747070ac9fc4 | null | diff --git a/django/urls/base.py b/django/urls/base.py
index 753779c75b46..bb40ba222436 100644
--- a/django/urls/base.py
+++ b/django/urls/base.py
@@ -127,8 +127,9 @@ def clear_script_prefix():
def set_urlconf(urlconf_name):
"""
- Set the URLconf for the current thread (overriding the default one in
- set... | Clarify django.urls.set_urlconf scoping behaviour
Description
django.urls.set_urlconf docstring mentions setting the urlconf for the current thread. However, this is backed by asgiref.local.Local, which is supposed to provide scoping features related to asyncio tasks as well. This becomes relevant, for example, whe... | ["I'm struggling to follow what this is asking for - can you share an example of the behavior you're seeing? From what I can see, both async and sync requests handle the urlconf the same - it is the ROOT_URLCONF unless set by a middleware \u200bas documented.", 1728267595.0]
["Firstly, just for Django, set_urlconf is n... | 1,728,288,525,000 | null | Feature Request | [
"django/urls/base.py:set_urlconf",
"django/urls/base.py:get_urlconf"
] | [] | 2 | 654 | |
huggingface/diffusers | huggingface__diffusers-9815 | 13e8fdecda91e27e40b15fa8a8f456ade773e6eb | null | diff --git a/src/diffusers/training_utils.py b/src/diffusers/training_utils.py
index d2bf3fe07185..2474ed5c2114 100644
--- a/src/diffusers/training_utils.py
+++ b/src/diffusers/training_utils.py
@@ -43,6 +43,9 @@ def set_seed(seed: int):
Args:
seed (`int`): The seed to set.
+
+ Returns:
+ `Non... | [community] Improving docstrings and type hints
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int... | Hi @a-r-r-o-w I'd like to take this up, please let me know if there are any other prerequisites I should be aware of before submitting a PR against this issue π
Not prerequisites I can think of off the top of my head. Just that the PRs should be limited in scope as mentioned. You can maybe look at the Diffusers contri... | 1,730,319,460,000 | null | Feature Request | [
"src/diffusers/training_utils.py:set_seed",
"src/diffusers/training_utils.py:compute_snr"
] | [] | 2 | 655 | |
huggingface/diffusers | huggingface__diffusers-9606 | 92d2baf643b6198c2df08d9e908637ea235d84d1 | null | diff --git a/src/diffusers/training_utils.py b/src/diffusers/training_utils.py
index 57bd9074870c..11a4e1cc8069 100644
--- a/src/diffusers/training_utils.py
+++ b/src/diffusers/training_utils.py
@@ -36,8 +36,9 @@
def set_seed(seed: int):
"""
- Args:
Helper function for reproducible behavior to set the s... | [community] Improving docstrings and type hints
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int... | Hi @a-r-r-o-w I'd like to take this up, please let me know if there are any other prerequisites I should be aware of before submitting a PR against this issue π
Not prerequisites I can think of off the top of my head. Just that the PRs should be limited in scope as mentioned. You can maybe look at the Diffusers contri... | 1,728,390,456,000 | null | Feature Request | [
"src/diffusers/training_utils.py:set_seed",
"src/diffusers/training_utils.py:cast_training_params",
"src/diffusers/training_utils.py:compute_density_for_timestep_sampling",
"src/diffusers/training_utils.py:compute_loss_weighting_for_sd3",
"src/diffusers/training_utils.py:free_memory",
"src/diffusers/train... | [] | 9 | 656 | |
huggingface/diffusers | huggingface__diffusers-9583 | 99f608218caa069a2f16dcf9efab46959b15aec0 | null | diff --git a/src/diffusers/utils/import_utils.py b/src/diffusers/utils/import_utils.py
index 34cc5fcc8605..daecec4aa258 100644
--- a/src/diffusers/utils/import_utils.py
+++ b/src/diffusers/utils/import_utils.py
@@ -668,8 +668,9 @@ def __getattr__(cls, key):
# This function was copied from: https://github.com/huggingfa... | [community] Improving docstrings and type hints
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int... | Hi @a-r-r-o-w I'd like to take this up, please let me know if there are any other prerequisites I should be aware of before submitting a PR against this issue π
Not prerequisites I can think of off the top of my head. Just that the PRs should be limited in scope as mentioned. You can maybe look at the Diffusers contri... | 1,728,062,817,000 | null | Feature Request | [
"src/diffusers/utils/import_utils.py:compare_versions",
"src/diffusers/utils/import_utils.py:is_torch_version",
"src/diffusers/utils/import_utils.py:is_transformers_version",
"src/diffusers/utils/import_utils.py:is_accelerate_version",
"src/diffusers/utils/import_utils.py:is_peft_version",
"src/diffusers/... | [] | 7 | 657 | |
huggingface/diffusers | huggingface__diffusers-9579 | 0763a7edf4e9f2992f5ec8fb0c9dca8ab3e29f07 | null | diff --git a/src/diffusers/models/embeddings.py b/src/diffusers/models/embeddings.py
index 80775d477c0d..91451fa9aac2 100644
--- a/src/diffusers/models/embeddings.py
+++ b/src/diffusers/models/embeddings.py
@@ -86,12 +86,25 @@ def get_3d_sincos_pos_embed(
temporal_interpolation_scale: float = 1.0,
) -> np.ndarray... | [community] Improving docstrings and type hints
There are many instances in the codebase where our docstring/typing convention is not followed. We'd like to work on improving this with your help!
Our convention looks like:
```python3
def function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int... | Hi @a-r-r-o-w I'd like to take this up, please let me know if there are any other prerequisites I should be aware of before submitting a PR against this issue π
Not prerequisites I can think of off the top of my head. Just that the PRs should be limited in scope as mentioned. You can maybe look at the Diffusers contri... | 1,728,042,903,000 | null | Feature Request | [
"src/diffusers/models/embeddings.py:get_3d_sincos_pos_embed",
"src/diffusers/models/embeddings.py:get_2d_sincos_pos_embed",
"src/diffusers/models/embeddings.py:get_2d_sincos_pos_embed_from_grid",
"src/diffusers/models/embeddings.py:get_1d_sincos_pos_embed_from_grid",
"src/diffusers/models/embeddings.py:get_... | [] | 6 | 658 | |
sktime/sktime | sktime__sktime-7221 | 0f75b7ad0dce8b722c81fe49bb9624de20cc4923 | null | diff --git a/sktime/datatypes/_adapter/polars.py b/sktime/datatypes/_adapter/polars.py
index e1fdd5f3ab7..e8138e4faa9 100644
--- a/sktime/datatypes/_adapter/polars.py
+++ b/sktime/datatypes/_adapter/polars.py
@@ -226,22 +226,31 @@ def check_polars_frame(
# columns in polars are unique, no check required
+ i... | [ENH] `polars` schema checks - address performance warnings
The current schema checks for lazy `polars` based data types raise performance warnings, e.g.,
```
sktime/datatypes/tests/test_check.py::test_check_metadata_inference[Table-polars_lazy_table-fixture:1]
/home/runner/work/sktime/sktime/sktime/datatypes/_a... | 1,727,991,899,000 | null | Performance Issue | [
"sktime/datatypes/_adapter/polars.py:check_polars_frame"
] | [] | 1 | 659 |
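Each record above carries a `category` label and an `edit_functions` list, with `edit_functions_length` mirroring that list's length. Below is a minimal sketch of consuming rows of this shape in plain Python. The two sample records are abridged copies of rows 616 and 659 above (the long text fields such as `patch`, `test_patch`, `problem_statement`, and `hints_text` are omitted), and the helper name `summarize_by_category` is a hypothetical name of ours, not part of any dataset tooling:

```python
from collections import Counter

# Abridged copies of two records from the rows above; full rows also carry
# patch/test_patch/problem_statement/hints_text text fields.
rows = [
    {
        "repo": "huggingface/diffusers",
        "instance_id": "huggingface__diffusers-10115",
        "category": "Bug Report",
        "edit_functions": ["src/diffusers/models/embeddings.py:FluxPosEmbed.forward"],
    },
    {
        "repo": "sktime/sktime",
        "instance_id": "sktime__sktime-7221",
        "category": "Performance Issue",
        "edit_functions": ["sktime/datatypes/_adapter/polars.py:check_polars_frame"],
    },
]

def summarize_by_category(records):
    """Tally records per `category` and re-derive each row's
    edit_functions_length as len(edit_functions)."""
    counts = Counter(r["category"] for r in records)
    lengths = {r["instance_id"]: len(r["edit_functions"]) for r in records}
    return counts, lengths

counts, lengths = summarize_by_category(rows)
print(counts["Bug Report"])                     # 1
print(lengths["huggingface__diffusers-10115"])  # 1
```

When the full dataset is loaded (for instance via the Hugging Face `datasets` library), each row is a dictionary with these same keys plus the long text fields, so the same tally works unchanged.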