Dataset schema (column name — type, with the value ranges reported by the dataset viewer):

- `repo` — string (21 distinct values)
- `pull_number` — float64 (45 to 194k)
- `instance_id` — string (length 16 to 34)
- `issue_numbers` — string (length 6 to 27)
- `base_commit` — string (length 40)
- `patch` — string (length 263 to 270k)
- `test_patch` — string (length 312 to 408k)
- `problem_statement` — string (length 38 to 47.6k)
- `hints_text` — string (length 1 to 257k, nullable)
- `created_at` — date (2016-01-11 17:37:29 to 2024-10-18 14:52:41)
- `language` — string (4 distinct values)
- `Dockerfile` — string (279 distinct values)
- `P2P` — string (length 2 to 10.2M)
- `F2P` — string (length 11 to 38.9k)
- `F2F` — string (86 distinct values)
- `test_command` — string (length 27 to 11.4k)
- `task_category` — string (5 distinct values)
- `is_no_nodes`, `is_func_only`, `is_class_only`, `is_mixed`, `is_single_func`, `is_single_class` — bool
- `num_func_changes` — int64 (0 to 238)
- `num_class_changes` — int64 (0 to 70)
- `num_nodes` — int64 (0 to 264)
- `modified_nodes` — string (length 2 to 42.2k)

Each record below is one flattened row of the dataset (cells separated by `|`, long cells truncated with `...`).
huggingface/transformers | 27,797 | huggingface__transformers-27797 | ['27676'] | c99f25476312521d4425335f970b198da42f832d | diff --git a/src/transformers/generation/logits_process.py b/src/transformers/generation/logits_process.py
--- a/src/transformers/generation/logits_process.py
+++ b/src/transformers/generation/logits_process.py
@@ -1071,7 +1071,14 @@ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> to
... | diff --git a/tests/generation/test_logits_process.py b/tests/generation/test_logits_process.py
--- a/tests/generation/test_logits_process.py
+++ b/tests/generation/test_logits_process.py
@@ -610,6 +610,13 @@ def prefix_allowed_tokens_fn(batch_id, inputs_ids):
torch.isinf(filtered_scores).tolist(), [[False,... | RuntimeError with prefix_allowed_tokens_fn and do_sample=True When Allowed Tokens List is Empty
### System Info
- `transformers` version: 4.36.0.dev0
- Platform: macOS-13.4.1-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.19.3
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- ... | Hey! Thanks for reporting, this would require use to check that the output of `self._prefix_allowed_tokens_fn(batch_id, sent)` on each token is not `[]` before applying the mask. It does make sense because we don't specify that the list cannot be empty.
Would you like to open a PR for a fix? (meaning something like:
... | 2023-12-01 21:21:16+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_logits_process.py:LogitsProcessorTest:test_remove_nan_inf_logits_processor', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_hamming_diversity', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_normalization', 'tests/generation/test_logits_process.py:LogitsProc... | ['tests/generation/test_logits_process.py:LogitsProcessorTest:test_prefix_constrained_logits_processor'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/generation/test_logits_process.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/generation/logits_process.py->module->class_definition:PrefixConstrainedLogitsProcessor->function_definition:__call__"] |
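The record above tracks the fix for `PrefixConstrainedLogitsProcessor` crashing when `prefix_allowed_tokens_fn` returns an empty list: every score becomes `-inf`, and sampling then fails. A list-based sketch of the kind of guard the maintainers describe — the function name and the fall-through behaviour are illustrative, not the actual transformers code:

```python
import math

def apply_prefix_mask(scores, allowed_token_ids):
    # Keep only the allowed vocabulary positions; everything else gets -inf.
    # Guard: an empty allowed list would turn the whole row into -inf and
    # make multinomial sampling crash, so leave the scores unchanged instead.
    if not allowed_token_ids:
        return list(scores)
    allowed = set(allowed_token_ids)
    return [s if i in allowed else -math.inf for i, s in enumerate(scores)]

print(apply_prefix_mask([0.1, 0.2, 0.3, 0.4], [1, 3]))  # [-inf, 0.2, -inf, 0.4]
print(apply_prefix_mask([0.1, 0.2], []))                # [0.1, 0.2]
```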
huggingface/transformers | 28,010 | huggingface__transformers-28010 | ['28622'] | f7ef7cec6c6c162087421f36a17eabdbb223579d | diff --git a/src/transformers/convert_slow_tokenizer.py b/src/transformers/convert_slow_tokenizer.py
--- a/src/transformers/convert_slow_tokenizer.py
+++ b/src/transformers/convert_slow_tokenizer.py
@@ -585,6 +585,9 @@ def converted(self) -> Tokenizer:
replacement = "▁"
add_prefix_space = True
+ ... | diff --git a/tests/models/llama/test_tokenization_llama.py b/tests/models/llama/test_tokenization_llama.py
--- a/tests/models/llama/test_tokenization_llama.py
+++ b/tests/models/llama/test_tokenization_llama.py
@@ -306,6 +306,34 @@ def test_pickle_subword_regularization_tokenizer(self):
def test_subword_regulariza... | Can `LlamaTokenizerFast` support the argument `add_prefix_space = False`
### System Info
With `transformers==4.36.2`
It seems the argument `add_prefix_space` is invalid here.
### Who can help?
@ArthurZucker
### Reproduction
```
>>> from transformers import LlamaTokenizerFast
>>> tokenizer = LlamaT... | null | 2023-12-13 16:59:44+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/seamless_m4t/test_tokenization_seamless_m4t.py:SeamlessM4TTokenizationTest:test_truncation_side_in_kwargs', 'tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_special_tokens_initialization_with_non_empty_additional_special_tokens', 'tests/models/llama/test_tokenization_llama.py:LlamaTokeniz... | ['tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_add_prefix_space', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_add_prefix_space'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/llama/test_tokenization_llama.py /testbed/tests/models/seamless_m4t/test_tokenization_seamless_m4t.py /testbed/tests/models/t5/test_tokenization_t5.py | Bug Fix | false | false | false | true | 10 | 11 | 21 | false | false | ["src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py->module->class_definition:SeamlessM4TTokenizer", "src/transformers/convert_slow_tokenizer.py->module->class_definition:LlamaConverter->function_definition:normalizer", "src/transformers/models/t5/tokenization_t5.py->module->class_definition:T5Tokenizer... |
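The record above adds `add_prefix_space=False` support to `LlamaTokenizerFast` and related tokenizers. SentencePiece-style tokenizers mark word boundaries with "▁", and the flag controls whether one is also prepended to the input. A toy illustration of that switch — not the real tokenizer internals:

```python
def tokenize_with_marker(text, add_prefix_space=True):
    # Split on spaces and mark each word boundary with "▁".
    # With add_prefix_space=True, "Hey" encodes as if it were " Hey";
    # with False, the first word carries no boundary marker.
    pieces = text.split(" ")
    tokens = ["▁" + p for p in pieces]
    if not add_prefix_space:
        tokens[0] = pieces[0]
    return tokens

print(tokenize_with_marker("Hey there"))                          # ['▁Hey', '▁there']
print(tokenize_with_marker("Hey there", add_prefix_space=False))  # ['Hey', '▁there']
```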
huggingface/transformers | 28,071 | huggingface__transformers-28071 | ['26598'] | 43ee58588be4dc754c9f0dea874437fe7201bf00 | diff --git a/src/transformers/models/speecht5/modeling_speecht5.py b/src/transformers/models/speecht5/modeling_speecht5.py
--- a/src/transformers/models/speecht5/modeling_speecht5.py
+++ b/src/transformers/models/speecht5/modeling_speecht5.py
@@ -64,13 +64,17 @@ def shift_tokens_right(input_ids: torch.Tensor, pad_token... | diff --git a/tests/models/speecht5/test_modeling_speecht5.py b/tests/models/speecht5/test_modeling_speecht5.py
--- a/tests/models/speecht5/test_modeling_speecht5.py
+++ b/tests/models/speecht5/test_modeling_speecht5.py
@@ -909,6 +909,23 @@ def test_model_forward(self):
config_and_inputs = self.model_tester.pre... | [SpeechT5] Attention mask not changed according to decoder inputs
### System Info
- `transformers` version: 4.33.3
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate conf... | cc @ylacombe could you take a look when you get the chance? You know SpeechT5 pretty well by now!
Hey, thanks for opening this issue!
I will take a look in the next few days, in the meantime, do you have a script to reproduce the mismatch @Joao-Maria-Janeiro ?
Hey @Joao-Maria-Janeiro , any update on a reproducing scri... | 2023-12-15 13:45:49+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ForSpeechToTextTest:test_training', 'tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ModelTest:test_tied_weights_keys', 'tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ModelTest:test_inputs_embeds', 'tests/models/speecht5/test_modeling_speecht5.... | ['tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ForSpeechToSpeechTest:test_model_forward_with_labels', 'tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ForTextToSpeechTest:test_model_forward_with_labels'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/speecht5/test_modeling_speecht5.py | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/models/speecht5/modeling_speecht5.py->module->class_definition:SpeechT5ForTextToSpeech->function_definition:forward", "src/transformers/models/speecht5/modeling_speecht5.py->module->class_definition:SpeechT5ForSpeechToSpeech->function_definition:forward", "src/transformers/models/speecht5/modeling_sp... |
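The record above fixes SpeechT5 not shifting the decoder attention mask when decoder inputs are shifted right. A list-based sketch of the aligned shift — the real code operates on tensors (and spectrograms for the speech heads), so this is illustrative only:

```python
def shift_right(input_ids, attention_mask, decoder_start_token_id):
    # Shift decoder inputs one step to the right, inserting the start token,
    # and shift the attention mask along with them so padding positions stay
    # aligned with the shifted sequence (the start token is always attended).
    shifted_ids = [[decoder_start_token_id] + row[:-1] for row in input_ids]
    shifted_mask = [[1] + row[:-1] for row in attention_mask]
    return shifted_ids, shifted_mask

ids, mask = shift_right([[5, 6, 7, 0]], [[1, 1, 1, 0]], decoder_start_token_id=2)
print(ids)   # [[2, 5, 6, 7]]
print(mask)  # [[1, 1, 1, 1]]
```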
huggingface/transformers | 28,115 | huggingface__transformers-28115 | ['28021'] | 71d47f0ad498b7649f11d3a9cca3cd3585e4341f | diff --git a/src/transformers/models/mixtral/configuration_mixtral.py b/src/transformers/models/mixtral/configuration_mixtral.py
--- a/src/transformers/models/mixtral/configuration_mixtral.py
+++ b/src/transformers/models/mixtral/configuration_mixtral.py
@@ -79,7 +79,7 @@ class MixtralConfig(PretrainedConfig):
... | diff --git a/tests/models/mixtral/test_modeling_mixtral.py b/tests/models/mixtral/test_modeling_mixtral.py
--- a/tests/models/mixtral/test_modeling_mixtral.py
+++ b/tests/models/mixtral/test_modeling_mixtral.py
@@ -469,6 +469,7 @@ def test_load_balancing_loss(self):
config, input_dict = self.model_tester.pre... | Incorrect router probability calculation
### System Info
transformers version 4.36.0
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)... | Sorry could you either show the issue or detail where you had a problem? The computation is different because the output shape are also different, the routing mecanism is also different. 🤗
Sure! @ArthurZucker
Here's the current loss function for convenience
```
def load_balancing_loss_func(gate_logits: torch.Te... | 2023-12-18 15:38:54+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_beam_search_generate_dict_output', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_generate_with_head_masking', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_greedy_generate_dict_outputs_use_cache', 'tests/... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_load_balancing_loss'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/mixtral/test_modeling_mixtral.py | Bug Fix | false | false | false | true | 1 | 2 | 3 | false | false | ["src/transformers/models/mixtral/configuration_mixtral.py->module->class_definition:MixtralConfig", "src/transformers/models/mixtral/configuration_mixtral.py->module->class_definition:MixtralConfig->function_definition:__init__", "src/transformers/models/mixtral/modeling_mixtral.py->module->function_definition:load_ba... |
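The record above concerns the Mixtral router's auxiliary (load-balancing) loss. The Switch-Transformer-style quantity under discussion is `num_experts * sum_e(f_e * P_e)`, where `f_e` is the fraction of token-routings sent to expert `e` and `P_e` is the mean router probability for `e`. A pure-Python sketch under that formula — not the transformers implementation, whose exact top-k bookkeeping is what these PRs correct:

```python
import math

def load_balancing_loss(router_logits, num_experts, top_k=2):
    # router_logits: [tokens][experts] raw scores from the gating network.
    # Softmax each row into routing probabilities.
    probs = []
    for row in router_logits:
        z = [math.exp(x) for x in row]
        s = sum(z)
        probs.append([p / s for p in z])
    # Top-k routing: count how often each expert is selected.
    counts = [0] * num_experts
    for row in probs:
        for e in sorted(range(num_experts), key=lambda i: -row[i])[:top_k]:
            counts[e] += 1
    tokens = len(router_logits)
    frac = [c / (tokens * top_k) for c in counts]                    # f_e
    mean_prob = [sum(row[e] for row in probs) / tokens               # P_e
                 for e in range(num_experts)]
    return num_experts * sum(f * p for f, p in zip(frac, mean_prob))

# A perfectly balanced router yields the minimum value of 1.0.
print(load_balancing_loss([[1.0, 1.0], [1.0, 1.0]], num_experts=2))  # 1.0
```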
huggingface/transformers | 28,256 | huggingface__transformers-28256 | ['28255'] | 932ad8af7a333875a36a9a2007d2601510b1f601 | diff --git a/src/transformers/models/mixtral/modeling_mixtral.py b/src/transformers/models/mixtral/modeling_mixtral.py
--- a/src/transformers/models/mixtral/modeling_mixtral.py
+++ b/src/transformers/models/mixtral/modeling_mixtral.py
@@ -103,11 +103,7 @@ def load_balancing_loss_func(gate_logits: torch.Tensor, num_expe... | diff --git a/tests/models/mixtral/test_modeling_mixtral.py b/tests/models/mixtral/test_modeling_mixtral.py
--- a/tests/models/mixtral/test_modeling_mixtral.py
+++ b/tests/models/mixtral/test_modeling_mixtral.py
@@ -474,7 +474,7 @@ def test_load_balancing_loss(self):
model.eval()
result = model(input_i... | Incorrect implementation of auxiliary loss
### System Info
- `transformers` version: 4.37.0.dev0
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version ... | null | 2023-12-27 07:48:22+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_beam_search_generate_dict_output', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_generate_with_head_masking', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_greedy_generate_dict_outputs_use_cache', 'tests/... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_load_balancing_loss'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/mixtral/test_modeling_mixtral.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/mixtral/modeling_mixtral.py->module->function_definition:load_balancing_loss_func"] |
huggingface/transformers | 28,398 | huggingface__transformers-28398 | ['23116'] | fff8ca8e597532f141bc3f522f47573320a06730 | diff --git a/src/transformers/models/oneformer/image_processing_oneformer.py b/src/transformers/models/oneformer/image_processing_oneformer.py
--- a/src/transformers/models/oneformer/image_processing_oneformer.py
+++ b/src/transformers/models/oneformer/image_processing_oneformer.py
@@ -15,11 +15,13 @@
"""Image process... | diff --git a/tests/models/oneformer/test_image_processing_oneformer.py b/tests/models/oneformer/test_image_processing_oneformer.py
--- a/tests/models/oneformer/test_image_processing_oneformer.py
+++ b/tests/models/oneformer/test_image_processing_oneformer.py
@@ -15,10 +15,11 @@
import json
+import os
+import tempf... | OneFormerImageProcessor does not support passing local config file, always tries to download from repo
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch ... | @rbavery Thanks for raising this issue.
I'm able to load a processor locally on the development branch without issue:
```python
from transformers import OneFormerProcessor
processor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_ade20k_swin_tiny')
processor.save_pretrained('foo')
new_processor =... | 2024-01-08 16:33:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_init_without_params', 'tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_image_processor_to_json_file', 'tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcess... | ['tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_can_load_with_local_metadata'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/oneformer/test_image_processing_oneformer.py | Bug Fix | false | false | false | true | 2 | 2 | 4 | false | false | ["src/transformers/models/oneformer/image_processing_oneformer.py->module->class_definition:OneFormerImageProcessor->function_definition:__init__", "src/transformers/models/oneformer/image_processing_oneformer.py->module->class_definition:OneFormerImageProcessor", "src/transformers/models/oneformer/image_processing_one... |
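The record above makes `OneFormerImageProcessor` accept local metadata instead of always downloading from the Hub. The gist of the fix is an "is this a local file?" check before any network call; a self-contained sketch with an illustrative file name and a stubbed-out download path:

```python
import json
import os
import tempfile

def load_class_info(repo_or_path, filename="class_info.json"):
    # Prefer a metadata file sitting next to a local checkpoint; only fall
    # back to a Hub download (stubbed here) when no local file exists.
    # Function and file names are illustrative, not the OneFormer API.
    local = os.path.join(repo_or_path, filename)
    if os.path.isfile(local):
        with open(local) as f:
            return json.load(f)
    raise RuntimeError(f"would download {filename} from Hub repo {repo_or_path}")

# Demo: a local directory containing the metadata file needs no network.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "class_info.json"), "w") as f:
        json.dump({"0": {"isthing": 0, "name": "wall"}}, f)
    info = load_class_info(d)

print(info["0"]["name"])  # wall
```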
huggingface/transformers | 28,517 | huggingface__transformers-28517 | ['28505'] | edb170238febf7fc3e3278ed5b9ca0b2c40c70e3 | diff --git a/src/transformers/models/mixtral/modeling_mixtral.py b/src/transformers/models/mixtral/modeling_mixtral.py
--- a/src/transformers/models/mixtral/modeling_mixtral.py
+++ b/src/transformers/models/mixtral/modeling_mixtral.py
@@ -74,7 +74,9 @@
_CONFIG_FOR_DOC = "MixtralConfig"
-def load_balancing_loss_fun... | diff --git a/tests/models/mixtral/test_modeling_mixtral.py b/tests/models/mixtral/test_modeling_mixtral.py
--- a/tests/models/mixtral/test_modeling_mixtral.py
+++ b/tests/models/mixtral/test_modeling_mixtral.py
@@ -462,7 +462,6 @@ def test_load_balancing_loss(self):
r"""
Let's make sure we can actuall... | Exclude the load balancing loss of padding tokens in Mixtral-8x7B
### Feature request
The auxiliary loss in Mixtral-MoE shouldn't **include the loss from padding tokens**.
### Motivation
I think it is better to change the function
[load_balancing_loss_func](https://github.com/huggingface/transformers/blob/main/sr... | cc @ArthurZucker
feel free to open a PR for this! Otherwise will mark it as a good second issue 🤗
I would like to work on this issue, i will go through the linked file today and ask any questions i have.
I was looking at the code.
Below is what the model outputs
`return MoeModelOutputWithPast(
last_hi... | 2024-01-16 02:39:12+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_beam_search_generate_dict_output', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_generate_with_head_masking', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_greedy_generate_dict_outputs_use_cache', 'tests/... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_load_balancing_loss'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/mixtral/test_modeling_mixtral.py | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/models/mixtral/modeling_mixtral.py->module->function_definition:load_balancing_loss_func", "src/transformers/models/mixtral/modeling_mixtral.py->module->class_definition:MixtralForCausalLM->function_definition:forward"] |
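The record above is the follow-up that excludes padding tokens from Mixtral's load-balancing statistics. The core of the change is replacing plain means over all positions with attention-mask-weighted means; a minimal sketch with illustrative names:

```python
def masked_mean(values, mask):
    # Average only over real (mask == 1) positions, so padding tokens do
    # not dilute the per-expert routing statistics.
    return sum(v * m for v, m in zip(values, mask)) / sum(mask)

expert_probs   = [0.9, 0.1, 0.0, 0.0]  # last two positions are padding
attention_mask = [1,   1,   0,   0]
print(masked_mean(expert_probs, attention_mask))   # 0.5
print(sum(expert_probs) / len(expert_probs))       # 0.25 — padding skews the naive mean
```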
huggingface/transformers | 28,522 | huggingface__transformers-28522 | ['26547'] | 0cdcd7a2b319689d75ae4807cfb7b228aa322f83 | diff --git a/src/transformers/models/barthez/tokenization_barthez.py b/src/transformers/models/barthez/tokenization_barthez.py
--- a/src/transformers/models/barthez/tokenization_barthez.py
+++ b/src/transformers/models/barthez/tokenization_barthez.py
@@ -251,6 +251,7 @@ def _convert_id_to_token(self, index):
"... | diff --git a/tests/models/speecht5/test_tokenization_speecht5.py b/tests/models/speecht5/test_tokenization_speecht5.py
--- a/tests/models/speecht5/test_tokenization_speecht5.py
+++ b/tests/models/speecht5/test_tokenization_speecht5.py
@@ -202,3 +202,17 @@ def test_tokenizer_integration(self):
revision="c5e... | [SpeechT5] Decode function strips space after special token
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.8.1
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorc... | Thanks for reporting! This is happening because:
```python
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
current_sub_tokens = []
out_string = ""
for token in tokens:
# make sure that special tokens are... | 2024-01-16 09:16:28+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/speecht5/test_tokenization_speecht5.py:SpeechT5TokenizerTest:test_batch_encode_plus_padding', 'tests/models/speecht5/test_tokenization_speecht5.py:SpeechT5TokenizerTest:test_full_tokenizer', 'tests/models/speecht5/test_tokenization_speecht5.py:SpeechT5TokenizerTest:test_call', 'tests/models/speecht5/test... | ['tests/models/speecht5/test_tokenization_speecht5.py:SpeechT5TokenizerTest:test_encode_decode'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/speecht5/test_tokenization_speecht5.py | Bug Fix | false | false | false | true | 1 | 5 | 6 | false | false | ["src/transformers/models/barthez/tokenization_barthez.py->module->class_definition:BarthezTokenizer", "src/transformers/models/speecht5/tokenization_speecht5.py->module->class_definition:SpeechT5Tokenizer->function_definition:convert_tokens_to_string", "src/transformers/models/speecht5/tokenization_speecht5.py->module... |
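The record above fixes the SpeechT5 tokenizer's decode path stripping the space after a special token. A simplified sketch of a `convert_tokens_to_string` that emits special tokens verbatim while still honoring the "▁" word boundary of the following piece — not the real SpeechT5 method:

```python
def convert_tokens_to_string(tokens, special_tokens=("<ctc_blank>",)):
    # Replace the "▁" word-boundary marker with a space for normal pieces;
    # pass special tokens through verbatim without swallowing the boundary
    # of the word that follows them.
    out = ""
    for token in tokens:
        if token in special_tokens:
            out += token
        else:
            out += token.replace("▁", " ")
    return out.strip()

print(convert_tokens_to_string(["▁this", "▁is", "<ctc_blank>", "▁a", "▁test"]))
# this is<ctc_blank> a test
```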
huggingface/transformers | 28,535 | huggingface__transformers-28535 | ['28387'] | 07ae53e6e77ec6ff4fb25fbacfec4b11cfc82749 | diff --git a/src/transformers/models/esm/tokenization_esm.py b/src/transformers/models/esm/tokenization_esm.py
--- a/src/transformers/models/esm/tokenization_esm.py
+++ b/src/transformers/models/esm/tokenization_esm.py
@@ -14,10 +14,9 @@
# limitations under the License.
"""Tokenization classes for ESM."""
import os
... | diff --git a/tests/models/esm/test_tokenization_esm.py b/tests/models/esm/test_tokenization_esm.py
--- a/tests/models/esm/test_tokenization_esm.py
+++ b/tests/models/esm/test_tokenization_esm.py
@@ -87,3 +87,25 @@ def test_tokenize_special_tokens(self):
self.assertEqual(len(token_2), 1)
... | Issue with Adding New Tokens to ESM2 Model Tokenizer
Hello
I am encountering an issue while working with the ESM2 models (`facebook/esm2_t6_8M_UR50D`). Specifically, when I try to add new tokens to the tokenizer, they are automatically classified as special tokens, even though I am specifying `special_tokens=False`.... | Seems like a bug with ESMTokenizer, (which doesn't use this library).
@ArthurZucker for insights or the more relevant people ?
Hey, I cannot reproduce this:
```python
In [23]: model_checkpoint = "facebook/esm2_t6_8M_UR50D"
...: tokenizer_2 = AutoTokenizer.from_pretrained(model_checkpoint)
huggingface/token... | 2024-01-16 15:06:24+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenize_special_tokens', 'tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenizer_call_pad', 'tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenizer_call_no_pad', 'tests/models/esm/test_tokenization_esm.py:E... | ['tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_add_tokens'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/esm/test_tokenization_esm.py | Bug Fix | false | false | false | true | 4 | 1 | 5 | false | false | ["src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer", "src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer->function_definition:get_vocab", "src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer->function_definition... |
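The record above fixes the ESM tokenizer treating every added token as special even with `special_tokens=False`. The intended contract — regular added tokens enlarge the vocabulary but are not removed by `skip_special_tokens=True` — can be sketched with a toy class (illustrative only):

```python
class ToyTokenizer:
    # Minimal model of add_tokens(..., special_tokens=...): only tokens
    # added with special_tokens=True join the special set that decoding
    # may skip. Not the ESM implementation.
    def __init__(self, vocab):
        self.vocab = list(vocab)
        self.special = {"<pad>", "<eos>"}

    def add_tokens(self, tokens, special_tokens=False):
        for t in tokens:
            if t not in self.vocab:
                self.vocab.append(t)
                if special_tokens:
                    self.special.add(t)

    def decode(self, ids, skip_special_tokens=False):
        toks = [self.vocab[i] for i in ids]
        if skip_special_tokens:
            toks = [t for t in toks if t not in self.special]
        return " ".join(toks)

tok = ToyTokenizer(["<pad>", "<eos>", "A", "C"])
tok.add_tokens(["X"], special_tokens=False)
print(tok.decode([2, 4], skip_special_tokens=True))  # A X — the new token survives
```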
huggingface/transformers | 28,563 | huggingface__transformers-28563 | ['28002'] | 2c1eebc1216549d8195d7d1c6adb8b99afee3ec5 | diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py
--- a/src/transformers/models/whisper/modeling_whisper.py
+++ b/src/transformers/models/whisper/modeling_whisper.py
@@ -57,6 +57,8 @@
logger = logging.get_logger(__name__)
+_HIDDEN_STATES_START_PO... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -2292,16 +2292,15 @@ def get_subsampled_output_lengths(self, input_lengths):
def encoder_seq_length(s... | Not handled case when use_weighted_layer_sum and return-dict=True in WhisperForAudioClassification
@sanchit-gandhi
I use WhisperForAudioClassification task and want to use `use_weighted_layer_sum=True`, but there is a problem when call forward,
the encoder part can return tuple or dict if `return_dict=True` but the... | Hi @ElsebaiyMohamed, thanks for raising this issue and providing details on the error + a snippet. Could you also provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output?
Hi @amyeroberts ,
Apologies for the delayed response! 🙏 Life threw a curveball, bu... | 2024-01-17 17:22:35+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model... | ['tests/models/whisper/test_modeling_whisper.py:WhisperEncoderModelTest:test_forward_pass_weighted_layer_sum'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/whisper/test_modeling_whisper.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/whisper/modeling_whisper.py->module->class_definition:WhisperForAudioClassification->function_definition:forward"] |
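The record above fixes `WhisperForAudioClassification` with `use_weighted_layer_sum=True` and `return_dict=True`. The feature itself mixes all encoder layer outputs with softmax-normalized learned weights; a list-based sketch of that computation (not the transformers code, which works on stacked tensors):

```python
import math

def weighted_layer_sum(hidden_states, layer_weights):
    # hidden_states: [layers][positions]; layer_weights: one logit per layer.
    # Softmax the weights, then mix the layers position by position.
    exps = [math.exp(w) for w in layer_weights]
    norm = [e / sum(exps) for e in exps]
    positions = len(hidden_states[0])
    return [sum(norm[l] * hidden_states[l][p] for l in range(len(hidden_states)))
            for p in range(positions)]

mixed = weighted_layer_sum([[1.0, 2.0], [3.0, 4.0]], [0.0, 0.0])
print(mixed)  # [2.0, 3.0] — equal weights average the two layers
```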
huggingface/transformers | 28,940 | huggingface__transformers-28940 | ['28817'] | dd1c9052159ae824c8acef7c2552f9fad5ca020a | diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -861,7 +861,7 @@ def __init__(
raise ValueError(f"{device} unrecognized or not available.")
else:
self.device =... | diff --git a/tests/pipelines/test_pipelines_common.py b/tests/pipelines/test_pipelines_common.py
--- a/tests/pipelines/test_pipelines_common.py
+++ b/tests/pipelines/test_pipelines_common.py
@@ -199,6 +199,29 @@ def test_unbatch_attentions_hidden_states(self):
outputs = text_classifier(["This is great !"] * 20... | Populate torch_dtype from a model to a pipeline
### Feature request
When constructing a pipeline object from a model and a tokenizer, the pipeline doesn't inherit the `torch_dtype` field from the underlying model.
```
model = AutoModelForCausalLM.from_pretrained("t5-small", torch_dtype = torch.bfloat16)
pipeline ... | cc @Rocketknight1 WDYT? Sounds good to me
This sounds like a safe assumption to me too, though obviously I'd like to confirm that with some tests! I'm in favour of the PR if you're happy to open it @B-Step62
@ArthurZucker @Rocketknight1 Great! I will open a PR soon, in the meantime could you assign the issue to me?
... | 2024-02-09 12:05:13+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_unbatch_attentions_hidden_states', 'tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_check_task', 'tests/pipelines/test_pipelines_common.py:PipelinePadTest:test_pipeline_padding', 'tests/pipelines/test_pipelines_common.py:CommonPipelineT... | ['tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_torch_dtype_property'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/pipelines/test_pipelines_common.py | Feature | false | false | false | true | 1 | 2 | 3 | false | false | ["src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:__init__", "src/transformers/pipelines/base.py->module->class_definition:Pipeline", "src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:torch_dtype"] |
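The record above makes a pipeline report the `torch_dtype` of its underlying model instead of only a separately stored value. The natural shape for that is a read-through property; a toy sketch with hypothetical class names, not the transformers API:

```python
class ToyModel:
    def __init__(self, dtype):
        self.dtype = dtype

class ToyPipeline:
    # The pipeline exposes the model's dtype via a property, so a model
    # loaded in bfloat16 is reported as bfloat16 with no extra bookkeeping.
    def __init__(self, model):
        self.model = model

    @property
    def torch_dtype(self):
        return getattr(self.model, "dtype", None)

pipe = ToyPipeline(ToyModel("bfloat16"))
print(pipe.torch_dtype)  # bfloat16
```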
huggingface/transformers | 29,175 | huggingface__transformers-29175 | ['28919'] | ae49b218c3d718df90d8e4a109016450fb8f0632 | diff --git a/src/transformers/dynamic_module_utils.py b/src/transformers/dynamic_module_utils.py
--- a/src/transformers/dynamic_module_utils.py
+++ b/src/transformers/dynamic_module_utils.py
@@ -185,19 +185,35 @@ def check_imports(filename: Union[str, os.PathLike]) -> List[str]:
return get_relative_imports(filenam... | diff --git a/tests/models/auto/test_modeling_auto.py b/tests/models/auto/test_modeling_auto.py
--- a/tests/models/auto/test_modeling_auto.py
+++ b/tests/models/auto/test_modeling_auto.py
@@ -376,6 +376,27 @@ def test_from_pretrained_dynamic_model_distant_with_ref(self):
for p1, p2 in zip(model.parameters(), re... | dependency issue when working with a custom architecture in a repo that has a dot in its name
### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installe... | cc @Rocketknight1 I can do it if you are low on bandwidth! Think it makes sense as a lot of models have `2.5B` or such names!
I can take this one, I think!
to anyone reading this in the future:
I found a work around this, **if you cannot rename your repo and remove the dot from its name**, you can follow these steps... | 2024-02-21 14:48:16+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/auto/test_modeling_auto.py:AutoModelTest:test_model_from_tf_suggestion', 'tests/models/auto/test_modeling_auto.py:AutoModelTest:test_attr_not_existing', 'tests/models/auto/test_modeling_auto.py:AutoModelTest:test_from_pretrained_with_tuple_values', 'tests/models/auto/test_modeling_auto.py:AutoModelTest:t... | ['tests/models/auto/test_modeling_auto.py:AutoModelTest:test_from_pretrained_dynamic_model_with_period'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/auto/test_modeling_auto.py | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/dynamic_module_utils.py->module->function_definition:get_class_from_dynamic_module", "src/transformers/dynamic_module_utils.py->module->function_definition:get_class_in_module"] |
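The record above fixes dynamic-module loading for repos with a dot in their name (e.g. `model-2.5B`), where the dot was misread as a package separator. One standard way around that is to load the module straight from its file path, which never parses directory names as packages; a self-contained sketch of the approach:

```python
import importlib.util
import os
import tempfile

# A directory whose name contains a dot ("model-2.5B") breaks dotted-name
# imports, but spec_from_file_location ignores the directory name entirely.
with tempfile.TemporaryDirectory() as root:
    pkg = os.path.join(root, "model-2.5B")       # dot in the directory name
    os.makedirs(pkg)
    src = os.path.join(pkg, "configuration.py")
    with open(src, "w") as f:
        f.write("ANSWER = 42\n")
    spec = importlib.util.spec_from_file_location("configuration", src)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

print(module.ANSWER)  # 42
```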
huggingface/transformers | 29,300 | huggingface__transformers-29300 | ['29239'] | 8f2f0f0f85f9e517c495b2083c218215819bae34 | diff --git a/src/transformers/models/conditional_detr/image_processing_conditional_detr.py b/src/transformers/models/conditional_detr/image_processing_conditional_detr.py
--- a/src/transformers/models/conditional_detr/image_processing_conditional_detr.py
+++ b/src/transformers/models/conditional_detr/image_processing_c... | diff --git a/tests/models/conditional_detr/test_image_processing_conditional_detr.py b/tests/models/conditional_detr/test_image_processing_conditional_detr.py
--- a/tests/models/conditional_detr/test_image_processing_conditional_detr.py
+++ b/tests/models/conditional_detr/test_image_processing_conditional_detr.py
@@ -3... | `YolosImageProcessor.preprocess` drops annotations when padding
### System Info
- `transformers` version: 4.38.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.9
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- Py... | null | 2024-02-26 16:11:46+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/deformable_detr/test_image_processing_deformable_detr.py:DeformableDetrImageProcessingTest:test_cast_dtype_device', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_image_processor_from_and_save_pretrained', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProc... | ['tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_batched_coco_panoptic_annotations'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/conditional_detr/test_image_processing_conditional_detr.py /testbed/tests/models/deformable_detr/test_image_processing_deformable_detr.py /testbed/tests/models/deta/test_image_processing_deta.py /testbed/tests/models/detr/test_image_processing_d... | Bug Fix | false | true | false | false | 5 | 0 | 5 | false | false | ["src/transformers/models/deformable_detr/image_processing_deformable_detr.py->module->class_definition:DeformableDetrImageProcessor->function_definition:preprocess", "src/transformers/models/detr/image_processing_detr.py->module->class_definition:DetrImageProcessor->function_definition:preprocess", "src/transformers/m... |
huggingface/transformers | 29,311 | huggingface__transformers-29311 | ['29243'] | b27aa206ddf3fe66b36db587603141b3d0379a82 | diff --git a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py b/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
--- a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
+++ b/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
@@ -125,7 +125,6 @@ class Wav2Vec2CTCTokenizerOutput(ModelOut... | diff --git a/tests/models/wav2vec2/test_tokenization_wav2vec2.py b/tests/models/wav2vec2/test_tokenization_wav2vec2.py
--- a/tests/models/wav2vec2/test_tokenization_wav2vec2.py
+++ b/tests/models/wav2vec2/test_tokenization_wav2vec2.py
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions ... | `skip_special_tokens` for `Wav2Vec2CTCTokenizer` does not work expectedly.
### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelera... | it could / should but should also be left to the super class IMO!
Would you like to open a PR for a fix? I don't think that this is intended behaviour | 2024-02-27 06:22:32+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_maximum_encoding_length_pair_input', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_right_and_left_truncation', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_neste... | ['tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_tokenizer_decode_added_tokens', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2TokenizerTest:test_tokenizer_decode_added_tokens'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/wav2vec2/test_tokenization_wav2vec2.py | Bug Fix | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_definition:Wav2Vec2CTCTokenizer->function_definition:_decode", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_definition:Wav2Vec2CTCTokenizer", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_... |
huggingface/transformers | 29,449 | huggingface__transformers-29449 | ['28591'] | 17b06e2c6650de162e7954babf6224c1975c2852 | diff --git a/src/transformers/models/idefics/processing_idefics.py b/src/transformers/models/idefics/processing_idefics.py
--- a/src/transformers/models/idefics/processing_idefics.py
+++ b/src/transformers/models/idefics/processing_idefics.py
@@ -149,7 +149,7 @@ def __init__(self, image_processor, tokenizer=None, image... | diff --git a/tests/models/idefics/test_modeling_idefics.py b/tests/models/idefics/test_modeling_idefics.py
--- a/tests/models/idefics/test_modeling_idefics.py
+++ b/tests/models/idefics/test_modeling_idefics.py
@@ -656,7 +656,7 @@ def test_inference_natural_language_visual_reasoning(self):
"HuggingFaceM4/i... | Idefics - AttentionMasks wrongly set with padding='longest'
### System Info
transformers==4.36.2
### Reproduction
Reported by https://huggingface.co/VishnuSuganth
https://huggingface.co/HuggingFaceM4/idefics-9b-instruct/discussions/11
| Cc @ArthurZucker @younesbelkada
Might be a tokenization issue, will have a look
Is anyone working on this issue? If not, would it be something a new contributor could look at?
I think the issue may be how `unpadded_seq_len` is calculated here: https://github.com/huggingface/transformers/blob/main/src/transformers/m... | 2024-03-05 04:48:47+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_training', 'tests/models/idefics/test_modeling_idefics.py:IdeficsModelTest:test_config', 'tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_resize_embeddings_untied', 'tests/models/idefics/test_modeling_ide... | ['tests/models/idefics/test_processor_idefics.py:IdeficsProcessorTest:test_tokenizer_left_padding', 'tests/models/idefics/test_processor_idefics.py:IdeficsProcessorTest:test_tokenizer_padding'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/idefics/test_modeling_idefics.py /testbed/tests/models/idefics/test_processor_idefics.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/idefics/processing_idefics.py->module->class_definition:IdeficsProcessor->function_definition:__call__"] |
huggingface/transformers | 29,519 | huggingface__transformers-29519 | ['29176'] | b338a6c3b8eda29610d4d472cad8cd87cbfdaaed | diff --git a/src/transformers/modeling_attn_mask_utils.py b/src/transformers/modeling_attn_mask_utils.py
--- a/src/transformers/modeling_attn_mask_utils.py
+++ b/src/transformers/modeling_attn_mask_utils.py
@@ -164,10 +164,10 @@ def _make_causal_mask(
# add lower triangular sliding window mask if necessary
... | diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1673,7 +1673,7 @@ def check_to_causal(self, mask_converter, q_len, kv_len, bsz=3):
def compute_num_context_mask(self, kv_len, context, q_len):
# This function ... | Sliding window inconsistency between PyTorch and Flax
### System Info
transformers main (ae49b218c), Python 3.10.8
### Who can help?
@ArthurZucker, @sanchit-gandhi
### Reproduction
The attention `sliding_window` has a different interpretation in PyTorch and Flax. Here are matching examples:
**PyTorch... | Hey! Pretty sure `MistralSdpaAttention` does not support sliding window yet! Are you using `attn_implementation="flash_attention_2"`?
@ArthurZucker I'm using the default implementation on the CPU, I've just checked to make sure and it's "eager". Initially I thought the issues may be in flash_attn, but you made me re... | 2024-03-07 15:56:14+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_shard_checkpoint', 'tests/test_modeling_utils.py:ModelUtilsTest:test_unexpected_keys_warnings', 'tests/test_modeling_utils.py:ModelUtilsTest:test_no_super_init_config_and_model', 'tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d', 'tests/test_modeling_uti... | ['tests/test_modeling_utils.py:AttentionMaskTester:test_causal_mask_sliding', 'tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d_causal_sliding'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/test_modeling_utils.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/modeling_attn_mask_utils.py->module->class_definition:AttentionMaskConverter->function_definition:_make_causal_mask"] |
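The inconsistency above is an off-by-one in the sliding-window causal mask. A pure-Python sketch of one convention, in which position `i` sees itself plus the previous `window - 1` positions; whether the current token counts toward the window is exactly the kind of boundary the PyTorch and Flax implementations disagreed on:

```python
def sliding_window_causal_mask(seq_len, window):
    # allowed[i][j] is True when query position i may attend to key j:
    # causal (j <= i) and within the window (i - j < window).
    return [[j <= i and i - j < window for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_causal_mask(5, window=2)
print([int(v) for v in mask[4]])  # [0, 0, 0, 1, 1]
```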
huggingface/transformers | 29,563 | huggingface__transformers-29563 | ['29514'] | 0290ec19c901adc0f1230ebdccad11c40af026f5 | diff --git a/src/transformers/models/mamba/modeling_mamba.py b/src/transformers/models/mamba/modeling_mamba.py
--- a/src/transformers/models/mamba/modeling_mamba.py
+++ b/src/transformers/models/mamba/modeling_mamba.py
@@ -211,7 +211,7 @@ def slow_forward(self, input_states, cache_params=None):
# 2. Convolut... | diff --git a/tests/models/mamba/test_modeling_mamba.py b/tests/models/mamba/test_modeling_mamba.py
--- a/tests/models/mamba/test_modeling_mamba.py
+++ b/tests/models/mamba/test_modeling_mamba.py
@@ -170,7 +170,7 @@ def create_and_check_mamba_model(self, config, input_ids, *args):
self.parent.assertEqual(result... | Cannot propagate gradients in Mamba
### System Info
- `transformers` version: 4.39.0.dev0
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.11.7
- Huggingface_hub version: 0.21.4
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): ... | Hi @gsarti, thanks for reporting!
Looking at the error message, it's likely due to an in-place operation in the model implementation. Would you like to open a PR to fix this?
Pretty sure ~setting `use_cache=False` fixes it, let me check~ let's fix it (It's only for the slow version, which I tried but not on CPU)! ... | 2024-03-09 22:35:02+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_sample_generate_dict_output', 'tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_model_is_small', 'tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_generate_with_head_masking', 'tests/models/mamba/test_modeling_mamba.py:MambaModelT... | ['tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_mamba_cached_slow_forward_and_backwards'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/mamba/test_modeling_mamba.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/mamba/modeling_mamba.py->module->class_definition:MambaMixer->function_definition:slow_forward"] |
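The `RuntimeError` discussed above is PyTorch's autograd guard against in-place edits of tensors that were saved for the backward pass, which is what the maintainers suspected in the slow Mamba path. A minimal standalone reproduction of that failure mode (not Mamba code):

```python
import torch

a = torch.ones(3, requires_grad=True)
c = torch.sigmoid(a)  # sigmoid's backward re-uses its own output
c.add_(1.0)           # in-place edit bumps the tensor's version counter

try:
    c.sum().backward()
except RuntimeError as e:
    print("RuntimeError:", e)  # "... modified by an inplace operation"
```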
huggingface/transformers | 29,585 | huggingface__transformers-29585 | ['29860'] | 6cdbd73e01a9719bfaec07d91fd108e8d932bbbb | diff --git a/src/transformers/generation/candidate_generator.py b/src/transformers/generation/candidate_generator.py
--- a/src/transformers/generation/candidate_generator.py
+++ b/src/transformers/generation/candidate_generator.py
@@ -148,6 +148,11 @@ def __init__(
self.generation_config.return_dict_in_generat... | diff --git a/tests/generation/test_utils.py b/tests/generation/test_utils.py
--- a/tests/generation/test_utils.py
+++ b/tests/generation/test_utils.py
@@ -1977,6 +1977,20 @@ def test_max_length_if_input_embeds(self):
out_gen_embeds = model.generate(inputs_embeds=inputs_embeds, max_length=max_length)
s... | `assisted_decoding` called directly inside `generate` triggering warning to use when it shouldn't
### System Info
- `transformers` version: 4.39.1
- Python version: 3.10.14
- Huggingface_hub version: 0.22.0
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch ... | null | 2024-03-11 10:34:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_utils.py:GenerationIntegrationTests:test_generated_length_assisted_generation', 'tests/generation/test_utils.py:GenerationIntegrationTests:test_assisted_decoding_num_assistant_tokens_heuristic_schedule', 'tests/generation/test_utils.py:GenerationIntegrationTests:test_custom_stopping_criteria', '... | ['tests/generation/test_utils.py:GenerationIntegrationTests:test_length_warning_assisted_generation'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/generation/test_utils.py | Bug Fix | false | false | false | true | 3 | 3 | 6 | false | false | ["src/transformers/generation/utils.py->module->class_definition:GenerationMixin->function_definition:_prepare_generated_length", "src/transformers/generation/candidate_generator.py->module->class_definition:AssistedCandidateGenerator->function_definition:get_candidates", "src/transformers/generation/utils.py->module->... |
huggingface/transformers | 29,589 | huggingface__transformers-29589 | ['29425'] | fadb053379b3ef24c4ec8e6d7d58555af21f58db | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -4247,8 +4247,23 @@ def _add_sm_patterns_to_gitignore(self) -> None:
self.repo.git_push()
def create_accelerator_and_postprocess(self):
- grad_acc_kwar... | diff --git a/src/transformers/testing_utils.py b/src/transformers/testing_utils.py
--- a/src/transformers/testing_utils.py
+++ b/src/transformers/testing_utils.py
@@ -52,6 +52,7 @@
)
from .integrations.deepspeed import is_deepspeed_available
from .utils import (
+ ACCELERATE_MIN_VERSION,
is_accelerate_availa... | Allow Trainer to Sync Gradients Each Batch When Performing Gradient Accumulation
### Feature request
We propose a feature to allow:
- `_do_sync` to take a `force` boolean flag, where `_do_sync(force=True)` forces a gradient sync.
- `Trainer` / `Accelerate` to appropriately pass the `force` flag if the user requests ... | Hi! This solution does indeed make sense to me, let's start with a PR to accelerate and then the upstream to transformers? :)
Note: for the `TrainingArguments`, we need to add this to the Accelerator config class instead and handle the logic that way as we are no longer adding more args to the `TrainingArguments` w... | 2024-03-11 14:19:04+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_galore_matched_modules', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_predict_with_jit', 'tests/trainer/test_trainer.py:TrainerOptimizerChoiceTest:test_bnb_paged_adam8bit', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_logging_inf_na... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_accelerator_config_from_yaml', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_accelerate_config_from_dataclass_grad_accum', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_accelerator_config_from_dict', 'tests/trainer/test_trainer.py:Tra... | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/src/transformers/testing_utils.py /testbed/tests/trainer/test_trainer.py | Feature | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/utils/import_utils.py->module->function_definition:is_accelerate_available", "src/transformers/trainer_pt_utils.py->module->class_definition:AcceleratorConfig", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:create_accelerator_and_postprocess"] |
huggingface/transformers | 29,675 | huggingface__transformers-29675 | ['29665'] | 56b64bf1a51e29046bb3f8ca15839ff4d6a92c74 | diff --git a/src/transformers/generation/configuration_utils.py b/src/transformers/generation/configuration_utils.py
--- a/src/transformers/generation/configuration_utils.py
+++ b/src/transformers/generation/configuration_utils.py
@@ -652,7 +652,8 @@ def save_pretrained(
Additional key word arguments p... | diff --git a/tests/trainer/test_trainer_seq2seq.py b/tests/trainer/test_trainer_seq2seq.py
--- a/tests/trainer/test_trainer_seq2seq.py
+++ b/tests/trainer/test_trainer_seq2seq.py
@@ -181,3 +181,22 @@ def prepare_data(examples):
assert (
metrics["eval_samples"] == dataset_len * num_return_s... | GenerationConfig.from_pretrained raise ValueError after training, maybe raise it earlier?
### System Info
- `transformers` version: 4.38.2
- Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.13
- Huggingface_hub version: 0.21.4
- Safetensors version: 0.4.2
- Accelerate version... | Hi @YiqunChen1999 👋 Thank you for opening this issue
You're absolutely right, this was an oversight on our part -- we should fail as early as possible. I'm going to open a PR for it. | 2024-03-15 11:00:43+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | [] | ['tests/trainer/test_trainer_seq2seq.py:Seq2seqTrainerTester:test_bad_generation_config_fail_early'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/trainer/test_trainer_seq2seq.py | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/trainer_seq2seq.py->module->class_definition:Seq2SeqTrainer->function_definition:load_generation_config", "src/transformers/generation/configuration_utils.py->module->class_definition:GenerationConfig->function_definition:save_pretrained"] |
huggingface/transformers | 29,680 | huggingface__transformers-29680 | ['29551'] | 87e2ea33aab6454be3afbd4f0342b518f15bccef | diff --git a/src/transformers/generation/logits_process.py b/src/transformers/generation/logits_process.py
--- a/src/transformers/generation/logits_process.py
+++ b/src/transformers/generation/logits_process.py
@@ -151,11 +151,13 @@ def __init__(self, min_length: int, eos_token_id: Union[int, List[int]]):
@add_s... | diff --git a/tests/generation/test_logits_process.py b/tests/generation/test_logits_process.py
--- a/tests/generation/test_logits_process.py
+++ b/tests/generation/test_logits_process.py
@@ -157,8 +157,9 @@ def test_temperature_dist_warper(self):
temp_dist_warper_sharper = TemperatureLogitsWarper(temperature=0... | Contrastive decoding "raw" logits and scores are identical
### System Info
- `transformers` version: 4.38.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found... | Hi @dmarx 👋
In theory I agree with the issue -- `scores` should indeed contain the degeneration penalty. However, our API dictates that we return the scores for ALL tokens (and not just the selected tokens at each iteration), and the `contrastive_score` is only computed for the `top_k` tokens. As such, in practice... | 2024-03-15 15:55:40+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_logits_process.py:LogitsProcessorTest:test_early_stop_processor', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_eta_dist_warper', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_new_min_length_dist_processor_0', 'tests/generation/test_logits_process.py:Logit... | ['tests/generation/test_logits_process.py:LogitsProcessorTest:test_remove_nan_inf_logits_processor', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_no_repeat_ngram_dist_processor', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_hamming_diversity', 'tests/generation/test_logits_proc... | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/generation/test_logits_process.py /testbed/tests/generation/test_utils.py | Bug Fix | false | true | false | false | 28 | 0 | 28 | false | false | ["src/transformers/generation/logits_process.py->module->class_definition:WhisperTimeStampLogitsProcessor->function_definition:__call__", "src/transformers/generation/logits_process.py->module->class_definition:EpsilonLogitsWarper->function_definition:__call__", "src/transformers/generation/logits_process.py->module->c... |
huggingface/transformers | 29,688 | huggingface__transformers-29688 | ['29685'] | f4dc26d46687f5f4baf3fe64a1d87cafefbeec53 | diff --git a/src/transformers/models/whisper/generation_whisper.py b/src/transformers/models/whisper/generation_whisper.py
--- a/src/transformers/models/whisper/generation_whisper.py
+++ b/src/transformers/models/whisper/generation_whisper.py
@@ -262,7 +262,7 @@ def generate(
synced_gpus: bool = False,
... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -545,10 +545,19 @@ def test_generate_language(self):
# test language code
model.genera... | Support mixed-language batches in `WhisperGenerationMixin`
### Feature request
It is currently not possible to mix multiple languages in a single batch when running [Whisper](https://huggingface.co/docs/transformers/en/model_doc/whisper). The `language` argument only accepts a single string (as opposed to a separate... | null | 2024-03-16 10:17:27+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_generate_language'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/whisper/test_modeling_whisper.py | Feature | false | true | false | false | 5 | 0 | 5 | false | false | ["src/transformers/models/whisper/generation_whisper.py->module->class_definition:WhisperGenerationMixin->function_definition:_prepare_decoder_input_ids", "src/transformers/models/whisper/generation_whisper.py->module->class_definition:WhisperGenerationMixin->function_definition:_retrieve_init_tokens->function_definiti... |
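Supporting a per-sample `language` list reduces to building a separate decoder prompt for each batch element. A sketch with made-up token ids (real Whisper uses tokens such as `<|en|>` with its own vocabulary ids, and more prompt tokens than shown here):

```python
SOT, TRANSCRIBE = 1, 2            # made-up ids for illustration only
LANG_TO_ID = {"en": 10, "fr": 11}

def build_prompts(languages):
    return [[SOT, LANG_TO_ID[lang], TRANSCRIBE] for lang in languages]

print(build_prompts(["en", "fr", "en"]))
# [[1, 10, 2], [1, 11, 2], [1, 10, 2]]
```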
huggingface/transformers | 29,838 | huggingface__transformers-29838 | ['29016'] | 76a33a10923ccc1074917f6b6a1e719e626b7dc9 | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1048,6 +1048,36 @@ def create_optimizer(self):
return self.optimizer
+ def get_num_trainable_parameters(self):
+ """
+ Get the number of trainable ... | diff --git a/tests/trainer/test_trainer.py b/tests/trainer/test_trainer.py
--- a/tests/trainer/test_trainer.py
+++ b/tests/trainer/test_trainer.py
@@ -3769,3 +3769,41 @@ def test_hyperparameter_search_backends(self):
list(ALL_HYPERPARAMETER_SEARCH_BACKENDS.keys()),
list(HPSearchBackend),
... | Trainer: Functions to inspect model and optimizer status
### Feature request
In the Hugging Face Trainer, are there any functions to inspect model and optimizer status, such as how many parameters require grad, the learning rate of each parameter, and which optimizer group each parameter belongs to...
I didn't find any related f... | cc @muellerzr @pacman100
Hi, can I take over the issue?
@CKeibel Sure! No need to claim on an issue, we prioritise based on PRs open, as we find this helps prevent issues from going stale without being addressed. Once you have something opened, feel free to ping me and @muellerzr for review 🤗
Hey, thanks for the rep... | 2024-03-24 10:58:01+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_accelerator_config_from_yaml', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_galore_matched_modules', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_predict_with_jit', 'tests/trainer/test_trainer.py:TrainerOptimizerChoiceTest:test_bnb_... | ['tests/trainer/test_trainer.py:OptimizerAndModelInspectionTest:test_get_num_trainable_parameters', 'tests/trainer/test_trainer.py:OptimizerAndModelInspectionTest:test_get_learning_rates', 'tests/trainer/test_trainer.py:OptimizerAndModelInspectionTest:test_get_optimizer_group'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/trainer/test_trainer.py | Feature | false | false | false | true | 3 | 1 | 4 | false | false | ["src/transformers/trainer.py->module->class_definition:Trainer->function_definition:get_optimizer_group", "src/transformers/trainer.py->module->class_definition:Trainer", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:get_learning_rates", "src/transformers/trainer.py->module->class... |
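The first of the requested helpers — counting parameters that require grad — is a one-liner over `model.parameters()`; the `get_num_trainable_parameters` method added by the patch above presumably boils down to the usual `numel()` pattern:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))
model[0].weight.requires_grad_(False)  # freeze one weight matrix

num_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(num_trainable)  # 26: layer-0 bias (8) + layer-1 weight (16) + bias (2)
```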
huggingface/transformers | 30,556 | huggingface__transformers-30556 | ['30521'] | a3aabc702e1c49243e7b48f22d88362d50e786c5 | diff --git a/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
--- a/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
+++ b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
@@ -122,7 +12... | diff --git a/tests/trainer/test_data_collator.py b/tests/trainer/test_data_collator.py
--- a/tests/trainer/test_data_collator.py
+++ b/tests/trainer/test_data_collator.py
@@ -23,6 +23,7 @@
BertTokenizer,
DataCollatorForLanguageModeling,
DataCollatorForPermutationLanguageModeling,
+ DataCollatorForSeq2... | [BUG] DataCollatorForSeq2Seq with PaddingStrategy.MAX_LENGTH may not pad labels
It seems that when the MAX_LENGTH padding strategy is set, the same padding is not applied to the labels.
A test case is below:
```python
from transformers import DataCollatorForSeq2Seq,
from transformers.utils import PaddingStrategy
... | Thanks for raising this issue! Yea, that seems like a valid bug imo. The padding strategy isn't respected with `max_length`.
I'd change these lines:
https://github.com/huggingface/transformers/blob/73014b561d5f88d728e46a57d346f516fefe3f2d/src/transformers/data/data_collator.py#L591-L592
to something like:
```pyth... | 2024-04-29 21:36:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_data_collator_for_language_modeling', 'tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_default_with_no_labels', 'tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_default_with_no_labels', 'tests/trai... | ['tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_data_collator_for_seq2seq_with_pt', 'tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_data_collator_for_seq2seq', 'tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_data_collator_for_seq2seq_with_lists'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/trainer/test_data_collator.py | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py->module->class_definition:ModelArguments", "src/transformers/data/data_collator.py->module->class_definition:DataCollatorForSeq2Seq->function_definition:__call__"] |
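The fix being asked for above amounts to padding the labels out to `max_length` as well, not just the inputs. A simplified sketch of that step (the real collator pads via the tokenizer and uses `-100` so padded label positions are ignored by the loss):

```python
def pad_labels(batch_labels, max_length, label_pad_token_id=-100):
    return [labels + [label_pad_token_id] * (max_length - len(labels))
            for labels in batch_labels]

print(pad_labels([[1, 2], [3]], max_length=4))
# [[1, 2, -100, -100], [3, -100, -100, -100]]
```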
huggingface/transformers | 30,567 | huggingface__transformers-30567 | ['30522'] | c712d05aa8fc8ba3ebe465079bd377d2dc9c2e07 | diff --git a/src/transformers/models/clip/image_processing_clip.py b/src/transformers/models/clip/image_processing_clip.py
--- a/src/transformers/models/clip/image_processing_clip.py
+++ b/src/transformers/models/clip/image_processing_clip.py
@@ -143,6 +143,10 @@ def __init__(
# for backwards compatibility of ... | diff --git a/tests/models/kosmos2/test_processor_kosmos2.py b/tests/models/kosmos2/test_processor_kosmos2.py
--- a/tests/models/kosmos2/test_processor_kosmos2.py
+++ b/tests/models/kosmos2/test_processor_kosmos2.py
@@ -17,6 +17,7 @@
import shutil
import tempfile
import unittest
+from tempfile import TemporaryDirecto... | KeyError: 'shortest_edge' when loading Kosmos-2 model from local files
### System Info
- `transformers` version: 4.40.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.14
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not fou... | cc @ydshieh
Hi @Charizhardt
Thank you for reporting this issue. I confirmed it is reproducible:
```python
from transformers import AutoProcessor
model_path = "./models/transformers/"
model_name = "microsoft/kosmos-2-patch14-224"
processor = AutoProcessor.from_pretrained(model_name)
processor.save_pre... | 2024-04-30 09:28:39+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/kosmos2/test_processor_kosmos2.py:Kosmos2ProcessorTest:test_tokenizer', 'tests/models/kosmos2/test_processor_kosmos2.py:Kosmos2ProcessorTest:test_model_input_names', 'tests/models/kosmos2/test_processor_kosmos2.py:Kosmos2ProcessorTest:test_image_processor', 'tests/models/kosmos2/test_processor_kosmos2.py... | ['tests/models/kosmos2/test_processor_kosmos2.py:Kosmos2ProcessorTest:test_image_procesor_load_save_reload'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/kosmos2/test_processor_kosmos2.py | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["src/transformers/models/clip/image_processing_clip.py->module->class_definition:CLIPImageProcessor->function_definition:__init__"] |
huggingface/transformers | 30,602 | huggingface__transformers-30602 | ['30601'] | c681b58b06f6fb8b5c331f380548af3b4b33f881 | diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -3263,8 +3263,8 @@ def from_pretrained(
)
else:
raise EnvironmentError(
- ... | diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1001,6 +1001,26 @@ def test_use_safetensors(self):
self.assertTrue(any(f.endswith("safetensors") for f in all_downloaded_files))
self.assertFalse(a... | `model.safetensors` missing in model file not found error in default case
### System Info
System info isn't super relevant here since the confusion is really just an error message string. I just reproduced in a CPU instance, but this is applicable whenever model loading is needed.
- `transformers` version: 4.4... | null | 2024-05-01 19:16:26+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_safetensors_torch_from_torch_sharded', 'tests/test_modeling_utils.py:ModelUtilsTest:test_unexpected_keys_warnings', 'tests/test_modeling_utils.py:AttentionMaskTester:test_torch_compile_fullgraph', 'tests/test_modeling_utils.py:ModelUtilsTest:test_tied_weights_reload', ... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_use_safetensors'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/test_modeling_utils.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/modeling_utils.py->module->class_definition:PreTrainedModel->function_definition:from_pretrained"] |
huggingface/transformers | 30,627 | huggingface__transformers-30627 | ['30527'] | eed9ed679878ada2f6d2eefccdbda368cabc88b1 | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -919,25 +919,36 @@ def _get_eval_sampler(self, eval_dataset: Dataset) -> Optional[torch.utils.data.
else:
return None
- def get_eval_dataloader(self, e... | diff --git a/tests/trainer/test_trainer.py b/tests/trainer/test_trainer.py
--- a/tests/trainer/test_trainer.py
+++ b/tests/trainer/test_trainer.py
@@ -1231,6 +1231,97 @@ def test_dataloader_without_dataset(self):
trainer.train()
trainer.evaluate()
+ def test_get_eval_dataloader_without_pe... | Multiple validation datasets unsupported with `dataloader_persistent_workers=True`
### System Info
- `transformers` version: 4.40.1
- Platform: Linux-6.8.0-76060800daily20240311-generic-x86_64-with-glibc2.35
- Python version: 3.11.8
- Huggingface_hub version: 0.22.2
- Safetensors version: 0.4.3
- Accelerate vers... | @bastienlc feel free to open a PR to support this!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingfa... | 2024-05-02 20:25:18+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_load_best_model_at_end', 'tests/trainer/test_trainer.py:OptimizerAndModelInspectionTest:test_get_learning_rates', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_flos_extraction', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_checkpoint... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_get_eval_dataloader_with_persistent_workers', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_get_eval_dataloader_without_persistent_workers'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/trainer/test_trainer.py | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/trainer.py->module->class_definition:Trainer->function_definition:evaluate", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:get_eval_dataloader"] |
huggingface/transformers | 30,772 | huggingface__transformers-30772 | ['30685'] | 04c7c176d7f70ec4b43c8c2a0327ff8d193f5c1d | diff --git a/src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py b/src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py
--- a/src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py
+++ b/src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py
@@ -3... | diff --git a/tests/models/layoutxlm/test_tokenization_layoutxlm.py b/tests/models/layoutxlm/test_tokenization_layoutxlm.py
--- a/tests/models/layoutxlm/test_tokenization_layoutxlm.py
+++ b/tests/models/layoutxlm/test_tokenization_layoutxlm.py
@@ -150,17 +150,40 @@ def test_save_sentencepiece_tokenizer(self) -> None:
... | `PreTrainedTokenizerFast._batch_encode_plus()` got an unexpected keyword argument `'split_special_tokens'`
### System Info
Transformer version: 4.38.1
Platform: Ubuntu
Python version: 3.10.13
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My ow... | null | 2024-05-13 09:58:38+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/udop/test_tokenization_udop.py:UdopTokenizationTest:test_tokenizer_mismatch_warning', 'tests/models/layoutxlm/test_tokenization_layoutxlm.py:LayoutXLMTokenizationTest:test_chat_template_dict_saving', 'tests/models/layoutxlm/test_tokenization_layoutxlm.py:LayoutXLMTokenizationTest:test_tokenizer_mismatch_... | ['tests/models/layoutxlm/test_tokenization_layoutxlm.py:LayoutXLMTokenizationTest:test_split_special_tokens', 'tests/models/udop/test_tokenization_udop.py:UdopTokenizationTest:test_split_special_tokens'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/models/layoutxlm/test_tokenization_layoutxlm.py /testbed/tests/models/udop/test_tokenization_udop.py /testbed/tests/test_tokenization_common.py | Bug Fix | false | false | false | true | 13 | 1 | 14 | false | false | ["src/transformers/tokenization_utils_base.py->module->class_definition:PreTrainedTokenizerBase->function_definition:batch_encode_plus", "src/transformers/tokenization_utils_base.py->module->class_definition:PreTrainedTokenizerBase->function_definition:_batch_encode_plus", "src/transformers/tokenization_utils.py->modul... |
huggingface/transformers | 30,899 | huggingface__transformers-30899 | ['30892'] | 481a95781404e48b1c80940be17e8279dec82fe8 | diff --git a/src/transformers/generation/utils.py b/src/transformers/generation/utils.py
--- a/src/transformers/generation/utils.py
+++ b/src/transformers/generation/utils.py
@@ -1354,6 +1354,23 @@ def _get_static_cache(self, max_batch_size: int, max_cache_len: int) -> StaticCa
self._static_cache.reset() ... | diff --git a/tests/generation/test_utils.py b/tests/generation/test_utils.py
--- a/tests/generation/test_utils.py
+++ b/tests/generation/test_utils.py
@@ -65,6 +65,7 @@
GenerateBeamEncoderDecoderOutput,
GenerateDecoderOnlyOutput,
GenerateEncoderDecoderOutput,
+ GenerationConfig,
... | transformers 4.41.0 breaks generate() for T5
### System Info
- `transformers` version: 4.41.0
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.23.0
- Safetensors version: 0.4.3
- Accelerate version: 0.30.0
- Accelerate config: not found
- PyTorch v... | null | 2024-05-19 13:18:57+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_utils.py:GenerationIntegrationTests:test_generated_length_assisted_generation', 'tests/generation/test_utils.py:GenerationIntegrationTests:test_assisted_decoding_num_assistant_tokens_heuristic_schedule', 'tests/generation/test_utils.py:GenerationIntegrationTests:test_custom_stopping_criteria', '... | ['tests/generation/test_utils.py:GenerationIntegrationTests:test_decoder_start_id_from_config'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/generation/test_utils.py | Bug Fix | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/generation/utils.py->module->class_definition:GenerationMixin->function_definition:_prepare_special_tokens", "src/transformers/generation/utils.py->module->class_definition:GenerationMixin", "src/transformers/generation/utils.py->module->class_definition:GenerationMixin->function_definition:_get_deco... |
huggingface/transformers | 30,934 | huggingface__transformers-30934 | ['30922'] | a755745546779ae5c42510bc02a859bdac82b3b7 | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -14,6 +14,7 @@
# limitations under the License.
import warnings
+from math import ceil
from typing import Iterable, List, Optional, Tuple... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -369,6 +369,10 @@ def test_center_crop(self):
self.assertEqual(cropped_image.shape, (300, 260, 3))
self.assertTrue(np.allclose(cropped_image, expect... | `center_crop` outputs wrong sized array if provided with odd-numbered dimensions smaller than requested crop size
### System Info
transformers 4.40.1, python 3.12
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially support... | I believe the issue is more accurately caused by an odd-numbered difference between the original size and the new size. Rounding up rather than down when calculating the padding fixes the above test cases. | 2024-05-21 10:22:57+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_flip_channel_order', 'tests/test_image_transforms.py:ImageTransformsTester:test_get_resize_output_image_size', 'tests/test_image_transforms.py:ImageTransformsTester:test_resize', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_5_numpy_u... | ['tests/test_image_transforms.py:ImageTransformsTester:test_center_crop'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/test_image_transforms.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/image_transforms.py->module->function_definition:center_crop"] |
huggingface/transformers | 30,964 | huggingface__transformers-30964 | ['29625'] | 6739e1d261f80caec34b8c8ac7a030907a4f75a2 | diff --git a/src/transformers/models/llama/tokenization_llama_fast.py b/src/transformers/models/llama/tokenization_llama_fast.py
--- a/src/transformers/models/llama/tokenization_llama_fast.py
+++ b/src/transformers/models/llama/tokenization_llama_fast.py
@@ -163,6 +163,7 @@ def __init__(
add_bos_token=add_... | diff --git a/tests/models/llama/test_tokenization_llama.py b/tests/models/llama/test_tokenization_llama.py
--- a/tests/models/llama/test_tokenization_llama.py
+++ b/tests/models/llama/test_tokenization_llama.py
@@ -602,6 +602,10 @@ def test_special_token_special_word(self):
self.assertEqual(decoded_tokens, "he... | `add_prefix_space` won't be respected by Llama tokenizer
### System Info
- `transformers` version: 4.38.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.21.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
- Accelerate config: not fo... | Hey, I took a peek under the hood and looks like setting `add_prefix_true` is only changing `kwargs[slow]=True` (in [tokenization_llama_fast.py](https://github.com/huggingface/transformers/blob/5011908e10d9592eeb634f4940e0bc130d3edc69/src/transformers/models/llama/tokenization_llama_fast.py#L127C9-L132C1). The `super()... | 2024-05-22 13:01:20+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_offsets_mapping', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_number_of_added_tokens', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_mask_output', 'tests/models/llama/test_tokenization_ll... | ['tests/models/llama/test_tokenization_llama.py:LlamaIntegrationTest:test_no_prefix_space'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/models/llama/test_tokenization_llama.py | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["src/transformers/models/llama/tokenization_llama_fast.py->module->class_definition:LlamaTokenizerFast->function_definition:__init__"] |
huggingface/transformers | 31,095 | huggingface__transformers-31095 | ['31033'] | a564d10afe1a78c31934f0492422700f61a0ffc0 | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -2306,6 +2306,8 @@ def _inner_training_loop(
self.optimizer.step()
+ self.control = self.callback_handler.on_optimizer_step(args, self... | diff --git a/tests/trainer/test_trainer_callback.py b/tests/trainer/test_trainer_callback.py
--- a/tests/trainer/test_trainer_callback.py
+++ b/tests/trainer/test_trainer_callback.py
@@ -78,6 +78,9 @@ def on_epoch_end(self, args, state, control, **kwargs):
def on_step_begin(self, args, state, control, **kwargs):
... | Add per-parameter gradient logging (and before optimizer step callback)
@RylanSchaeffer
### Feature request
I wish to log (in wandb) the norm of the gradient of each parameter in my transformer. Currently, supplying a max grad norm value will automatically log the gradient norm for the whole model, but there is no ... | cc @muellerzr @younesbelkada
Great feature @dhruvbpai - feel free to open a PoC PR and we'll take it from there! | 2024-05-28 21:30:20+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/trainer/test_trainer_callback.py:TrainerCallbackTest:test_stateful_mixed_callbacks', 'tests/trainer/test_trainer_callback.py:TrainerCallbackTest:test_stateful_duplicate_callbacks', 'tests/trainer/test_trainer_callback.py:TrainerCallbackTest:test_missing_stateful_callback', 'tests/trainer/test_trainer_callback.p... | ['tests/trainer/test_trainer_callback.py:TrainerCallbackTest:test_event_flow'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/trainer/test_trainer_callback.py | Feature | false | false | false | true | 3 | 2 | 5 | false | false | ["src/transformers/trainer_callback.py->module->class_definition:TrainerCallback", "src/transformers/trainer_callback.py->module->class_definition:TrainerCallback->function_definition:on_optimizer_step", "src/transformers/trainer_callback.py->module->class_definition:CallbackHandler", "src/transformers/trainer.py->modu... |
huggingface/transformers | 31,128 | huggingface__transformers-31128 | ['31085'] | 2b9e252b16396c926dad0e3c31802b4af8004e93 | diff --git a/src/transformers/optimization.py b/src/transformers/optimization.py
--- a/src/transformers/optimization.py
+++ b/src/transformers/optimization.py
@@ -540,6 +540,9 @@ def scheduler_hook(param):
if name == SchedulerType.INVERSE_SQRT:
return schedule_func(optimizer, num_warmup_steps=num_warmup_s... | diff --git a/tests/optimization/test_optimization.py b/tests/optimization/test_optimization.py
--- a/tests/optimization/test_optimization.py
+++ b/tests/optimization/test_optimization.py
@@ -36,6 +36,7 @@
get_inverse_sqrt_schedule,
get_linear_schedule_with_warmup,
get_polynomial_decay_schedul... | get_wsd_schedule gets passed num_training_steps because not handled
getting:
```
TypeError: get_wsd_schedule() got an unexpected keyword argument 'num_training_steps'
```
because there's no handling of `WARMUP_STABLE_DECAY`, `get_wsd_schedule` gets passed the default params.
https://github.com/huggingface/trans... | cc @muellerzr | 2024-05-30 03:10:04+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/optimization/test_optimization.py:ScheduleInitTest:test_schedulers', 'tests/optimization/test_optimization.py:OptimizationTest:test_adam_w', 'tests/optimization/test_optimization.py:OptimizationTest:test_adafactor'] | ['tests/optimization/test_optimization.py:ScheduleInitTest:test_get_scheduler'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/optimization/test_optimization.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/optimization.py->module->function_definition:get_scheduler"] |
huggingface/transformers | 31,217 | huggingface__transformers-31217 | ['31216'] | c73ee1333dc4dc63a71cb6180d0f35fdf4b44958 | diff --git a/src/transformers/pipelines/visual_question_answering.py b/src/transformers/pipelines/visual_question_answering.py
--- a/src/transformers/pipelines/visual_question_answering.py
+++ b/src/transformers/pipelines/visual_question_answering.py
@@ -1,4 +1,4 @@
-from typing import Union
+from typing import List, U... | diff --git a/tests/pipelines/test_pipelines_visual_question_answering.py b/tests/pipelines/test_pipelines_visual_question_answering.py
--- a/tests/pipelines/test_pipelines_visual_question_answering.py
+++ b/tests/pipelines/test_pipelines_visual_question_answering.py
@@ -14,6 +14,8 @@
import unittest
+from datasets... | [pipeline] VQA pipeline does not accept list as input
### System Info
- `transformers` version: 4.42.0.dev0
- Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.23.0
- Safetensors version: 0.4.3
- Accelerate version: not installed
- A... | null | 2024-06-03 23:53:41+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/pipelines/test_pipelines_visual_question_answering.py:VisualQuestionAnsweringPipelineTests:test_small_model_pt'] | ['tests/pipelines/test_pipelines_visual_question_answering.py:VisualQuestionAnsweringPipelineTests:test_small_model_pt_image_list', 'tests/pipelines/test_pipelines_visual_question_answering.py:VisualQuestionAnsweringPipelineTests:test_small_model_pt_both_list', 'tests/pipelines/test_pipelines_visual_question_answering.... | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/pipelines/test_pipelines_visual_question_answering.py | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/pipelines/visual_question_answering.py->module->class_definition:VisualQuestionAnsweringPipeline->function_definition:__call__", "src/transformers/pipelines/visual_question_answering.py->module->class_definition:VisualQuestionAnsweringPipeline->function_definition:preprocess"] |
huggingface/transformers | 31,247 | huggingface__transformers-31247 | ['31246'] | 6f40a213eb10e38a5f242d0645519d413d32d798 | diff --git a/src/transformers/cache_utils.py b/src/transformers/cache_utils.py
--- a/src/transformers/cache_utils.py
+++ b/src/transformers/cache_utils.py
@@ -1249,3 +1249,77 @@ def reset(self):
# In-place ops prevent breaking the static address
self.key_cache[layer_idx].zero_()
s... | diff --git a/tests/models/mamba/test_modeling_mamba.py b/tests/models/mamba/test_modeling_mamba.py
--- a/tests/models/mamba/test_modeling_mamba.py
+++ b/tests/models/mamba/test_modeling_mamba.py
@@ -187,11 +187,20 @@ def create_and_check_state_equivalency(self, config, input_ids, *args):
outputs = model(input_... | We Need Compile Support For Mamba!
### Feature request
This feature adds `torch.compile` support for the Mamba architecture
### Motivation
The motivation is that by supporting compile on mamba, we can get faster inference speed and better throughput even if we don't have high performance specified mamba kernels installed... | null | 2024-06-04 22:36:14+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_training', 'tests/models/mamba/test_modeling_mamba.py:MambaIntegrationTests:test_simple_generate_0_cpu', 'tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_beam_sample_generate_dict_output', 'tests/models/mamba/test_modeling_mamba.py:MambaModel... | ['tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_state_equivalency', 'tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_mamba_cached_slow_forward_and_backwards'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/models/mamba/test_modeling_mamba.py | Feature | false | false | false | true | 13 | 4 | 17 | false | false | ["src/transformers/models/mamba/modeling_mamba.py->module->class_definition:MambaForCausalLM->function_definition:prepare_inputs_for_generation", "src/transformers/models/mamba/modeling_mamba.py->module->class_definition:MambaForCausalLM->function_definition:_update_model_kwargs_for_generation", "src/transformers/model... |
huggingface/transformers | 31,448 | huggingface__transformers-31448 | ['31435'] | cd71f9381b86b0dc1fd60e8b87fb5bade35aa0cd | diff --git a/src/transformers/generation/stopping_criteria.py b/src/transformers/generation/stopping_criteria.py
--- a/src/transformers/generation/stopping_criteria.py
+++ b/src/transformers/generation/stopping_criteria.py
@@ -372,10 +372,11 @@ def _stop_string_create_embedding_vec(token_list, token_indices, stop_strin... | diff --git a/tests/generation/test_stopping_criteria.py b/tests/generation/test_stopping_criteria.py
--- a/tests/generation/test_stopping_criteria.py
+++ b/tests/generation/test_stopping_criteria.py
@@ -208,6 +208,24 @@ def test_stop_string_embedding_vecs(self):
token_lengths = embedding_vec[:, 2].tolist()
... | `stop_strings` Argument in `model.generate()` Results in Exception if Generation Completes Without `stop_string` Being Generated
### System Info
`transformers==4.41.2`
### Who can help?
@gante any thoughts here?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Task... | Might be a duplicate of https://github.com/huggingface/transformers/issues/31435
It looks like this line sets the `tokenizer` to `None` automatically, which creates a related but not identical issue.
https://github.com/huggingface/transformers/blob/eed9ed67987/src/transformers/generation/utils.py#L1643
@ahmed-moubtahij... | 2024-06-17 13:14:50+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_max_time_criteria', 'tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_criterias_per_row', 'tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_stop_string_criteria', 'tests/generation/test_stopping_cr... | ['tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_single_letter_stop_string'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/generation/test_stopping_criteria.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/generation/stopping_criteria.py->module->class_definition:StopStringCriteria->function_definition:_stop_string_create_embedding_vec"] |
huggingface/transformers | 31,646 | huggingface__transformers-31646 | ['31642'] | 1f9f57ab4c8c30964360a2ba697c339f6d31f03f | diff --git a/src/transformers/models/encodec/modeling_encodec.py b/src/transformers/models/encodec/modeling_encodec.py
--- a/src/transformers/models/encodec/modeling_encodec.py
+++ b/src/transformers/models/encodec/modeling_encodec.py
@@ -729,7 +729,7 @@ def decode(
Whether or not to return a [`~utils.... | diff --git a/tests/models/encodec/test_modeling_encodec.py b/tests/models/encodec/test_modeling_encodec.py
--- a/tests/models/encodec/test_modeling_encodec.py
+++ b/tests/models/encodec/test_modeling_encodec.py
@@ -19,7 +19,6 @@
import os
import tempfile
import unittest
-from typing import Dict, List, Tuple
impor... | return_dict in encodec is always set to True:
### System Info
- `transformers` version: 4.42.0.dev0
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.2
- Accelerate version: 0.29.1
- Accelerate config: not found... | https://github.com/huggingface/transformers/blob/dfaadfdcda8d2c2f564c94121d4618309c1ecdd5/src/transformers/models/encodec/modeling_encodec.py#L789
@kamilakesbi
By default `self.config.return_dict` is `True`, so the `or` condition is always satisfied and the function returns a dict. | 2024-06-26 18:49:53+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/encodec/test_modeling_encodec.py:EncodecModelTest:test_forward_signature', 'tests/models/encodec/test_modeling_encodec.py:EncodecModelTest:test_config', 'tests/models/encodec/test_modeling_encodec.py:EncodecModelTest:test_from_pretrained_no_checkpoint', 'tests/models/encodec/test_modeling_encodec.py:Enco... | ['tests/models/encodec/test_modeling_encodec.py:EncodecModelTest:test_model_outputs_equivalence'] | null | python -m pytest /testbed/tests/models/encodec/test_modeling_encodec.py --json-report --json-report-file=test_output.json -v | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/models/encodec/modeling_encodec.py->module->class_definition:EncodecModel->function_definition:forward", "src/transformers/models/encodec/modeling_encodec.py->module->class_definition:EncodecModel->function_definition:decode"] |
huggingface/transformers | 31,654 | huggingface__transformers-31654 | ['29554'] | cee768d97e42c6fcf744ba4d2a4dc8a8e78da4c1 | diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -104,6 +104,8 @@
XLA_USE_BF16 = os.environ.get("XLA_USE_BF16", "0").upper()
XLA_DOWNCAST_BF16 = os.environ.get("XLA_DOWNCAST_BF16", "0").upper()
+... | diff --git a/tests/utils/test_modeling_utils.py b/tests/utils/test_modeling_utils.py
--- a/tests/utils/test_modeling_utils.py
+++ b/tests/utils/test_modeling_utils.py
@@ -1499,6 +1499,57 @@ def test_model_from_pretrained_from_mlx(self):
outputs_from_saved = new_model(input_ids)
self.assertTrue... | Can't load models with a gamma or beta parameter
It seems that you cannot create parameters with the string `gamma` or `beta` in any modules you write if you intend to save/load them with the transformers library. There is a small function called `_fix_keys` implemented in the model loading ([link](https://github.co... | Yes that's correct, it's a bug I pointed out in my [video series](https://www.youtube.com/watch?v=wElORCdXHTU&t=1s&ab_channel=NielsRogge) on contributing to Transformers.
This is due to these lines: https://github.com/huggingface/transformers/blob/0290ec19c901adc0f1230ebdccad11c40af026f5/src/transformers/modeling_ut... | 2024-06-27 11:06:08+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_modeling_utils.py:ModelUtilsTest:test_safetensors_load_from_hub', 'tests/utils/test_modeling_utils.py:ModelUtilsTest:test_torch_dtype_byte_sizes', 'tests/utils/test_modeling_utils.py:TestOffline:test_offline', 'tests/utils/test_modeling_utils.py:ModelUtilsTest:test_model_from_pretrained_hub_subfolder... | ['tests/utils/test_modeling_utils.py:ModelUtilsTest:test_warning_for_beta_gamma_parameters'] | null | python -m pytest /testbed/tests/utils/test_modeling_utils.py --json-report --json-report-file=test_output.json -v | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | ["src/transformers/modeling_utils.py->module->function_definition:_load_state_dict_into_meta_model", "src/transformers/modeling_utils.py->module->function_definition:_load_state_dict_into_model"] |
langchain-ai/langchain | 676 | langchain-ai__langchain-676 | ['674', '674'] | 236ae93610a8538d3d0044fc29379c481acc6789 | diff --git a/langchain/vectorstores/faiss.py b/langchain/vectorstores/faiss.py
--- a/langchain/vectorstores/faiss.py
+++ b/langchain/vectorstores/faiss.py
@@ -14,6 +14,19 @@
from langchain.vectorstores.utils import maximal_marginal_relevance
+def dependable_faiss_import() -> Any:
+ """Import faiss if available,... | diff --git a/tests/integration_tests/vectorstores/test_faiss.py b/tests/integration_tests/vectorstores/test_faiss.py
--- a/tests/integration_tests/vectorstores/test_faiss.py
+++ b/tests/integration_tests/vectorstores/test_faiss.py
@@ -1,4 +1,5 @@
"""Test FAISS functionality."""
+import tempfile
from typing import Lis... | test_faiss_with_metadatas: key mismatch in assert
https://github.com/hwchase17/langchain/blob/236ae93610a8538d3d0044fc29379c481acc6789/tests/integration_tests/vectorstores/test_faiss.py#L54
This test will fail because `FAISS.from_texts` will assign uuid4s as keys in its docstore, while `expected_docstore` has strin... | 2023-01-21 16:51:48+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN apt-get update && apt-get install -y gcc
RUN pip install --no-cache-dir poetry pytest==7.3.1 pytest-mock requests faiss-cpu wikipedia
RUN poetry config virtua... | ['tests/integration_tests/vectorstores/test_faiss.py:None:test_faiss_with_metadatas', 'tests/integration_tests/vectorstores/test_faiss.py:None:test_faiss_search_not_found', 'tests/integration_tests/vectorstores/test_faiss.py:None:test_faiss_add_texts_not_supported', 'tests/integration_tests/vectorstores/test_faiss.py:N... | ['tests/integration_tests/vectorstores/test_faiss.py:None:test_faiss_local_save_load'] | null | pytest -v /testbed/tests/integration_tests/vectorstores/test_faiss.py | Bug Fix | false | false | false | true | 4 | 1 | 5 | false | false | ["langchain/vectorstores/faiss.py->module->class_definition:FAISS->function_definition:save_local", "langchain/vectorstores/faiss.py->module->class_definition:FAISS->function_definition:from_texts", "langchain/vectorstores/faiss.py->module->function_definition:dependable_faiss_import", "langchain/vectorstores/faiss.py-... | |
langchain-ai/langchain | 3,367 | langchain-ai__langchain-3367 | ['3365'] | 3a1bdce3f51e302d468807e980455d676c0f5fd6 | diff --git a/langchain/agents/mrkl/output_parser.py b/langchain/agents/mrkl/output_parser.py
--- a/langchain/agents/mrkl/output_parser.py
+++ b/langchain/agents/mrkl/output_parser.py
@@ -18,7 +18,9 @@ def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
{"output": text.split(FINAL_ANSWER_ACTI... | diff --git a/tests/unit_tests/agents/test_mrkl.py b/tests/unit_tests/agents/test_mrkl.py
--- a/tests/unit_tests/agents/test_mrkl.py
+++ b/tests/unit_tests/agents/test_mrkl.py
@@ -50,6 +50,27 @@ def test_get_action_and_input_newline() -> None:
assert action_input == "```\nimport unittest\n\nunittest.main()\n```"
... | Terminal tool gives `ValueError: Could not parse LLM output:` when there is a new line before the action string.
While playing with the LLaMA models I noticed that a parse exception was thrown even though the output looked good.
### Screenshot
 based on langchain.
A few months ago, I used it with fine-tuned (FT) models.
We added a token usage counter later, and I haven't tried fine-tuned models again since then.
Recently we ... | null | 2023-05-02 22:52:00+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
&& rm -rf /var/lib/apt... | ['tests/unit_tests/callbacks/test_openai_info.py:None:test_on_llm_end'] | ['tests/unit_tests/callbacks/test_openai_info.py:None:test_on_llm_end_custom_model'] | null | pytest /testbed/tests/unit_tests/callbacks/test_openai_info.py -v --json-report | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["langchain/callbacks/openai_info.py->module->function_definition:get_openai_model_cost_per_1k_tokens", "langchain/callbacks/openai_info.py->module->function_definition:get_openai_token_cost_for_model", "langchain/callbacks/openai_info.py->module->class_definition:OpenAICallbackHandler->function_definition:on_llm_end"] |
langchain-ai/langchain | 4,103 | langchain-ai__langchain-4103 | ['4087'] | 624554a43a1ab0113f3d79ebcbc9e726faecb339 | diff --git a/langchain/document_loaders/csv_loader.py b/langchain/document_loaders/csv_loader.py
--- a/langchain/document_loaders/csv_loader.py
+++ b/langchain/document_loaders/csv_loader.py
@@ -36,13 +36,7 @@ def __init__(
self.file_path = file_path
self.source_column = source_column
self.en... | diff --git a/tests/unit_tests/document_loader/test_csv_loader.py b/tests/unit_tests/document_loader/test_csv_loader.py
--- a/tests/unit_tests/document_loader/test_csv_loader.py
+++ b/tests/unit_tests/document_loader/test_csv_loader.py
@@ -1,4 +1,4 @@
-from pytest_mock import MockerFixture
+from pathlib import Path
f... | CSVLoader TypeError: "delimiter" must be string, not NoneType
it seems that the source code for initializing a CSVLoader doesn't include an appropriate `if` condition here:
```
def __init__(
self,
file_path: str,
source_column: Optional[str] = None,
csv_args: Optional[Dict] = None,... | Is there a work around for this?
I'm using it in a directory loader like this:
csv_directory_loader = DirectoryLoader(csv_folder_path, glob="**/*.csv", loader_cls=CSVLoader, show_progress=True)
and it gives me the same error.
> Is there a work around for this?
>
> I'm using it in a directory loader like th... | 2023-05-04 11:28:14+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
&& rm -rf /var/lib/apt... | [] | ['tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_valid_data', 'tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_single_row_file', 'tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_single_column_file', 'te... | null | pytest /testbed/tests/unit_tests/document_loader/test_csv_loader.py -v --json-report | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["langchain/document_loaders/csv_loader.py->module->class_definition:CSVLoader->function_definition:__init__"] |
langchain-ai/langchain | 4,186 | langchain-ai__langchain-4186 | ['4153'] | 7dcc698ebf4eb7331d25cec279f402918629472b | diff --git a/langchain/document_loaders/whatsapp_chat.py b/langchain/document_loaders/whatsapp_chat.py
--- a/langchain/document_loaders/whatsapp_chat.py
+++ b/langchain/document_loaders/whatsapp_chat.py
@@ -26,16 +26,31 @@ def load(self) -> List[Document]:
with open(p, encoding="utf8") as f:
lines... | diff --git a/tests/integration_tests/document_loaders/test_whatsapp_chat.py b/tests/integration_tests/document_loaders/test_whatsapp_chat.py
new file mode 100644
--- /dev/null
+++ b/tests/integration_tests/document_loaders/test_whatsapp_chat.py
@@ -0,0 +1,19 @@
+from pathlib import Path
+
+from langchain.document_loade... | WhatsAppChatLoader doesn't work on chats exported from WhatsApp
### System Info
langchain 0.0.158
Mac OS M1
Python 3.11
### Who can help?
@ey
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Mo... | it also doesn't work on the Ukrainian date format, e.g.
```
[05.05.23, 15:45:46] User: text
```
---
I used the following input formats:
```
[05.05.23, 15:48:11] James: Hi here
[11/8/21, 9:41:32 AM] User name: Message 123
1/23/23, 3:19 AM - User 2: Bye!
1/23/23, 3:22_AM - User 1: And let me know if anything ... | 2023-05-05 17:05:02+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
&& rm -rf /var/lib/apt... | [] | ['tests/integration_tests/document_loaders/test_whatsapp_chat.py:None:test_whatsapp_chat_loader'] | null | pytest /testbed/tests/integration_tests/document_loaders/test_whatsapp_chat.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/document_loaders/whatsapp_chat.py->module->class_definition:WhatsAppChatLoader->function_definition:load"] |
langchain-ai/langchain | 4,420 | langchain-ai__langchain-4420 | ['4153'] | f2150285a495fc530a7707218ea4980c17a170e5 | diff --git a/langchain/document_loaders/whatsapp_chat.py b/langchain/document_loaders/whatsapp_chat.py
--- a/langchain/document_loaders/whatsapp_chat.py
+++ b/langchain/document_loaders/whatsapp_chat.py
@@ -44,7 +44,7 @@ def load(self) -> List[Document]:
)
\]?
[\s-]*
- ... | diff --git a/tests/integration_tests/document_loaders/test_whatsapp_chat.py b/tests/integration_tests/document_loaders/test_whatsapp_chat.py
--- a/tests/integration_tests/document_loaders/test_whatsapp_chat.py
+++ b/tests/integration_tests/document_loaders/test_whatsapp_chat.py
@@ -16,4 +16,5 @@ def test_whatsapp_chat_... | WhatsAppChatLoader doesn't work on chats exported from WhatsApp
### System Info
langchain 0.0.158
Mac OS M1
Python 3.11
### Who can help?
@ey
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Mo... | it also doesn't work on Ukrainian date format, e.g.
```
[05.05.23, 15:45:46] User: text
```
---
I used the following input formats:
```
[05.05.23, 15:48:11] James: Hi here
[11/8/21, 9:41:32 AM] User name: Message 123
1/23/23, 3:19 AM - User 2: Bye!
1/23/23, 3:22_AM - User 1: And let me know if anything ... | 2023-05-09 21:23:12+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
&& rm -rf /var/lib/apt... | [] | ['tests/integration_tests/document_loaders/test_whatsapp_chat.py:None:test_whatsapp_chat_loader'] | null | pytest /testbed/tests/integration_tests/document_loaders/test_whatsapp_chat.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/document_loaders/whatsapp_chat.py->module->class_definition:WhatsAppChatLoader->function_definition:load"] |
langchain-ai/langchain | 4,579 | langchain-ai__langchain-4579 | ['4167'] | 372a5113ff1cce613f78d58c9e79e7c49aa60fac | diff --git a/langchain/document_loaders/web_base.py b/langchain/document_loaders/web_base.py
--- a/langchain/document_loaders/web_base.py
+++ b/langchain/document_loaders/web_base.py
@@ -68,17 +68,19 @@ def __init__(
"bs4 package not found, please install it with " "`pip install bs4`"
)
... | diff --git a/tests/unit_tests/document_loader/test_web_base.py b/tests/unit_tests/document_loader/test_web_base.py
new file mode 100644
--- /dev/null
+++ b/tests/unit_tests/document_loader/test_web_base.py
@@ -0,0 +1,10 @@
+from langchain.document_loaders.web_base import WebBaseLoader
+
+
+class TestWebBaseLoader:
+ ... | User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info
Hi Team,
When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent.
```
loader = WebBaseLoader(url, header_template={
'User-... | possible fix after setting session
```
self.session = requests.Session()
"""Default headers are set by session and spread them with custom headers when needed"""
if header_template is not None:
self.session.headers = {** self.session.headers, ** header_template}
``` | 2023-05-12 13:07:01+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
&& rm -rf /var/lib/apt... | [] | ['tests/unit_tests/document_loader/test_web_base.py:TestWebBaseLoader:test_respect_user_specified_user_agent'] | null | pytest /testbed/tests/unit_tests/document_loader/test_web_base.py -v --json-report | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["langchain/document_loaders/web_base.py->module->class_definition:WebBaseLoader->function_definition:__init__"] |
langchain-ai/langchain | 4,646 | langchain-ai__langchain-4646 | ['3709'] | 928cdd57a4531e606f7ca7e34c0b96736ffcce49 | diff --git a/langchain/output_parsers/pydantic.py b/langchain/output_parsers/pydantic.py
--- a/langchain/output_parsers/pydantic.py
+++ b/langchain/output_parsers/pydantic.py
@@ -22,7 +22,7 @@ def parse(self, text: str) -> T:
json_str = ""
if match:
json_str = match.group()
- ... | diff --git a/tests/unit_tests/output_parsers/test_pydantic_parser.py b/tests/unit_tests/output_parsers/test_pydantic_parser.py
--- a/tests/unit_tests/output_parsers/test_pydantic_parser.py
+++ b/tests/unit_tests/output_parsers/test_pydantic_parser.py
@@ -21,6 +21,7 @@ class TestModel(BaseModel):
additional_fields:... | PydanticOutputParser has a high chance of failing when the completion contains a newline
## Context
When the completion is of a longer format such as an email, the text will likely contain the newline character `\n`.
If it is not properly escaped like `\\n`, parsing will fail when using PydanticOutputParser as `json.loads` does n... | null | 2023-05-14 01:54:58+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
&& rm -rf /var/lib/apt... | ['tests/unit_tests/output_parsers/test_pydantic_parser.py:None:test_pydantic_output_parser_fail'] | ['tests/unit_tests/output_parsers/test_pydantic_parser.py:None:test_pydantic_output_parser'] | null | pytest /testbed/tests/unit_tests/output_parsers/test_pydantic_parser.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/output_parsers/pydantic.py->module->class_definition:PydanticOutputParser->function_definition:parse"] |
langchain-ai/langchain | 5,432 | langchain-ai__langchain-5432 | ['5423'] | ee57054d0596bf3176c73db64ad38f82e8e6f9a6 | diff --git a/langchain/agents/mrkl/output_parser.py b/langchain/agents/mrkl/output_parser.py
--- a/langchain/agents/mrkl/output_parser.py
+++ b/langchain/agents/mrkl/output_parser.py
@@ -44,7 +44,13 @@ def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
raise OutputParserException(f"Could no... | diff --git a/tests/unit_tests/agents/test_mrkl.py b/tests/unit_tests/agents/test_mrkl.py
--- a/tests/unit_tests/agents/test_mrkl.py
+++ b/tests/unit_tests/agents/test_mrkl.py
@@ -71,6 +71,23 @@ def test_get_action_and_input_newline_after_keyword() -> None:
assert action_input == "ls -l ~/.bashrc.d/\n"
+def tes... | SQLDatabaseToolkit doesn't work well with PostgreSQL; it will truncate the last double quotation marks in the SQL
### System Info
Langchain: 0.0.184
Python: 3.10.9
Platform: Windows 10 with Jupyter lab
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modifi... | Could you include the full prefix and query you're using to generate this error, please? I'm having a hard time recreating the issue locally. 🙇 | 2023-05-30 10:43:04+00:00 | Python | FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht... | ['tests/unit_tests/agents/test_mrkl.py:None:test_get_final_answer_multiline', 'tests/unit_tests/agents/test_mrkl.py:None:test_bad_action_input_line', 'tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input', 'tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input_newline', 'tests/unit_tests/ag... | ['tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input_sql_query'] | null | poetry run pytest /testbed/tests/unit_tests/agents/test_mrkl.py -v --json-report-file=test_results.json | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/agents/mrkl/output_parser.py->module->class_definition:MRKLOutputParser->function_definition:parse"] |
langchain-ai/langchain | 5,450 | langchain-ai__langchain-5450 | ['3605'] | 64b4165c8d9b8374295d4629ef57d4d58e9af7c8 | diff --git a/langchain/embeddings/huggingface.py b/langchain/embeddings/huggingface.py
--- a/langchain/embeddings/huggingface.py
+++ b/langchain/embeddings/huggingface.py
@@ -25,7 +25,12 @@ class HuggingFaceEmbeddings(BaseModel, Embeddings):
model_name = "sentence-transformers/all-mpnet-base-v2"
... | diff --git a/tests/integration_tests/embeddings/test_huggingface.py b/tests/integration_tests/embeddings/test_huggingface.py
--- a/tests/integration_tests/embeddings/test_huggingface.py
+++ b/tests/integration_tests/embeddings/test_huggingface.py
@@ -26,7 +26,8 @@ def test_huggingface_embedding_query() -> None:
def te... | Embeddings normalization and similarity metric
I am new to using Langchain and attempting to make it work with a locally running LLM (Alpaca) and embeddings model (Sentence Transformer). When configuring the sentence transformer model with `HuggingFaceEmbeddings`, no arguments can be passed to the encode method of the m... | null | 2023-05-30 16:11:31+00:00 | Python | FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht... | ['tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_instructor_embedding_documents', 'tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_embedding_documents', 'tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_embedding_query', 'tests/integ... | ['tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_instructor_embedding_normalize'] | null | poetry run pytest /testbed/tests/integration_tests/embeddings/test_huggingface.py -v --json-report-file=test_results.json | Feature | false | false | false | true | 2 | 2 | 4 | false | false | ["langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceInstructEmbeddings->function_definition:embed_documents", "langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceEmbeddings", "langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceInstructEmbeddings", "lang... |
langchain-ai/langchain | 5,584 | langchain-ai__langchain-5584 | ['5582'] | 4c572ffe959957b515528a9036b374f56cef027f | diff --git a/langchain/vectorstores/chroma.py b/langchain/vectorstores/chroma.py
--- a/langchain/vectorstores/chroma.py
+++ b/langchain/vectorstores/chroma.py
@@ -356,11 +356,11 @@ def update_document(self, document_id: str, document: Document) -> None:
raise ValueError(
"For update, you m... | diff --git a/tests/integration_tests/vectorstores/test_chroma.py b/tests/integration_tests/vectorstores/test_chroma.py
--- a/tests/integration_tests/vectorstores/test_chroma.py
+++ b/tests/integration_tests/vectorstores/test_chroma.py
@@ -3,7 +3,10 @@
from langchain.docstore.document import Document
from langchain.... | Chroma.update_document bug
### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding, not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/l... | null | 2023-06-01 23:21:18+00:00 | Python | FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht... | ['tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_with_persistence', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_with_include_parameter', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_async', 'tests/integration_tests/vectorstores/test_chroma.py:None... | ['tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_update_document', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma'] | null | poetry run pytest /testbed/tests/integration_tests/vectorstores/test_chroma.py -v --json-report-file=test_results.json | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/vectorstores/chroma.py->module->class_definition:Chroma->function_definition:update_document"] |
langchain-ai/langchain | 5,609 | langchain-ai__langchain-5609 | ['5601'] | 28d6277396013a16613008647c312bbd6c4623cc | diff --git a/langchain/agents/chat/output_parser.py b/langchain/agents/chat/output_parser.py
--- a/langchain/agents/chat/output_parser.py
+++ b/langchain/agents/chat/output_parser.py
@@ -13,17 +13,24 @@ def get_format_instructions(self) -> str:
return FORMAT_INSTRUCTIONS
def parse(self, text: str) -> Un... | diff --git a/tests/unit_tests/agents/test_mrkl.py b/tests/unit_tests/agents/test_mrkl.py
--- a/tests/unit_tests/agents/test_mrkl.py
+++ b/tests/unit_tests/agents/test_mrkl.py
@@ -90,14 +90,7 @@ def test_get_action_and_input_sql_query() -> None:
def test_get_final_answer() -> None:
"""Test getting final answer."... | OutputParsers currently allow the model to hallucinate the output of an action
### System Info
The MRKL and chat output parsers currently will allow an LLM response to generate a valid action, as well as hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/l... | null | 2023-06-02 10:24:47+00:00 | Python | FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht... | ['tests/unit_tests/agents/test_mrkl.py:None:test_get_final_answer_multiline', 'tests/unit_tests/agents/test_mrkl.py:None:test_bad_action_input_line', 'tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input_sql_query', 'tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input_newline', 'tests/uni... | ['tests/unit_tests/agents/test_mrkl.py:None:test_valid_action_and_answer_raises_exception'] | null | poetry run pytest /testbed/tests/unit_tests/agents/test_mrkl.py -v --json-report-file=test_results.json | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["langchain/agents/mrkl/output_parser.py->module->class_definition:MRKLOutputParser->function_definition:parse", "langchain/agents/chat/output_parser.py->module->class_definition:ChatOutputParser->function_definition:parse"] |
langchain-ai/langchain | 5,625 | langchain-ai__langchain-5625 | ['5614'] | d0d89d39efb5f292f72e70973f3b70c4ca095047 | diff --git a/langchain/text_splitter.py b/langchain/text_splitter.py
--- a/langchain/text_splitter.py
+++ b/langchain/text_splitter.py
@@ -30,7 +30,9 @@
TS = TypeVar("TS", bound="TextSplitter")
-def _split_text(text: str, separator: str, keep_separator: bool) -> List[str]:
+def _split_text_with_regex(
+ text: s... | diff --git a/tests/unit_tests/test_text_splitter.py b/tests/unit_tests/test_text_splitter.py
--- a/tests/unit_tests/test_text_splitter.py
+++ b/tests/unit_tests/test_text_splitter.py
@@ -275,6 +275,12 @@ def test_rst_code_splitter() -> None:
- Item 1
- Item 2
- Item 3
+
+Comment
+*******
+Not a comment
+
+.. This is... | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2)
### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] ... | null | 2023-06-02 18:06:25+00:00 | Python | FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht... | ['tests/unit_tests/test_text_splitter.py:None:test_merge_splits', 'tests/unit_tests/test_text_splitter.py:None:test_swift_code_splitter', 'tests/unit_tests/test_text_splitter.py:None:test_iterative_text_splitter', 'tests/unit_tests/test_text_splitter.py:None:test_character_text_splitter_short_words_first', 'tests/unit_... | ['tests/unit_tests/test_text_splitter.py:None:test_rst_code_splitter'] | null | poetry run pytest /testbed/tests/unit_tests/test_text_splitter.py -v --json-report-file=test_results.json | Bug Fix | false | true | false | false | 5 | 0 | 5 | false | false | ["langchain/text_splitter.py->module->function_definition:_split_text", "langchain/text_splitter.py->module->class_definition:RecursiveCharacterTextSplitter->function_definition:_split_text", "langchain/text_splitter.py->module->function_definition:_split_text_with_regex", "langchain/text_splitter.py->module->class_def... |
langchain-ai/langchain | 6,456 | langchain-ai__langchain-6456 | ['6431'] | 1300a4bc8cf5ebd30c77668473e178bfb24b6679 | diff --git a/langchain/prompts/chat.py b/langchain/prompts/chat.py
--- a/langchain/prompts/chat.py
+++ b/langchain/prompts/chat.py
@@ -168,6 +168,8 @@ def validate_input_variables(cls, values: dict) -> dict:
for message in messages:
if isinstance(message, BaseMessagePromptTemplate):
... | diff --git a/tests/unit_tests/prompts/test_chat.py b/tests/unit_tests/prompts/test_chat.py
--- a/tests/unit_tests/prompts/test_chat.py
+++ b/tests/unit_tests/prompts/test_chat.py
@@ -162,3 +162,31 @@ def test_infer_variables() -> None:
messages = [HumanMessagePromptTemplate.from_template("{foo}")]
prompt = Ch... | ChatPromptTemplate with partial variables is giving a validation error
### System Info
langchain-0.0.205, python3.10
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models... | null | 2023-06-20 01:13:27+00:00 | Python | FROM python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
gcc \
python3-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install poetry and add to PATH
RUN curl -sS... | ['tests/unit_tests/prompts/test_chat.py:None:test_create_chat_prompt_template_from_template', 'tests/unit_tests/prompts/test_chat.py:None:test_chat_invalid_input_variables_extra', 'tests/unit_tests/prompts/test_chat.py:None:test_infer_variables', 'tests/unit_tests/prompts/test_chat.py:None:test_chat_prompt_template', '... | ['tests/unit_tests/prompts/test_chat.py:None:test_chat_valid_with_partial_variables', 'tests/unit_tests/prompts/test_chat.py:None:test_chat_valid_infer_variables'] | null | poetry run pytest /testbed/tests/unit_tests/prompts/test_chat.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/prompts/chat.py->module->class_definition:ChatPromptTemplate->function_definition:validate_input_variables"] |
langchain-ai/langchain | 6,483 | langchain-ai__langchain-6483 | ['5456'] | 10adec5f1bc1babbd7f5cbea8290d8b1e62554ba | diff --git a/langchain/tools/base.py b/langchain/tools/base.py
--- a/langchain/tools/base.py
+++ b/langchain/tools/base.py
@@ -82,7 +82,7 @@ def _get_filtered_args(
"""Get the arguments from a function's signature."""
schema = inferred_model.schema()["properties"]
valid_keys = signature(func).parameters
... | diff --git a/tests/unit_tests/tools/test_base.py b/tests/unit_tests/tools/test_base.py
--- a/tests/unit_tests/tools/test_base.py
+++ b/tests/unit_tests/tools/test_base.py
@@ -19,6 +19,7 @@
StructuredTool,
ToolException,
)
+from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler
... | Tools: Inconsistent callbacks/run_manager parameter
### System Info
MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"
### Who can help?
@agola11 @vowelparrot
### Related Components
- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing
### Reproduction
I want to use the... | I will gladly help fix this issue :)
Thanks for raising! I can see how it is confusing that subclasses of the `BaseTool` expect a `run_manager` argument whereas instantiations of the `Tool` or `StructuredTool` using the `{Tool|StructuredTool}.from_function()` expect a `callback` argument.
We won't break backward... | 2023-06-20 15:53:03+00:00 | Python | FROM python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
gcc \
python3-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install poetry and add to PATH
RUN curl -sS... | ['tests/unit_tests/tools/test_base.py:None:test_tool_partial_function_args_schema', 'tests/unit_tests/tools/test_base.py:None:test_async_exception_handling_non_tool_exception', 'tests/unit_tests/tools/test_base.py:None:test_structured_tool_from_function', 'tests/unit_tests/tools/test_base.py:None:test_exception_handlin... | ['tests/unit_tests/tools/test_base.py:None:test_structured_tool_from_function_with_run_manager'] | null | pytest /testbed/tests/unit_tests/tools/test_base.py -v --json-report --json-report-file=report.json --override-ini=addopts= | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["langchain/tools/base.py->module->function_definition:_get_filtered_args", "langchain/tools/base.py->module->function_definition:create_schema_from_function"] |
langchain-ai/langchain | 6,765 | langchain-ai__langchain-6765 | ['6756'] | ba622764cb7ccf4667878289f959857348ef8c19 | diff --git a/langchain/agents/initialize.py b/langchain/agents/initialize.py
--- a/langchain/agents/initialize.py
+++ b/langchain/agents/initialize.py
@@ -51,7 +51,7 @@ def initialize_agent(
f"Got unknown agent type: {agent}. "
f"Valid types are: {AGENT_TO_CLASS.keys()}."
... | diff --git a/tests/unit_tests/agents/test_initialize.py b/tests/unit_tests/agents/test_initialize.py
new file mode 100644
--- /dev/null
+++ b/tests/unit_tests/agents/test_initialize.py
@@ -0,0 +1,23 @@
+"""Test the initialize module."""
+
+from langchain.agents.agent_types import AgentType
+from langchain.agents.initia... | Recent tags change causes AttributeError: 'str' object has no attribute 'value' on initialize_agent call
### System Info
- Langchain: 0.0.215
- Platform: ubuntu
- Python 3.10.12
### Who can help?
@vowelparrot
https://github.com/hwchase17/langchain/blob/d84a3bcf7ab3edf8fe1d49083e066d51c9b5f621/langchain/agents/... | Yes, I also got this error. Apparently we have to use AgentType.ZERO_SHOT_REACT_DESCRIPTION; the old way of using just strings has been changed. At the very least they could have shown an exception error instead of this jargon.
Agree! The same for me!
Will land a fix. Thanks for raising this! | 2023-06-26 15:12:34+00:00 | Python | FROM python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
gcc \
python3-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install poetry and add to PATH
RUN curl -sS... | [] | ['tests/unit_tests/agents/test_initialize.py:None:test_initialize_agent_with_str_agent_type'] | null | pytest /testbed/tests/unit_tests/agents/test_initialize.py -v --json-report --json-report-file=report.json --override-ini=addopts= | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/agents/initialize.py->module->function_definition:initialize_agent"] |
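The one-line fix in `initialize_agent` concerns how a plain-string agent type compares against the `AGENT_TO_CLASS` keys, which are enum members. A hedged sketch of string-to-enum coercion that tolerates both call styles (simplified, not the exact shipped change):

```python
from enum import Enum

class AgentType(str, Enum):  # str-valued, like langchain's AgentType
    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"

def coerce_agent_type(agent):
    """Accept an AgentType member or its raw string value."""
    if not isinstance(agent, AgentType):
        agent = AgentType(agent)  # raises ValueError for unknown strings
    return agent

legacy = coerce_agent_type("zero-shot-react-description")
assert legacy is AgentType.ZERO_SHOT_REACT_DESCRIPTION
```

Because `AgentType` subclasses `str`, looking the value up via `AgentType(...)` converts legacy string callers while leaving enum callers untouched, and unknown strings fail loudly instead of producing `'str' object has no attribute 'value'`.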
langchain-ai/langchain | 7,653 | langchain-ai__langchain-7653 | ['7652'] | a673a51efa3e03aaa7c8c7e0004dc5ff9c536f2e | diff --git a/langchain/cache.py b/langchain/cache.py
--- a/langchain/cache.py
+++ b/langchain/cache.py
@@ -180,6 +180,7 @@ def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
with Session(self.engine) as session:
session.query(self.cache_schema).delete()
+ session.commit... | diff --git a/tests/unit_tests/test_cache.py b/tests/unit_tests/test_cache.py
--- a/tests/unit_tests/test_cache.py
+++ b/tests/unit_tests/test_cache.py
@@ -139,6 +139,26 @@ def test_chat_model_caching_params() -> None:
)
+def test_llm_cache_clear() -> None:
+ prompt = "How are you?"
+ response = "Test... | SQLite LLM cache clear does not take effect
### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing LLM cache for SQL Alchemy based caches.
langchain.llm_cache.clear() does not clear the cache for SQLite LLM cache.
Reason: it doesn't commit the deletion datab... | null | 2023-07-13 12:40:16+00:00 | Python | FROM python:3.8.1-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies including PDF-related packages
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
poppler-utils \
python3-pip \
libgl1-mesa-... | ['tests/unit_tests/test_cache.py:None:test_chat_model_caching_params[InMemoryCache]', 'tests/unit_tests/test_cache.py:None:test_old_sqlite_llm_caching[InMemoryCache]', 'tests/unit_tests/test_cache.py:None:test_chat_model_caching[get_sqlite_cache]', 'tests/unit_tests/test_cache.py:None:test_chat_model_caching_params[get... | ['tests/unit_tests/test_cache.py:None:test_llm_cache_clear[get_sqlite_cache]'] | null | pytest /testbed/tests/unit_tests/test_cache.py -v --capture=no --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/cache.py->module->class_definition:SQLAlchemyCache->function_definition:clear"] |
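The patch here is a single added `session.commit()` after the bulk delete. The same commit discipline can be shown with stdlib `sqlite3` (the table name `full_llm_cache` echoes LangChain's schema, but this is only an illustration, not the real cache):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE full_llm_cache (prompt TEXT, response TEXT)")
conn.execute("INSERT INTO full_llm_cache VALUES ('How are you?', 'Test response')")
conn.commit()

# Deleting inside a transaction is not durable until committed; this is
# why SQLAlchemyCache.clear() needed session.commit() after .delete().
conn.execute("DELETE FROM full_llm_cache")
conn.commit()  # <- the missing step in the original clear()

rows = conn.execute("SELECT COUNT(*) FROM full_llm_cache").fetchone()[0]
print(rows)  # → 0
```

In the SQLAlchemy version, leaving the `with Session(...)` block without committing rolls the pending DELETE back, so the cache appeared untouched.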
langchain-ai/langchain | 14,350 | langchain-ai__langchain-14350 | ['14342'] | 7bdfc43766e72e4b67512bd85119b1c797035b86 | diff --git a/docs/docs/modules/data_connection/retrievers/multi_vector.ipynb b/docs/docs/modules/data_connection/retrievers/multi_vector.ipynb
--- a/docs/docs/modules/data_connection/retrievers/multi_vector.ipynb
+++ b/docs/docs/modules/data_connection/retrievers/multi_vector.ipynb
@@ -143,7 +143,7 @@
{
"dat... | diff --git a/libs/langchain/tests/unit_tests/indexes/test_indexing.py b/libs/langchain/tests/unit_tests/indexes/test_indexing.py
--- a/libs/langchain/tests/unit_tests/indexes/test_indexing.py
+++ b/libs/langchain/tests/unit_tests/indexes/test_indexing.py
@@ -80,7 +80,7 @@ def add_documents( # type: ignore
*,
... | Error:
### System Info
I try this example code
```
from langchain.retrievers import ParentDocumentRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
# This text splitter is used to create the parent documents
parent_splitter = Recursive... | _🤖_
<!-- Greeting -->
Hello @dtthanh1971! I'm [Dosu](https://dosu.dev), a bot here to assist you. I can help with solving bugs, answering questions, and guiding you to become a contributor. While we wait for a human maintainer, feel free to ask me anything. I'm here to help you!
<!-- Answer -->
Based on the error mes... | 2023-12-06 16:47:14+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
g++ \
git \
postgresql-server-dev-all \
&& rm -rf /var/lib/apt... | ['tests/unit_tests/indexes/test_indexing.py:None:test_indexing_with_no_docs', 'tests/unit_tests/indexes/test_indexing.py:None:test_compatible_vectorstore_documentation', 'tests/unit_tests/indexes/test_indexing.py:None:test_index_simple_delete_full', 'tests/unit_tests/indexes/test_indexing.py:None:test_incremental_fails... | ['libs/langchain/tests/unit_tests/retrievers/test_multi_vector.py:None:test_multi_vector_retriever_initialization', 'libs/langchain/tests/unit_tests/retrievers/test_parent_document.py:None:test_parent_document_retriever_initialization'] | null | pytest /testbed/libs/langchain/tests/unit_tests/indexes/test_indexing.py /testbed/libs/langchain/tests/unit_tests/retrievers/test_multi_vector.py /testbed/libs/langchain/tests/unit_tests/retrievers/test_parent_document.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 2 | 3 | false | false | ["libs/langchain/langchain/retrievers/multi_vector.py->module->class_definition:MultiVectorRetriever", "libs/langchain/langchain/retrievers/multi_vector.py->module->class_definition:MultiVectorRetriever->function_definition:__init__", "libs/langchain/langchain/retrievers/multi_vector.py->module->class_definition:MultiV... |
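Behind `ParentDocumentRetriever`/`MultiVectorRetriever` sits simple parent/child bookkeeping: small chunks are embedded for search, each pointing back at a parent document held in a key-value docstore. A toy sketch of that linkage (plain dicts stand in for the vector store and `InMemoryStore`):

```python
import uuid

docstore = {}     # stands in for InMemoryStore
child_index = []  # stands in for the vector store

parent = "a long parent document that gets split into small chunks for retrieval"
parent_id = str(uuid.uuid4())
docstore[parent_id] = parent

# Index child chunks, each tagged with the id of its parent document.
for i in range(0, len(parent), 16):
    child_index.append({"page_content": parent[i:i + 16], "doc_id": parent_id})

# Retrieval: similarity search finds a child; the parent is returned.
hit = child_index[0]
retrieved = docstore[hit["doc_id"]]
assert retrieved == parent
```

The validation error in this row came from the retriever's constructor arguments for that docstore layer, which is exactly the wiring the patch reworks.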
langchain-ai/langchain | 19,331 | langchain-ai__langchain-19331 | ['19276'] | 5fc7bb01e9d6398452d0a7b4a50ce234408ca99c | diff --git a/libs/core/langchain_core/language_models/llms.py b/libs/core/langchain_core/language_models/llms.py
--- a/libs/core/langchain_core/language_models/llms.py
+++ b/libs/core/langchain_core/language_models/llms.py
@@ -115,17 +115,41 @@ def _before_sleep(retry_state: RetryCallState) -> None:
)
+def _re... | diff --git a/libs/core/tests/unit_tests/language_models/llms/test_cache.py b/libs/core/tests/unit_tests/language_models/llms/test_cache.py
new file mode 100644
--- /dev/null
+++ b/libs/core/tests/unit_tests/language_models/llms/test_cache.py
@@ -0,0 +1,105 @@
+from typing import Any, Dict, Optional, Tuple
+
+from langc... | langchain-core: Allow passing local cache to language models
### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
# Goal
Allow instantiating language models with specific caches provided as an init parameter. This will b... | I want to try.
Is this test case runnable? If it works fine, what exactly is this issue?
https://github.com/langchain-ai/langchain/blob/40f846e65da37a1c00d72da9ea64ebb0f295b016/libs/core/tests/unit_tests/language_models/chat_models/test_cache.py#L43 | 2024-03-20 11:56:35+00:00 | Python | FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
python3 \
python3-dev \
python3-pip \
software-prope... | [] | ['libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_local_cache_generate_async', 'libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_local_cache_generate_sync', 'libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_no_cache_generate_sync', 'libs/core/tests/u... | null | python3 -m pytest /testbed/libs/core/tests/unit_tests/language_models/llms/test_cache.py -v --override-ini=addopts= --junitxml=test-results.xml | Feature | false | true | false | false | 7 | 0 | 7 | false | false | ["libs/core/langchain_core/language_models/llms.py->module->function_definition:aget_prompts", "libs/core/langchain_core/language_models/llms.py->module->class_definition:BaseLLM->function_definition:agenerate", "libs/core/langchain_core/language_models/llms.py->module->function_definition:get_prompts", "libs/core/lang... |
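The goal stated in the issue, a cache passed as an init parameter, implies a small resolution rule between the per-model cache and the process-wide one. A hedged sketch of that rule (names are illustrative, not langchain-core's API):

```python
# An explicit cache instance passed at init wins; True defers to the
# global cache; False disables caching for that model.
GLOBAL_CACHE = {"global": True}  # stands in for the process-wide llm_cache

def resolve_cache(model_cache):
    if model_cache is True or model_cache is None:
        return GLOBAL_CACHE          # fall back to the global cache
    if model_cache is False:
        return None                  # caching disabled for this model
    return model_cache               # a per-model cache instance

local_cache = {}
assert resolve_cache(local_cache) is local_cache
assert resolve_cache(False) is None
assert resolve_cache(True) is GLOBAL_CACHE
```

The seven modified functions in `llms.py` are essentially this decision threaded through the sync and async `generate` paths.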
langchain-ai/langchain | 19,717 | langchain-ai__langchain-19717 | ['19646'] | 239dd7c0c03d0430c55c2c41cf56cf0dd537199b | diff --git a/libs/core/langchain_core/output_parsers/json.py b/libs/core/langchain_core/output_parsers/json.py
--- a/libs/core/langchain_core/output_parsers/json.py
+++ b/libs/core/langchain_core/output_parsers/json.py
@@ -137,16 +137,24 @@ def parse_json_markdown(
Returns:
The parsed JSON object as a Pyt... | diff --git a/libs/core/tests/unit_tests/output_parsers/test_json.py b/libs/core/tests/unit_tests/output_parsers/test_json.py
--- a/libs/core/tests/unit_tests/output_parsers/test_json.py
+++ b/libs/core/tests/unit_tests/output_parsers/test_json.py
@@ -69,6 +69,10 @@
}
```"""
+JSON_WITH_PART_MARKDOWN_CODE_BLOCK = """... | JsonOutputParser fails if a json value contains ``` inside it.
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure th... | Let me see. | 2024-03-28 15:50:23+00:00 | Python | FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
python3 \
python3-dev \
python3-pip \
software-prope... | ['tests/unit_tests/output_parsers/test_json.py:None:test_parse_partial_json[json_strings6]', 'tests/unit_tests/output_parsers/test_json.py:None:test_partial_text_json_output_parser', 'tests/unit_tests/output_parsers/test_json.py:None:test_parse_json_with_code_blocks_and_newlines', 'tests/unit_tests/output_parsers/test_... | ['libs/core/tests/unit_tests/output_parsers/test_json.py:None:test_parse_json_with_part_code_blocks'] | null | python3 -m pytest /testbed/libs/core/tests/unit_tests/output_parsers/test_json.py -v --override-ini=addopts= --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["libs/core/langchain_core/output_parsers/json.py->module->function_definition:_parse_json", "libs/core/langchain_core/output_parsers/json.py->module->function_definition:parse_json_markdown"] |
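The failure mode in this row is a lazy fence match: if the extractor stops at the first closing triple-backtick, a backtick fence *inside* a JSON string value truncates the payload. A lenient sketch that matches the fenced block greedily instead (not necessarily the exact shipped regex):

```python
import json
import re

FENCE = "`" * 3  # avoids writing a literal triple-backtick in this snippet

def parse_json_markdown(text):
    """Grab the fenced block greedily, so an inner backtick fence inside
    a JSON string value cannot terminate the block early."""
    pattern = FENCE + r"(?:json)?\s*(.*)" + FENCE
    match = re.search(pattern, text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload.strip())

doc = FENCE + 'json\n{"action": "Final Answer", "action_input": "x ' + FENCE + ' y"}\n' + FENCE
result = parse_json_markdown(doc)
print(result["action_input"])  # prints the value with the inner backtick fence intact
```

Greedy `.*` under `re.DOTALL` binds the closing delimiter to the *last* fence in the text, so the inner fence survives as ordinary string content.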
langchain-ai/langchain | 20,064 | langchain-ai__langchain-20064 | ['11408'] | ebd24bb5d64078d7567eca4da0297260eb33dc31 | diff --git a/libs/langchain/langchain/output_parsers/boolean.py b/libs/langchain/langchain/output_parsers/boolean.py
--- a/libs/langchain/langchain/output_parsers/boolean.py
+++ b/libs/langchain/langchain/output_parsers/boolean.py
@@ -1,3 +1,5 @@
+import re
+
from langchain_core.output_parsers import BaseOutputParser
... | diff --git a/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py b/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
--- a/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
+++ b/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
@@ -1,3 +1,5 ... | BooleanOutputParser expected output value error
### System Info
Hi, I am using LLMChainFilter.from_llm(llm) but while running, I am getting this error:
ValueError: BooleanOutputParser expected output value to either be YES or NO. Received Yes, the context is relevant to the question as it provides information about... | _🤖_
<!-- Answer -->
Hello,
The error you're encountering is due to the output of your language model (LLM) not matching the expected values "YES" or "NO" that the `BooleanOutputParser` class is designed to handle.
The `BooleanOutputParser` class in the LangChain codebase is used to parse the output of a language mo... | 2024-04-05 12:56:34+00:00 | Python | FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
python3 \
python3-dev \
python3-pip \
software-prope... | [] | ['libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py:None:test_boolean_output_parser_parse'] | null | python3 -m pytest /testbed/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py -v --override-ini=addopts= | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["libs/langchain/langchain/output_parsers/boolean.py->module->class_definition:BooleanOutputParser->function_definition:parse"] |
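The patch adds `import re` to `boolean.py`, pointing at the nature of the fix: instead of requiring the whole reply to equal `YES`/`NO`, scan for the token. A lenient sketch (not necessarily identical to the shipped parser):

```python
import re

TRUE_VAL, FALSE_VAL = "YES", "NO"

def parse_boolean(text):
    """Accept the first standalone yes/no token, case-insensitively,
    anywhere in the model's reply."""
    match = re.search(rf"\b({TRUE_VAL}|{FALSE_VAL})\b", text, re.IGNORECASE)
    if not match:
        raise ValueError(f"Expected {TRUE_VAL} or {FALSE_VAL}, got: {text!r}")
    return match.group(1).upper() == TRUE_VAL

print(parse_boolean("Yes, the context is relevant to the question."))  # → True
```

This handles exactly the failing output from the issue, where the model answered "Yes, the context is relevant..." instead of a bare `YES`.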
langchain-ai/langchain | 21,201 | langchain-ai__langchain-21201 | ['21196', '21196'] | df49404794d8f78c50020942497220154ec205ce | diff --git a/libs/partners/mistralai/langchain_mistralai/chat_models.py b/libs/partners/mistralai/langchain_mistralai/chat_models.py
--- a/libs/partners/mistralai/langchain_mistralai/chat_models.py
+++ b/libs/partners/mistralai/langchain_mistralai/chat_models.py
@@ -259,6 +259,7 @@ def _convert_message_to_mistral_chat_... | diff --git a/libs/partners/mistralai/tests/unit_tests/test_chat_models.py b/libs/partners/mistralai/tests/unit_tests/test_chat_models.py
--- a/libs/partners/mistralai/tests/unit_tests/test_chat_models.py
+++ b/libs/partners/mistralai/tests/unit_tests/test_chat_models.py
@@ -55,7 +55,7 @@ def test_mistralai_initializati... | ChatMistralAI with chat history : Assistant message must have either content or tool_calls error
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn'... | 2024-05-02 15:28:34+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install --no-cache-dir -e /testbed/libs/core
RUN pip install --no-cache-dir -e /testbed/libs/partners/mistralai
RUN pip install pytest pytest-asyncio | ['libs/partners/mistralai/tests/unit_tests/test_chat_models.py:None:test_stream_with_callback', 'libs/partners/mistralai/tests/unit_tests/test_chat_models.py:None:test_mistralai_initialization', 'libs/partners/mistralai/tests/unit_tests/test_chat_models.py:None:test_convert_message_to_mistral_chat_message[message1-expe... | ['libs/partners/mistralai/tests/unit_tests/test_chat_models.py:None:test_convert_message_to_mistral_chat_message[message2-expected2]'] | null | pytest /testbed/libs/partners/mistralai/tests/unit_tests/test_chat_models.py -v | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["libs/partners/mistralai/langchain_mistralai/chat_models.py->module->function_definition:_convert_message_to_mistral_chat_message"] | |
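The API error in this row encodes an invariant on converted messages: an assistant entry must carry non-empty `content` or a `tool_calls` list. A hedged sketch of that guard (field names follow the OpenAI-style wire format; this is not the shipped converter):

```python
def check_assistant_message(msg: dict) -> dict:
    """Reject assistant messages that have neither content nor tool_calls."""
    if msg.get("role") == "assistant" and not (msg.get("content") or msg.get("tool_calls")):
        raise ValueError("Assistant message must have either content or tool_calls")
    return msg

# An empty-content assistant message is fine as long as tool_calls exist.
check_assistant_message({"role": "assistant", "tool_calls": [{"id": "x"}], "content": ""})
```

The one-line patch to `_convert_message_to_mistral_chat_message` is what keeps replayed chat history from tripping this server-side check.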
yt-dlp/yt-dlp | 1,649 | yt-dlp__yt-dlp-1649 | ['3855'] | bfd973ece3369c593b5e82a88cc16de80088a73e | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -546,14 +546,14 @@ You can also fork the project on github and run your fork's [build workflow](.gi
error (default is 3), or "infinite"
--fragment-retries RETRIES Number of retries for a fragment (defaul... | diff --git a/test/test_downloader_http.py b/test/test_downloader_http.py
--- a/test/test_downloader_http.py
+++ b/test/test_downloader_http.py
@@ -95,8 +95,8 @@ def download(self, params, ep):
try_rm(encodeFilename(filename))
self.assertTrue(downloader.real_download(filename, {
'url': 'ht... | Printing download HTTP errors to STDERR
### Checklist
- [X] I'm reporting a feature request
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I'm running yt-dlp version **2022.05.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (speci... | They are not error messages, but only notes about the retry - hence why they are written to stdout. Instead of changing it to stderr, I can instead add the reason for error to the last line (which is written to stderr) like:
ERROR: Giving up after 10 fragment retries - HTTP Error 429: Too Many Requests
Would ... | 2021-11-13 09:51:02+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository content into the container
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install pytest
RUN pip instal... | [] | ['test/test_downloader_http.py:TestHttpFD:test_chunked'] | null | pytest /testbed/test/test_downloader_http.py -v --tb=short --junitxml=test-results.xml | Feature | false | false | false | true | 33 | 4 | 37 | false | false | ["yt_dlp/downloader/fragment.py->module->class_definition:FragmentFD->function_definition:report_retry_fragment", "yt_dlp/downloader/common.py->module->class_definition:FileDownloader->function_definition:wrap_file_access->function_definition:outer", "yt_dlp/downloader/fragment.py->module->class_definition:FragmentFD->... |
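The maintainer's proposal above, keep per-attempt retry notes on stdout but put the underlying error on the final stderr line, can be sketched as follows (function names are illustrative, not yt-dlp's internal API):

```python
import sys

def download_with_retries(fetch, retries=3):
    """Retry notes go to stdout; the final give-up line carries the
    underlying error and goes to stderr."""
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except OSError as err:
            last_err = err
            print(f"[download] Got error: {err}. Retrying ({attempt}/{retries})...")
    print(f"ERROR: Giving up after {retries} fragment retries - {last_err}",
          file=sys.stderr)
    return None

def always_429():
    raise OSError("HTTP Error 429: Too Many Requests")

download_with_retries(always_429, retries=2)
```

This matches the example line from the discussion, `ERROR: Giving up after 10 fragment retries - HTTP Error 429: Too Many Requests`, without reclassifying the intermediate retry notes as errors.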
yt-dlp/yt-dlp | 3,435 | yt-dlp__yt-dlp-3435 | ['3333'] | afac4caa7db30804bebac33e53c3cb0237958224 | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -840,6 +840,15 @@ You can also fork the project on github and run your fork's [build workflow](.gi
interactively
--ap-list-mso List all supported multiple-system
... | diff --git a/test/test_http.py b/test/test_http.py
--- a/test/test_http.py
+++ b/test/test_http.py
@@ -85,6 +85,50 @@ def test_nocheckcertificate(self):
self.assertEqual(r['entries'][0]['url'], 'https://127.0.0.1:%d/vid.mp4' % self.port)
+class TestClientCert(unittest.TestCase):
+ def setUp(self):
+ ... | add '--client-certificate some.pem' to authenticate a site user to the remote machine
### Checklist
- [X] I'm reporting a feature request
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I'm running yt-dlp version **2022.03.08.1** ([update instructions](https://g... | null | 2022-04-15 03:09:29+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository content into the container
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install pytest
RUN pip instal... | ['test/test_http.py:TestProxy:test_proxy_with_idn', 'test/test_http.py:TestProxy:test_proxy', 'test/test_http.py:TestHTTPS:test_nocheckcertificate'] | ['test/test_http.py:TestClientCert:test_certificate_nocombined_nopass', 'test/test_http.py:TestClientCert:test_certificate_combined_pass', 'test/test_http.py:TestClientCert:test_certificate_nocombined_pass', 'test/test_http.py:TestClientCert:test_certificate_combined_nopass'] | null | pytest /testbed/test/test_http.py -v --tb=short --junitxml=test-results/test-results.xml | Feature | false | false | false | true | 3 | 1 | 4 | false | false | ["yt_dlp/__init__.py->module->function_definition:parse_options", "yt_dlp/YoutubeDL.py->module->class_definition:YoutubeDL", "yt_dlp/utils.py->module->function_definition:make_HTTPS_handler", "yt_dlp/options.py->module->function_definition:create_parser"] |
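The client-certificate options this PR introduces map naturally onto the arguments of `ssl.SSLContext.load_cert_chain`. A sketch of that mapping (file names are hypothetical; this is not yt-dlp's internal wiring):

```python
import ssl

def client_certificate_ssl_args(certificate, private_key=None, password=None):
    """Translate the CLI options into load_cert_chain keyword arguments."""
    return {
        "certfile": certificate,   # --client-certificate
        "keyfile": private_key,    # --client-certificate-key (None for a combined PEM)
        "password": password,      # --client-certificate-password
    }

args = client_certificate_ssl_args("client.pem", password="hunter2")
ctx = ssl.create_default_context()
# In real use: ctx.load_cert_chain(**args) -- requires the files on disk.
print(args["keyfile"])  # → None
```

Leaving `keyfile` as `None` lets OpenSSL read the private key from the certificate file itself, which is why a single combined PEM works without the key option.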
yt-dlp/yt-dlp | 4,524 | yt-dlp__yt-dlp-4524 | ['4206', '4206'] | 565a4c594499eb4f2c218e12f8ad1cea3362aedd | diff --git a/yt_dlp/extractor/_extractors.py b/yt_dlp/extractor/_extractors.py
--- a/yt_dlp/extractor/_extractors.py
+++ b/yt_dlp/extractor/_extractors.py
@@ -1395,6 +1395,7 @@
RaiPlaySoundLiveIE,
RaiPlaySoundPlaylistIE,
RaiNewsIE,
+ RaiSudtirolIE,
RaiIE,
)
from .raywenderlich import (
diff --g... | diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -368,6 +368,7 @@ def test_unified_dates(self):
self.assertEqual(unified_strdate('2012/10/11 01:56:38 +0000'), '20121011')
self.assertEqual(unified_strdate('1968 12 10'), '19681210')
self.... | [rai+generic] [Errno 54] Connection reset by peer
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URL... | The `https` version doesn't seem to actually exist. Does it open in the browser for you?
Hm, it does not. In fact, it's still behaving strangely. If you try the -F parameter it shows an mp4; now when I try downloading it, it fails. But it used to work when I initially tested this. Looking at the log, for some reason, http downlo... | 2022-08-01 12:12:22+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_utils.py:TestUtil:test_remove_start', 'test/test_utils.py:TestUtil:test_sanitize_url', 'test/test_utils.py:TestUtil:test_float_or_none', 'test/test_utils.py:TestUtil:test_sanitize_ids', 'test/test_utils.py:TestUtil:test_get_elements_by_class', 'test/test_utils.py:TestUtil:test_determine_file_encoding', 'tes... | ['test/test_utils.py:TestUtil:test_unified_dates'] | null | pytest /testbed/test/test_utils.py -v --json-report | Feature | false | false | false | true | 1 | 1 | 2 | false | false | ["yt_dlp/extractor/rai.py->module->class_definition:RaiSudtirolIE", "yt_dlp/extractor/rai.py->module->class_definition:RaiSudtirolIE->function_definition:_real_extract"] |
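The test_patch above extends `test_unified_dates`, whose assertions (e.g. `unified_strdate('1968 12 10') == '19681210'`) show the contract: try a table of known layouts and normalize to `YYYYMMDD`. An illustrative subset of that behaviour:

```python
import datetime as dt

def unified_strdate_sketch(date_str):
    """Try a few known layouts and return YYYYMMDD, or None on failure
    (a small subset of yt-dlp's unified_strdate format table)."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%Y %m %d"):
        try:
            return dt.datetime.strptime(date_str, fmt).strftime("%Y%m%d")
        except ValueError:
            pass
    return None

print(unified_strdate_sketch("1968 12 10"))  # → 19681210
```

The real function carries dozens of formats; the Rai Südtirol extractor added here just needed one more entry in that table.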
yt-dlp/yt-dlp | 4,841 | yt-dlp__yt-dlp-4841 | ['4187'] | 07a1250e0e90515ff8142161536f9dafa6eaba1b | diff --git a/yt_dlp/utils.py b/yt_dlp/utils.py
--- a/yt_dlp/utils.py
+++ b/yt_dlp/utils.py
@@ -2479,7 +2479,7 @@ def url_basename(url):
def base_url(url):
- return re.match(r'https?://[^?#&]+/', url).group()
+ return re.match(r'https?://[^?#]+/', url).group()
def urljoin(base, path):
| diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -566,6 +566,7 @@ def test_base_url(self):
self.assertEqual(base_url('http://foo.de/bar/'), 'http://foo.de/bar/')
self.assertEqual(base_url('http://foo.de/bar/baz'), 'http://foo.de/bar/')
... | DiscoveryPlusItaly error 403: Forbidden
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser... | I think this related to #3757
Can u try passing the url as referer?
I have already tried to insert in the referer the url of the main page of the series, but nothing has changed.
```shell
[debug] Command-line config: ['-Uv', '--no-geo-bypass', '--referer', 'https://www.discoveryplus.com/it/show/killer-of-the-cosmos... | 2022-09-03 20:29:36+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_utils.py:TestUtil:test_remove_start', 'test/test_utils.py:TestUtil:test_sanitize_url', 'test/test_utils.py:TestUtil:test_unified_dates', 'test/test_utils.py:TestUtil:test_float_or_none', 'test/test_utils.py:TestUtil:test_sanitize_ids', 'test/test_utils.py:TestUtil:test_get_elements_by_class', 'test/test_uti... | ['test/test_utils.py:TestUtil:test_base_url'] | null | pytest /testbed/test/test_utils.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["yt_dlp/utils.py->module->function_definition:base_url"] |
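The whole fix in this row is the one-character regex change visible in the diff: `&` is removed from the terminator class so URLs whose *path* contains an ampersand keep their full directory part. The patched function, runnable as-is:

```python
import re

def base_url(url):
    # Patched pattern from the diff above: only '?' and '#' terminate the
    # base, so '&' inside a path segment no longer truncates it.
    return re.match(r'https?://[^?#]+/', url).group()

print(base_url('http://foo.de/bar/baz&x=1'))  # → http://foo.de/bar/
```

Greedy matching plus backtracking to the last `/` yields the directory part; query strings still stop the match at `?` as before.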
yt-dlp/yt-dlp | 5,195 | yt-dlp__yt-dlp-5195 | ['5186'] | 2c98d998181c81ee49908be03c031204fd66d03d | diff --git a/yt_dlp/cookies.py b/yt_dlp/cookies.py
--- a/yt_dlp/cookies.py
+++ b/yt_dlp/cookies.py
@@ -999,8 +999,9 @@ def _parse_browser_specification(browser_name, profile=None, keyring=None, conta
class LenientSimpleCookie(http.cookies.SimpleCookie):
"""More lenient version of http.cookies.SimpleCookie"""
... | diff --git a/test/test_cookies.py b/test/test_cookies.py
--- a/test/test_cookies.py
+++ b/test/test_cookies.py
@@ -277,9 +277,24 @@ def test_lenient_parsing(self):
"a=b; invalid; Version=1; c=d",
{"a": "b", "c": "d"},
),
+ (
+ "Reset morsel after ... | Downloads from Crunchyroll break if certain Optanon cookies are present
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.10.04... | @Grub4K Isn't lenient cookies supposed to handle this?
I would call this a bug imported from the CPython code, since it clearly allows usage of `)` and `&` in its `_LEGAL_KEY_CHARS` which is used in the compiled regex but does NOT allow them while setting them in the morsel, since that uses `_LegalChars`.
As a worka... | 2022-10-11 00:38:54+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy repository contents
COPY . .
# Install dependencies and package in development mode
RUN pip install -r requirements.txt pytest pytest-json-report
RUN pip install -... | ['test/test_cookies.py:TestCookies:test_get_desktop_environment', 'test/test_cookies.py:TestCookies:test_chrome_cookie_decryptor_linux_derive_key', 'test/test_cookies.py:TestCookies:test_pbkdf2_sha1', 'test/test_cookies.py:TestCookies:test_chrome_cookie_decryptor_linux_v10', 'test/test_cookies.py:TestCookies:test_chrom... | ['test/test_cookies.py:TestLenientSimpleCookie:test_lenient_parsing'] | null | python -m pytest /testbed/test/test_cookies.py -v --json-report --json-report-file=test_results.json | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["yt_dlp/cookies.py->module->class_definition:LenientSimpleCookie->function_definition:load", "yt_dlp/cookies.py->module->class_definition:LenientSimpleCookie"] |
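The "Reset morsel" test case added above captures the fix's idea: when one chunk of the Cookie header is unparseable (e.g. a `)` in a key, as in the Optanon cookies), discard only that half-built entry and keep parsing instead of aborting the whole header. A greatly simplified sketch (the real `LenientSimpleCookie` also tracks reserved attributes like `Version`):

```python
def lenient_cookie_pairs(header):
    """Keep valid key=value chunks and silently skip invalid ones."""
    pairs = {}
    for chunk in header.split(";"):
        key, sep, value = chunk.strip().partition("=")
        if sep and key.replace("_", "").replace("-", "").isalnum():
            pairs[key] = value
        # else: invalid chunk (e.g. a ')' in the key) is simply skipped
    return pairs

print(lenient_cookie_pairs("a=b; bad)key=x; c=d"))  # → {'a': 'b', 'c': 'd'}
```

Stock `SimpleCookie` is stricter here because its parsing regex and `Morsel` setter disagree on legal key characters, which is the CPython quirk described in the hints.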
yt-dlp/yt-dlp | 5,933 | yt-dlp__yt-dlp-5933 | ['5953'] | f079514957401f49db30ec4cd25f8c8246b0c1de | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -1119,9 +1119,10 @@ You can configure yt-dlp by placing any supported command line option to a confi
* `yt-dlp.conf` in the home path given by `-P`
* If `-P` is not given, the current directory is searched
1. **User Configuration**:
+ *... | diff --git a/test/test_config.py b/test/test_config.py
new file mode 100644
--- /dev/null
+++ b/test/test_config.py
@@ -0,0 +1,227 @@
+#!/usr/bin/env python3
+
+# Allow direct execution
+import os
+import sys
+import unittest
+import unittest.mock
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__... | [Version 2023.01.02] /etc/yt-dlp.conf is not loaded
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2023.01.0... | null | 2023-01-03 00:41:48+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_config.py:TestConfig:test_config__ENVIRON_DEFAULTS_sanity', 'test/test_config.py:TestConfig:test_config_override_commandline', 'test/test_config.py:TestConfig:test_config_early_exit_commandline', 'test/test_config.py:TestConfig:test_config_early_exit_files'] | ['test/test_config.py:TestConfig:test_config_all_environ_values', 'test/test_config.py:TestConfig:test_config_default_expected_locations', 'test/test_config.py:TestConfig:test_config_override_files', 'test/test_config.py:TestConfig:test_config_default_grouping'] | null | pytest /testbed/test/test_config.py -v --json-report | Bug Fix | false | true | false | false | 11 | 0 | 11 | false | false | ["yt_dlp/options.py->module->function_definition:parseOpts->function_definition:_load_from_config_dirs", "yt_dlp/plugins.py->module->class_definition:PluginFinder->function_definition:search_locations", "yt_dlp/plugins.py->module->class_definition:PluginFinder->function_definition:search_locations->function_definition:... |
yt-dlp/yt-dlp | 8,917 | yt-dlp__yt-dlp-8917 | ['3944'] | 95e82347b398d8bb160767cdd975edecd62cbabd | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -1305,7 +1305,8 @@ The available fields are:
- `display_id` (string): An alternative identifier for the video
- `uploader` (string): Full name of the video uploader
- `license` (string): License name the video is licensed under
- - `creator` (s... | diff --git a/test/helper.py b/test/helper.py
--- a/test/helper.py
+++ b/test/helper.py
@@ -223,6 +223,10 @@ def sanitize(key, value):
if test_info_dict.get('display_id') == test_info_dict.get('id'):
test_info_dict.pop('display_id')
+ # Remove deprecated fields
+ for old in YoutubeDL._deprecated_mu... | Use ; as separator for metadata instead of , for vorbis comments and / for ID3
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.05.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I'v... | > Not Valid
You talking about the issue? There is a reason the field is mandatory!
@Rexadev that's a regular log. Please run the command with `--verbose` and send a log of it
Also, please explain exactly what tags you are talking about. yt-dlp doesn't add any kind of seperator anywhere. So I have no clue exactly wha... | 2024-01-03 02:11:22+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_YoutubeDL.py:TestYoutubeDL:test_subtitles', 'test/test_YoutubeDL.py:TestYoutubeDL:test_ignoreerrors_for_playlist_with_url_transparent_iterable_entries', 'test/test_YoutubeDL.py:TestYoutubeDL:test_header_cookies', 'test/test_YoutubeDL.py:TestFormatSelection:test_audio_only_extractor_format_selection', 'test/... | ['test/test_YoutubeDL.py:TestYoutubeDL:test_infojson_cookies'] | null | pytest /testbed/test/helper.py /testbed/test/test_YoutubeDL.py -v --json-report | Feature | false | false | false | true | 4 | 3 | 7 | false | false | ["yt_dlp/YoutubeDL.py->module->class_definition:YoutubeDL->function_definition:_fill_common_fields", "yt_dlp/postprocessor/ffmpeg.py->module->class_definition:FFmpegMetadataPP->function_definition:_get_metadata_opts->function_definition:add", "yt_dlp/extractor/common.py->module->class_definition:InfoExtractor", "yt_dlp... |
yt-dlp/yt-dlp | 9,856 | yt-dlp__yt-dlp-9856 | ['4962'] | e897bd8292a41999cf51dba91b390db5643c72db | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -2333,6 +2333,7 @@ These options may no longer work as intended
--write-annotations No supported site has annotations now
--no-write-annotations Default
--compat-options seperate-video-versions No longer needed
... | diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -5,6 +5,7 @@
import sys
import unittest
import warnings
+import datetime as dt
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
@@ -27,6 +28,7 @@
ExtractorError,
InA... | timestamp field not set on YouTube videos
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.09.01** ([upda... | https://github.com/yt-dlp/yt-dlp/issues/1803
Sorry about making _another_ duplicate issue. GitHub's issue search is awful; do you know any alternatives?
> Sorry about making _another_ duplicate issue. GitHub's issue search is awful; do you know any alternatives?
the official youtube data api
@coletdjnz I actually me... | 2024-05-04 09:14:54+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
# Run the specified test file | ['test/test_utils.py:TestUtil:test_remove_start', 'test/test_utils.py:TestUtil:test_sanitize_url', 'test/test_utils.py:TestUtil:test_unified_dates', 'test/test_utils.py:TestUtil:test_float_or_none', 'test/test_utils.py:TestUtil:test_sanitize_ids', 'test/test_utils.py:TestUtil:test_get_elements_by_class', 'test/test_uti... | ['test/test_utils.py:TestUtil:test_locked_file', 'test/test_utils.py:TestUtil:test_parse_iso8601'] | null | pytest /testbed/test/test_utils.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 4 | 1 | 5 | false | false | ["yt_dlp/options.py->module->function_definition:create_parser", "yt_dlp/extractor/youtube.py->module->class_definition:YoutubeIE", "yt_dlp/extractor/youtube.py->module->class_definition:YoutubeIE->function_definition:_real_extract", "yt_dlp/utils/_utils.py->module->function_definition:parse_iso8601", "yt_dlp/utils/_ut... |
yt-dlp/yt-dlp | 9,862 | yt-dlp__yt-dlp-9862 | ['9843'] | 39bc699d2e6e39b26af028cc09a7b1d460d00e31 | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -2219,6 +2219,7 @@ Some of yt-dlp's default options are different from that of youtube-dl and youtu
* yt-dlp versions between 2021.11.10 and 2023.06.21 estimated `filesize_approx` values for fragmented/manifest formats. This was added for convenienc... | diff --git a/test/test_YoutubeDL.py b/test/test_YoutubeDL.py
--- a/test/test_YoutubeDL.py
+++ b/test/test_YoutubeDL.py
@@ -4,6 +4,7 @@
import os
import sys
import unittest
+from unittest.mock import patch
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
@@ -520,7 +521,33 @@ def te... | `--simulate` doesn't accurately simulate downloading under certain conditions
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've ver... | cc @dirkf
I'm a little hazy as to why one would want to use `--simulate` because all it basically tells you is that the extractor didn't (with luck) crash. If you want to know, say, what format(s) will be selected there is`--get-format` or eqv. Since no video download is being run, it can't tell you anything about any... | 2024-05-05 09:51:35+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
# Run the specified test file | ['test/test_YoutubeDL.py:TestYoutubeDL:test_subtitles', 'test/test_YoutubeDL.py:TestYoutubeDL:test_ignoreerrors_for_playlist_with_url_transparent_iterable_entries', 'test/test_YoutubeDL.py:TestYoutubeDL:test_header_cookies', 'test/test_YoutubeDL.py:TestFormatSelection:test_audio_only_extractor_format_selection', 'test/... | ['test/test_YoutubeDL.py:TestFormatSelection:test_default_format_spec_without_ffmpeg', 'test/test_YoutubeDL.py:TestFormatSelection:test_default_format_spec_with_ffmpeg'] | null | pytest /testbed/test/test_YoutubeDL.py -v | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["yt_dlp/YoutubeDL.py->module->class_definition:YoutubeDL->function_definition:process_video_result", "yt_dlp/YoutubeDL.py->module->class_definition:YoutubeDL->function_definition:_default_format_spec"] |
yt-dlp/yt-dlp | 10,390 | yt-dlp__yt-dlp-10390 | ['10391'] | 6c056ea7aeb03660281653a9668547f2548f194f | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -3130,7 +3130,8 @@ def _decrypt_nsig(self, s, video_id, player_url):
def _extract_n_function_name(self, jscode):
funcname, idx = self._search_regex(
- ... | diff --git a/test/test_youtube_signature.py b/test/test_youtube_signature.py
--- a/test/test_youtube_signature.py
+++ b/test/test_youtube_signature.py
@@ -167,6 +167,10 @@
'https://www.youtube.com/s/player/590f65a6/player_ias.vflset/en_US/base.js',
'1tm7-g_A9zsI8_Lay_', 'xI4Vem4Put_rOg',
),
+ ... | [youtube] nsig extraction failed: Some formats may be missing
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I... | null | 2024-07-08 20:46:07+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . .... | ['test/test_youtube_signature.py:TestSignature:test_nsig_js_e06dea74', 'test/test_youtube_signature.py:TestSignature:test_nsig_js_dac945fd', 'test/test_youtube_signature.py:TestSignature:test_nsig_js_c81bbb4a', 'test/test_youtube_signature.py:TestSignature:test_signature_js_vflCGk6yw', 'test/test_youtube_signature.py:T... | ['test/test_youtube_signature.py:TestSignature:test_nsig_js_b22ef6e7'] | null | pytest /testbed/test/test_youtube_signature.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["yt_dlp/extractor/youtube.py->module->class_definition:YoutubeIE->function_definition:_extract_n_function_name"] |
tensorflow/models | 2,727 | tensorflow__models-2727 | ['2674'] | 176cf09c2d95f6cd2201e8a7fd215617d6be9453 | diff --git a/research/object_detection/README.md b/research/object_detection/README.md
--- a/research/object_detection/README.md
+++ b/research/object_detection/README.md
@@ -1,3 +1,4 @@
+
# Tensorflow Object Detection API
Creating accurate machine learning models capable of localizing and identifying
multiple objec... | diff --git a/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py b/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py
--- a/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py
+++ b/research/object_detection/anchor_generators/... | Got error when restoring the frozen NAS-Net model for object detection.
Python version: 2.7
CUDA: 8.0
CUDNN 6.0
OS: Ubuntu16.04
TF version: 1.3.0 & 1.4.0rc1
When I test the new "faster-rcnn & nasnet" model using code pieces from the Jupyter-notebook tutorial like this:
```python
detection_graph = tf.Graph()
w... | null | 2017-11-07 19:31:26+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y git python3-pip protobuf-compiler && rm -rf /var/lib/apt/lists/*
# Copy the research directory
COPY .... | [':test_build_grid_anchor_generator_with_defaults', ':test_construct_multiple_grids_with_clipping', ':test_invalid_box_specs', ':test_construct_anchor_grid_non_square', ':test_build_grid_anchor_generator_with_non_default_parameters', ':test_build_ssd_anchor_generator_with_defaults', ':test_raise_value_error_on_empty_an... | [':test_construct_anchor_grid_normalized:', ':test_build_ssd_anchor_generator_with_custom_interpolated_scale:', ':test_build_ssd_anchor_generator_with_custom_scales:'] | null | python -m unittest /testbed/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py /testbed/research/object_detection/builders/anchor_generator_builder_test.py -v | Bug Fix | false | false | false | true | 3 | 2 | 5 | false | false | ["research/object_detection/anchor_generators/multiple_grid_anchor_generator.py->module->class_definition:MultipleGridAnchorGenerator", "research/object_detection/anchor_generators/multiple_grid_anchor_generator.py->module->class_definition:MultipleGridAnchorGenerator->function_definition:__init__", "research/object_de... |
tensorflow/models | 4,628 | tensorflow__models-4628 | ['3564'] | 7c5c01482f48f9f2532586e679686d821d516ae6 | diff --git a/research/astronet/astronet/data/generate_download_script.py b/research/astronet/astronet/data/generate_download_script.py
--- a/research/astronet/astronet/data/generate_download_script.py
+++ b/research/astronet/astronet/data/generate_download_script.py
@@ -33,6 +33,7 @@
import argparse
import csv
impor... | diff --git a/research/astronet/light_curve_util/periodic_event_test.py b/research/astronet/light_curve_util/periodic_event_test.py
--- a/research/astronet/light_curve_util/periodic_event_test.py
+++ b/research/astronet/light_curve_util/periodic_event_test.py
@@ -25,6 +25,13 @@
class EventTest(absltest.TestCase):
+... | SyntaxError: invalid token
The line throws a SyntaxError: invalid token:
https://github.com/tensorflow/models/blob/3f78f4cfd21c786c62bf321c07830071027ebb5e/research/astronet/astronet/data/generate_download_script.py#L93
| Thank you for your post. We noticed you have not filled out the following field in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
What is the top-level directory of the model you are using
Have I written custom code
OS Platform and Distribution
TensorFlow inst... | 2018-06-25 23:01:51+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
python3-pip \
protobuf-compiler \
&& rm -rf /var/lib/apt/lists/*
# Copy the r... | [':testEquals'] | [':testRepr:', ':testStr:'] | null | python -m unittest /testbed/research/astronet/light_curve_util/periodic_event_test.py -v | Bug Fix | false | false | false | true | 4 | 1 | 5 | false | false | ["research/astronet/astronet/ops/dataset_ops.py->module->function_definition:build_dataset->function_definition:_example_parser", "research/astronet/light_curve_util/periodic_event.py->module->class_definition:Event->function_definition:__str__", "research/astronet/light_curve_util/periodic_event.py->module->class_defi... |
keras-team/keras | 1,767 | keras-team__keras-1767 | ['1730'] | b8a9f84fad1be2f27365a25b4e7f188d382d70d0 | diff --git a/keras/layers/containers.py b/keras/layers/containers.py
--- a/keras/layers/containers.py
+++ b/keras/layers/containers.py
@@ -156,9 +156,9 @@ def get_weights(self):
return weights
def set_weights(self, weights):
- for i in range(len(self.layers)):
- nb_param = len(self.lay... | diff --git a/tests/keras/test_models.py b/tests/keras/test_models.py
--- a/tests/keras/test_models.py
+++ b/tests/keras/test_models.py
@@ -125,6 +125,70 @@ def test_sequential():
model = model_from_yaml(yaml_data)
+def test_nested_sequential():
+ (X_train, y_train), (X_test, y_test) = _get_test_data()
+
+ ... | unable to load weights in models with siamese branches
The problem is that the set_weights() function in Sequential tries to concatenate trainable_weights and non_trainable_weights together
However if one of your layers is another sequential container, this does not have a non_trainable_weights parameter
This needs to be imple... | +1
I think the actual fix is to change `Sequential.set_weights` to something very similar to `Graph.set_weights`. I'll submit a PR when I get time.
It turns out that this has nothing to do with Siamese models. It happens when you have triple-nested Sequential layers.
| 2016-02-19 20:27:35+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy==1.16.6 scipy==1.2.3 theano==0.8.2 pyyaml==5.4.1 six h5py==2.10.0 | ['tests/keras/test_models.py:None:test_lambda', 'tests/keras/test_models.py:None:test_siamese_1', 'tests/keras/test_models.py:None:test_sequential', 'tests/keras/test_models.py:None:test_merge_overlap', 'tests/keras/test_models.py:None:test_merge_concat', 'tests/keras/test_models.py:None:test_merge_recursivity', 'tests... | ['tests/keras/test_models.py:None:test_nested_sequential'] | null | python -m pytest /testbed/tests/keras/test_models.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/layers/containers.py->module->class_definition:Sequential->function_definition:set_weights"] |
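The fix direction discussed in this row (keras #1767: delegate to each sub-layer's own `set_weights` instead of counting trainable/non-trainable params at the top level, so nested containers handle their own weights) can be sketched in plain Python. The classes here are hypothetical stand-ins, not Keras's actual implementation:

```python
class ToyLayer:
    """Hypothetical flat layer holding a list of weight values."""
    def __init__(self, n):
        self.weights = [0.0] * n

    def get_weights(self):
        return list(self.weights)

    def set_weights(self, weights):
        self.weights = list(weights)


class ToyContainer:
    """Hypothetical nested container: delegating to each child's
    set_weights works even when a child is itself a container."""
    def __init__(self, layers):
        self.layers = layers

    def get_weights(self):
        return [w for layer in self.layers for w in layer.get_weights()]

    def set_weights(self, weights):
        for layer in self.layers:
            nb_param = len(layer.get_weights())
            layer.set_weights(weights[:nb_param])
            weights = weights[nb_param:]


# Triple-nested container, the failing case described in the hints.
inner = ToyContainer([ToyLayer(2)])
mid = ToyContainer([inner, ToyLayer(1)])
outer = ToyContainer([mid])
outer.set_weights([1.0, 2.0, 3.0])
print(outer.get_weights())  # [1.0, 2.0, 3.0]
```

The point is that the outer container never needs to know how a child splits its parameters; it only consumes `len(layer.get_weights())` values per child.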
keras-team/keras | 3,907 | keras-team__keras-3907 | ['3905'] | 7df184d3aa8a9790d181c837ab22a31b5aebb5ae | diff --git a/docs/templates/getting-started/sequential-model-guide.md b/docs/templates/getting-started/sequential-model-guide.md
--- a/docs/templates/getting-started/sequential-model-guide.md
+++ b/docs/templates/getting-started/sequential-model-guide.md
@@ -121,7 +121,7 @@ Before training a model, you need to configur... | diff --git a/tests/keras/engine/test_training.py b/tests/keras/engine/test_training.py
--- a/tests/keras/engine/test_training.py
+++ b/tests/keras/engine/test_training.py
@@ -148,15 +148,24 @@ def test_model_methods():
# test with a custom metric function
mse = lambda y_true, y_pred: K.mean(K.pow(y_true - y... | New Feature: Add ability to return more than one metric from metric function
Following discussion in gitter:
Add ability to return dict from metric function. Would be useful for e.g. confusion matrix. Proposed behavior
`r = f(y_true,y_pred)`
1. If `r` is a dict - report every `(key, value)` pair as metric with name `... | null | 2016-09-29 09:31:05+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy scipy theano pyyaml six h5py protobuf==3.20.0 tensorflow==1.15.0 | ['tests/keras/engine/test_training.py:None:test_trainable_argument'] | ['tests/keras/engine/test_training.py:None:test_model_methods'] | null | python -m pytest /testbed/tests/keras/engine/test_training.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/engine/training.py->module->class_definition:Model->function_definition:compile->function_definition:append_metric", "keras/engine/training.py->module->class_definition:Model->function_definition:compile"] |
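Behavior 1 of the proposal in this row (keras #3907: a metric function may return a dict, reported as one named metric per key) can be sketched with a hypothetical helper — this is an illustration of the idea, not the API that was merged:

```python
import numpy as np

def collect_metrics(metric_fns, y_true, y_pred):
    """Run each metric; a dict result contributes one named metric
    per (key, value) pair, a scalar result keeps the fn's name."""
    results = {}
    for fn in metric_fns:
        r = fn(y_true, y_pred)
        if isinstance(r, dict):
            results.update(r)
        else:
            results[fn.__name__] = r
    return results

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def rates(y_true, y_pred):
    # Dict-valued metric, e.g. confusion-matrix entries.
    pred = y_pred > 0.5
    true = y_true > 0.5
    return {"tp": int(np.sum(pred & true)),
            "fp": int(np.sum(pred & ~true))}

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.8, 0.1])
print(collect_metrics([mse, rates], y_true, y_pred))
```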
keras-team/keras | 3,983 | keras-team__keras-3983 | ['3942'] | 4de7eaa6a80fd4257b866a6b695450c40b72dd28 | diff --git a/keras/layers/pooling.py b/keras/layers/pooling.py
--- a/keras/layers/pooling.py
+++ b/keras/layers/pooling.py
@@ -519,3 +519,83 @@ def call(self, x, mask=None):
return K.max(x, axis=[1, 2])
else:
return K.max(x, axis=[2, 3])
+
+
+class _GlobalPooling3D(Layer):
+
+ def ... | diff --git a/tests/keras/layers/test_convolutional.py b/tests/keras/layers/test_convolutional.py
--- a/tests/keras/layers/test_convolutional.py
+++ b/tests/keras/layers/test_convolutional.py
@@ -269,6 +269,22 @@ def test_globalpooling_2d():
input_shape=(3, 5, 6, 4))
+@keras_test
+def test_globalpool... | GlobalPooling for 3D inputs
Hello,
I was wondering why there is [GlobalMaxPooling2D](https://keras.io/layers/pooling/#globalmaxpooling2d) and [GlobalAveragePooling2D](https://keras.io/layers/pooling/#globalaveragepooling2d), but no 3D versions of both.
Looking at the code, one could easily extend both to work with 3... | Feel free to make a PR.
| 2016-10-06 12:10:06+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy scipy theano pyyaml six h5py protobuf==3.20.0 tensorflow==1.15.0 | ['tests/keras/layers/test_convolutional.py:None:test_convolution_3d', 'tests/keras/layers/test_convolutional.py:None:test_maxpooling_2d', 'tests/keras/layers/test_convolutional.py:None:test_globalpooling_1d', 'tests/keras/layers/test_convolutional.py:None:test_averagepooling_3d', 'tests/keras/layers/test_convolutional.... | ['tests/keras/layers/test_convolutional.py:None:test_globalpooling_3d'] | null | python -m pytest /testbed/tests/keras/layers/test_convolutional.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Feature | false | false | false | true | 5 | 4 | 9 | false | false | ["keras/layers/pooling.py->module->class_definition:_GlobalPooling3D", "keras/layers/pooling.py->module->class_definition:GlobalMaxPooling3D", "keras/layers/pooling.py->module->class_definition:_GlobalPooling3D->function_definition:__init__", "keras/layers/pooling.py->module->class_definition:GlobalMaxPooling3D->functi... |
keras-team/keras | 4,739 | keras-team__keras-4739 | ['3891'] | e9b8424839ecceb106deb77df0b4230b97b06261 | diff --git a/keras/backend/tensorflow_backend.py b/keras/backend/tensorflow_backend.py
--- a/keras/backend/tensorflow_backend.py
+++ b/keras/backend/tensorflow_backend.py
@@ -12,7 +12,7 @@
import os
import copy
import warnings
-from .common import _FLOATX, _EPSILON, image_dim_ordering, reset_uids
+from .common impor... | diff --git a/tests/keras/backend/test_backends.py b/tests/keras/backend/test_backends.py
--- a/tests/keras/backend/test_backends.py
+++ b/tests/keras/backend/test_backends.py
@@ -3,11 +3,19 @@
import numpy as np
import scipy.sparse as sparse
-from keras.backend import theano_backend as KTH
+from keras import backen... | set_floatx does not work properly
Once keras and the backend are imported, it is not possible to change the float type using 'set_floatx()'.
So running the following code snippet:
``` python
import keras
print(keras.backend.floatx())
keras.backend.set_floatx('float16')
print(keras.backend.floatx())
# create dummy var... | For example for 'variable()' replacing:
``` python
def variable(value, dtype=_FLOATX, name=None):
'''Instantiate a tensor variable.
'''
...
```
with:
``` python
from .common import floatx
...
def variable(value, dtype=None, name=None):
'''Instantiate a tensor variable.
'''
if dtype is None... | 2016-12-16 07:34:56+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy scipy theano pyyaml six h5py protobuf==3.20.0 tensorflow==1.15.0 | ['tests/keras/backend/test_backends.py:TestBackend:test_foldl', 'tests/keras/backend/test_backends.py:TestBackend:test_elementwise_operations', 'tests/keras/backend/test_backends.py:TestBackend:test_switch', 'tests/keras/backend/test_backends.py:TestBackend:test_foldr', 'tests/keras/backend/test_backends.py:TestBackend... | ['tests/keras/backend/test_backends.py:TestBackend:test_set_floatx'] | null | python -m pytest /testbed/tests/keras/backend/test_backends.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Bug Fix | false | true | false | false | 30 | 0 | 30 | false | false | ["keras/backend/tensorflow_backend.py->module->function_definition:_postprocess_conv3d_output", "keras/backend/tensorflow_backend.py->module->function_definition:_preprocess_conv3d_kernel", "keras/backend/tensorflow_backend.py->module->function_definition:var", "keras/backend/tensorflow_backend.py->module->function_def... |
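The root cause in this row (keras #3891) is a general Python behavior: a default argument value like `dtype=_FLOATX` is evaluated once, when the `def` statement runs, so a later `set_floatx()` cannot affect it. A self-contained sketch with toy names (not the actual Keras backend module):

```python
_FLOATX = "float32"

def set_floatx(value):
    global _FLOATX
    _FLOATX = value

def floatx():
    return _FLOATX

def variable_broken(value, dtype=_FLOATX):
    # Default was captured at definition time: stays "float32".
    return dtype

def variable_fixed(value, dtype=None):
    # Lazy lookup: resolves the *current* floatx at call time.
    if dtype is None:
        dtype = floatx()
    return dtype

set_floatx("float16")
print(variable_broken(0))  # float32  (stale default)
print(variable_fixed(0))   # float16  (respects set_floatx)
```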
keras-team/keras | 4,856 | keras-team__keras-4856 | ['4846'] | 50f7f03f6bc373b81ae9407f7857112e062c526f | diff --git a/keras/engine/topology.py b/keras/engine/topology.py
--- a/keras/engine/topology.py
+++ b/keras/engine/topology.py
@@ -927,7 +927,10 @@ def add_update(self, updates, inputs=None):
def get_updates_for(self, inputs):
if not hasattr(self, '_per_input_updates'):
return []
- inp... | diff --git a/tests/keras/engine/test_topology.py b/tests/keras/engine/test_topology.py
--- a/tests/keras/engine/test_topology.py
+++ b/tests/keras/engine/test_topology.py
@@ -9,6 +9,27 @@
from keras.models import model_from_json, model_from_yaml
from keras.utils.test_utils import keras_test
+@keras_test
+def test_g... | Layer regularizers are not shared across models in 1.2.0
If I share a layer with regularizers with another model, the regularizers are not copied correctly. Reusing keras test for regularizers:
```{python}
from keras.models import *
model = Sequential()
model.add(wrappers.TimeDistributed(core.Dense(2, W_regulariz... | null | 2016-12-27 19:00:13+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy scipy theano pyyaml six h5py protobuf==3.20.0 tensorflow==1.15.0 | ['tests/keras/engine/test_topology.py:None:test_node_construction', 'tests/keras/engine/test_topology.py:None:test_trainable_weights'] | ['tests/keras/engine/test_topology.py:None:test_get_updates_for', 'tests/keras/engine/test_topology.py:None:test_get_losses_for'] | null | python -m pytest /testbed/tests/keras/engine/test_topology.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/engine/topology.py->module->class_definition:Layer->function_definition:get_losses_for", "keras/engine/topology.py->module->class_definition:Layer->function_definition:get_updates_for"] |
keras-team/keras | 18,553 | keras-team__keras-18553 | ['18535'] | c8a5a8969a8712a9a1939937ce34158e04cfc09d | diff --git a/keras/ops/nn.py b/keras/ops/nn.py
--- a/keras/ops/nn.py
+++ b/keras/ops/nn.py
@@ -592,7 +592,7 @@ def __init__(
super().__init__()
self.pool_size = pool_size
self.strides = strides
- self.padding = padding
+ self.padding = padding.lower()
self.data_format =... | diff --git a/keras/ops/nn_test.py b/keras/ops/nn_test.py
--- a/keras/ops/nn_test.py
+++ b/keras/ops/nn_test.py
@@ -121,12 +121,16 @@ def test_conv(self):
# Test 1D conv.
inputs_1d = KerasTensor([None, 20, 3])
kernel = KerasTensor([4, 3, 2])
- self.assertEqual(
        -            knn.conv(inp... | depthwise_conv ops padding same is not working on torch backend
```python
import numpy as np
import os
os.environ["KERAS_BACKEND"] = "jax" # 'tensorflow', 'torch', 'jax'
import keras_core as keras
from keras_core import ops
input = np.ones((1, 613, 696, 3))
kernel = np.ones((1, 5, 3, 1))
```
```pyt... | null | 2023-10-05 20:35:56+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_relu', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_silu', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_leaky_relu', 'keras/ops/nn_test.py:NNOpsCorrectnessTest:test_max_pool', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_one_hot_dtype1', 'keras/ops/nn_test.p... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_depthwise_conv', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_conv'] | null | pytest /testbed/keras/ops/nn_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 6 | 6 | 12 | false | false | ["keras/ops/nn.py->module->function_definition:conv_transpose", "keras/ops/nn.py->module->function_definition:separable_conv", "keras/ops/nn.py->module->class_definition:MaxPool->function_definition:__init__", "keras/ops/nn.py->module->function_definition:conv", "keras/ops/nn.py->module->function_definition:max_pool", ... |
keras-team/keras | 18,649 | keras-team__keras-18649 | ['18409'] | b00065c7878ade450286ad2c298148f50e098f0c | diff --git a/keras/backend/jax/numpy.py b/keras/backend/jax/numpy.py
--- a/keras/backend/jax/numpy.py
+++ b/keras/backend/jax/numpy.py
@@ -440,6 +440,22 @@ def maximum(x1, x2):
return jnp.maximum(x1, x2)
+def median(x, axis=None, keepdims=False):
+ # axis of jnp.median must be hashable
+ if isinstance(ax... | diff --git a/keras/ops/numpy_test.py b/keras/ops/numpy_test.py
--- a/keras/ops/numpy_test.py
+++ b/keras/ops/numpy_test.py
@@ -193,6 +193,22 @@ def test_outer(self):
y = KerasTensor((2, None))
self.assertEqual(knp.outer(x, y).shape, (None, None))
+ def test_quantile(self):
+ x = KerasTenso... | Add Median to `keras_core.ops`
Feature Request for a Median function to keras_core.ops.
It is an important function which is present within [`torch`](https://pytorch.org/docs/stable/generated/torch.median.html) and [`jax.numpy`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.median.html) as well.
| @suvadityamuk Thanks for filing the issue! would you be interested in filing a PR?
Sure, can do! Any chance you can reference a similar example here so I can follow its rubrics?
may be this one - https://github.com/keras-team/keras-core/pull/907 | 2023-10-19 08:50:28+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/ops/numpy_test.py:NumpyTwoInputOpsCorretnessTest:test_take_sparse_axis_0_float64', 'keras/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_transpose', 'keras/ops/numpy_test.py:NumpyTwoInputOpsStaticShapeTest:test_less_equal', 'keras/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_squeeze_sparse', ... | ['keras/ops/numpy_test.py:NumpyDtypeTest:test_median_int64', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_quantile_int8', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_quantile_float64', 'keras/ops/numpy_test.py:NumpyTwoInputOpsDynamicShapeTest:test_quantile', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_median_uint32',... | null | pytest /testbed/keras/ops/numpy_test.py -v --junitxml=test-results.xml | Feature | false | false | false | true | 17 | 4 | 21 | false | false | ["keras/backend/jax/numpy.py->module->function_definition:quantile", "keras/backend/torch/numpy.py->module->function_definition:median", "keras/backend/tensorflow/numpy.py->module->function_definition:_quantile->function_definition:_get_indices", "keras/ops/numpy.py->module->class_definition:Quantile", "keras/ops/numpy... |
keras-team/keras | 18,766 | keras-team__keras-18766 | ['18754'] | 4803b5497ad060cce345a323be2546152315ec3d | diff --git a/keras/layers/attention/attention.py b/keras/layers/attention/attention.py
--- a/keras/layers/attention/attention.py
+++ b/keras/layers/attention/attention.py
@@ -27,6 +27,7 @@ class Attention(Layer):
attention scores.
dropout: Float between 0 and 1. Fraction of the units to drop for t... | diff --git a/keras/layers/attention/additive_attention_test.py b/keras/layers/attention/additive_attention_test.py
--- a/keras/layers/attention/additive_attention_test.py
+++ b/keras/layers/attention/additive_attention_test.py
@@ -17,12 +17,12 @@ def test_attention_basics(self):
expected_output_shape=(2, 3... | `noise_shape` Attribute Not Found in Attention Layer
The source of this issue is at training time with the Attention layer. This is where self.noise_shape is referenced, but it is never assigned:
https://github.com/keras-team/keras/blob/d4feb16c82b8e3d47721520e9b45ef4bebc1ead0/keras/layers/attention/attention.py#L17... | @nkovela1 ,
IMO we can set `noise_shape` to `None` here since this is being called inside the function `backend.random.dropout()` which has argument `noise_shape`. I think if the default value for this arg is `None` it will infer its value from the inputs.
I have referred legacy dropout API below.
https://github.com... | 2023-11-12 07:42:14+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/layers/attention/additive_attention_test.py:AdditiveAttentionTest:test_attention_with_mask', 'keras/layers/attention/additive_attention_test.py:AdditiveAttentionTest:test_attention_correctness', 'keras/layers/attention/attention_test.py:AttentionTest:test_attention_errors', 'keras/layers/attention/attention_tes... | ['keras/layers/attention/attention_test.py:AttentionTest:test_attention_basics', 'keras/layers/attention/additive_attention_test.py:AdditiveAttentionTest:test_attention_basics', 'keras/layers/attention/attention_test.py:AttentionTest:test_attention_with_dropout'] | null | pytest /testbed/keras/layers/attention/additive_attention_test.py /testbed/keras/layers/attention/attention_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 2 | 3 | false | false | ["keras/layers/attention/attention.py->module->class_definition:Attention->function_definition:_apply_scores", "keras/layers/attention/attention.py->module->class_definition:Attention", "keras/layers/attention/attention.py->module->class_definition:Attention->function_definition:__init__"] |
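The `noise_shape` failure in this row (keras #18754) is the classic read-before-assign attribute pattern: a method reads `self.noise_shape`, but `__init__` never sets it. A toy reproduction with hypothetical class names; the remedy suggested in the hints is to assign the attribute (e.g. to `None`) up front:

```python
class BrokenLayer:
    def apply(self):
        # Never assigned anywhere -> AttributeError at call time.
        return self.noise_shape


class FixedLayer:
    def __init__(self, noise_shape=None):
        # Assigned up front; None lets the dropout op infer it.
        self.noise_shape = noise_shape

    def apply(self):
        return self.noise_shape


try:
    BrokenLayer().apply()
except AttributeError as e:
    print("broken:", e)
print("fixed:", FixedLayer().apply())
```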
keras-team/keras | 18,852 | keras-team__keras-18852 | ['18842'] | 9c62839cbb0e54b7bac09ce20471a0dfaa65ff55 | diff --git a/.github/workflows/actions.yml b/.github/workflows/actions.yml
--- a/.github/workflows/actions.yml
+++ b/.github/workflows/actions.yml
@@ -53,7 +53,7 @@ jobs:
- name: Test applications with pytest
if: ${{ steps.filter.outputs.applications == 'true' }}
run: |
- pytest keras/... | diff --git a/keras/activations/activations_test.py b/keras/activations/activations_test.py
--- a/keras/activations/activations_test.py
+++ b/keras/activations/activations_test.py
@@ -40,6 +40,10 @@ def _ref_hard_sigmoid(x):
return z
+def _ref_hard_swish(x):
+ return x * np.minimum(np.maximum(0.0, x + 3.0), ... | Add HardSwish activation
HardSwish has been supported by TFLite for quite some time, but it is still missing in Keras.
I believe adding this activation would be beneficial for those working on INT8 quantized models.
I already have a working implementation and can submit the PR if it sounds good.
References that ... | null | 2023-11-30 01:14:54+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/activations/activations_test.py:ActivationsTest:test_tanh', 'keras/applications/applications_test.py:ApplicationsTest:test_application_pooling_MobileNet_channels_first', 'keras/applications/applications_test.py:ApplicationsTest:test_application_pooling_EfficientNetB1_channels_first', 'keras/applications/applica... | ['keras/activations/activations_test.py:ActivationsTest:test_hard_swish'] | null | pytest /testbed/keras/activations/activations_test.py /testbed/keras/applications/applications_test.py /testbed/keras/applications/imagenet_utils_test.py -v --junitxml=test-results.xml | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/applications/mobilenet_v3.py->module->function_definition:hard_swish", "keras/activations/activations.py->module->function_definition:hard_swish"] |
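For context on the record above: the `_ref_hard_swish` helper its test patch truncates implements the standard hard-swish formula, `x * relu6(x + 3) / 6` (as used by MobileNetV3 and TFLite). A standalone NumPy sketch of that reference formula, not the Keras implementation itself:

```python
import numpy as np

def ref_hard_swish(x):
    # hard_swish(x) = x * relu6(x + 3) / 6 -- a piecewise-linear
    # approximation of swish; zero for x <= -3, identity for x >= 3
    x = np.asarray(x, dtype=np.float64)
    return x * np.minimum(np.maximum(0.0, x + 3.0), 6.0) / 6.0

print(ref_hard_swish(5.0))  # -> 5.0 (identity region, x >= 3)
```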
keras-team/keras | 18,871 | keras-team__keras-18871 | ['18864'] | 10252a9e7d68c6818423deee1c4c8549038e4171 | diff --git a/keras/models/model.py b/keras/models/model.py
--- a/keras/models/model.py
+++ b/keras/models/model.py
@@ -7,7 +7,6 @@
from keras import utils
from keras.api_export import keras_export
from keras.layers.layer import Layer
-from keras.legacy.saving import legacy_h5_format
from keras.models.variable_mappi... | diff --git a/keras/saving/saving_api_test.py b/keras/saving/saving_api_test.py
--- a/keras/saving/saving_api_test.py
+++ b/keras/saving/saving_api_test.py
@@ -171,8 +171,10 @@ def test_h5_deprecation_warning(self):
with mock.patch.object(logging, "warning") as mock_warn:
saving_api.save_model(mode... | Feature duplication on model.save() and keras.saving.save_model()
When I was reading the code for model saving, I got a strange feeling.
https://github.com/keras-team/keras/blob/724321c7b39a90f6125b79931284aa9932c673a0/keras/models/model.py#L294-L297
It says `model.save()` is an alias for `keras.saving.save_model()`. ... | Yes, feel free to open a PR to reduce code redundancy. Thanks! | 2023-12-02 09:56:38+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/saving/saving_api_test.py:LoadWeightsTests:test_load_keras_weights', 'keras/saving/saving_api_test.py:LoadModelTests:test_load_model_with_custom_objects', 'keras/saving/saving_api_test.py:LoadWeightsTests:test_load_h5_weights_by_name', 'keras/saving/saving_api_test.py:LoadModelTests:test_basic_load', 'keras/sav... | ['keras/saving/saving_api_test.py:SaveModelTestsWarning:test_h5_deprecation_warning'] | null | pytest /testbed/keras/saving/saving_api_test.py -v --junitxml=test-results.xml | Refactoring | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/saving/saving_api.py->module->function_definition:save_model", "keras/models/model.py->module->class_definition:Model->function_definition:save"] |
keras-team/keras | 18,975 | keras-team__keras-18975 | ['18970'] | 4a4a139c7aada9f4495620e5a1c5f7ef20d84395 | diff --git a/keras/trainers/compile_utils.py b/keras/trainers/compile_utils.py
--- a/keras/trainers/compile_utils.py
+++ b/keras/trainers/compile_utils.py
@@ -468,6 +468,8 @@ def build(self, y_true, y_pred):
"must be a callable. "
f"Received instead:\nloss={loss} of type {type(... | diff --git a/keras/trainers/compile_utils_test.py b/keras/trainers/compile_utils_test.py
--- a/keras/trainers/compile_utils_test.py
+++ b/keras/trainers/compile_utils_test.py
@@ -251,6 +251,21 @@ def test_single_output_case(self):
value = compile_loss(y_true, y_pred)
self.assertAllClose(value, 0.06833... | Setting loss="crossentropy" in the compile method of a model raises an error: 'list' object has no attribute 'shape'
I love the workflow style of Keras, so I decided to make some new metrics in my own project. I want metrics that are more general, like "accuracy". So when I ran some tests like the above, I came across that the loss see... | null | 2023-12-20 14:15:26+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/trainers/compile_utils_test.py:TestCompileLoss:test_list_loss_dict_data', 'keras/trainers/compile_utils_test.py:TestCompileLoss:test_single_output_case', 'keras/trainers/compile_utils_test.py:TestCompileMetrics:test_custom_metric_function', 'keras/trainers/compile_utils_test.py:TestCompileMetrics:test_name_conv... | ['keras/trainers/compile_utils_test.py:TestCompileLoss:test_single_output_case_with_crossentropy_loss'] | null | pytest /testbed/keras/trainers/compile_utils_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/trainers/compile_utils.py->module->class_definition:CompileLoss->function_definition:build"] |
keras-team/keras | 18,977 | keras-team__keras-18977 | ['18976'] | fe2f54aa5bc42fb23a96449cf90434ab9bb6a2cd | diff --git a/keras/utils/tracking.py b/keras/utils/tracking.py
--- a/keras/utils/tracking.py
+++ b/keras/utils/tracking.py
@@ -107,7 +107,6 @@ def add_to_store(self, store_name, value):
class TrackedList(list):
- # TODO: override item removal methods?
def __init__(self, values=None, tracker=None):
... | diff --git a/keras/utils/tracking_test.py b/keras/utils/tracking_test.py
--- a/keras/utils/tracking_test.py
+++ b/keras/utils/tracking_test.py
@@ -33,3 +33,24 @@ def test_untracking_in_tracked_list(self):
lst.remove(v2)
self.assertLen(lst, 2)
self.assertLen(tracked_variables, 0)
+
+ ls... | chore: override item removal methods in tracking
Based on the TODO comments in keras/keras/utils/tracking.py
| null | 2023-12-21 07:57:15+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | [] | ['keras/utils/tracking_test.py:TrackingTest:test_untracking_in_tracked_list'] | null | pytest /testbed/keras/utils/tracking_test.py -v --junitxml=test-results.xml | Refactoring | false | false | false | true | 8 | 3 | 11 | false | false | ["keras/utils/tracking.py->module->class_definition:TrackedSet->function_definition:pop", "keras/utils/tracking.py->module->class_definition:TrackedList->function_definition:pop", "keras/utils/tracking.py->module->class_definition:TrackedDict->function_definition:popitem", "keras/utils/tracking.py->module->class_defini... |
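The TODO this record resolves ("override item removal methods?") is about keeping a tracker in sync when items leave a tracked list. A minimal standalone sketch of the idea with a toy tracker (an illustration, not Keras's actual `Tracker`/`TrackedList` code):

```python
class Tracker:
    """Toy tracker recording which values are currently tracked."""
    def __init__(self):
        self.tracked = []

    def track(self, value):
        self.tracked.append(value)

    def untrack(self, value):
        if value in self.tracked:
            self.tracked.remove(value)


class TrackedList(list):
    """List subclass that untracks items on removal, mirroring the
    spirit of the remove/pop overrides this PR adds."""
    def __init__(self, values=None, tracker=None):
        self.tracker = tracker
        values = values or []
        if tracker:
            for v in values:
                tracker.track(v)
        super().__init__(values)

    def remove(self, value):
        if self.tracker:
            self.tracker.untrack(value)
        super().remove(value)

    def pop(self, index=-1):
        value = super().pop(index)
        if self.tracker:
            self.tracker.untrack(value)
        return value


tracker = Tracker()
lst = TrackedList(["a", "b", "c"], tracker=tracker)
lst.remove("b")
lst.pop()
print(lst, tracker.tracked)  # -> ['a'] ['a']
```

Without the overrides, `remove`/`pop` would shrink the list while the tracker still held stale references, which is exactly the leak the linked test (`test_untracking_in_tracked_list`) guards against.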
keras-team/keras | 19,190 | keras-team__keras-19190 | ['19180'] | 436937dea3d52eecff3cb6f1bd5161f23c825fae | diff --git a/keras/layers/preprocessing/text_vectorization.py b/keras/layers/preprocessing/text_vectorization.py
--- a/keras/layers/preprocessing/text_vectorization.py
+++ b/keras/layers/preprocessing/text_vectorization.py
@@ -492,6 +492,10 @@ def from_config(cls, config):
config["split"] = serialization_l... | diff --git a/keras/layers/preprocessing/text_vectorization_test.py b/keras/layers/preprocessing/text_vectorization_test.py
--- a/keras/layers/preprocessing/text_vectorization_test.py
+++ b/keras/layers/preprocessing/text_vectorization_test.py
@@ -1,11 +1,15 @@
+import os
+
import numpy as np
import pytest
import ten... | `ValueError`: `ngrams` when loading a model with a `TextVectorization` layer
### Describe a bug
Loading a model that contains a `TextVectorization` layer with `ngrams` set to a tuple results in a `ValueError`.
### Code to Reproduce
```python
import numpy as np
import tensorflow as tf
from tensorflow import k... | null | 2024-02-16 15:30:56+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-bullseye
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
libffi-dev \
python3-dev \
gfortran \
libopenblas-dev \
... | ['keras/layers/preprocessing/text_vectorization_test.py:TextVectorizationTest:test_set_vocabulary', 'keras/layers/preprocessing/text_vectorization_test.py:TextVectorizationTest:test_ragged_tensor_output_length', 'keras/layers/preprocessing/text_vectorization_test.py:TextVectorizationTest:test_fixed_vocabulary', 'keras/... | ['keras/layers/preprocessing/text_vectorization_test.py:TextVectorizationTest:test_save_load_with_ngrams_flow'] | null | pytest /testbed/keras/layers/preprocessing/text_vectorization_test.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/layers/preprocessing/text_vectorization.py->module->class_definition:TextVectorization->function_definition:from_config"] |
keras-team/keras | 19,201 | keras-team__keras-19201 | ['19199'] | ec67b760ba25e1ccc392d288f7d8c6e9e153eea2 | diff --git a/keras/backend/jax/distribution_lib.py b/keras/backend/jax/distribution_lib.py
--- a/keras/backend/jax/distribution_lib.py
+++ b/keras/backend/jax/distribution_lib.py
@@ -200,12 +200,12 @@ def initialize(job_addresses, num_processes, process_id):
f"{len(job_addresses)} jobs, but num_process... | diff --git a/keras/backend/jax/distribution_lib_test.py b/keras/backend/jax/distribution_lib_test.py
--- a/keras/backend/jax/distribution_lib_test.py
+++ b/keras/backend/jax/distribution_lib_test.py
@@ -50,7 +50,7 @@ def test_device_conversion(self):
def test_initialize_with_all_job_addresses(self, mock_jax_initia... | Typo in keras.distribution.initialize()
Hi,
Calling `keras.distribution.initialize` fails due to a typo in the JAX backend: the function passes the `corrdinator_address` argument instead of `coordinator_address` to `jax.distributed.initialize`
```log
---> 13 keras.distribution.initialize()
File /... | null | 2024-02-19 18:18:24+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Copy the project files... | ['keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_processes', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_distribute_tensor', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_distribute_variable', 'keras/backend/jax/distribution_lib_test.py:JaxDi... | ['keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_initialize_with_all_job_addresses', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_initialize_with_coordinater_address'] | null | python -m pytest /testbed/keras/backend/jax/distribution_lib_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/backend/jax/distribution_lib.py->module->function_definition:initialize"] |
keras-team/keras | 19,260 | keras-team__keras-19260 | ['19216'] | 3c633f98b96b880ba2ba30464778b05193aed6b8 | diff --git a/.github/workflows/actions.yml b/.github/workflows/actions.yml
--- a/.github/workflows/actions.yml
+++ b/.github/workflows/actions.yml
@@ -49,6 +49,7 @@ jobs:
run: |
pip install -r requirements.txt --progress-bar off --upgrade
pip uninstall -y keras keras-nightly
+ pi... | diff --git a/integration_tests/numerical_test.py b/integration_tests/numerical_test.py
--- a/integration_tests/numerical_test.py
+++ b/integration_tests/numerical_test.py
@@ -3,6 +3,9 @@
import numpy as np
import tf_keras
+keras.backend.set_image_data_format("channels_last")
+tf_keras.backend.set_image_data_format(... | Enable `integration_test/numerical_test.py` for CI
May need to add more layers to the model.
| null | 2024-03-07 00:39:01+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the project files
COPY . .
# Inst... | [] | ['integration_tests/numerical_test.py:None:numerical_test'] | null | python -m pytest -v /testbed/integration_tests/numerical_test.py -o python_functions=numerical_test --junitxml=test-results.xml | Testing | true | false | false | false | 0 | 0 | 0 | false | false | [] |
keras-team/keras | 19,284 | keras-team__keras-19284 | ['19257'] | 4c356306273153d5dc26fc5772b106b4f750095f | diff --git a/keras/dtype_policies/dtype_policy.py b/keras/dtype_policies/dtype_policy.py
--- a/keras/dtype_policies/dtype_policy.py
+++ b/keras/dtype_policies/dtype_policy.py
@@ -173,9 +173,6 @@ def _parse_name(self, name):
return "float16", "float32"
elif name == "mixed_bfloat16":
re... | diff --git a/keras/layers/attention/attention_test.py b/keras/layers/attention/attention_test.py
--- a/keras/layers/attention/attention_test.py
+++ b/keras/layers/attention/attention_test.py
@@ -342,3 +342,19 @@ def test_attention_compute_mask_with_different_input_shapes(self):
computed_mask = layer.comput... | Keras 3 Attention layer value tensor dimension
hi,
I found that the code below does not return the proper output size in Keras 3 (but it works fine in Keras 2).
Please help to fix it,
Thanks.
```python
import keras
from keras import layers
i = layers.Input((8,4))
xq = layers.Conv1D(5,1)(i)
xk = layers.Conv1D(... | null | 2024-03-11 17:59:37+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the project files
COPY . .
# Inst... | ['keras/layers/attention/attention_test.py:AttentionTest:test_attention_compute_mask_with_tolerance_1e_3', 'keras/layers/attention/attention_test.py:AttentionTest:test_attention_compute_mask_returns_correct_tensor_with_all_true_mask', 'keras/layers/attention/attention_test.py:AttentionTest:test_attention_compute_mask_w... | ['keras/layers/attention/attention_test.py:AttentionTest:test_attention_compute_output_shape'] | null | python -m pytest /testbed/keras/layers/attention/attention_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/dtype_policies/dtype_policy.py->module->class_definition:FloatDTypePolicy->function_definition:_parse_name", "keras/layers/attention/attention.py->module->class_definition:Attention->function_definition:compute_output_shape"] |
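The F2P test above checks `Attention.compute_output_shape`; per the linked issue, the output's last dimension must follow the value tensor, not the query. A minimal NumPy sketch of dot-product attention illustrating that shape rule (illustrative shapes, not the Keras layer itself):

```python
import numpy as np

def dot_product_attention(query, key, value):
    # scores: (batch, Tq, Tv); softmax over the last axis, then the
    # output inherits the value tensor's feature dimension
    scores = query @ np.swapaxes(key, -1, -2)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value

q = np.ones((2, 8, 5))  # Tq=8, query dim 5
k = np.ones((2, 6, 5))  # Tv=6, key dim must match query dim
v = np.ones((2, 6, 7))  # value dim 7 -> output shape (2, 8, 7)
print(dot_product_attention(q, k, v).shape)  # -> (2, 8, 7)
```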
keras-team/keras | 19,300 | keras-team__keras-19300 | ['19299'] | df705d4fc719ab617705197248804d689ad74767 | diff --git a/keras/ops/nn.py b/keras/ops/nn.py
--- a/keras/ops/nn.py
+++ b/keras/ops/nn.py
@@ -538,10 +538,13 @@ def softmax(x, axis=-1):
array([0.09003057, 0.24472847, 0.66524096], shape=(3,), dtype=float64)
"""
- if isinstance(axis, int) and backend.shape(x)[axis] == 1:
+ # Don't use `backend.shape`... | diff --git a/keras/ops/nn_test.py b/keras/ops/nn_test.py
--- a/keras/ops/nn_test.py
+++ b/keras/ops/nn_test.py
@@ -2,10 +2,12 @@
import pytest
from absl.testing import parameterized
+import keras
from keras import backend
from keras import layers
from keras import losses
from keras import models
+from keras imp... | `keras.ops.softmax` errors out when used in a TensorFlow compiled function
## MRE
```python
import keras
from keras import ops
class SoftmaxLayer(keras.Layer):
def call(self, x):
return ops.softmax(x, axis=-1)
class Model(keras.Model):
def __init__(self):
x = keras.Input(shape=(None,))
y... | null | 2024-03-13 07:57:31+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the project files
COPY . .
# Inst... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_relu', 'keras/ops/nn_test.py:NNOpsDtypeTest:test_hard_silu_float32', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_silu', 'keras/ops/nn_test.py:NNOpsCorrectnessTest:test_ctc_loss', 'keras/ops/nn_test.py:NNOpsCorrectnessTest:test_separable_conv_2d1', 'keras/ops/nn_tes... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_softmax_in_graph'] | null | python -m pytest /testbed/keras/ops/nn_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/ops/nn.py->module->function_definition:softmax"] |
keras-team/keras | 19,459 | keras-team__keras-19459 | ['19437'] | 68e0368c680decbc7c9e1da57b56b3a8212b3ec2 | diff --git a/keras/backend/numpy/random.py b/keras/backend/numpy/random.py
--- a/keras/backend/numpy/random.py
+++ b/keras/backend/numpy/random.py
@@ -67,6 +67,7 @@ def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
def dropout(inputs, rate, noise_shape=None, seed=None):
+ dtype = inputs.... | diff --git a/keras/layers/regularization/alpha_dropout_test.py b/keras/layers/regularization/alpha_dropout_test.py
--- a/keras/layers/regularization/alpha_dropout_test.py
+++ b/keras/layers/regularization/alpha_dropout_test.py
@@ -15,6 +15,7 @@ def test_alpha_dropout_basics(self):
"rate": 0.2,
... | Keras with TF backend GaussianDropout gives error with mixed_bfloat16
When using Keras 3.1.1 with the TensorFlow 2.16.1 backend, using the GaussianDropout layer with mixed_bfloat16 results in the following error message:
```
TypeError: Exception encountered when calling GaussianDropout.call().
Input 'y' of 'Mul' Op h... | BTW, I can see that Keras 2.15 uses dtype=inputs.dtype when calling self._random_generator.random_normal function.
Another addition: the Keras 3 documentation suggests setting the mixed policy with the following line:
`tf.keras.config.set_dtype_policy('mixed_bfloat16')`
instead of the one I supplied above. Still same error. | 2024-04-08 07:27:18+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/random/random_test.py:RandomDTypeTest:test_normal_float64', 'keras/random/random_test.py:RandomDTypeTest:test_categorical_int8', 'keras/random/random_test.py:RandomDTypeTest:test_randint_uint8', 'keras/random/random_test.py:RandomTest:test_truncated_normal1', 'keras/random/random_test.py:RandomTest:test_shuffle... | ['keras/random/random_test.py:RandomDTypeTest:test_binomial_bfloat16', 'keras/layers/regularization/gaussian_dropout_test.py:GaussianDropoutTest:test_gaussian_dropout_basics', 'keras/random/random_test.py:RandomDTypeTest:test_gamma_bfloat16', 'keras/random/random_test.py:RandomDTypeTest:test_beta_bfloat16', 'keras/laye... | null | python -m pytest /testbed/keras/layers/regularization/alpha_dropout_test.py /testbed/keras/layers/regularization/dropout_test.py /testbed/keras/layers/regularization/gaussian_dropout_test.py /testbed/keras/layers/regularization/gaussian_noise_test.py /testbed/keras/random/random_test.py -v --json-report | Bug Fix | false | true | false | false | 7 | 0 | 7 | false | false | ["keras/layers/regularization/gaussian_noise.py->module->class_definition:GaussianNoise->function_definition:call", "keras/backend/tensorflow/random.py->module->function_definition:gamma", "keras/backend/tensorflow/random.py->module->function_definition:binomial", "keras/backend/numpy/random.py->module->function_defini... |
keras-team/keras | 19,466 | keras-team__keras-19466 | ['19407'] | 504716cb71973d4d4e485eb1724a3c4d3b621a69 | diff --git a/keras/ops/numpy.py b/keras/ops/numpy.py
--- a/keras/ops/numpy.py
+++ b/keras/ops/numpy.py
@@ -3992,6 +3992,9 @@ class Nonzero(Operation):
def call(self, x):
return backend.numpy.nonzero(x)
+ def compute_output_spec(self, x):
+ return KerasTensor([None] * len(x.shape))
+
@keras_... | diff --git a/keras/ops/numpy_test.py b/keras/ops/numpy_test.py
--- a/keras/ops/numpy_test.py
+++ b/keras/ops/numpy_test.py
@@ -1311,6 +1311,10 @@ def test_ndim(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.ndim(x).shape, (2,))
+ def test_nonzero(self):
+ x = KerasTensor((None, 5, ... | Numpy Ops function nonzero(x) appers to be missing check for symbolic tensors
In updating code from Keras 2 to 3, we noticed that the nonzero function continues to throw errors for use of KerasTensor in TF functions, even when run through tf.keras.ops
Digging into the source, it appears that this function does not recei... | null | 2024-04-09 17:23:58+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/ops/numpy_test.py:NumpyTwoInputOpsCorretnessTest:test_take_sparse_axis_0_float64', 'keras/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_transpose', 'keras/ops/numpy_test.py:NumpyTwoInputOpsStaticShapeTest:test_less_equal', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_prod_none', 'keras/ops/numpy_test.... | ['keras/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_nonzero'] | null | python -m pytest /testbed/keras/ops/numpy_test.py -v --json-report | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/ops/numpy.py->module->function_definition:nonzero", "keras/ops/numpy.py->module->class_definition:Nonzero->function_definition:compute_output_spec"] |
keras-team/keras | 19,484 | keras-team__keras-19484 | ['19411'] | 6a9bc4c051f0e4ee5e4ff48f08fd14230036dc46 | diff --git a/keras/optimizers/base_optimizer.py b/keras/optimizers/base_optimizer.py
--- a/keras/optimizers/base_optimizer.py
+++ b/keras/optimizers/base_optimizer.py
@@ -567,7 +567,7 @@ def _get_current_learning_rate(self):
):
return self._learning_rate(self.iterations)
elif callable(sel... | diff --git a/keras/optimizers/optimizer_test.py b/keras/optimizers/optimizer_test.py
--- a/keras/optimizers/optimizer_test.py
+++ b/keras/optimizers/optimizer_test.py
@@ -243,3 +243,12 @@ def test_tf_checkpointing(self):
checkpoint.restore(save_path)
pred = model.predict(x)
self.assertAllClos... | keras adamw optimizer failed with callable parameters in TensorFlow2.16
When we were working on upgrading Keras 2 to Keras 3 in the TensorFlow plugin, one of our AdamW-related unit tests failed; it is a sub unit test using a callable lambda as the learning_rate argument. We also found this UT failed in TensorFlow2.16 official... | https://github.com/keras-team/keras/blob/6c591d7d34c3ffaa50e805fd75c83d9c2a23414f/keras/optimizers/base_optimizer.py#L560
Here is the root cause. If learning_rate is a callable object, then it doesn't need any arguments.
I might give this one a stab if no one picks it up.
@kapoor1992 , You can create a PR
@sachinpras... | 2024-04-10 22:45:57+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/optimizers/optimizer_test.py:OptimizerTest:test_set_weights', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_ema', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_get_method', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_clip_args', 'keras/optimizers/optimizer_test.py:OptimizerTest:test... | ['keras/optimizers/optimizer_test.py:OptimizerTest:test_callable_learning_rate'] | null | python -m pytest /testbed/keras/optimizers/optimizer_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/optimizers/base_optimizer.py->module->class_definition:BaseOptimizer->function_definition:_get_current_learning_rate"] |
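The root-cause comment in this record's hints is that a plain callable learning rate takes no arguments, while a schedule takes the step count. A toy sketch of that dispatch, using signature inspection as a stand-in for Keras's actual `LearningRateSchedule` isinstance check (that substitution is an assumption of this sketch):

```python
import inspect

def resolve_learning_rate(lr, iterations):
    # schedule-style callables take the step count; zero-arg callables
    # (e.g. lambda: 0.1, the failing case in this record) take nothing
    if callable(lr):
        if len(inspect.signature(lr).parameters) >= 1:
            return lr(iterations)  # schedule: lr(step)
        return lr()                # plain zero-arg callable
    return lr                      # plain float

print(resolve_learning_rate(0.01, 5))         # -> 0.01
print(resolve_learning_rate(lambda: 0.1, 5))  # -> 0.1
print(resolve_learning_rate(lambda s: 0.1 / (1 + s), 5))
```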
keras-team/keras | 19,636 | keras-team__keras-19636 | ['19629'] | 880f0cdd67591474d8ed98a6b192655322b7ecfc | diff --git a/keras/src/dtype_policies/dtype_policy.py b/keras/src/dtype_policies/dtype_policy.py
--- a/keras/src/dtype_policies/dtype_policy.py
+++ b/keras/src/dtype_policies/dtype_policy.py
@@ -1,5 +1,4 @@
from keras.src import backend
-from keras.src import ops
from keras.src.api_export import keras_export
from ke... | diff --git a/keras/src/layers/layer_test.py b/keras/src/layers/layer_test.py
--- a/keras/src/layers/layer_test.py
+++ b/keras/src/layers/layer_test.py
@@ -437,13 +437,13 @@ def test_mixed_precision(self):
y = layer(x)
self.assertEqual(layer.compute_dtype, "float16")
self.assertEqual(layer.var... | keras autocast casts numpy int types to float
In Keras 2 I was using model input tuples with mixed types (some float and some int). This worked nicely with all policies. In Keras 3, in case numpy arrays are used as input, np.int32 will be converted into tf.float32 or tf.float16 (depending on policy).
See her... | The expected behavior is that all inputs should be autocasted to `self.input_dtype`, which is what's happening here.
You could just set `input_dtype` to be what you want.
Alternatively, you can make a layer/model that does not cast/convert its inputs at all, by setting `self._convert_input_args = False`. You will... | 2024-04-29 02:11:03+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/layers/layer_test.py:LayerTest:test_training_arg_value_resolution', 'keras/src/layers/layer_test.py:LayerTest:test_rng_seed_tracking', 'keras/src/layers/layer_test.py:LayerTest:test_add_loss', 'keras/src/layers/layer_test.py:LayerTest:test_trainable_setting', 'keras/src/layers/layer_test.py:LayerTest:test_r... | ['keras/src/layers/layer_test.py:LayerTest:test_autocast_with_np_array'] | null | python -m pytest /testbed/keras/src/layers/layer_test.py /testbed/keras/src/layers/normalization/spectral_normalization_test.py /testbed/keras/src/testing/test_case.py -v --json-report | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/src/dtype_policies/dtype_policy.py->module->class_definition:DTypePolicy->function_definition:_should_cast", "keras/src/dtype_policies/dtype_policy.py->module->class_definition:DTypePolicy->function_definition:convert_input"] |
keras-team/keras | 19,641 | keras-team__keras-19641 | ['19591'] | 9f4da5159a098256dfbccd2c926107953a6812e5 | diff --git a/keras/src/backend/tensorflow/nn.py b/keras/src/backend/tensorflow/nn.py
--- a/keras/src/backend/tensorflow/nn.py
+++ b/keras/src/backend/tensorflow/nn.py
@@ -252,6 +252,12 @@ def _conv_xla():
# If kernel's in_channel does not match input's channels, it indicates
# convolution is broken d... | diff --git a/keras/src/ops/nn_test.py b/keras/src/ops/nn_test.py
--- a/keras/src/ops/nn_test.py
+++ b/keras/src/ops/nn_test.py
@@ -1445,23 +1445,29 @@ def test_conv_2d_group_2(self, strides, dilation_rate):
)
self.assertAllClose(outputs, expected)
- @parameterized.product(strides=(1, (1, 1, 1), 2... | Conv3D crash when the data_format is 'channels_first' and using Tensorflow backend
According to the [documentation](https://keras.io/api/layers/convolution_layers/convolution3d/) of Conv3D on the Keras website, Conv3D should accept inputs with data format 'channels_first' or 'channels_last'.
While in this [colab](https://colab... | According to the error message, the lack of support is only on CPU -- GPU should work fine. There's no CPU kernel for channels_first Conv3D. We can't fix that on the Keras side except by doing a transpose/counter-transpose in that case, which would be very inefficient.
Got it. I'll try it on GPU.
@fchollet
Sorry for ... | 2024-04-30 00:14:46+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_depthwise_conv_2d2', 'keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_log_sigmoid', 'keras/src/ops/nn_test.py:NNOpsDtypeTest:test_sigmoid_bfloat16', 'keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_average_pool', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest... | ['keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d2', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d4', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d8', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d10', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d6', 'ke... | null | python -m pytest /testbed/keras/src/ops/nn_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/backend/tensorflow/nn.py->module->function_definition:conv"] |
keras-team/keras | 19,773 | keras-team__keras-19773 | ['19770'] | a243d91e43b4c43fe8d184b541b608b6ddd80f71 | diff --git a/keras/src/layers/preprocessing/string_lookup.py b/keras/src/layers/preprocessing/string_lookup.py
--- a/keras/src/layers/preprocessing/string_lookup.py
+++ b/keras/src/layers/preprocessing/string_lookup.py
@@ -316,6 +316,7 @@ def __init__(
raise ValueError(
"`sparse=True` can ... | diff --git a/keras/src/layers/preprocessing/string_lookup_test.py b/keras/src/layers/preprocessing/string_lookup_test.py
--- a/keras/src/layers/preprocessing/string_lookup_test.py
+++ b/keras/src/layers/preprocessing/string_lookup_test.py
@@ -5,6 +5,7 @@
from keras.src import backend
from keras.src import layers
fro... | [BUG] keras.layers.StringLookup and Vocabulary of Tensors
There is a bug in keras.layers.StringLookup when initializing it with a vocabulary of tensors.
```
import tensorflow as tf
vocab = ["a", "b", "c", "d"]
data = [["a", "c", "d"], ["d", "z", "b"]]
layer = tf.keras.layers.StringLookup(vocabulary=tf.convert_... | Hi @rlcauvin ,
Thanks for the report. I have reproduced the issue with Keras 3 and TF 2.15 as well. Tested with TF 2.12 and it works well. [Gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/9b18cf4427067c71060aa3adfcf03873/19770.ipynb)
The root cause you pointed out seems like the proper solution. In **TF2.12v** , I can... | 2024-05-29 06:29:26+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/layers/preprocessing/string_lookup_test.py:StringLookupTest:test_set_vocabulary', 'keras/src/layers/preprocessing/string_lookup_test.py:StringLookupTest:test_config', 'keras/src/layers/preprocessing/string_lookup_test.py:StringLookupTest:test_tf_data_compatibility', 'keras/src/layers/preprocessing/string_lo... | ['keras/src/layers/preprocessing/string_lookup_test.py:StringLookupTest:test_tensor_as_vocab'] | null | python -m pytest /testbed/keras/src/layers/preprocessing/string_lookup_test.py -v --json-report | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["keras/src/layers/preprocessing/string_lookup.py->module->class_definition:StringLookup->function_definition:__init__"] |
keras-team/keras | 19,775 | keras-team__keras-19775 | ['19772'] | a243d91e43b4c43fe8d184b541b608b6ddd80f71 | diff --git a/keras/src/backend/tensorflow/numpy.py b/keras/src/backend/tensorflow/numpy.py
--- a/keras/src/backend/tensorflow/numpy.py
+++ b/keras/src/backend/tensorflow/numpy.py
@@ -1310,6 +1310,10 @@ def less_equal(x1, x2):
def linspace(
start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0
):
... | diff --git a/keras/src/ops/numpy_test.py b/keras/src/ops/numpy_test.py
--- a/keras/src/ops/numpy_test.py
+++ b/keras/src/ops/numpy_test.py
@@ -2488,17 +2488,13 @@ def test_linspace(self):
np.linspace(start, stop, 5, retstep=True)[0],
)
self.assertAllClose(
- backend.convert_to_... | ops.linspace broken in Tensorflow when num is a tf.Tensor
When using ops.linspace with the TensorFlow backend, if the `num` argument is a tf.Tensor the code will break here:
https://github.com/keras-team/keras/blob/a243d91e43b4c43fe8d184b541b608b6ddd80f71/keras/src/backend/tensorflow/numpy.py#L1332
Because `start` and ... | Hi @gustavoeb ,
Thanks for the report. I have reproduced the issue and attached [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/4bab4d097a48b487f32c28a1e89a2d9f/19772.ipynb) here. The Op `linspace` is breaking when the value of `num` is `int` or `float` | 2024-05-29 09:55:28+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expand_dims_float32', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_all', 'keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expm1_float64', 'keras/src/ops/numpy_test.py:SparseTest:test_binary_correctness_sparse_tensor_multiply_sparse_dense_float32', '... | ['keras/src/ops/numpy_test.py:NumpyTwoInputOpsCorretnessTest:test_linspace'] | null | python -m pytest /testbed/keras/src/ops/numpy_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/backend/tensorflow/numpy.py->module->function_definition:linspace"] |
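The record above (keras-team/keras#19775, fixing issue 19772) patches the TensorFlow backend's `linspace` so it no longer breaks when `num` arrives as a 0-d tensor. The patch hunk is truncated in this dump, so the sketch below only illustrates the general idea under the assumption that the fix coerces `num` to a built-in `int` before it is used for arithmetic; `FakeScalarTensor` is a hypothetical stand-in for a 0-d tensor:

```python
def linspace(start, stop, num=50, endpoint=True):
    """Pure-Python linspace sketch; coerces `num` to int up front."""
    num = int(num)  # tolerate 0-d tensor-like objects implementing __int__
    if num < 0:
        raise ValueError(f"`num` must be non-negative, got {num}")
    if num == 0:
        return []
    if num == 1:
        return [float(start)]
    div = (num - 1) if endpoint else num
    step = (stop - start) / div
    return [start + i * step for i in range(num)]


class FakeScalarTensor:
    """Hypothetical 0-d tensor stand-in that converts via __int__."""

    def __init__(self, value):
        self._value = value

    def __int__(self):
        return self._value
```

Because the coercion happens first, `linspace(0.0, 1.0, FakeScalarTensor(3))` behaves exactly like `linspace(0.0, 1.0, 3)`.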
keras-team/keras | 19,799 | keras-team__keras-19799 | ['19792'] | c94663711d738b50af324214d89f895e046a2b66 | diff --git a/keras/src/models/functional.py b/keras/src/models/functional.py
--- a/keras/src/models/functional.py
+++ b/keras/src/models/functional.py
@@ -181,6 +181,10 @@ def compute_output_spec(self, inputs, training=None, mask=None):
# From Function
return super().compute_output_spec(inputs)
+ ... | diff --git a/keras/src/models/functional_test.py b/keras/src/models/functional_test.py
--- a/keras/src/models/functional_test.py
+++ b/keras/src/models/functional_test.py
@@ -118,6 +118,20 @@ def test_basic_flow_dict_io(self):
out_val = model(in_val)
self.assertEqual(out_val.shape, (2, 4))
+ def ... | TimeDistributed layer with nested model no longer working in TensorFlow 2.16.1
With TensorFlow `2.15.1`, the following code works fine:
```python3
import numpy as np
from tensorflow.keras.layers import Input, TimeDistributed, Flatten
from tensorflow.keras.models import Model, Sequential
inputs = [Input((17, 4)... | null | 2024-06-04 05:07:23+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/models/functional_test.py:FunctionalTest:test_rank_standardization', 'keras/src/models/sequential_test.py:SequentialTest:test_dict_inputs', 'keras/src/models/functional_test.py:FunctionalTest:test_basic_flow_multi_output', 'keras/src/models/sequential_test.py:SequentialTest:test_basic_flow_deferred', 'keras... | ['keras/src/models/functional_test.py:FunctionalTest:test_basic_flow_as_a_submodel', 'keras/src/models/sequential_test.py:SequentialTest:test_basic_flow_as_a_submodel', 'keras/src/models/sequential_test.py:SequentialTest:test_compute_output_shape', 'keras/src/ops/function_test.py:FunctionTest:test_dynamic_shape_inferen... | null | python -m pytest /testbed/keras/src/models/functional_test.py /testbed/keras/src/models/sequential_test.py /testbed/keras/src/ops/function_test.py -v --json-report | Bug Fix | false | false | false | true | 3 | 3 | 6 | false | false | ["keras/src/models/sequential.py->module->class_definition:Sequential->function_definition:compute_output_shape", "keras/src/models/sequential.py->module->class_definition:Sequential", "keras/src/models/functional.py->module->class_definition:Functional->function_definition:compute_output_shape", "keras/src/models/func... |
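The record above (keras-team/keras#19799, fixing issue 19792) restores `TimeDistributed` wrapping of nested models by adding `compute_output_shape` to `Functional` and `Sequential`. As a hedged, framework-free sketch of why the wrapper needs that method: `TimeDistributed` strips the time axis, asks the wrapped layer or model for its per-timestep output shape, then reinserts the time axis. The helper name below is illustrative, not from the patch:

```python
def time_distributed_output_shape(inner_compute_output_shape, input_shape):
    """Infer TimeDistributed output shape (sketch, not the keras source).

    `inner_compute_output_shape` plays the role of the wrapped layer's or
    model's `compute_output_shape` — the method the patch adds so that
    nested Functional/Sequential models can answer this query.
    """
    batch, timesteps = input_shape[0], input_shape[1]
    per_step_shape = (batch,) + tuple(input_shape[2:])  # drop time axis
    inner_out = inner_compute_output_shape(per_step_shape)
    return (inner_out[0], timesteps) + tuple(inner_out[1:])  # restore it
```

For example, wrapping a flatten-like submodel over inputs of shape `(None, 17, 4, 5)` yields `(None, 17, 20)`.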
keras-team/keras | 19,826 | keras-team__keras-19826 | ['19821'] | 2305fada8889e86463493bb4893b13ee8a8f0573 | diff --git a/keras/src/ops/numpy.py b/keras/src/ops/numpy.py
--- a/keras/src/ops/numpy.py
+++ b/keras/src/ops/numpy.py
@@ -4345,26 +4345,44 @@ def call(self, x):
def compute_output_spec(self, x):
x_shape = list(x.shape)
+ repeats = self.repeats
+ if isinstance(repeats, int):
+ r... | diff --git a/keras/src/ops/numpy_test.py b/keras/src/ops/numpy_test.py
--- a/keras/src/ops/numpy_test.py
+++ b/keras/src/ops/numpy_test.py
@@ -1364,7 +1364,7 @@ def test_repeat(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.repeat(x, 2).shape, (None,))
self.assertEqual(knp.repeat(x, 3... | `keras.ops.repeat` cannot return an expected shape when `x` is a `KerasTensor` and the `axis` is `None`
Hello. Thank you for your contributions to and maintenance of Keras.
I'm following the instructions of [Conditional GAN (code samples, uses Keras 3)](https://keras.io/examples/generative/conditional_gan/)... | I can look into this and report my findings in a few hours
This is due to an oversight caused by the different ways Keras and other backends handle the `repeats` parameter.
You can submit a PR after you solve it.
Edited: [Was confused about the expected dimensions of the output but I found the mistake in my logic] | 2024-06-10 15:05:53+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expand_dims_float32', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_all', 'keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expm1_float64', 'keras/src/ops/numpy_test.py:SparseTest:test_binary_correctness_sparse_tensor_multiply_sparse_dense_float32', '... | ['keras/src/ops/numpy_test.py:NumpyOneInputOpsStaticShapeTest:test_repeat', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_repeat'] | null | python -m pytest /testbed/keras/src/ops/numpy_test.py -v | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/ops/numpy.py->module->class_definition:Repeat->function_definition:compute_output_spec"] |
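The record above (keras-team/keras#19826, fixing issue 19821) patches `Repeat.compute_output_spec` so that shape inference is correct when `axis is None`. The visible hunk shows it now branches on whether `repeats` is an `int`; the rest is truncated, so the following is only a sketch of plausible shape-inference logic under that assumption, with an illustrative helper name:

```python
import math


def repeat_output_shape(x_shape, repeats, axis=None):
    """Shape-inference sketch for `repeat` (not the keras source).

    With axis=None the input is conceptually flattened, so the result is
    always 1-D: fully known only when `repeats` is an int and every input
    dim is known; otherwise the single dim is unknown (None).
    """
    if axis is None:
        if isinstance(repeats, int) and None not in x_shape:
            return (math.prod(x_shape) * repeats,)
        return (None,)
    out = list(x_shape)
    if isinstance(repeats, int) and out[axis] is not None:
        out[axis] *= repeats
    else:
        out[axis] = None  # per-element repeats or unknown input dim
    return tuple(out)
```

This reproduces the behavior the issue asks for: a `(None, 3)` input with `axis=None` infers `(None,)` rather than raising.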
keras-team/keras | 19,838 | keras-team__keras-19838 | ['19825'] | 26abe697a8802de40cb2761fc98b843fe1b2d5f6 | diff --git a/keras/src/losses/losses.py b/keras/src/losses/losses.py
--- a/keras/src/losses/losses.py
+++ b/keras/src/losses/losses.py
@@ -1711,6 +1711,9 @@ def sparse_categorical_crossentropy(
array([0.0513, 2.303], dtype=float32)
"""
+ if len(y_true.shape) == len(y_pred.shape) and y_true.shape[-1] == 1... | diff --git a/keras/src/losses/losses_test.py b/keras/src/losses/losses_test.py
--- a/keras/src/losses/losses_test.py
+++ b/keras/src/losses/losses_test.py
@@ -1055,7 +1055,7 @@ def test_no_reduction(self):
from_logits=True, reduction=None
)
loss = cce_obj(y_true, logits)
- self.ass... | sparse_categorical_crossentropy with ignore_class fails for 4D inputs
Using `ignore_class` with `keras.losses.sparse_categorical_crossentropy` and 4D inputs (Batch x Height x Width x Class) fails with a ValueError indicating wrong shapes.
Minimal example to reproduce:
```
import numpy as np
import tensorflow as t... | > y_true = np.zeros((1, 224, 224, 1))
=> `y_true = np.zeros((1, 224, 224))`
Shouldn't `y_true` have one dimension less than `y_pred`?
Oh, you are right, with `y_true = np.zeros((1, 224, 224))` it seems to work...
However, when omitting `ignore_class` from `sparse_categorical_crossentropy`, `y_true = np.zeros((1... | 2024-06-11 16:45:49+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy repository contents
COPY . .
# In... | ['keras/src/losses/losses_test.py:CategoricalFocalCrossentropyTest:test_label_smoothing', 'keras/src/losses/losses_test.py:SparseCategoricalCrossentropyTest:test_unweighted', 'keras/src/losses/losses_test.py:MeanAbsoluteErrorTest:test_zero_weighted', 'keras/src/losses/losses_test.py:CategoricalCrossentropyTest:test_con... | ['keras/src/losses/losses_test.py:SparseCategoricalCrossentropyTest:test_ignore_class'] | null | pytest /testbed/keras/src/losses/losses_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/losses/losses.py->module->function_definition:sparse_categorical_crossentropy"] |
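The record above (keras-team/keras#19838, fixing issue 19825) makes `sparse_categorical_crossentropy` with `ignore_class` accept 4D labels shaped `(batch, H, W, 1)`. The visible patch hunk adds the check `len(y_true.shape) == len(y_pred.shape) and y_true.shape[-1] == 1`, i.e. a trailing singleton label axis is squeezed so labels end up one rank below predictions. A shape-only sketch of that rule (the helper name is illustrative):

```python
def maybe_squeeze_labels(y_true_shape, y_pred_shape):
    """Mirror the patch's rank check on shapes alone (sketch).

    If labels have the same rank as predictions and a trailing size-1
    axis, drop that axis; otherwise leave the label shape untouched.
    """
    if len(y_true_shape) == len(y_pred_shape) and y_true_shape[-1] == 1:
        return tuple(y_true_shape[:-1])
    return tuple(y_true_shape)
```

So `(1, 224, 224, 1)` labels against `(1, 224, 224, 10)` logits are treated as `(1, 224, 224)`, which is what `ignore_class` expects downstream.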
keras-team/keras | 19,844 | keras-team__keras-19844 | ['19828'] | 1c60668f6bdd05dab619806e7b2dc25d3ed4ccbf | diff --git a/keras/src/initializers/__init__.py b/keras/src/initializers/__init__.py
--- a/keras/src/initializers/__init__.py
+++ b/keras/src/initializers/__init__.py
@@ -49,6 +49,7 @@
"uniform": RandomUniform,
"normal": RandomNormal,
"orthogonal": OrthogonalInitializer,
+ "Orthogonal"... | diff --git a/keras/src/initializers/random_initializers_test.py b/keras/src/initializers/random_initializers_test.py
--- a/keras/src/initializers/random_initializers_test.py
+++ b/keras/src/initializers/random_initializers_test.py
@@ -147,6 +147,10 @@ def test_orthogonal_initializer(self):
self.run_class_ser... | Keras 3.0 load h5 model with Orthogonal initializer fails
Hi guys,
I'm trying to load an h5 model that was working in earlier versions.
* This is a small part of the h5 file, where you can see (last part of the snippet) a recurrent initializer with a classname of **Orthogonal**.
```
{"name": "decoder_gru0", ... | Hi @mahnehsilla -
Thanks for raising the issue. Can you share the code snippet and h5 model with me where you are getting this error? Then I can reproduce it and try to help you with this.
| 2024-06-12 08:33:53+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential
# Copy the entire repository
COPY . .
# Install tensorflow and other backend dependencies first
RUN pip install tensorflow n... | ['keras/src/layers/rnn/gru_test.py:GRUTest:test_pass_initial_state', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_variance_scaling', 'keras/src/layers/rnn/gru_test.py:GRUTest:test_statefulness', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_variance_scaling_inval... | ['keras/src/layers/rnn/gru_test.py:GRUTest:test_legacy_implementation_argument', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_orthogonal_initializer'] | null | pytest /testbed/keras/src/initializers/random_initializers_test.py /testbed/keras/src/layers/rnn/gru_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["keras/src/layers/rnn/gru.py->module->class_definition:GRU->function_definition:__init__"] |
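The record above (keras-team/keras#19844, fixing issue 19828) makes Keras 3 load legacy H5 models whose config serializes the initializer class name as `"Orthogonal"` (capital O). The visible patch hunk simply registers that spelling as an alias alongside `"orthogonal"` in the initializer lookup table. A minimal sketch of that aliasing pattern — the table values and `get_initializer` helper are illustrative, not the keras API:

```python
# Lookup table sketch: both spellings resolve to the same target, mirroring
# the patch's added `"Orthogonal"` alias for legacy H5 configs.
ALL_INITIALIZERS = {
    "orthogonal": "OrthogonalInitializer",
    "Orthogonal": "OrthogonalInitializer",  # legacy capitalized class name
}


def get_initializer(identifier):
    """Resolve an initializer identifier, raising a clear error if unknown."""
    try:
        return ALL_INITIALIZERS[identifier]
    except KeyError:
        raise ValueError(f"Unknown initializer: {identifier!r}")
```

With the alias in place, deserializing an old config containing `"class_name": "Orthogonal"` resolves exactly like the modern lowercase spelling instead of failing.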