Each row of the dataset pairs a real GitHub issue with the patch that resolved it and the functions that the patch touches.

| Field | Type | Description |
|---|---|---|
| `repo` | string | GitHub repository the issue comes from (e.g. `huggingface/transformers`) |
| `instance_id` | string | Unique identifier of the form `OWNER__repo-NUMBER` (e.g. `UXARRAY__uxarray-1117`) |
| `base_commit` | string | 40-character SHA of the commit the issue was reported against |
| `fixed_commit` | string | SHA of the fixing commit; populated only for a subset of rows |
| `patch` | string | Unified diff that resolves the issue |
| `test_patch` | string | Accompanying test changes (may be empty) |
| `problem_statement` | string | Issue title and body |
| `hints_text` | string | Issue discussion and comments (may be empty) |
| `created_at` | int64 | Issue creation time as a Unix timestamp in milliseconds |
| `labels` | list of string | GitHub labels, 0–7 entries (may be null) |
| `category` | string | One of four classes: `Bug Report`, `Feature Request`, `Performance Issue`, `Security Vulnerability` |
| `edit_functions` | list of string | Functions modified by the patch, as `path/to/file.py:qualified_name` (1–10 entries) |
| `added_functions` | list of string | Functions newly added by the patch (0–19 entries) |
| `edit_functions_length` | int64 | Number of entries in `edit_functions` |
| `__index_level_0__` | int64 | Residual pandas row index from dataset construction |
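As a sketch of how a single row looks, here is a hypothetical record following the schema above. The field values are abbreviated stand-ins modeled on the dataset's preview, not guaranteed dataset contents:

```python
from datetime import datetime, timezone

# Hypothetical record; values are abbreviated stand-ins, not real dataset contents.
record = {
    "repo": "UXARRAY/uxarray",
    "instance_id": "UXARRAY__uxarray-1117",
    "base_commit": "fe4cae1311db7fb21187b505e06018334a015c48",
    "fixed_commit": None,               # populated only for a subset of rows
    "patch": "diff --git a/uxarray/grid/connectivity.py ...",
    "test_patch": "",
    "problem_statement": "Optimize Face Centroid Calculations ...",
    "hints_text": "",
    "created_at": 1_734_798_627_000,    # Unix epoch, milliseconds
    "labels": ["run-benchmark"],
    "category": "Performance Issue",
    "edit_functions": [
        "uxarray/grid/connectivity.py:_build_n_nodes_per_face",
        "uxarray/grid/coordinates.py:_construct_face_centroids",
    ],
    "added_functions": [],
    "edit_functions_length": 2,
}

# created_at is in milliseconds, so divide by 1000 before converting.
created = datetime.fromtimestamp(record["created_at"] / 1000, tz=timezone.utc)
print(record["category"], created.year)  # → Performance Issue 2024
```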
This dataset is derived from czlll/Loc-Bench_V1. We added a `fixed_commit` field for the subset of issues used in our work.
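Because `fixed_commit` is present only for some rows, a consumer will typically select just those. A minimal sketch over plain dicts standing in for loaded rows (the loading step itself is omitted; the values are illustrative):

```python
# `rows` is a hypothetical stand-in for the loaded dataset.
rows = [
    {"instance_id": "avantifellows__quiz-backend-84", "fixed_commit": None},
    {
        "instance_id": "scikit-learn__scikit-learn-24145",
        "fixed_commit": "e5c65906ee7fdab81c2950fab0fe4d04ed7ad522",
    },
]

# Keep only the issues whose fixing commit is recorded.
with_fix = [r for r in rows if r["fixed_commit"] is not None]
print([r["instance_id"] for r in with_fix])  # → ['scikit-learn__scikit-learn-24145']
```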