Dataset schema (column · dtype · observed range):

repo                     stringlengths    5 – 51
instance_id              stringlengths    11 – 56
base_commit              stringlengths    40 – 40
fixed_commit             stringclasses    20 values
patch                    stringlengths    400 – 56.6k
test_patch               stringlengths    0 – 895k
problem_statement        stringlengths    27 – 55.6k
hints_text               stringlengths    0 – 72k
created_at               int64            1,447B – 1,739B
labels                   listlengths      0 – 7
category                 stringclasses    4 values
edit_functions           listlengths      1 – 10
added_functions          listlengths      0 – 19
edit_functions_length    int64            1 – 10
__index_level_0__        int64            1 – 659
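The schema above can be expressed as a lightweight Python-side validator. This is a minimal sketch: the field names and dtypes come from the schema itself, but the `SCHEMA` mapping, the `validate` helper, and the sample record are illustrative assumptions, not part of the dataset.

```python
# Minimal sketch of the schema above as a record validator.
# The sample record is a hypothetical placeholder, not a real row.

SCHEMA = {
    "repo": str,
    "instance_id": str,
    "base_commit": str,
    "fixed_commit": str,          # often the string "null" in the preview
    "patch": str,
    "test_patch": str,            # may be empty (min length 0)
    "problem_statement": str,
    "hints_text": str,            # may be empty (min length 0)
    "created_at": int,            # epoch timestamp in milliseconds
    "labels": list,
    "category": str,
    "edit_functions": list,
    "added_functions": list,
    "edit_functions_length": int,
    "__index_level_0__": int,
}

def validate(record: dict) -> bool:
    """Check that a record carries every schema field with the right type."""
    return all(
        name in record and isinstance(record[name], dtype)
        for name, dtype in SCHEMA.items()
    )

sample = {
    "repo": "example/repo",
    "instance_id": "example__repo-1",
    "base_commit": "0" * 40,      # base_commit is always a 40-char SHA
    "fixed_commit": "null",
    "patch": "diff --git a/x.py b/x.py",
    "test_patch": "",
    "problem_statement": "Example problem",
    "hints_text": "",
    "created_at": 1_700_000_000_000,
    "labels": [],
    "category": "Bug Report",
    "edit_functions": ["x.py:f"],
    "added_functions": [],
    "edit_functions_length": 1,
    "__index_level_0__": 0,
}

print(validate(sample))  # True
```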
repo: UXARRAY/uxarray
instance_id: UXARRAY__uxarray-1117
base_commit: fe4cae1311db7fb21187b505e06018334a015c48
fixed_commit: null
patch: diff --git a/uxarray/grid/connectivity.py b/uxarray/grid/connectivity.py index 78e936117..54bd1017e 100644 --- a/uxarray/grid/connectivity.py +++ b/uxarray/grid/connectivity.py @@ -146,13 +146,14 @@ def _build_n_nodes_per_face(face_nodes, n_face, n_max_face_nodes): """Constructs ``n_nodes_per_face``, which contain...
problem_statement: Optimize Face Centroid Calculations If `Grid.face_lon` does not exist, `_populate_face_centroids()`, actually `_construct_face_centroids()` in it, takes extremely long for large datasets. For instance, the benchmark/profiling below is for a ~4GB SCREAM dataset, around 5 mins: @rajeeja FYI: I'm already working on this ...
created_at: 1,734,798,627,000
labels: [ "run-benchmark" ]
category: Performance Issue
edit_functions: [ "uxarray/grid/connectivity.py:_build_n_nodes_per_face", "uxarray/grid/coordinates.py:_construct_face_centroids" ]
added_functions: []
edit_functions_length: 2
__index_level_0__: 1
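The `created_at` values are epoch timestamps in milliseconds (the schema's 1,447B–1,739B range corresponds to roughly late 2015 through early 2025). A short sketch of converting the uxarray record's timestamp above to a UTC date:

```python
from datetime import datetime, timezone

# created_at in this dataset is an epoch timestamp in milliseconds.
created_at_ms = 1_734_798_627_000  # value from the uxarray record above

created = datetime.fromtimestamp(created_at_ms / 1000, tz=timezone.utc)
print(created.date())  # 2024-12-21
```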
repo: ultralytics/ultralytics
instance_id: ultralytics__ultralytics-17810
base_commit: d8c43874ae830a36d2adeac4a44a8ce5697e972c
fixed_commit: null
patch: diff --git a/ultralytics/utils/ops.py b/ultralytics/utils/ops.py index 25e83c61c3a..ac53546ed1b 100644 --- a/ultralytics/utils/ops.py +++ b/ultralytics/utils/ops.py @@ -75,9 +75,8 @@ def segment2box(segment, width=640, height=640): (np.ndarray): the minimum and maximum x and y values of the segment. """ ...
problem_statement: Training labels not applied properly to training data ### Search before asking - [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component Train ### Bug # Bug Labels are not included in the gener...
hints_text: 👋 Hello @TheOfficialOzone, thank you for bringing this to our attention 🚀! We understand that you're encountering an issue with labels not being applied correctly during the training of a segmentation model on the Ultralytics repository. For us to assist you effectively, please ensure that you've provided a [minimum...
created_at: 1,732,632,265,000
labels: [ "enhancement", "segment" ]
category: Bug Report
edit_functions: [ "ultralytics/utils/ops.py:segment2box" ]
added_functions: []
edit_functions_length: 1
__index_level_0__: 4
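Each `patch` (and `test_patch`) field is a unified git diff, so the files a fix touched can be recovered from its `diff --git` headers. A sketch, using the header from the ultralytics record above; the regex and helper name are mine, not part of the dataset:

```python
import re

# "diff --git a/<path> b/<path>" marks the start of each file in a git patch.
_DIFF_HEADER = re.compile(r"^diff --git a/(\S+) b/\S+", re.MULTILINE)

def touched_files(patch: str) -> list[str]:
    """Return the file paths named by a patch's diff headers, in order."""
    return _DIFF_HEADER.findall(patch)

patch = (
    "diff --git a/ultralytics/utils/ops.py b/ultralytics/utils/ops.py\n"
    "index 25e83c61c3a..ac53546ed1b 100644\n"
    "--- a/ultralytics/utils/ops.py\n"
    "+++ b/ultralytics/utils/ops.py\n"
)
print(touched_files(patch))  # ['ultralytics/utils/ops.py']
```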
repo: Chainlit/chainlit
instance_id: Chainlit__chainlit-1575
base_commit: 8b2d4bacfd4fa2c8af72e2d140d527d20125b07b
fixed_commit: null
patch: diff --git a/backend/chainlit/config.py b/backend/chainlit/config.py index b90f162f07..18ee6be8db 100644 --- a/backend/chainlit/config.py +++ b/backend/chainlit/config.py @@ -311,6 +311,8 @@ class CodeSettings: @dataclass() class ProjectSettings(DataClassJsonMixin): allow_origins: List[str] = Field(default_facto...
test_patch: diff --git a/cypress/e2e/copilot/.chainlit/config.toml b/cypress/e2e/copilot/.chainlit/config.toml index e2a93af08f..9c42755715 100644 --- a/cypress/e2e/copilot/.chainlit/config.toml +++ b/cypress/e2e/copilot/.chainlit/config.toml @@ -13,7 +13,7 @@ session_timeout = 3600 cache = false # Authorized origins -allow_or...
problem_statement: Security: allowed origins should not be * by default CORS headers should be restricted to the current domain at least, by default.
hints_text: @dosu Where do we have to look in the settings/code to set this to a sensible/safe default value? <!-- Answer --> To set the allowed origins for CORS headers to a sensible/safe default value, you need to look at the `allow_origins` setting in the `config.toml` file. ```toml # Authorized origins allow_origins = ["*"] `...
created_at: 1,733,733,602,000
labels: [ "size:M" ]
category: Security Vulnerability
edit_functions: [ "backend/chainlit/server.py:get_html_template", "backend/chainlit/socket.py:build_anon_user_identifier", "backend/chainlit/socket.py:connect", "backend/chainlit/socket.py:connection_successful", "backend/chainlit/socket.py:window_message" ]
added_functions: []
edit_functions_length: 5
__index_level_0__: 5
repo: huggingface/transformers
instance_id: huggingface__transformers-22496
base_commit: 41d47db90fbe9937c0941f2f9cdb2ddd83e49a2e
fixed_commit: null
patch: diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py index 91de6810b17e..96f91a0a43dd 100644 --- a/src/transformers/models/whisper/modeling_whisper.py +++ b/src/transformers/models/whisper/modeling_whisper.py @@ -34,7 +34,12 @@ SequenceClassifierOut...
test_patch: diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py index 883a2021b9bb..98bbbb3214a7 100644 --- a/tests/models/whisper/test_modeling_whisper.py +++ b/tests/models/whisper/test_modeling_whisper.py @@ -1013,6 +1013,48 @@ def test_mask_time_prob(self): en...
problem_statement: Whisper Prompting ### Feature request Add prompting for the Whisper model to control the style/formatting of the generated text. ### Motivation During training, Whisper can be fed a "previous context window" to condition on longer passages of text. The original OpenAI Whisper implementation provides the use...
hints_text: cc @hollance Hello, I'd like to pick up this issue!
created_at: 1,680,278,096,000
labels: []
category: Feature Request
edit_functions: [ "src/transformers/models/whisper/modeling_whisper.py:WhisperForConditionalGeneration.generate", "src/transformers/models/whisper/tokenization_whisper.py:WhisperTokenizer._decode", "src/transformers/models/whisper/tokenization_whisper_fast.py:WhisperTokenizerFast._decode" ]
added_functions: [ "src/transformers/models/whisper/processing_whisper.py:WhisperProcessor.get_prompt_ids", "src/transformers/models/whisper/tokenization_whisper.py:WhisperTokenizer.get_prompt_ids", "src/transformers/models/whisper/tokenization_whisper.py:WhisperTokenizer._strip_prompt", "src/transformers/models/whisper/tokeniz...
edit_functions_length: 3
__index_level_0__: 8
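`edit_functions` and `added_functions` entries follow a `path/to/file.py:Qualified.name` convention, as in the Whisper record above. A sketch of splitting an entry into its two parts (the helper name is mine):

```python
def split_entry(entry: str) -> tuple[str, str]:
    """Split 'path/to/file.py:Class.method' into (file path, qualified name).

    Splitting on the first ':' is safe here because the POSIX-style paths
    in these records do not contain colons.
    """
    path, _, qualname = entry.partition(":")
    return path, qualname

entry = (
    "src/transformers/models/whisper/modeling_whisper.py"
    ":WhisperForConditionalGeneration.generate"
)
print(split_entry(entry)[1])  # WhisperForConditionalGeneration.generate
```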
repo: scikit-learn/scikit-learn
instance_id: scikit-learn__scikit-learn-24145
base_commit: 55af30d981ea2f72346ff93602f0b3b740cfe8d6
fixed_commit: e5c65906ee7fdab81c2950fab0fe4d04ed7ad522
patch: diff --git a/doc/whats_new/v1.3.rst b/doc/whats_new/v1.3.rst index 9cab0db995c5d..ec1301844b877 100644 --- a/doc/whats_new/v1.3.rst +++ b/doc/whats_new/v1.3.rst @@ -487,6 +487,11 @@ Changelog categorical encoding based on target mean conditioned on the value of the category. :pr:`25334` by `Thomas Fan`_. +- |En...
test_patch: diff --git a/sklearn/preprocessing/tests/test_polynomial.py b/sklearn/preprocessing/tests/test_polynomial.py index 727b31b793b1d..1062a3da820e7 100644 --- a/sklearn/preprocessing/tests/test_polynomial.py +++ b/sklearn/preprocessing/tests/test_polynomial.py @@ -35,6 +35,22 @@ def is_c_contiguous(a): assert np.isfor...
problem_statement: Add sparse matrix output to SplineTransformer ### Describe the workflow you want to enable As B-splines naturally have a sparse structure, I'd like to have the option that `SplineTransformer` returns a sparse matrix instead of always an ndarray. ```python import numpy as np from sklearn.preprocessing import SplineT...
created_at: 1,659,969,522,000
labels: [ "module:preprocessing" ]
category: Feature Request
edit_functions: [ "sklearn/preprocessing/_polynomial.py:SplineTransformer.__init__", "sklearn/preprocessing/_polynomial.py:SplineTransformer.fit", "sklearn/preprocessing/_polynomial.py:SplineTransformer.transform" ]
added_functions: []
edit_functions_length: 3
__index_level_0__: 9
repo: avantifellows/quiz-backend
instance_id: avantifellows__quiz-backend-84
base_commit: f970b54634a9a9ba000aaf76d05338a5d77b0d60
fixed_commit: null
patch: diff --git a/app/models.py b/app/models.py index cfb9644b..80f94b94 100644 --- a/app/models.py +++ b/app/models.py @@ -365,6 +365,11 @@ class Config: schema_extra = {"example": {"answer": [0, 1, 2], "visited": True}} +""" +Note : The below model is not being used currently anywhere +""" + + class SessionA...
test_patch: diff --git a/app/tests/test_session_answers.py b/app/tests/test_session_answers.py index a2c04a1e..2d05a9be 100644 --- a/app/tests/test_session_answers.py +++ b/app/tests/test_session_answers.py @@ -7,12 +7,13 @@ class SessionAnswerTestCase(SessionsBaseTestCase): def setUp(self): super().setUp() ...
problem_statement: At some places we're updating just one key of an object or one element of an array but we send the whole object to MongoDB to update which is inefficient.
created_at: 1,680,002,295,000
labels: []
category: Performance Issue
edit_functions: [ "app/routers/session_answers.py:update_session_answer", "app/routers/session_answers.py:get_session_answer", "app/routers/sessions.py:create_session", "app/routers/sessions.py:update_session" ]
added_functions: [ "app/routers/session_answers.py:update_session_answer_in_a_session", "app/routers/session_answers.py:get_session_answer_from_a_session" ]
edit_functions_length: 4
__index_level_0__: 11
repo: internetarchive/openlibrary
instance_id: internetarchive__openlibrary-7929
base_commit: dc49fddb78a3cb25138922790ddd6a5dd2b5741c
fixed_commit: null
patch: diff --git a/openlibrary/core/lending.py b/openlibrary/core/lending.py index 6162ed5b081..d7e2a1949cb 100644 --- a/openlibrary/core/lending.py +++ b/openlibrary/core/lending.py @@ -511,13 +511,53 @@ def _get_ia_loan(identifier, userid): def get_loans_of_user(user_key): """TODO: Remove inclusion of local data; s...
problem_statement: Cache Patron's Active Loans <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> On several pages (e.g. LoanStatus) we fetch the patron's active loans (which can be expensive) to see if they've borrowed a book (e.g. on the book page). Ideally, we'd cache this every...
created_at: 1,685,797,378,000
labels: [ "Priority: 1", "Needs: Patch Deploy" ]
category: Performance Issue
edit_functions: [ "openlibrary/core/lending.py:get_loans_of_user", "openlibrary/core/models.py:User.get_loan_for", "openlibrary/core/models.py:User.get_waiting_loan_for", "openlibrary/plugins/upstream/borrow.py:borrow.POST" ]
added_functions: [ "openlibrary/core/lending.py:get_user_waiting_loans", "openlibrary/core/models.py:User.get_user_waiting_loans" ]
edit_functions_length: 4
__index_level_0__: 13
repo: rwth-i6/sisyphus
instance_id: rwth-i6__sisyphus-191
base_commit: a5ddfaa5257beafb5fdce28d96e6ae1e574ee9fe
fixed_commit: null
patch: diff --git a/sisyphus/aws_batch_engine.py b/sisyphus/aws_batch_engine.py index 4b0173f..80f454e 100644 --- a/sisyphus/aws_batch_engine.py +++ b/sisyphus/aws_batch_engine.py @@ -1,4 +1,4 @@ -""" This is an experimental implementation for the aws batch engine. +"""This is an experimental implementation for the aws batch ...
problem_statement: Too many open file descriptors Hi, I was using sisyphus today for a big recipe and I got an error in my worker which claimed `too many open files`: ``` OSError: [Errno 24] Unable to synchronously open file (unable to open file: name = <filename>, errno = 24, error message = 'Too many open files', flags = 0, o_flags...
hints_text: I had this happen again. It was with a relatively big setup, but I'm not sure what causes the issue yet since my manager shouldn't be opening many files, if any. Please find attached the corresponding stack trace from the manager [here](https://github.com/user-attachments/files/15509563/manager_too_many_open_files_gi...
created_at: 1,717,145,069,000
labels: []
category: Performance Issue
edit_functions: [ "sisyphus/aws_batch_engine.py:AWSBatchEngine.system_call", "sisyphus/load_sharing_facility_engine.py:LoadSharingFacilityEngine.system_call", "sisyphus/simple_linux_utility_for_resource_management_engine.py:SimpleLinuxUtilityForResourceManagementEngine.system_call", "sisyphus/son_of_grid_engine.py:SonOfGridEng...
added_functions: []
edit_functions_length: 4
__index_level_0__: 14
repo: sympy/sympy
instance_id: sympy__sympy-27223
base_commit: d293133e81194adc11177729af91c970f092a6e7
fixed_commit: null
patch: diff --git a/sympy/utilities/lambdify.py b/sympy/utilities/lambdify.py index a84d1a1c26c1..518f5cb67bf5 100644 --- a/sympy/utilities/lambdify.py +++ b/sympy/utilities/lambdify.py @@ -11,6 +11,7 @@ import keyword import textwrap import linecache +import weakref # Required despite static analysis claiming it is not...
test_patch: diff --git a/sympy/utilities/tests/test_lambdify.py b/sympy/utilities/tests/test_lambdify.py index 4a82290569ea..428cbaed92b6 100644 --- a/sympy/utilities/tests/test_lambdify.py +++ b/sympy/utilities/tests/test_lambdify.py @@ -1,6 +1,8 @@ from itertools import product import math import inspect +import linecache +im...
problem_statement: Memory Leak in `sympy.lambdify` Hi there, I'm working with an [algorithm](https://github.com/SymposiumOrganization/NeuralSymbolicRegressionThatScales) that relies on calling `sympy.lambdify` hundreds of millions of times (~200M) and noticed the memory usage of the process steadily creeping up and eventually crashing t...
hints_text: > The memory usage increases by about +390KB per lambdified equation I assume you mean per 10000 lambdified equations so it is about 400 bytes per lambdified equation. My guess is that each call to lambdify creates a Dummy and then something creates a polynomial ring with that dummy and the polynomial ring never gets ...
created_at: 1,730,837,688,000
labels: [ "utilities.lambdify" ]
category: Bug Report
edit_functions: [ "sympy/utilities/lambdify.py:lambdify" ]
added_functions: []
edit_functions_length: 1
__index_level_0__: 16
repo: modin-project/modin
instance_id: modin-project__modin-6836
base_commit: 097ea527c8e3f099e1f252b067a1d5eb055ad0b5
fixed_commit: null
patch: diff --git a/modin/core/dataframe/algebra/binary.py b/modin/core/dataframe/algebra/binary.py index f19040cc104..af0c6ee7e8e 100644 --- a/modin/core/dataframe/algebra/binary.py +++ b/modin/core/dataframe/algebra/binary.py @@ -415,7 +415,9 @@ def caller( ): shape_hint = "colu...
problem_statement: FEAT: Do not put binary functions to the Ray storage multiple times. Currently, the binary operations are wrapped into lambdas which are put into the Ray storage on each operation.
created_at: 1,703,167,333,000
labels: []
category: Feature Request
edit_functions: [ "modin/core/dataframe/algebra/binary.py:Binary.register", "modin/core/dataframe/pandas/dataframe/dataframe.py:PandasDataframe.map", "modin/core/dataframe/pandas/partitioning/partition_manager.py:PandasDataframePartitionManager.map_partitions", "modin/core/dataframe/pandas/partitioning/partition_manager.py:Pan...
added_functions: []
edit_functions_length: 5
__index_level_0__: 17
repo: Open-MSS/MSS
instance_id: Open-MSS__MSS-1967
base_commit: 56e9528b552a9d8f2e267661473b8f0e724fd093
fixed_commit: null
patch: diff --git a/.github/workflows/python-flake8.yml b/.github/workflows/python-flake8.yml index b578708e4..0e9003135 100644 --- a/.github/workflows/python-flake8.yml +++ b/.github/workflows/python-flake8.yml @@ -19,10 +19,10 @@ jobs: timeout-minutes: 10 steps: - uses: actions/checkout@v3 - - name: Set up...
test_patch: diff --git a/conftest.py b/conftest.py index be546d782..83f33ca85 100644 --- a/conftest.py +++ b/conftest.py @@ -211,9 +211,8 @@ def _load_module(module_name, path): @pytest.fixture(autouse=True) -def close_open_windows(): - """ - Closes all windows after every test +def fail_if_open_message_boxes_left(): + ...
problem_statement: What to do with "UserWarning: An unhandled message box popped up during your test!"? There are many of these warnings in the CI logs basically spamming the output and drowning out other more interesting warnings. These warnings are originating from https://github.com/Open-MSS/MSS/blob/1327ede1dbe3f4eb26bf3889934fa76...
hints_text: see here, the warning comes from the fixture https://github.com/Open-MSS/MSS/blob/develop/conftest.py#L214 tests better should fail instead of hiding one cause, some of the tests showing that have to do a second turn. Sometimes functionality gets added but not the test improved e.g. ``` call(<mslib.msui.msc...
created_at: 1,693,392,061,000
labels: []
category: Performance Issue
edit_functions: [ "mslib/msui/mscolab.py:MSUIMscolab.add_operation", "mslib/msui/mscolab.py:MSUIMscolab.change_category_handler", "mslib/msui/mscolab.py:MSUIMscolab.change_description_handler", "mslib/msui/mscolab.py:MSUIMscolab.rename_operation_handler", "mslib/msui/mscolab.py:MSUIMscolab.logout", "mslib/msui/socket_contr...
added_functions: []
edit_functions_length: 7
__index_level_0__: 18
repo: vllm-project/vllm
instance_id: vllm-project__vllm-5473
base_commit: 7d19de2e9c9a94658c36b55011b803a7991d0335
fixed_commit: null
patch: diff --git a/vllm/config.py b/vllm/config.py index 2513d43ce8e6b..a0bd6b0975a16 100644 --- a/vllm/config.py +++ b/vllm/config.py @@ -11,7 +11,8 @@ from vllm.model_executor.layers.quantization import QUANTIZATION_METHODS from vllm.model_executor.models import ModelRegistry from vllm.transformers_utils.config import g...
test_patch: diff --git a/.buildkite/test-pipeline.yaml b/.buildkite/test-pipeline.yaml index 6b12d19ba611f..6a2932db9f2dc 100644 --- a/.buildkite/test-pipeline.yaml +++ b/.buildkite/test-pipeline.yaml @@ -48,6 +48,7 @@ steps: - TEST_DIST_MODEL=meta-llama/Llama-2-7b-hf DISTRIBUTED_EXECUTOR_BACKEND=mp pytest -v -s distributed/tes...
problem_statement: [Usage]: Is it possible to start 8 tp=1 LLMEngine on a 8-GPU machine? ### Your current environment ```text Collecting environment information... /home/corvo/.local/lib/python3.10/site-packages/transformers/utils/hub.py:124: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transfor...
hints_text: Hi @sfc-gh-zhwang, I don't think this is easily doable in vLLM at the moment within a single python process. Possibly you could construct each model on GPU 0 and move each to GPU X before moving on. I would recommend starting a separate process for each LLM and specifying `CUDA_VISIBLE_DEVICES` for each i.e. `CUDA_VI...
created_at: 1,718,233,771,000
labels: []
category: Feature Request
edit_functions: [ "vllm/config.py:ParallelConfig.__init__", "vllm/distributed/device_communicators/custom_all_reduce.py:CustomAllreduce.__init__", "vllm/distributed/device_communicators/custom_all_reduce_utils.py:gpu_p2p_access_check", "vllm/executor/multiproc_gpu_executor.py:MultiprocessingGPUExecutor._init_executor" ]
added_functions: [ "vllm/utils.py:_cuda_device_count_stateless", "vllm/utils.py:cuda_device_count_stateless" ]
edit_functions_length: 4
__index_level_0__: 19
repo: tardis-sn/tardis
instance_id: tardis-sn__tardis-2782
base_commit: e1aa88723a6836e8a25cc1afb24b578b1b78651f
fixed_commit: null
patch: diff --git a/benchmarks/opacities_opacity.py b/benchmarks/opacities_opacity.py index 5352ceccc89..7589632aabd 100644 --- a/benchmarks/opacities_opacity.py +++ b/benchmarks/opacities_opacity.py @@ -28,7 +28,7 @@ def time_photoabsorption_opacity_calculation(self): ) def time_pair_creation_opacity_calculat...
problem_statement: No need to need benchmarks for different values of length for generate_rpacket_last_interaction_tracker_list. @officialasishkumar Currently the said benchmark runs for different values of length, which is not required. The related code: https://github.com/tardis-sn/tardis/blob/e1aa88723a6836e8a25cc1afb24b578b1b78651f...
hints_text: Yeah it can be removed and the repeat can be set to 3 or 4 for better accuracy.
created_at: 1,722,951,264,000
labels: [ "benchmarks" ]
category: Performance Issue
edit_functions: [ "benchmarks/opacities_opacity.py:BenchmarkMontecarloMontecarloNumbaOpacities.time_pair_creation_opacity_calculation", "benchmarks/transport_montecarlo_packet_trackers.py:BenchmarkTransportMontecarloPacketTrackers.setup", "benchmarks/transport_montecarlo_packet_trackers.py:BenchmarkTransportMontecarloPacketTrack...
added_functions: []
edit_functions_length: 5
__index_level_0__: 20
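`category` is one of four classes (the schema says `4 values`; the rows shown use Performance Issue, Bug Report, Feature Request, and Security Vulnerability). Tallying categories over a slice of records might look like the sketch below; the list here is a small illustrative sample, not the full dataset.

```python
from collections import Counter

# Category labels drawn from the records above; illustrative slice only.
categories = [
    "Performance Issue", "Bug Report", "Security Vulnerability",
    "Feature Request", "Feature Request", "Performance Issue",
]

counts = Counter(categories)
print(counts.most_common(2))  # [('Performance Issue', 2), ('Feature Request', 2)]
```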
repo: sopel-irc/sopel
instance_id: sopel-irc__sopel-2285
base_commit: 51300a1ab854d6ec82d90df1bc876188c03335ff
fixed_commit: null
patch: diff --git a/pyproject.toml b/pyproject.toml index 5746b069a..60bac8dd7 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -47,7 +47,6 @@ dependencies = [ "xmltodict>=0.12,<0.14", "pytz", "requests>=2.24.0,<3.0.0", - "dnspython<3.0", "sqlalchemy>=1.4,<1.5", "importlib_metadata>=3.6", "...
test_patch: diff --git a/test/builtins/test_builtins_url.py b/test/builtins/test_builtins_url.py index 54db06f24..eedce5411 100644 --- a/test/builtins/test_builtins_url.py +++ b/test/builtins/test_builtins_url.py @@ -22,6 +22,12 @@ "http://example..com/", # empty label "http://?", # no host ) +PRIVATE_URLS = ( + # ...
problem_statement: url: private_resolution/dns_resolution useless ### Description The url.enable_private_resolution and url.enable_dns_resolution settings do not work as advertised, and the concept of the latter is fatally flawed. The current `url.py` private-address protection logic is as follows: ```python if not enable_private_r...
created_at: 1,653,509,364,000
labels: [ "Bugfix", "Security" ]
category: Security Vulnerability
edit_functions: [ "sopel/builtins/url.py:configure", "sopel/builtins/url.py:title_command", "sopel/builtins/url.py:title_auto", "sopel/builtins/url.py:process_urls", "sopel/builtins/url.py:find_title", "sopel/builtins/url.py:get_tinyurl" ]
added_functions: []
edit_functions_length: 6
__index_level_0__: 23
repo: mesonbuild/meson
instance_id: mesonbuild__meson-11366
base_commit: 088727164de8496c4bada040c2f4690e42f66b69
fixed_commit: null
patch: diff --git a/docs/markdown/Installing.md b/docs/markdown/Installing.md index a692afe7deea..2d18c178fccd 100644 --- a/docs/markdown/Installing.md +++ b/docs/markdown/Installing.md @@ -102,6 +102,22 @@ Telling Meson to run this script at install time is a one-liner. The argument is the name of the script file relative t...
problem_statement: 'ninja install' attempts to gain elevated privileges It is disturbing for 'ninja install' to prompt for a sudo password with polkit. This breaks user expectations, and introduces a security risk. This has evidently been meson's behavior for a very long time, but it's distressing, and it's surprising because meson...
hints_text: What system are you using? Asking because it helps use determine how to fix this. Also would help to say what version of Meson and Ninja you’re using. This is on Linux (ubuntu 20.04), and using meson from either the system or up-to-date git. It's not a bug; it's a feature. It just happens to be a feature that gives...
created_at: 1,675,834,691,000
labels: []
category: Security Vulnerability
edit_functions: [ "mesonbuild/minstall.py:Installer.do_install", "mesonbuild/minstall.py:rebuild_all" ]
added_functions: []
edit_functions_length: 2
__index_level_0__: 24
repo: dmlc/dgl
instance_id: dmlc__dgl-5240
base_commit: f0b7cc96bd679eba40632a0383535e7a9d3295c5
fixed_commit: null
patch: diff --git a/python/dgl/frame.py b/python/dgl/frame.py index 46c52a20a99b..707964c5abc6 100644 --- a/python/dgl/frame.py +++ b/python/dgl/frame.py @@ -5,7 +5,7 @@ from collections.abc import MutableMapping from . import backend as F -from .base import DGLError, dgl_warning +from .base import dgl_warning, DGLError ...
problem_statement: [Bug][Performance] Significant overhead when `record_stream` for subgraph in dataloading ## 🐛 Bug It seems there are many overheads observed in dataloader during feature prefetching when `use_alternate_streams=True` . When it calls `_record_stream` for every subgraph (batch), like the following: https://github.co...
created_at: 1,675,111,529,000
labels: []
category: Performance Issue
edit_functions: [ "python/dgl/frame.py:Column.record_stream" ]
added_functions: [ "python/dgl/frame.py:_LazyIndex.record_stream" ]
edit_functions_length: 1
__index_level_0__: 25
repo: pypa/pip
instance_id: pypa__pip-13085
base_commit: fe0925b3c00bf8956a0d33408df692ac364217d4
fixed_commit: null
patch: diff --git a/news/13079.bugfix.rst b/news/13079.bugfix.rst new file mode 100644 index 00000000000..5b297f5a12e --- /dev/null +++ b/news/13079.bugfix.rst @@ -0,0 +1,1 @@ +This change fixes a security bug allowing a wheel to execute code during installation. diff --git a/src/pip/_internal/commands/install.py b/src/pip/_i...
problem_statement: Lazy import allows wheel to execute code on install. ### Description Versions of pip since 24.1b1 allow someone to run arbitrary code after a specially crafted bdist whl file is installed. When installing wheel files pip does not constrain the directories the wheel contents are written into, except for checks t...
hints_text: Just a couple more additions: - this behavior is related to `pip` installing into the same location that `pip` is running from - this may have security implications based on the usage of `pip` by users. For example, `pip install --only-binary :all:` could be used in a trusted context, before using the installed p...
created_at: 1,731,971,313,000
labels: [ "bot:chronographer:provided" ]
category: Security Vulnerability
edit_functions: [ "src/pip/_internal/commands/install.py:InstallCommand.run" ]
added_functions: []
edit_functions_length: 1
__index_level_0__: 27
repo: gammasim/simtools
instance_id: gammasim__simtools-1183
base_commit: 5dcc802561e21122783af829aede24c0a411b4a2
fixed_commit: null
patch: diff --git a/simtools/applications/db_get_parameter_from_db.py b/simtools/applications/db_get_parameter_from_db.py index 259f7689f..ff4f407fd 100644 --- a/simtools/applications/db_get_parameter_from_db.py +++ b/simtools/applications/db_get_parameter_from_db.py @@ -85,8 +85,11 @@ def main(): # noqa: D103 db = db_h...
test_patch: diff --git a/tests/unit_tests/db/test_db_array_elements.py b/tests/unit_tests/db/test_db_array_elements.py index 2a1bc5570..dcc693abf 100644 --- a/tests/unit_tests/db/test_db_array_elements.py +++ b/tests/unit_tests/db/test_db_array_elements.py @@ -1,14 +1,28 @@ #!/usr/bin/python3 +import time + import pytest fr...
problem_statement: test_get_corsika_telescope_list takes too long The `test_get_corsika_telescope_list` test takes roughly 20 seconds to run. Profiling it points to excessive time spent initializing the array model, perhaps related to reading from the DB too many times. ``` ncalls tottime percall cumtime percall filename:lineno...
hints_text: The `array_model` consist of the site model, the model of all telescopes and of all calibration devices. So this is naturally that all DB calls are initiated from the array_model_class. The number of calls to the DB are reduced by having the `DatabaseHandler.*cached` dicts, where the idea is to not query the same pa...
created_at: 1,727,870,821,000
labels: []
category: Performance Issue
edit_functions: [ "simtools/applications/db_get_parameter_from_db.py:main", "simtools/db/db_array_elements.py:get_array_elements", "simtools/db/db_array_elements.py:get_array_element_list_for_db_query", "simtools/db/db_array_elements.py:get_array_elements_of_type", "simtools/db/db_handler.py:DatabaseHandler.get_model_paramet...
added_functions: []
edit_functions_length: 8
__index_level_0__: 28
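Per the schema, `base_commit` is always a full 40-character SHA (`stringlengths 40 – 40`), while `fixed_commit` is often the literal string `null`. A small sketch of a validity check (the regex and helper name are mine):

```python
import re

# A full git object name is 40 lowercase hex characters.
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def is_full_sha(value: str) -> bool:
    """True for a 40-character lowercase hex commit hash."""
    return bool(FULL_SHA.match(value))

# base_commit from the simtools record above
print(is_full_sha("5dcc802561e21122783af829aede24c0a411b4a2"))  # True
print(is_full_sha("null"))  # False
```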
repo: matplotlib/matplotlib
instance_id: matplotlib__matplotlib-25887
base_commit: 09eab5b1c471410d238b449ebbac63f70759fc21
fixed_commit: null
patch: diff --git a/lib/matplotlib/cbook.py b/lib/matplotlib/cbook.py index ff6b2a15ec35..9d03752c666c 100644 --- a/lib/matplotlib/cbook.py +++ b/lib/matplotlib/cbook.py @@ -2344,6 +2344,30 @@ def _picklable_class_constructor(mixin_class, fmt, attr_name, base_class): return cls.__new__(cls) +def _is_torch_array(x): +...
test_patch: diff --git a/lib/matplotlib/tests/test_cbook.py b/lib/matplotlib/tests/test_cbook.py index 1f4f96324e9e..24fd02e65a5f 100644 --- a/lib/matplotlib/tests/test_cbook.py +++ b/lib/matplotlib/tests/test_cbook.py @@ -1,5 +1,6 @@ from __future__ import annotations +import sys import itertools import pickle @@ -16,6 +17...
problem_statement: [Bug]: plt.hist takes significantly more time with torch and jax arrays ### Bug summary Hi, Time taken to plot `plt.hist` directly on `jax` or `torch` arrays is significantly more than **combined time taken to first convert them to `numpy` and then using `plt.hist`**. Shouldn't `matplotlib` internally convert the...
hints_text: The unpacking happens here: https://github.com/matplotlib/matplotlib/blob/b61bb0b6392c23d38cd45c658bfcd44df145830d/lib/matplotlib/cbook.py#L2237-L2251 The pytorch tensor does not support any of the conversion methods, so Matplotlib doesn't really know what to do with it. There is a discussion in https://github.com/...
created_at: 1,684,077,060,000
labels: [ "third-party integration" ]
category: Performance Issue
edit_functions: [ "lib/matplotlib/cbook.py:_unpack_to_numpy" ]
added_functions: [ "lib/matplotlib/cbook.py:_is_torch_array", "lib/matplotlib/cbook.py:_is_jax_array" ]
edit_functions_length: 1
__index_level_0__: 30
repo: vllm-project/vllm
instance_id: vllm-project__vllm-9390
base_commit: 83450458339b07765b0e72a822e5fe93eeaf5258
fixed_commit: null
patch: diff --git a/benchmarks/benchmark_serving.py b/benchmarks/benchmark_serving.py index c1a396c81f666..4580729fa4767 100644 --- a/benchmarks/benchmark_serving.py +++ b/benchmarks/benchmark_serving.py @@ -397,6 +397,7 @@ async def benchmark( selected_percentile_metrics: List[str], selected_percentiles: List[str],...
problem_statement: Benchmarking script does not limit the maximum concurrency The current benchmarking script if specified with `INF` arrivals, will not limit the maximum concurrency level as shown [here](https://github.com/vllm-project/vllm/blob/703e42ee4b3efed3c71e7ae7d15f0f96e05722d4/benchmarks/benchmark_serving.py#L191). If we can c...
hints_text: Good point. PR welcomed! @wangchen615 Please correct me if I misunderstood, but is this for testing the case where you have another layer on top of the model deployment with concurrency control? I recently made https://github.com/vllm-project/vllm/pull/3194 to add prefix caching benchmark - @wangchen615 let me know if...
created_at: 1,729,025,741,000
labels: [ "ready" ]
category: Feature Request
edit_functions: [ "benchmarks/benchmark_serving.py:benchmark", "benchmarks/benchmark_serving.py:main" ]
added_functions: []
edit_functions_length: 2
__index_level_0__: 31
repo: ctc-oss/fapolicy-analyzer
instance_id: ctc-oss__fapolicy-analyzer-809
base_commit: 9199ca7edad1481d9602e981ba6b1f0716f48d87
fixed_commit: null
patch: diff --git a/Cargo.lock b/Cargo.lock index 3b02a7962..126cb87a7 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -287,7 +287,6 @@ dependencies = [ "fapolicy-trust", "pyo3", "similar", - "tempfile", ] [[package]] diff --git a/crates/daemon/src/error.rs b/crates/daemon/src/error.rs index b289208db..044582d5e 100644 ...
problem_statement: Detect systemd service start/stop events to replace hard coded delays The original implementation used conservative hard coded delays to sync fapolicyd starts/stops {via dbus} with the invocation and termination of the Profiling target executables. The focus at that time was correct functionality however it's time for ...
hints_text: This should be able to be done on the backend with a few aditions. 1. Listen for dbus events; would look similar to [this](https://github.com/diwic/dbus-rs/blob/master/dbus/examples/unity_focused_window.rs) 2. Register a Python callback with the backend 3. Callback when state changes We don't have any examples ...
created_at: 1,678,129,184,000
labels: [ "bug" ]
category: Performance Issue
edit_functions: [ "fapolicy_analyzer/ui/features/profiler_feature.py:create_profiler_feature", "fapolicy_analyzer/ui/profiler_page.py:ProfilerPage.handle_done", "fapolicy_analyzer/ui/reducers/profiler_reducer.py:handle_start_profiling_request" ]
added_functions: []
edit_functions_length: 3
__index_level_0__: 33
repo: django/django
instance_id: django__django-16421
base_commit: 70f39e46f86b946c273340d52109824c776ffb4c
fixed_commit: null
patch: diff --git a/django/utils/text.py b/django/utils/text.py index 374fd78f927d..9560ebc67840 100644 --- a/django/utils/text.py +++ b/django/utils/text.py @@ -2,12 +2,20 @@ import re import secrets import unicodedata +from collections import deque from gzip import GzipFile from gzip import compress as gzip_compress +f...
test_patch: diff --git a/tests/template_tests/filter_tests/test_truncatewords_html.py b/tests/template_tests/filter_tests/test_truncatewords_html.py index 32b7c81a7626..0cf41d83aeef 100644 --- a/tests/template_tests/filter_tests/test_truncatewords_html.py +++ b/tests/template_tests/filter_tests/test_truncatewords_html.py @@ -24,7 ...
problem_statement: Improve utils.text.Truncator &co to use a full HTML parser. Description (last modified by Carlton Gibson) Original description: I'm using Truncator.chars to truncate wikis, and it sometimes truncates in the middle of &quot; entities, resulting in '<p>some text &qu</p>' This is a limitation of the regex based imple...
hints_text: ['Hi Thomas. Any chance of an example string (hopefully minimal) that creates the behaviour so we can have a look?', 1565074082.0] ["I think now that the security release are out let's just add bleach as dependency on master and be done with it?", 1565076701.0] ["Here's an example \u200bhttps://repl.it/@tdhooper/Django...
created_at: 1,672,779,147,000
labels: []
category: Feature Request
edit_functions: [ "django/utils/text.py:Truncator.chars", "django/utils/text.py:Truncator._text_chars", "django/utils/text.py:Truncator.words", "django/utils/text.py:Truncator._truncate_html" ]
added_functions: [ "django/utils/text.py:calculate_truncate_chars_length", "django/utils/text.py:TruncateHTMLParser.__init__", "django/utils/text.py:TruncateHTMLParser.void_elements", "django/utils/text.py:TruncateHTMLParser.handle_startendtag", "django/utils/text.py:TruncateHTMLParser.handle_starttag", "django/utils/text.p...
edit_functions_length: 4
__index_level_0__: 34
repo: scikit-learn/scikit-learn
instance_id: scikit-learn__scikit-learn-25186
base_commit: b2fe9746a862272a60ffc7d2c6563d28dd13a6c6
fixed_commit: null
patch: diff --git a/doc/whats_new/v1.3.rst b/doc/whats_new/v1.3.rst index 937c9c1448030..5290399310dcf 100644 --- a/doc/whats_new/v1.3.rst +++ b/doc/whats_new/v1.3.rst @@ -109,6 +109,13 @@ Changelog out-of-bag scores via the `oob_scores_` or `oob_score_` attributes. :pr:`24882` by :user:`Ashwin Mathur <awinml>`. +- |E...
problem_statement: Improving IsolationForest predict time ### Discussed in https://github.com/scikit-learn/scikit-learn/discussions/25142 <div type='discussions-op-text'> <sup>Originally posted by **fsiola** December 8, 2022</sup> Hi, When using [IsolationForest predict](https://github.com/scikit-learn/scikit-learn/blob/main/...
hints_text: I will take this one them
created_at: 1,670,936,394,000
labels: [ "module:ensemble" ]
category: Performance Issue
edit_functions: [ "sklearn/ensemble/_iforest.py:IsolationForest.fit", "sklearn/ensemble/_iforest.py:IsolationForest.score_samples", "sklearn/ensemble/_iforest.py:IsolationForest._compute_score_samples" ]
added_functions: []
edit_functions_length: 3
__index_level_0__: 38
okta/okta-jwt-verifier-python
okta__okta-jwt-verifier-python-59
cb4e6780c55c234690299fa4ccef5ad33746c2c2
null
diff --git a/okta_jwt_verifier/jwt_utils.py b/okta_jwt_verifier/jwt_utils.py index af187ed..dd1e3d1 100644 --- a/okta_jwt_verifier/jwt_utils.py +++ b/okta_jwt_verifier/jwt_utils.py @@ -1,7 +1,8 @@ import json -from jose import jwt, jws -from jose.exceptions import ExpiredSignatureError +import jwt +from jwt.exceptio...
diff --git a/tests/unit/test_jwt_verifier.py b/tests/unit/test_jwt_verifier.py index 3693b63..a3d8680 100644 --- a/tests/unit/test_jwt_verifier.py +++ b/tests/unit/test_jwt_verifier.py @@ -1,7 +1,7 @@ import pytest import time -from jose.exceptions import JWTClaimsError +from jwt.exceptions import InvalidTokenError...
Dependency python-Jose appears to be unmaintained Hey - just a heads-up that it appears this library is using `python-jose` as a dependency, which hasn't been updated in ~2 years. Maintainers haven't shown any activity in GitHub for issues or pull requests in quite a while, either. It would probably be prudent to pivot...
> before CVEs start dropping up against the abandoned library. Looks like that's now, found in [python-ecdsa](https://github.com/tlsfuzzer/python-ecdsa/security/advisories/GHSA-wj6h-64fc-37mp). Are there any plans to use the `cryptography` build of `python-jose`, or migrate? @bretterer any updates on this? Does Okta...
1,714,487,268,000
[]
Security Vulnerability
[ "okta_jwt_verifier/jwt_utils.py:JWTUtils.parse_token", "okta_jwt_verifier/jwt_utils.py:JWTUtils.verify_claims", "okta_jwt_verifier/jwt_utils.py:JWTUtils.verify_signature" ]
[]
3
39
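The migration above moves signature handling from python-jose to PyJWT; the claim checks a verifier such as `JWTUtils.verify_claims` performs after the signature check can be sketched in pure Python (a simplified illustration, not the library's actual code — PyJWT's `jwt.decode()` performs these checks itself):

```python
import time

def verify_claims(claims, audience, issuer, leeway=120):
    # Minimal issuer / audience / expiry validation of the kind a JWT
    # verifier runs once the token signature has been validated.
    aud = claims.get("aud")
    aud = aud if isinstance(aud, list) else [aud]
    if claims.get("iss") != issuer:
        raise ValueError("invalid issuer")
    if audience not in aud:
        raise ValueError("invalid audience")
    if claims.get("exp", 0) + leeway < time.time():
        raise ValueError("token expired")
    return True
```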
ray-project/ray
ray-project__ray-26818
1b06e7a83acd1c7b721327cef73bd5d3cb66faa2
null
diff --git a/dashboard/client/src/api.ts b/dashboard/client/src/api.ts index 42467df8e7e2d..e532d0829b98b 100644 --- a/dashboard/client/src/api.ts +++ b/dashboard/client/src/api.ts @@ -1,4 +1,4 @@ -import { formatUrl } from "./service/requestHandlers"; +import { formatUrl, get as getV2 } from "./service/requestHandlers...
diff --git a/dashboard/modules/actor/tests/test_actor.py b/dashboard/modules/actor/tests/test_actor.py index 04a04a6fd1389..8cad11a58722b 100644 --- a/dashboard/modules/actor/tests/test_actor.py +++ b/dashboard/modules/actor/tests/test_actor.py @@ -342,5 +342,98 @@ class InfeasibleActor: raise Exceptio...
[Bug] Dashboard takes up a lot of Memory ### Search before asking - [X] I searched the [issues](https://github.com/ray-project/ray/issues) and found no similar issues. ### Ray Component Dashboard ### Issue Severity Medium: It contributes to significant difficulty to complete my task but I work arounds and get it ...
I believe we made some relevant fixes to the master. Is it possible to try out Ray master to see if the issue still exists? Also cc @alanwguo @rkooo567 this issue still exists for ray version 1.12.0. Here it takes up to 6.5 GB, process running for 27 hours. /home/ray/anaconda3/bin/python -u /home/ray/...
1,658,370,033,000
[ "@author-action-required" ]
Performance Issue
[ "dashboard/datacenter.py:DataOrganizer.get_node_workers", "dashboard/datacenter.py:DataOrganizer.get_node_info", "dashboard/modules/actor/actor_head.py:ActorHead.__init__", "dashboard/modules/actor/actor_head.py:ActorHead._update_actors", "dashboard/modules/actor/actor_head.py:ActorHead.run", "dashboard/m...
[ "dashboard/modules/actor/actor_head.py:ActorHead._cleanup_actors" ]
8
40
gitpython-developers/GitPython
gitpython-developers__GitPython-1636
e19abe75bbd7d766d7f06171c6524431e4653545
null
diff --git a/git/cmd.py b/git/cmd.py index 3d170facd..3665eb029 100644 --- a/git/cmd.py +++ b/git/cmd.py @@ -5,7 +5,7 @@ # the BSD License: http://www.opensource.org/licenses/bsd-license.php from __future__ import annotations import re -from contextlib import contextmanager +import contextlib import io import logg...
diff --git a/test/test_git.py b/test/test_git.py index c5d871f08..540ea9f41 100644 --- a/test/test_git.py +++ b/test/test_git.py @@ -4,10 +4,12 @@ # # This module is part of GitPython and is released under # the BSD License: http://www.opensource.org/licenses/bsd-license.php +import contextlib import os +import shu...
CVE-2023-40590: Untrusted search path on Windows systems leading to arbitrary code execution This appeared in the CVE additional information here https://github.com/gitpython-developers/GitPython/security/advisories/GHSA-wfm5-v35h-vwf4. I found it reported already. I am reporting it here just in case.
Thanks. This advisory originated in this repository and is thus known: https://github.com/gitpython-developers/GitPython/security/advisories/GHSA-wfm5-v35h-vwf4 . However, it seems hard to communicate using an advisory, so we can keep this issue open to collect comments.
1,693,402,739,000
[]
Security Vulnerability
[ "git/cmd.py:Git.execute" ]
[]
1
41
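The advisory above (CVE-2023-40590) concerns Windows resolving `git` from the current working directory before `PATH`. A hedged sketch of the safe pattern — resolving strictly from an explicit search path, never cwd (illustrative; not GitPython's actual fix):

```python
import shutil

def resolve_executable(name, path_env):
    """Resolve `name` strictly from the given PATH string.

    Passing the search path explicitly avoids the Windows CreateProcess
    behaviour of consulting the current working directory first, which
    is what made the untrusted-search-path issue exploitable.
    """
    return shutil.which(name, path=path_env)
```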
huggingface/optimum-benchmark
huggingface__optimum-benchmark-266
1992de306378f7d5848d36ddc73f15ba711c8d70
null
diff --git a/optimum_benchmark/trackers/latency.py b/optimum_benchmark/trackers/latency.py index cb236413..1e0f1e95 100644 --- a/optimum_benchmark/trackers/latency.py +++ b/optimum_benchmark/trackers/latency.py @@ -121,7 +121,8 @@ def __init__(self, device: str, backend: str): self.device = device sel...
Timeout with multiple AMD GPUs tensor parallelism in vLLM ## Problem Description When attempting to run Optimum Benchmark in vLLM using tensor parallelism across multiple AMD GPUs (MI210), I encounter a timeout error from NCCL watchdog. However, the benchmark works fine with a single AMD GPU in vLLM, and the vLLM AP...
Yeah I still haven't had time to fix the vLLM multi-gpu support, it fails for some obscure reason that I can't put my finger on, because the only difference is that optimum-benchmark runs the engine in a separate process. did you try using the ray backend ? Yes, ray backend leads to the same timeout, logs are similar @...
1,726,822,597,000
[]
Performance Issue
[ "optimum_benchmark/trackers/latency.py:LatencyTracker.__init__" ]
[]
1
42
vllm-project/vllm
vllm-project__vllm-6050
c87ebc3ef9ae6e8d6babbca782510ff924b3abc7
null
diff --git a/vllm/config.py b/vllm/config.py index 9854f175065a2..23c03bcb4da5d 100644 --- a/vllm/config.py +++ b/vllm/config.py @@ -957,12 +957,6 @@ def maybe_create_spec_config( ) draft_hf_config = draft_model_config.hf_config - if (draft_hf_config.model_type == "mlp_speculator"...
diff --git a/tests/spec_decode/e2e/test_integration_dist_tp2.py b/tests/spec_decode/e2e/test_integration_dist_tp2.py index 5534b80c0aaa0..859d4234c458f 100644 --- a/tests/spec_decode/e2e/test_integration_dist_tp2.py +++ b/tests/spec_decode/e2e/test_integration_dist_tp2.py @@ -70,10 +70,6 @@ def test_target_model_tp_gt_...
[Feature]: MLPSpeculator Tensor Parallel support ### 🚀 The feature, motivation and pitch `MLPSpeculator`-based speculative decoding was recently added in https://github.com/vllm-project/vllm/pull/4947, but the initial integration only covers single GPU usage. There will soon be "speculator" models available for ...
initial thought: * https://github.com/vllm-project/vllm/pull/5414 may be a bad fit for this; we should keep eyes open for best solution for MLPSpeculator * the goal for this issue should be to get MLPSpeculator on TP==1 working with target model on TP>1. we can start today with a small model (don't have to wait for n...
1,719,874,141,000
[]
Feature Request
[ "vllm/config.py:SpeculativeConfig.maybe_create_spec_config", "vllm/spec_decode/spec_decode_worker.py:SpecDecodeWorker.create_worker" ]
[]
2
43
PrefectHQ/prefect
PrefectHQ__prefect-12019
276da8438e84fb7daf05255d1f9d87379d2d647b
null
diff --git a/src/prefect/_internal/concurrency/calls.py b/src/prefect/_internal/concurrency/calls.py index 13e9e9bbfc61..6ca4dc45567e 100644 --- a/src/prefect/_internal/concurrency/calls.py +++ b/src/prefect/_internal/concurrency/calls.py @@ -10,6 +10,7 @@ import dataclasses import inspect import threading +import w...
Task input persisted leading to memory not being released (same for output). ### First check - [X] I added a descriptive title to this issue. - [X] I used the GitHub search to find a similar request and didn't find it. - [X] I searched the Prefect documentation for this feature. ### Prefect Version 2.x ##...
Hi @sibbiii, thanks for the well written issue and investigation. I can reproduce the behavior with your MRE. I think it is unlikely that the references are maintained as part of results as you're not returning anything from your function. I think this likely has to do with the dependency tracking or as your sai...
1,708,123,117,000
[]
Performance Issue
[ "src/prefect/_internal/concurrency/calls.py:get_current_call", "src/prefect/_internal/concurrency/calls.py:set_current_call", "src/prefect/_internal/concurrency/cancellation.py:AsyncCancelScope.__exit__" ]
[ "src/prefect/_internal/concurrency/calls.py:Future._invoke_callbacks" ]
3
44
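The leak above came from call objects (and their persisted inputs) staying reachable after a run finished; one standard remedy, and the direction the patch's `import weakref` points, is to hold such bookkeeping references weakly. A minimal sketch (hypothetical names, not Prefect's actual code):

```python
import weakref

class CurrentCallTracker:
    """Track the in-flight call without keeping it alive."""

    def __init__(self):
        self._current = None

    def set_current(self, call):
        # A weak reference lets the call (and everything its arguments
        # reference) be collected as soon as the caller drops it.
        self._current = weakref.ref(call)

    def get_current(self):
        return self._current() if self._current is not None else None
```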
streamlit/streamlit
streamlit__streamlit-6617
cec1c141f7858e31f8200a9cdc42b8ddd09ab0e7
null
diff --git a/lib/streamlit/runtime/app_session.py b/lib/streamlit/runtime/app_session.py index 6fae7653b807..36e681c344f7 100644 --- a/lib/streamlit/runtime/app_session.py +++ b/lib/streamlit/runtime/app_session.py @@ -225,8 +225,10 @@ def shutdown(self) -> None: self._uploaded_file_mgr.remove_session_file...
diff --git a/lib/tests/streamlit/runtime/app_session_test.py b/lib/tests/streamlit/runtime/app_session_test.py index 2ee2e5d8cd40..448118736e9a 100644 --- a/lib/tests/streamlit/runtime/app_session_test.py +++ b/lib/tests/streamlit/runtime/app_session_test.py @@ -101,15 +101,19 @@ def test_shutdown(self, patched_disconn...
Updating images will increase memory usage ### Checklist - [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues. - [X] I added a very descriptive title to this issue. - [X] I have provided sufficient information below to help reproduce this issue. ### Summary Wh...
@MathijsNL Thanks for reporting this. I was able to reproduce this issue. We do have something that is actually supposed to clean up the media file storage after a session. But this somehow might not work correctly. I also tested with older Streamlit versions (e.g. 1.15), and this does seem to behave the same way....
1,683,075,857,000
[ "security-assessment-completed" ]
Performance Issue
[ "lib/streamlit/runtime/app_session.py:AppSession.shutdown", "lib/streamlit/runtime/forward_msg_cache.py:ForwardMsgCache.remove_expired_session_entries", "lib/streamlit/runtime/runtime.py:Runtime._send_message" ]
[ "lib/streamlit/runtime/forward_msg_cache.py:ForwardMsgCache.remove_refs_for_session", "lib/streamlit/runtime/forward_msg_cache.py:ForwardMsgCache.remove_expired_entries_for_session" ]
3
46
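The fix for the leak above removes a session's cached messages and media when that session shuts down. The ownership bookkeeping can be sketched like this (a simplified model, not Streamlit's actual `ForwardMsgCache`):

```python
class SessionScopedCache:
    def __init__(self):
        self._entries = {}   # key -> payload
        self._owners = {}    # key -> set of session ids

    def add(self, session_id, key, payload):
        self._entries[key] = payload
        self._owners.setdefault(key, set()).add(session_id)

    def remove_refs_for_session(self, session_id):
        # Drop the session's references; evict entries nobody owns,
        # which is what stops long-lived servers accumulating media.
        for key in list(self._owners):
            self._owners[key].discard(session_id)
            if not self._owners[key]:
                del self._owners[key]
                del self._entries[key]
```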
JoinMarket-Org/joinmarket-clientserver
JoinMarket-Org__joinmarket-clientserver-1180
aabfd3f7c2ec33391b89369bd1fe354659f10f0f
null
diff --git a/jmclient/jmclient/blockchaininterface.py b/jmclient/jmclient/blockchaininterface.py index 2402ff7d3..65a4cfcc4 100644 --- a/jmclient/jmclient/blockchaininterface.py +++ b/jmclient/jmclient/blockchaininterface.py @@ -292,18 +292,36 @@ def import_addresses_if_needed(self, addresses, wallet_name): ...
yg-privacyenhanced.py: excessive CPU usage At first startup of the Yield Generator, CPU usage seems acceptable, but as it sits running for a few days, it starts using excessive amounts of CPU time. It seems to depend on how many CoinJoins it has participated in. Case in point: I have two instances of `yg-privacyenhance...
> it is obsessively (many times per second!) calling `gettransaction`, all for the same transaction ID. The responses from Bitcoin Core show that the specified transaction has over 200 confirmations, so why is JoinMarket obsessively hammering requests for it? It shouldn't, sounds like a bug somewhere. > RPC call an...
1,644,723,706,000
[]
Performance Issue
[ "jmclient/jmclient/blockchaininterface.py:BitcoinCoreInterface._yield_transactions", "jmclient/jmclient/blockchaininterface.py:BitcoinCoreNoHistoryInterface._yield_transactions", "jmclient/jmclient/wallet_service.py:WalletService.__init__", "jmclient/jmclient/wallet_service.py:WalletService.register_callbacks...
[ "jmclient/jmclient/wallet_service.py:WalletService._yield_new_transactions" ]
9
47
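A natural remedy for the repeated `gettransaction` calls described above is to stop re-querying transactions once they are deeply confirmed, since those results can no longer change. A hedged sketch of that caching idea (illustrative; not JoinMarket's actual fix):

```python
class ConfirmedTxCache:
    """Cache RPC results for transactions past a confirmation depth."""

    DEPTH = 6  # beyond this, a transaction's record is effectively final

    def __init__(self, rpc_gettransaction):
        self._rpc = rpc_gettransaction
        self._cache = {}

    def gettransaction(self, txid):
        if txid in self._cache:
            return self._cache[txid]
        tx = self._rpc(txid)
        if tx.get("confirmations", 0) >= self.DEPTH:
            self._cache[txid] = tx
        return tx
```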
TagStudioDev/TagStudio
TagStudioDev__TagStudio-735
921a8875de22f7c136316edfb5d3038236414b57
null
diff --git a/tagstudio/src/qt/modals/tag_search.py b/tagstudio/src/qt/modals/tag_search.py index d35065675..fdd986ab5 100644 --- a/tagstudio/src/qt/modals/tag_search.py +++ b/tagstudio/src/qt/modals/tag_search.py @@ -52,7 +52,7 @@ def __init__(self, library: Library, exclude: list[int] = None, is_tag_chooser: ...
[Bug]: Adding (or searching for) tags gets increasingly slower in the same session (running process) ### Checklist - [x] I am using an up-to-date version. - [x] I have read the [documentation](https://github.com/TagStudioDev/TagStudio/blob/main/docs/index.md). - [x] I have searched existing [issues](https://github.com...
I've been able to reproduce this, and it does indeed look like memory usage is increasing the more searches are performed. After *much* debugging, I believe I've narrowed this down to the `translate_qobject()` calls inside the `TagWidget` objects that are used for the context menus. When removing these translation calls ...
1,737,872,423,000
[ "Priority: Critical", "Type: UI/UX", "TagStudio: Tags" ]
Performance Issue
[ "tagstudio/src/qt/modals/tag_search.py:TagSearchPanel.__init__", "tagstudio/src/qt/modals/tag_search.py:TagSearchPanel.construct_tag_button", "tagstudio/src/qt/modals/tag_search.py:TagSearchPanel.update_tags", "tagstudio/src/qt/widgets/tag.py:TagWidget.__init__" ]
[ "tagstudio/src/qt/modals/tag_search.py:TagSearchPanel.build_create_tag_button" ]
4
48
micropython/micropython-lib
micropython__micropython-lib-947
e4cf09527bce7569f5db742cf6ae9db68d50c6a9
null
diff --git a/python-ecosys/requests/requests/__init__.py b/python-ecosys/requests/requests/__init__.py index a9a183619..2951035f7 100644 --- a/python-ecosys/requests/requests/__init__.py +++ b/python-ecosys/requests/requests/__init__.py @@ -46,6 +46,8 @@ def request( ): if headers is None: headers = {} +...
diff --git a/python-ecosys/requests/test_requests.py b/python-ecosys/requests/test_requests.py index 513e533a3..ac77291b0 100644 --- a/python-ecosys/requests/test_requests.py +++ b/python-ecosys/requests/test_requests.py @@ -102,11 +102,11 @@ def chunks(): def test_overwrite_get_headers(): response = requests.r...
SECURITY: Requests module leaks passwords & usernames for HTTP Basic Auth While looking at the MicroPython `requests` module (on the git HEAD), I noticed this: If you make a request with HTTP basic auth (a username/password) and did not specify a headers dict, then I believe the username and password would be added ...
The MicroPython way aiui would be to mirror CPython's solution to this problem, which uses a `None` default value and then sets it to an empty dict at runtime: https://github.com/psf/requests/blob/0e322af87745eff34caffe4df68456ebc20d9068/src/requests/models.py#L258-L276 And I see this is exactly what #823 does....
1,733,959,795,000
[]
Security Vulnerability
[ "python-ecosys/requests/requests/__init__.py:request" ]
[]
1
49
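The leak above is the classic mutable-default-argument pitfall: a `headers={}` default is one shared dict, so Basic-Auth headers written into it persist across calls. A minimal demonstration plus the `None`-sentinel fix that the patch applies (the credential string here is a made-up example):

```python
def request_leaky(url, headers={}):
    # BUG: the default dict is created once and shared by every call
    # that omits `headers`, so credentials written here leak.
    headers.setdefault("Authorization", "Basic c2VjcmV0")
    return headers

def request_fixed(url, headers=None):
    # Fix: use a None sentinel and build a fresh dict per call.
    if headers is None:
        headers = {}
    headers.setdefault("Authorization", "Basic c2VjcmV0")
    return headers
```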
pm4-graders/3ES
pm4-graders__3ES-72
5b3fc277a8cacc1af4750fd8b541e20f27bc8487
null
diff --git a/.gitignore b/.gitignore index c1a2a16..0f72b94 100644 --- a/.gitignore +++ b/.gitignore @@ -5,5 +5,7 @@ /cv/testImages/kanti_img2.jpg /cv/testImages/kanti_img1.jpeg /cv/testImages/kanti_img2.jpeg +/cv/testImages/kanti_telegram_compressed_1.jpg +/cv/testImages/kanti_telegram_compressed_2.jpg /cv/__pycac...
diff --git a/cv/testImages/crooked.jpg b/cv/testImages/crooked.jpg new file mode 100644 index 0000000..2fade6f Binary files /dev/null and b/cv/testImages/crooked.jpg differ diff --git a/cv/testImages/crunched.jpg b/cv/testImages/crunched.jpg new file mode 100644 index 0000000..3c214c5 Binary files /dev/null and b/cv/te...
Fix CV - test on more samples and make adjustments to make it more robust
1,683,218,934,000
[]
Performance Issue
[ "backend/app/core/cv_result.py:Exam.__init__", "cv/DigitRecognizer.py:DigitRecognizer.recognize_digits_in_photo", "cv/DigitRecognizer.py:DigitRecognizer.predict_double_number", "cv/DigitRecognizer.py:DigitRecognizer.predict_handwritten_cell", "cv/DigitRecognizer.py:DigitRecognizer.find_grid_in_image", "cv...
[]
7
50
fortra/impacket
fortra__impacket-1636
33058eb2fde6976ea62e04bc7d6b629d64d44712
null
diff --git a/examples/ntlmrelayx.py b/examples/ntlmrelayx.py index 5377a695e..e3efbbf81 100755 --- a/examples/ntlmrelayx.py +++ b/examples/ntlmrelayx.py @@ -57,10 +57,11 @@ RELAY_SERVERS = [] class MiniShell(cmd.Cmd): - def __init__(self, relayConfig, threads): + def __init__(self, relayConfig, threads, api_a...
SOCKS server listens on all interfaces by default ### Configuration impacket version: v0.10.1.dev1+20230207.182628.6cd68a05 Python version: 3.9.2 Target OS: NA By default, the SOCKS server used by ntlmrelayx.py listens on all interfaces (0.0.0.0:1080), which is dangerous. Please see: https://github.com/f...
1,698,135,613,000
[ "medium" ]
Security Vulnerability
[ "examples/ntlmrelayx.py:MiniShell.__init__", "examples/ntlmrelayx.py:MiniShell.do_socks", "impacket/examples/ntlmrelayx/servers/socksserver.py:webService", "impacket/examples/ntlmrelayx/servers/socksserver.py:SOCKS.__init__" ]
[]
4
52
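The hardening described above amounts to binding service sockets to loopback unless the operator opts in. A generic sketch with the standard library (hypothetical helper, not impacket's code):

```python
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(b"ok")

def make_server(interface="127.0.0.1", port=0):
    # Default to loopback; binding 0.0.0.0 exposes the relay/SOCKS
    # service to every host that can reach the machine.
    return socketserver.TCPServer((interface, port), EchoHandler)
```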
AzureAD/microsoft-authentication-library-for-python
AzureAD__microsoft-authentication-library-for-python-454
66a1c5a935e59c66281ccf73a3931681eeedee23
null
diff --git a/msal/application.py b/msal/application.py index 5d1406af..a06df303 100644 --- a/msal/application.py +++ b/msal/application.py @@ -11,8 +11,6 @@ from threading import Lock import os -import requests - from .oauth2cli import Client, JwtAssertionCreator from .oauth2cli.oidc import decode_part from .aut...
diff --git a/tests/test_authority_patch.py b/tests/test_authority_patch.py deleted file mode 100644 index 1feca62d..00000000 --- a/tests/test_authority_patch.py +++ /dev/null @@ -1,32 +0,0 @@ -import unittest - -import msal -from tests.http_client import MinimalHttpClient - - -class DummyHttpClient(object): - def ge...
MSAL can consider using lazy import for `request`, `jwt` **Describe the bug** Importing MSAL can cost ~300ms on Windows due to some heavy libraries like `request` and `jwt`: ``` python -X importtime -c "import msal" 2>perf.log; tuna .\perf.log ``` ![image](https://user-images.githubusercontent.com/40039...
1,642,623,018,000
[]
Performance Issue
[ "msal/authority.py:Authority.http_client", "msal/authority.py:Authority.__init__", "msal/authority.py:Authority.user_realm_discovery" ]
[]
3
54
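The import-time cost described above is usually addressed by deferring heavy imports into the code paths that need them. A sketch of the function-local pattern (using a stdlib module as a stand-in for a heavy dependency like `requests`):

```python
def build_request(url):
    # Deferred import: module import stays cheap, and the cost of the
    # heavy dependency is paid only when this path actually runs.
    import urllib.request  # stand-in for e.g. `requests`
    return urllib.request.Request(url, headers={"User-Agent": "demo"})
```

PEP 562's module-level `__getattr__` offers the same effect for whole-module laziness.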
Chainlit/chainlit
Chainlit__chainlit-1441
beb44ca74f18b352fa078baf7038c2dc6b729e0c
null
diff --git a/backend/chainlit/server.py b/backend/chainlit/server.py index b4bf0dd989..7c4a824b68 100644 --- a/backend/chainlit/server.py +++ b/backend/chainlit/server.py @@ -41,6 +41,7 @@ APIRouter, Depends, FastAPI, + File, Form, HTTPException, Query, @@ -839,11 +840,9 @@ async def de...
diff --git a/backend/tests/conftest.py b/backend/tests/conftest.py index 94ae4a596a..7cc4266371 100644 --- a/backend/tests/conftest.py +++ b/backend/tests/conftest.py @@ -1,5 +1,6 @@ import datetime from contextlib import asynccontextmanager +from typing import Callable from unittest.mock import AsyncMock, Mock i...
fix(security): add auth to /project/file get endpoint fixes https://github.com/Chainlit/chainlit/issues/1101
Thanks @qvalentin for the report & fix! We'd like to take this along in the next release. Any chance you could add a regression unittest demonstrating the issue and its resolution?
1,729,093,492,000
[ "bug", "backend", "security", "review-me", "size:L" ]
Security Vulnerability
[ "backend/chainlit/server.py:upload_file", "backend/chainlit/server.py:get_file" ]
[]
2
56
scikit-learn/scikit-learn
scikit-learn__scikit-learn-29130
abbaed326c8f0e4a8083979701f01ce581612713
null
diff --git a/.github/workflows/cuda-gpu-ci.yml b/.github/workflows/cuda-gpu-ci.yml new file mode 100644 index 0000000000000..d962145cfbbc7 --- /dev/null +++ b/.github/workflows/cuda-gpu-ci.yml @@ -0,0 +1,47 @@ +name: CUDA GPU +on: + workflow_dispatch: + inputs: + pr_id: + description: Test the contents ...
diff --git a/build_tools/github/pylatest_conda_forge_cuda_array-api_linux-64_conda.lock b/build_tools/github/pylatest_conda_forge_cuda_array-api_linux-64_conda.lock new file mode 100644 index 0000000000000..38742e34cb4ea --- /dev/null +++ b/build_tools/github/pylatest_conda_forge_cuda_array-api_linux-64_conda.lock @@ -...
Weekly CI run with NVidia GPU hardware Now that #22554 was merged in `main`, it would be great to find a a way to run a weekly scheduled job to run the scikit-learn `main` test on a CI worker with an NVidia GPU and CuPy. In case of failure, it could create a report as [dedicated issues](https://github.com/scikit-le...
Other open-source projects like dask and numba have accounts on https://gpuci.gpuopenanalytics.com/ for instance. Not sure what the procedure is to get scikit-learn accepted there (even with a limited quota of a few GPU hours per month). I've started looking into this. As discussed during the triaging meeting, we could us...
1,716,971,019,000
[ "Build / CI", "Array API" ]
Feature Request
[ "build_tools/update_environments_and_lock_files.py:get_conda_environment_content" ]
[]
1
57
django/django
django__django-18616
b9aa3239ab1328c915684d89b87a49459cabd30b
null
diff --git a/django/http/response.py b/django/http/response.py index 1dbaf46adda4..4a0ea6701375 100644 --- a/django/http/response.py +++ b/django/http/response.py @@ -627,10 +627,12 @@ def set_headers(self, filelike): class HttpResponseRedirectBase(HttpResponse): allowed_schemes = ["http", "https", "ftp"] - ...
diff --git a/tests/httpwrappers/tests.py b/tests/httpwrappers/tests.py index 3774ff2d6727..f85d33e82338 100644 --- a/tests/httpwrappers/tests.py +++ b/tests/httpwrappers/tests.py @@ -566,6 +566,27 @@ def test_redirect_lazy(self): r = HttpResponseRedirect(lazystr("/redirected/")) self.assertEqual(r.url...
Add 307 and 308 redirect response codes to django.shortcuts.redirect Description Other than 301 and 302 response codes for redirects, there is also: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307 https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/308 Currently, Django is unaware of these. Propo...
['\u200bhttps://github.com/django/django/pull/18616 Proof of concept until the ticket itself is approved. After that tests and documentation will be updated.', 1727144938.0] ['\u200bhttps://github.com/django/django/pull/18616 Proof of concept until the ticket itself is approved. After that tests and documentation will ...
1,727,162,493,000
[]
Feature Request
[ "django/http/response.py:HttpResponseRedirectBase.__init__", "django/shortcuts.py:redirect" ]
[]
2
59
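The distinction the feature request leans on: 307/308 require the client to re-send the same method and body, where 301/302 historically allowed a downgrade to GET. The standard library already names all four statuses (shown here for illustration; the actual change lands in `django.shortcuts.redirect`):

```python
from http import HTTPStatus

# Method-preserving counterparts of the classic redirects:
#   302 Found              -> 307 Temporary Redirect
#   301 Moved Permanently  -> 308 Permanent Redirect
PRESERVE_METHOD = {
    HTTPStatus.FOUND: HTTPStatus.TEMPORARY_REDIRECT,
    HTTPStatus.MOVED_PERMANENTLY: HTTPStatus.PERMANENT_REDIRECT,
}
```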
Bears-R-Us/arkouda
Bears-R-Us__arkouda-1969
6e3da833ae55173cfc7488faa1e1951c8506a255
null
diff --git a/arkouda/dataframe.py b/arkouda/dataframe.py index 10c98b1c69..6291b22ff8 100644 --- a/arkouda/dataframe.py +++ b/arkouda/dataframe.py @@ -1457,7 +1457,7 @@ def _prep_data(self, index=False, columns=None): data = {c: self.data[c] for c in columns} if index: - data["Index"]...
diff --git a/tests/dataframe_test.py b/tests/dataframe_test.py index 06c9f83e8c..23ad865b98 100644 --- a/tests/dataframe_test.py +++ b/tests/dataframe_test.py @@ -528,7 +528,7 @@ def test_save(self): akdf.to_parquet(f"{tmp_dirname}/testName") ak_loaded = ak.DataFrame.load(f"{tmp_dirname}/tes...
Multi-column Parquet write inefficiencies Passing along a comment from another developer. " I've noticed that Parquet output from Arkouda is currently very slow. The Arkouda client library saves multi-column dataframes by sending a separate writeParquet request to the server for each column, adding them to the file on...
Thanks for bringing this issue to our attention. When the Parquet work was implemented, it was before the dataframe functionality had been implemented, so that use case wasn't considered in the initial design. As you have seen, the Parquet append functionality is very sub-optimal and that is because Parquet does not su...
1,671,202,682,000
[]
Performance Issue
[ "arkouda/dataframe.py:DataFrame._prep_data", "arkouda/io.py:_parse_errors", "arkouda/io.py:to_parquet", "arkouda/io.py:read" ]
[]
4
60
django/django
django__django-18435
95827452571eb976c4f0d5e9ac46843948dd5fe6
null
diff --git a/django/core/management/commands/runserver.py b/django/core/management/commands/runserver.py index 132ee4c0795a..3795809a1226 100644 --- a/django/core/management/commands/runserver.py +++ b/django/core/management/commands/runserver.py @@ -188,3 +188,12 @@ def on_bind(self, server_port): f"Quit ...
diff --git a/tests/admin_scripts/tests.py b/tests/admin_scripts/tests.py index 2e77f2c97a62..67362460a99d 100644 --- a/tests/admin_scripts/tests.py +++ b/tests/admin_scripts/tests.py @@ -1597,6 +1597,13 @@ def test_zero_ip_addr(self): "Starting development server at http://0.0.0.0:8000/", self...
Add warning to runserver that it should not be used for production Description As per this discussion on the forum, I think adding a warning to the start of runserver would be valuable to those new to Django and a healthy reminder to those coming back to Django. The wording of the warning is: WARNING: This is a de...
['I think it\'s worth highlighting that it does say "development server": Starting development server at http://127.0.0.1:8000/ Quit the server with CTRL-BREAK. We also have a warning in the \u200brunserver docs and a \u200bwarning in the tutorial (note that the tutorial runserver output would need to be updated if we ...
1,722,443,076,000
[]
Feature Request
[ "django/core/management/commands/runserver.py:Command.on_bind" ]
[]
1
61
sqlfluff/sqlfluff
sqlfluff__sqlfluff-6399
80f4fc2d3bbc7839e41a9018ae4918118c309656
null
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index e8480e8d0e3..bfc033f0990 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -45,7 +45,6 @@ repos: [ types-toml, types-chardet, - types-appdirs, types-colorama, ...
diff --git a/test/cli/commands_test.py b/test/cli/commands_test.py index 498b2cd82c3..f9751c96ea1 100644 --- a/test/cli/commands_test.py +++ b/test/cli/commands_test.py @@ -228,8 +228,8 @@ def test__cli__command_extra_config_fail(): ], ], assert_output_contains=( - "Extra confi...
replace deprecated appdirs dependency with platformdirs ### Search before asking - [X] I searched the [issues](https://github.com/sqlfluff/sqlfluff/issues) and found no similar issues. ### Description https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1068011 python3-appdirs is dead upstream[1] and its Debian mai...
1,729,789,307,000
[]
Feature Request
[ "src/sqlfluff/core/config/loader.py:_get_user_config_dir_path", "src/sqlfluff/core/config/loader.py:_load_user_appdir_config", "src/sqlfluff/core/config/loader.py:load_config_up_to_path" ]
[]
3
62
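The replacement library computes per-platform config directories; on Linux the relevant piece is the XDG base-directory rule, which can be sketched in a few lines (a rough approximation of what `platformdirs.user_config_dir` returns on Linux, not sqlfluff's actual code):

```python
import os

def user_config_dir(appname):
    # XDG rule: honour $XDG_CONFIG_HOME, else fall back to ~/.config.
    base = os.environ.get("XDG_CONFIG_HOME") or os.path.expanduser("~/.config")
    return os.path.join(base, appname)
```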
aiortc/aiortc
aiortc__aiortc-795
f4e3049875142a18fe32ad5f2c052b84a3112e30
null
diff --git a/src/aiortc/rtcsctptransport.py b/src/aiortc/rtcsctptransport.py index de5d0968d..c9439fc2f 100644 --- a/src/aiortc/rtcsctptransport.py +++ b/src/aiortc/rtcsctptransport.py @@ -1324,8 +1324,7 @@ async def _send( self._outbound_stream_seq[stream_id] = uint16_add(stream_seq, 1) # trans...
Slow sending messages with data channels When I try to send 10 messages per second in datachannel I observe that messages are sent much slower, at 5 messages per second. I did a docker example to reproduce: https://gist.github.com/le-chat/844272e8d0f91dcbb61ba98ca635cd6b From logs: ``` 08:32:51.808407 sent №100,...
We have the same issue: We try to send data for every frame which fails at 30fps. For us there is an additional issue: we want the latency to be as low as possible; so the current code has an issue where even after applying your fix it will only send out batches of messages 10 times per second
1,668,887,941,000
[]
Performance Issue
[ "src/aiortc/rtcsctptransport.py:RTCSctpTransport._send" ]
[]
1
63
mathesar-foundation/mathesar
mathesar-foundation__mathesar-3117
e7b175bc2f7db0ae6e69c723c6a054c6dc2152d8
null
diff --git a/db/install.py b/db/install.py index 43440b398d..1372659c63 100644 --- a/db/install.py +++ b/db/install.py @@ -1,5 +1,6 @@ +from psycopg.errors import InsufficientPrivilege from sqlalchemy import text -from sqlalchemy.exc import OperationalError +from sqlalchemy.exc import OperationalError, ProgrammingErro...
The requirement of superuser postgresql access is problematic ## Problem Mathesar needs a Postgres superuser to function correctly, from the docs at https://docs.mathesar.org/installation/build-from-source/ ## Proposed solution The mathesar user should not require superuser access. ## Additional context The ...
Thanks for reporting this, @spapas. This is already a high-priority issue for us to resolve, requiring a superuser is one of the compromises we made to get our alpha version out of the door. Hello friends, any news on this issue? This is very important for us and we can't actually use mathesar in production until the su...
1,690,823,334,000
[ "pr-status: revision" ]
Security Vulnerability
[ "db/install.py:_create_database" ]
[]
1
64
huggingface/transformers
huggingface__transformers-22498
6fc44656b43f1de939a1e62dd59c45d1fec9f1aa
null
diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py index 9a6c29c27bdf..27faa252788d 100644 --- a/src/transformers/modeling_utils.py +++ b/src/transformers/modeling_utils.py @@ -336,7 +336,7 @@ def shard_checkpoint( return shards, index -def load_sharded_checkpoint(model, folde...
diff --git a/tests/trainer/test_trainer.py b/tests/trainer/test_trainer.py index 310842713bde..78b6afeacd4e 100644 --- a/tests/trainer/test_trainer.py +++ b/tests/trainer/test_trainer.py @@ -25,6 +25,7 @@ import tempfile import time import unittest +from itertools import product from pathlib import Path from unitt...
Implement safetensors checkpoint loading for Trainer ### Feature request At the moment, Trainer loads models with `torch.load` method directly onto the cpu: (`Trainer._load_from_checkpoint` method) ```python ... # We load the model state dict on the CPU to avoid an OOM error. ...
The checkpoints are not saved in that format so there is no `model.safetensors` file to load from. We could add a training argument to use this format instead of the PyTorch format indeed.
1,680,282,522,000
[]
Feature Request
[ "src/transformers/modeling_utils.py:load_sharded_checkpoint", "src/transformers/trainer.py:Trainer._load_from_checkpoint", "src/transformers/trainer.py:Trainer._load_best_model", "src/transformers/trainer.py:Trainer._save", "src/transformers/trainer.py:Trainer._push_from_checkpoint", "src/transformers/tra...
[]
6
65
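The core of the loading change is a preference order when both serialization formats exist in a checkpoint folder. That dispatch can be sketched as follows (the filenames are the conventional ones; the helper is hypothetical, not `Trainer`'s actual code):

```python
import os

def pick_checkpoint_file(folder):
    # Prefer the safetensors weights (safe, zero-copy mmap loading)
    # over the pickled torch format when both are present.
    for name in ("model.safetensors", "pytorch_model.bin"):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            return path
    raise FileNotFoundError(f"no checkpoint found in {folder}")
```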
JackPlowman/repo_standards_validator
JackPlowman__repo_standards_validator-137
35962a9fbb5711ab436edca91026428aed1e2d1c
null
diff --git a/poetry.lock b/poetry.lock index 4c114dc..aec2415 100644 --- a/poetry.lock +++ b/poetry.lock @@ -938,31 +938,31 @@ jinja2 = ["ruamel.yaml.jinja2 (>=0.2)"] [[package]] name = "ruff" -version = "0.9.3" +version = "0.9.5" description = "An extremely fast Python linter and code formatter, written in Rust."...
diff --git a/validator/tests/test_repository_checks.py b/validator/tests/test_repository_checks.py index aed318f..739a97d 100644 --- a/validator/tests/test_repository_checks.py +++ b/validator/tests/test_repository_checks.py @@ -11,6 +11,7 @@ def test_check_repository() -> None: # Arrange + configuration = M...
Fix private vulnerability disclosure incorrect value
1,738,967,972,000
[ "validator", "size/M", "python", "dependencies" ]
Security Vulnerability
[ "validator/__main__.py:main", "validator/repository_checks.py:check_repository", "validator/repository_checks.py:check_repository_security_details" ]
[ "validator/repository_checks.py:get_private_vulnerability_disclosures" ]
3
66
huggingface/transformers
huggingface__transformers-35453
6b550462139655d488d4c663086a63e98713c6b9
null
diff --git a/src/transformers/optimization.py b/src/transformers/optimization.py index 0ca5d36d0f40..d00c65925ef2 100644 --- a/src/transformers/optimization.py +++ b/src/transformers/optimization.py @@ -393,45 +393,71 @@ def _get_wsd_scheduler_lambda( num_warmup_steps: int, num_stable_steps: int, num_dec...
diff --git a/tests/optimization/test_optimization.py b/tests/optimization/test_optimization.py index 6982583d2bec..4ab248e75a9a 100644 --- a/tests/optimization/test_optimization.py +++ b/tests/optimization/test_optimization.py @@ -153,8 +153,8 @@ def test_schedulers(self): [0.0, 5.0, 10.0, 8.165, 7.071...
Support Constant Learning Rate with Cooldown ### Feature request In `transformers.optimization` support `constant learning rate with cooldown` functions. ### Motivation This method enables scaling experiments to be performed with significantly reduced compute and GPU hours by utilizing fewer but reusa...
1,735,486,425,000
[]
Feature Request
[ "src/transformers/optimization.py:_get_wsd_scheduler_lambda", "src/transformers/optimization.py:get_wsd_schedule", "src/transformers/optimization.py:get_scheduler" ]
[]
3
67
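The warmup-stable-decay ("WSD") schedule requested in the transformers record above can be sketched as a step-to-multiplier function. The linear warmup and cosine cooldown shapes here are illustrative assumptions, not the merged implementation:

```python
import math

def wsd_lambda(step, num_warmup, num_stable, num_decay, min_lr_ratio=0.0):
    """Learning-rate multiplier: linear warmup, constant hold, then a
    cosine cooldown to min_lr_ratio (shape choices are assumptions)."""
    if step < num_warmup:
        return step / max(1, num_warmup)
    if step < num_warmup + num_stable:
        return 1.0
    if step < num_warmup + num_stable + num_decay:
        progress = (step - num_warmup - num_stable) / max(1, num_decay)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
        return min_lr_ratio + (1.0 - min_lr_ratio) * cosine
    return min_lr_ratio
```

Multiplying a base learning rate by this value each step reproduces the constant-with-cooldown shape the feature request describes.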
una-auxme/paf
una-auxme__paf-565
d63c492cb30b3091789dfd2b7179c65ef8c4a55a
null
diff --git a/code/perception/src/lidar_distance.py b/code/perception/src/lidar_distance.py old mode 100644 new mode 100755 index 939224c5..5c937be0 --- a/code/perception/src/lidar_distance.py +++ b/code/perception/src/lidar_distance.py @@ -3,7 +3,7 @@ import rospy import ros_numpy import numpy as np -import lidar_fi...
Separate tasks from lidar_distance in separate nodes for better performance ### Feature Description The tasks of image calculation and clustering data points should be separated into separate nodes to improve the node performance, which should reduce the latency in the visualization. ### Definition of Done - Create l...
1,734,020,557,000
[]
Performance Issue
[ "code/perception/src/lidar_distance.py:LidarDistance.calculate_image", "code/perception/src/lidar_distance.py:LidarDistance.reconstruct_img_from_lidar" ]
[]
2
69
django/django
django__django-19043
0cabed9efa2c7abd1693860069f20ec5db41fcd8
null
diff --git a/django/forms/fields.py b/django/forms/fields.py index 202a6d72c878..4bd9c352f270 100644 --- a/django/forms/fields.py +++ b/django/forms/fields.py @@ -95,6 +95,7 @@ class Field: "required": _("This field is required."), } empty_values = list(validators.EMPTY_VALUES) + bound_field_class...
diff --git a/tests/forms_tests/tests/test_forms.py b/tests/forms_tests/tests/test_forms.py index b41424d43d64..20c86754c6e5 100644 --- a/tests/forms_tests/tests/test_forms.py +++ b/tests/forms_tests/tests/test_forms.py @@ -8,6 +8,7 @@ from django.core.validators import MaxValueValidator, RegexValidator from django.fo...
Allow overriding BoundField class on forms and fields Description It would be useful if there was an easy way to add CSS classes to the HTML element which is generated when rendering a BoundField. I propose adding field_css_class, similar to required_css_class and error_css_class. ​https://docs.djangoproject.com/en/...
1,736,837,596,000
[]
Feature Request
[ "django/forms/fields.py:Field.__init__", "django/forms/fields.py:Field.get_bound_field", "django/forms/forms.py:BaseForm.__init__" ]
[]
3
71
python/cpython
python__cpython-7926
8463cb55dabb78571e32d8c8c7de8679ab421c2c
null
diff --git a/Lib/idlelib/macosx.py b/Lib/idlelib/macosx.py index 89b645702d334d..2ea02ec04d661a 100644 --- a/Lib/idlelib/macosx.py +++ b/Lib/idlelib/macosx.py @@ -174,9 +174,8 @@ def overrideRootMenu(root, flist): del mainmenu.menudefs[-3][1][0:2] menubar = Menu(root) root.configure(menu=menubar) - me...
IDLE macosx.overrideRootMenu: remove unused menudict BPO | [33964](https://bugs.python.org/issue33964) --- | :--- Nosy | @terryjreedy, @csabella PRs | <li>python/cpython#7926</li> Dependencies | <li>bpo-33963: IDLE macosx: add tests.</li> <sup>*Note: these values reflect the state of the issue at the time it was migra...
Function local name 'menudict' is initialized empty, two key value pairs are added, and it is never touched again.
1,529,986,082,000
[ "skip news" ]
Performance Issue
[ "Lib/idlelib/macosx.py:overrideRootMenu" ]
[]
1
73
numpy/numpy
numpy__numpy-10615
e20f11036bb7ce9f8de91eb4240e49ea4e41ef17
null
diff --git a/doc/release/upcoming_changes/10615.deprecation.rst b/doc/release/upcoming_changes/10615.deprecation.rst new file mode 100644 index 000000000000..7fa948ea85be --- /dev/null +++ b/doc/release/upcoming_changes/10615.deprecation.rst @@ -0,0 +1,14 @@ +Only ndim-0 arrays are treated as scalars +-----------------...
diff --git a/numpy/core/tests/test_deprecations.py b/numpy/core/tests/test_deprecations.py index b92a20a12cea..e47a24995067 100644 --- a/numpy/core/tests/test_deprecations.py +++ b/numpy/core/tests/test_deprecations.py @@ -822,6 +822,18 @@ def test_deprecated_raised(self, dtype): assert isinstance(e.__...
deprecate scalar conversions for rank>0 arrays Numpy allows for conversion of arrays into scalars if they are size-1, i.e., ```python float(numpy.array(1.0)) # 1.0 float(numpy.array([1.0])) # 1.0 float(numpy.array([[[[1.0]]]])) # 1.0 # TypeError: only size-1 arrays can be converted to Python scalars float(n...
Personally agree with you (there is a reason we got rid of the `operator.index` equivalent thing, where it was causing obvious problems) and would encourage anyone to try it. But… I expect there are some annoying cases to solve and it might have a pretty large downstream impact which would make it a very slow change i...
1,518,819,855,000
[ "component: numpy._core", "07 - Deprecation", "62 - Python API" ]
Feature Request
[ "numpy/core/function_base.py:linspace" ]
[]
1
74
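The conversion rule proposed in the numpy record above (only ndim-0 arrays act as scalars) can be sketched with a small helper; `to_scalar` is a hypothetical name, not numpy API:

```python
import numpy as np

def to_scalar(a):
    """Accept only ndim-0 arrays as scalars, rejecting size-1 arrays
    of higher rank, which is the rule the deprecation proposes."""
    a = np.asarray(a)
    if a.ndim != 0:
        raise TypeError(
            f"only ndim-0 arrays can be converted, got ndim={a.ndim}")
    return a.item()
```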
pandas-dev/pandas
pandas-dev__pandas-29944
aae9234e816469391512910e8265552f215d6263
297c59abbe218f3ccb89fb25bee48f619c1e0d2d
diff --git a/doc/source/whatsnew/v1.5.0.rst b/doc/source/whatsnew/v1.5.0.rst index 45c32d689bd5b..fccfbb98f2591 100644 --- a/doc/source/whatsnew/v1.5.0.rst +++ b/doc/source/whatsnew/v1.5.0.rst @@ -133,6 +133,7 @@ Other enhancements - :meth:`MultiIndex.to_frame` now supports the argument ``allow_duplicates`` and raises...
diff --git a/pandas/tests/plotting/frame/test_frame.py b/pandas/tests/plotting/frame/test_frame.py index c4ce0b256cd41..3ec3744e43653 100644 --- a/pandas/tests/plotting/frame/test_frame.py +++ b/pandas/tests/plotting/frame/test_frame.py @@ -2071,6 +2071,90 @@ def test_plot_no_numeric_data(self): with pytest.ra...
ENH: Group some columns in subplots with DataFrame.plot `df.plot(subplots=True)` will create one subplot per column. Is there a way to group multiple columns on the same subplot (and leave the rest of the column separated)? I'd be happy to submit a PR if that's something you'd consider? In terms of API `subplot` c...
Just to make sure I understand, `df.plot(subplots=[('a', 'b'), ('d', 'e', 'f')])` would have two axes? I'm not sure, but I'm guessing it'd be somewhat complicated to work that into our current `subplots` logic. At a glance, it looks like we assume `subplots=True` -> one axes per column in several places. > `df.plot(su...
1,575,230,687,000
[ "Enhancement", "Visualization" ]
Feature Request
[ "pandas/plotting/_matplotlib/core.py:MPLPlot.__init__", "pandas/plotting/_matplotlib/core.py:MPLPlot._setup_subplots", "pandas/plotting/_matplotlib/core.py:MPLPlot._get_ax" ]
[ "pandas/plotting/_matplotlib/core.py:MPLPlot._validate_subplots_kwarg", "pandas/plotting/_matplotlib/core.py:MPLPlot._col_idx_to_axis_idx" ]
3
75
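A minimal sketch of how a column-grouping `subplots` spec like the one discussed in the pandas record above could be validated; `resolve_subplot_groups` is a hypothetical helper, not pandas' internal logic:

```python
def resolve_subplot_groups(columns, subplots):
    """Turn a spec like [('a', 'b')] into axis groups: each listed
    tuple shares one axis, every remaining column gets its own."""
    seen = set()
    groups = []
    for group in subplots:
        for col in group:
            if col not in columns:
                raise ValueError(f"unknown column {col!r}")
            if col in seen:
                raise ValueError(f"column {col!r} listed twice")
            seen.add(col)
        groups.append(tuple(group))
    # Columns not mentioned in the spec each get a subplot of their own.
    groups.extend((col,) for col in columns if col not in seen)
    return groups
```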
scikit-learn/scikit-learn
scikit-learn__scikit-learn-16948
7d93ca87ea14ef8f16be43d75cff72d05f48f7ed
69132ebbd39f070590ca01813340b5b12c0d02ab
diff --git a/doc/computing/scaling_strategies.rst b/doc/computing/scaling_strategies.rst index 5eee5728e4b9a..277d499f4cc13 100644 --- a/doc/computing/scaling_strategies.rst +++ b/doc/computing/scaling_strategies.rst @@ -80,6 +80,7 @@ Here is a list of incremental estimators for different tasks: + :class:`sklear...
diff --git a/sklearn/decomposition/tests/test_nmf.py b/sklearn/decomposition/tests/test_nmf.py index cca0fad114ae5..9f3df5b64a803 100644 --- a/sklearn/decomposition/tests/test_nmf.py +++ b/sklearn/decomposition/tests/test_nmf.py @@ -1,10 +1,13 @@ import re +import sys +from io import StringIO import numpy as np im...
Online implementation of Non-negative Matrix Factorization (NMF) <!-- If your issue is a usage question, submit it here instead: - StackOverflow with the scikit-learn tag: https://stackoverflow.com/questions/tagged/scikit-learn - Mailing List: https://mail.python.org/mailman/listinfo/scikit-learn For more informati...
Nice figure! Let me see with the others if this should go in. Could you run on a wikipedia text example? Here is a benchmark on word counts for the first paragraph of 500k wikipedia articles. To speed up calculations, I used a HashingVectorizer with 4096 features. The KL divergence is calculated on 50k test samples. ...
1,587,129,543,000
[ "Waiting for Reviewer", "module:decomposition" ]
Feature Request
[ "sklearn/decomposition/_nmf.py:norm", "sklearn/decomposition/_nmf.py:_beta_divergence", "sklearn/decomposition/_nmf.py:_fit_coordinate_descent", "sklearn/decomposition/_nmf.py:_multiplicative_update_w", "sklearn/decomposition/_nmf.py:_multiplicative_update_h", "sklearn/decomposition/_nmf.py:_fit_multiplic...
[ "sklearn/decomposition/_nmf.py:MiniBatchNMF.__init__", "sklearn/decomposition/_nmf.py:MiniBatchNMF._check_params", "sklearn/decomposition/_nmf.py:MiniBatchNMF._solve_W", "sklearn/decomposition/_nmf.py:MiniBatchNMF._minibatch_step", "sklearn/decomposition/_nmf.py:MiniBatchNMF._minibatch_convergence", "skle...
9
76
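The multiplicative-update solver family that MiniBatchNMF builds on can be sketched in a few lines of numpy. This is a toy full-batch Frobenius-loss version, not scikit-learn's implementation:

```python
import numpy as np

def nmf_mu(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Toy full-batch NMF with Frobenius-loss multiplicative updates,
    the solver family a mini-batch variant would process in chunks."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Standard Lee-Seung updates; eps guards against division by zero.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```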
sympy/sympy
sympy__sympy-27051
a912bdfea081dd08b727118ce8bbc0f45c246730
null
diff --git a/.mailmap b/.mailmap index d36148d403cc..76ae38efbf77 100644 --- a/.mailmap +++ b/.mailmap @@ -197,6 +197,7 @@ Aaron Meurer <asmeurer@gmail.com> Aaron Miller <acmiller273@gmail.com> Aaron Stiff <69512633+AaronStiff@users.noreply.github.com> Aaryan Dewan <aaryandewan@yahoo.com> Aaryan Dewan <49852384+aary...
`prime(n)` is very slow ```python In [4]: %time [[prime(n) for i in range(n)] for n in range(1, 10)] CPU times: user 180 ms, sys: 7.68 ms, total: 187 ms Wall time: 185 ms Out[4]: [[2], [3, 3], [5, 5, 5], [7, 7, 7, 7], [11, 11, 11, 11, 11], [13, 13, 13, 13, 13, 13], [17, 17, 17, 17, 17, 17, 17], [19, 19, 1...
I agree with setting a threshold and using the sieve for cases where `n` is below that threshold. However, I think we can only determine the specific value through experimentation. Expanding the `sieve` to an appropriate length is important not just for `prime` but for many `ntheory` functions. Therefore, I think it...
1,725,886,272,000
[]
Performance Issue
[ "sympy/ntheory/generate.py:prime" ]
[]
1
82
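The sieve-based approach suggested in the sympy discussion above can be sketched as a standalone function that sizes the sieve with Rosser's bound; this is illustrative, not sympy's `prime`:

```python
import math

def nth_prime(n):
    """Return the n-th prime (1-indexed) by sieving once up to Rosser's
    bound p_n < n*(ln n + ln ln n), which is valid for n >= 6."""
    if n < 1:
        raise ValueError("n must be >= 1")
    if n < 6:
        return (2, 3, 5, 7, 11)[n - 1]
    bound = int(n * (math.log(n) + math.log(math.log(n)))) + 1
    sieve = bytearray([1]) * (bound + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(bound) + 1):
        if sieve[p]:
            # Knock out multiples of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(range(p * p, bound + 1, p)))
    count = 0
    for i, flag in enumerate(sieve):
        count += flag
        if count == n:
            return i
```

One sieve pass makes repeated calls like `[prime(n) for n in ...]` cheap, which is the batching the issue asks for.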
numba/numba
numba__numba-9757
b49469f06176769ecfa47a7865ef2030f4ed77f4
null
diff --git a/docs/upcoming_changes/9757.bug_fix.rst b/docs/upcoming_changes/9757.bug_fix.rst new file mode 100644 index 00000000000..17a239ef57c --- /dev/null +++ b/docs/upcoming_changes/9757.bug_fix.rst @@ -0,0 +1,6 @@ +Fix excessive memory use/poor memory behaviour in the dispatcher for non-fingerprintable types. +--...
Calling a @jit function with a type that cannot be "fingerprinted" uses excessive memory <!-- Thanks for opening an issue! To help the Numba team handle your information efficiently, please first ensure that there is no other issue present that already describes the issue you have (search at https://github.com/nu...
1,729,170,497,000
[ "5 - Ready to merge" ]
Performance Issue
[ "numba/core/dispatcher.py:_DispatcherBase.__init__", "numba/core/dispatcher.py:_DispatcherBase._compile_for_args", "numba/core/dispatcher.py:_DispatcherBase.typeof_pyval" ]
[]
3
83
jobatabs/textec
jobatabs__textec-53
951409c20a2cf9243690b9bda7f7a22bb7c999a7
null
diff --git a/src/repositories/reference_repository.py b/src/repositories/reference_repository.py index 7768842..6d0d67b 100644 --- a/src/repositories/reference_repository.py +++ b/src/repositories/reference_repository.py @@ -34,14 +34,14 @@ def delete_reference(_id): db.session.commit() -def create_reference(r...
Fix code scanning alert - SQL doesn't use :symbols Tracking issue for: - [x] https://github.com/jobatabs/textec/security/code-scanning/3
1,733,481,438,000
[]
Security Vulnerability
[ "src/repositories/reference_repository.py:create_reference" ]
[]
1
86
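The `:symbols` fix described in the textec record above boils down to passing user input as bound parameters rather than interpolating it into SQL. A minimal sqlite3 sketch (table and column names are illustrative, not the project's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE refs (id INTEGER PRIMARY KEY, title TEXT)")

def create_reference(conn, title):
    # Bound parameters (sqlite3's :name style, SQLAlchemy's :symbols)
    # keep user input out of the SQL text entirely.
    conn.execute("INSERT INTO refs (title) VALUES (:title)", {"title": title})
    conn.commit()

# A hostile value is stored verbatim instead of being executed.
create_reference(conn, "Robert'); DROP TABLE refs;--")
```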
django/django
django__django-19009
5f30fd2358fd60a514bdba31594bfc8122f30167
null
diff --git a/django/contrib/gis/db/backends/base/models.py b/django/contrib/gis/db/backends/base/models.py index 589c872da611..38309f0e5dea 100644 --- a/django/contrib/gis/db/backends/base/models.py +++ b/django/contrib/gis/db/backends/base/models.py @@ -1,4 +1,5 @@ from django.contrib.gis import gdal +from django.uti...
diff --git a/tests/gis_tests/test_spatialrefsys.py b/tests/gis_tests/test_spatialrefsys.py index 512fd217c375..b87dcf8b9293 100644 --- a/tests/gis_tests/test_spatialrefsys.py +++ b/tests/gis_tests/test_spatialrefsys.py @@ -1,5 +1,6 @@ import re +from django.contrib.gis.db.backends.base.models import SpatialRefSysMix...
Refactoring SpatialRefSysMixin.srs for efficiency and better error handling Description (last modified by Arnaldo Govene) The srs property in the SpatialRefSysMixin class has several issues that can be addressed to improve code efficiency, maintainability, and adherence to best practices. The following points high...
['Thank you, agree the error overwrite could be improved, the caching logic is also due a refresh', 1735805940.0]
1,736,246,205,000
[]
Performance Issue
[ "django/contrib/gis/db/backends/base/models.py:SpatialRefSysMixin.srs" ]
[]
1
90
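The caching and error-chaining refactor the Django ticket above asks for can be sketched with `functools.cached_property`; class and method names here are illustrative stand-ins, not Django's GIS models:

```python
from functools import cached_property

class SpatialRefLike:
    """Illustrative stand-in for the mixin: parse once, cache the
    result, and chain the original parse error instead of masking it."""

    def __init__(self, wkt):
        self.wkt = wkt

    @cached_property
    def srs(self):
        try:
            return self._parse(self.wkt)
        except ValueError as exc:
            # Preserve the underlying cause rather than overwriting it.
            raise ValueError(f"could not parse SRS from {self.wkt!r}") from exc

    def _parse(self, wkt):
        # Stand-in for the real GDAL SpatialReference construction.
        if not wkt.startswith("SRS"):
            raise ValueError("bad WKT")
        return {"wkt": wkt}
```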
internetarchive/openlibrary
internetarchive__openlibrary-3196
061f07ec54cec6ffd169580641eb87144fe7ea14
null
diff --git a/openlibrary/plugins/upstream/utils.py b/openlibrary/plugins/upstream/utils.py index 24ab1af95e9..d9773e1c4c9 100644 --- a/openlibrary/plugins/upstream/utils.py +++ b/openlibrary/plugins/upstream/utils.py @@ -761,7 +761,8 @@ def setup(): 'request': Request(), 'logger': logging.getLogger("o...
XSS issues on books page <!-- What problem are we solving? What does the experience look like today? What are the symptoms? --> XSS issues ### Evidence / Screenshot (if possible) ### Relevant url? https://openlibrary.org/books/OL27912983M/'_script_alert(_Ultra_Security_)_script https://twitter.com/BugsBun751...
https://openlibrary.org/people/darkscanner this is the corresponding patron who reported the XSS vectors Should I fix this issue or somebody is already working? @cYph3r1337 Go for it! Let us know if you have any questions; https://github.com/internetarchive/openlibrary/wiki/Frontend-Guide has a good summary of how temp...
1,584,392,077,000
[]
Security Vulnerability
[ "openlibrary/plugins/upstream/utils.py:setup" ]
[]
1
91
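The fix pattern for stored XSS reports like the Open Library one above is to escape user-controlled values before interpolating them into markup; the template shape here is illustrative, not the project's actual templates:

```python
from html import escape

def render_comment(author, body):
    """Escape user-controlled values before interpolating them into
    markup; the surrounding template shape is an assumption."""
    return "<div class='comment'><b>{}</b>: {}</div>".format(
        escape(author, quote=True), escape(body, quote=True))
```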
pandas-dev/pandas
pandas-dev__pandas-21401
2a097061700e2441349cafd89348c9fc8d14ba31
null
diff --git a/doc/source/io.rst b/doc/source/io.rst index 9aff1e54d8e98..fa6a8b1d01530 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -4989,6 +4989,54 @@ with respect to the timezone. timezone aware or naive. When reading ``TIMESTAMP WITH TIME ZONE`` types, pandas will convert the data to UTC. +.. _io.sql...
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py index eeeb55cb8e70c..c346103a70c98 100644 --- a/pandas/tests/io/test_sql.py +++ b/pandas/tests/io/test_sql.py @@ -375,12 +375,16 @@ def _read_sql_iris_named_parameter(self): iris_frame = self.pandasSQL.read_query(query, params=params) ...
Use multi-row inserts for massive speedups on to_sql over high latency connections I have been trying to insert ~30k rows into a mysql database using pandas-0.15.1, oursql-0.9.3.1 and sqlalchemy-0.9.4. Because the machine is across the Atlantic from me, calling `data.to_sql` was taking >1 hr to insert the data. On i...
This seems reasonable. Thanks for investigating this! For the implementation, it will depend on how sqlalchemy deals with database flavors that does not support this (I can't test this at the moment, but it seems that sqlalchemy raises an error (eg http://stackoverflow.com/questions/23886764/multiple-insert-statements...
1,528,542,064,000
[ "Enhancement", "Performance", "IO SQL" ]
Feature Request
[ "pandas/core/generic.py:NDFrame.to_sql", "pandas/io/sql.py:to_sql", "pandas/io/sql.py:SQLTable.insert_statement", "pandas/io/sql.py:SQLTable._execute_insert", "pandas/io/sql.py:SQLTable.insert", "pandas/io/sql.py:SQLDatabase.to_sql", "pandas/io/sql.py:SQLiteDatabase.to_sql" ]
[ "pandas/io/sql.py:SQLTable._execute_insert_multi" ]
7
92
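The multi-row `VALUES` batching behind the pandas `to_sql` speedup above can be sketched against sqlite3; chunk size and schema are illustrative:

```python
import sqlite3
from itertools import islice

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")

def insert_many(conn, rows, chunksize=100):
    """Build one multi-row VALUES statement per chunk so each network
    round trip carries many rows instead of one."""
    it = iter(rows)
    while chunk := list(islice(it, chunksize)):
        placeholders = ", ".join(["(?, ?)"] * len(chunk))
        flat = [v for row in chunk for v in row]
        conn.execute(f"INSERT INTO t (a, b) VALUES {placeholders}", flat)
    conn.commit()

insert_many(conn, ((i, str(i)) for i in range(1234)))
```

Over a high-latency link the round-trip count, not the byte count, dominates, which is why the issue reports an order-of-magnitude speedup.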
django/django
django__django-13134
3071660acfbdf4b5c59457c8e9dc345d5e8894c5
null
diff --git a/django/contrib/admin/sites.py b/django/contrib/admin/sites.py index e200fdeb1e0c..600944ebc022 100644 --- a/django/contrib/admin/sites.py +++ b/django/contrib/admin/sites.py @@ -3,19 +3,23 @@ from weakref import WeakSet from django.apps import apps +from django.conf import settings from django.contrib...
diff --git a/tests/admin_views/admin.py b/tests/admin_views/admin.py index 4a72e3070f1d..1140f0349602 100644 --- a/tests/admin_views/admin.py +++ b/tests/admin_views/admin.py @@ -15,10 +15,11 @@ from django.core.mail import EmailMessage from django.db import models from django.forms.models import BaseModelFormSet -f...
Avoid potential admin model enumeration Description When attacking a default Django server (one created with startproject is sufficient), you can enumerate the names of the applications and models by fuzzing the admin URL path. A redirect indicates that the model exists in that application, while a 404 indicates it ...
["Thanks Daniel. Pasting here extracts from the security team discussion. This idea of adding a final_catch_all_pattern seems most likely: Well, I vaguely recalled, and managed to find, \u200bhttps://groups.google.com/d/msgid/django-developers/CAHdnYzu2zHVMcrjsSRpvRrdQBMntqy%2Bh0puWB2-uB8GOU6Tf7g%40mail.gmail.com where...
1,593,639,332,000
[]
Security Vulnerability
[ "django/contrib/admin/sites.py:AdminSite.get_urls" ]
[ "django/contrib/admin/sites.py:AdminSite.catch_all_view" ]
1
93
webcompat/webcompat.com
webcompat__webcompat.com-2731
384c4feeef14a13655b8264dde9f10bfa5614134
null
diff --git a/.circleci/config.yml b/.circleci/config.yml index 7e91e837e..a065b8326 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -58,7 +58,7 @@ jobs: nosetests npm run lint npm run build - npm run test:js -- --reporters="runner" --firefoxBinary=`which...
diff --git a/tests/fixtures/api/issues.7a8b2d603c698b0e98a26c77661d12e2.json b/tests/fixtures/api/issues.7a8b2d603c698b0e98a26c77661d12e2.json index 69dae89cc..7775bcddd 100644 --- a/tests/fixtures/api/issues.7a8b2d603c698b0e98a26c77661d12e2.json +++ b/tests/fixtures/api/issues.7a8b2d603c698b0e98a26c77661d12e2.json @@ ...
Removes unnecessary use of markdown-it The Github API already [offers a markdown version](https://developer.github.com/v3/media/#html) for the requests we're making, so it's not necessary to use markdown-it to process our requests. However we still need it to format users comments, so we're still using it on all of...
1,544,481,443,000
[]
Performance Issue
[ "webcompat/api/endpoints.py:proxy_issue", "webcompat/api/endpoints.py:edit_issue", "webcompat/api/endpoints.py:proxy_issues", "webcompat/api/endpoints.py:get_user_activity_issues", "webcompat/api/endpoints.py:get_issue_category", "webcompat/api/endpoints.py:get_search_results", "webcompat/api/endpoints....
[]
10
94
scikit-learn/scikit-learn
scikit-learn__scikit-learn-14012
15b54340ee7dc7cb870a418d1b5f6f553672f5dd
null
diff --git a/doc/whats_new/v0.22.rst b/doc/whats_new/v0.22.rst index d1459bcb8caf3..de0572b9ffd7d 100644 --- a/doc/whats_new/v0.22.rst +++ b/doc/whats_new/v0.22.rst @@ -47,6 +47,11 @@ Changelog validation data separately to avoid any data leak. :pr:`13933` by `NicolasHug`_. +- |Feature| :class:`ensemble.HistGra...
diff --git a/sklearn/ensemble/_hist_gradient_boosting/tests/test_warm_start.py b/sklearn/ensemble/_hist_gradient_boosting/tests/test_warm_start.py new file mode 100644 index 0000000000000..806ad94ccee98 --- /dev/null +++ b/sklearn/ensemble/_hist_gradient_boosting/tests/test_warm_start.py @@ -0,0 +1,190 @@ +import numpy...
Feature request: warm starting for histogram-based GBM #### Description This is a feature request to add the warm start parameter, which exists for [gradient boosting](https://scikit-learn.org/dev/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html#sklearn.ensemble.GradientBoostingClassifier), to the ne...
This is on my TODO list, but I don't know yet when I'll start working on this. If anyone wants to give it a try I'll be happy to provide review and/or guidance. @mfeurer thanks for the input! @NicolasHug I think this would be great to prioritize. Shouldn't be too hard, right? Honestly I'm happy to work on it but I'...
1,559,575,018,000
[]
Feature Request
[ "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py:BaseHistGradientBoosting.__init__", "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py:BaseHistGradientBoosting.fit", "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py:HistGradientBoostingRegressor.__init__", "sklearn/ense...
[ "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py:BaseHistGradientBoosting._is_fitted", "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py:BaseHistGradientBoosting._clear_state", "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py:BaseHistGradientBoosting._get_small_trainset"...
4
95
airbnb/knowledge-repo
airbnb__knowledge-repo-558
d1874d3e71994cd9bb769d60133c046b6d2b8bb5
null
diff --git a/knowledge_repo/app/routes/comment.py b/knowledge_repo/app/routes/comment.py index 988d82840..5fe02bce6 100644 --- a/knowledge_repo/app/routes/comment.py +++ b/knowledge_repo/app/routes/comment.py @@ -6,7 +6,7 @@ - /delete_comment """ import logging -from flask import request, Blueprint, g +from flask ...
XSS vulnerability Auto-reviewers: @NiharikaRay @matthewwardrop @earthmancash @danfrankj Hello, guys! There is a cross-site scripting (XSS) vulnerability in the Knowledge Repo 0.7.4 (other versions may be affected as well) which allows remote attackers to inject arbitrary JavaScript via post comments functionalit...
Thanks Ekzorcist. I'll make sure this gets fixed before the next release :). XSS is also present on 0.7.6 Is this one fixed @fahrishb? Also raised in #254. We should prioritize fixing this. Hi @redyaffle, I suspect that this is just one in a myriad of security vulnerabilities that exist throughout the knowledge repo...
1,590,817,301,000
[]
Security Vulnerability
[ "knowledge_repo/app/routes/comment.py:post_comment" ]
[]
1
96
pulp/pulp_rpm
pulp__pulp_rpm-3224
6b999b4d8a16d936927e3fc9dec74fe506c25226
null
diff --git a/CHANGES/3225.misc b/CHANGES/3225.misc new file mode 100644 index 000000000..e1a600d91 --- /dev/null +++ b/CHANGES/3225.misc @@ -0,0 +1,1 @@ +Load one Artifact checksum type taken package_checksum_type during publish metadata generation. diff --git a/CHANGES/3226.misc b/CHANGES/3226.misc new file mode 10064...
Switch to values() instead of select_related **Version** all **Describe the bug** Some queries use select_related; if we do not need the model instance, we could just use values(), which will produce queryset dict results instead. **To Reproduce** Steps to reproduce the behavior: **Expected behavior** A clear ...
1,691,165,280,000
[ "backport-3.22" ]
Performance Issue
[ "pulp_rpm/app/tasks/publishing.py:PublicationData.publish_artifacts", "pulp_rpm/app/tasks/publishing.py:generate_repo_metadata", "pulp_rpm/app/tasks/synchronizing.py:add_metadata_to_publication" ]
[]
3
97
sancus-tee/sancus-compiler
sancus-tee__sancus-compiler-36
dc3910192f99374c91ac5f12187dbc460454fa1f
null
diff --git a/src/drivers/linker.py b/src/drivers/linker.py index 4f3ec85..c369e43 100644 --- a/src/drivers/linker.py +++ b/src/drivers/linker.py @@ -17,6 +17,7 @@ MAC_SIZE = int(sancus.config.SECURITY / 8) KEY_SIZE = MAC_SIZE +CONNECTION_STRUCT_SIZE = 6 + KEY_SIZE class SmEntry: @@ -63,7 +64,7 @@ def add_sym(f...
diff --git a/src/stubs/sm_attest.c b/src/stubs/sm_attest.c new file mode 100644 index 0000000..6bfec25 --- /dev/null +++ b/src/stubs/sm_attest.c @@ -0,0 +1,16 @@ +#include "reactive_stubs_support.h" + +uint16_t SM_ENTRY(SM_NAME) __sm_attest(const uint8_t* challenge, size_t len, + uint8_t *result) +{ + if( !sancus_i...
Check for integer overflow in sancus_is_outside_sm macro
thanks for the PR! Not sure about your first clause "( __OUTSIDE_SM(p, sm) && ((len <= 0) ||". If the length is zero, the condition should always be true and the outside_sm is not necessary I'd say. A negative length doesn't make any sense and the disadvantage of a macro is that we can't easily enforce an `unsigned` ty...
1,629,731,760,000
[]
Security Vulnerability
[ "src/drivers/linker.py:add_sym", "src/drivers/linker.py:get_io_sym_map", "src/drivers/linker.py:get_io_sect_map", "src/drivers/linker.py:sort_entries", "src/drivers/sancus/sancus_config.py:SmConfig.__init__", "src/drivers/sancus/sancus_config.py:SmConfig.__str__" ]
[ "src/drivers/sancus/sancus_config.py:SmConfig.num_connections" ]
6
98
scikit-learn/scikit-learn
scikit-learn__scikit-learn-6116
5e4f524f8408e5a0efb96f8128504f8ca747a95e
null
diff --git a/doc/whats_new.rst b/doc/whats_new.rst index b5f10da91d28f..a4b775ec66d0a 100644 --- a/doc/whats_new.rst +++ b/doc/whats_new.rst @@ -36,6 +36,11 @@ Enhancements (`#6288 <https://github.com/scikit-learn/scikit-learn/pull/6288>`_) by `Jake VanderPlas`_. + - :class:`ensemble.GradientBoostingCla...
diff --git a/sklearn/ensemble/tests/test_gradient_boosting.py b/sklearn/ensemble/tests/test_gradient_boosting.py index 1ebf82fa3ae44..634bc259a1167 100644 --- a/sklearn/ensemble/tests/test_gradient_boosting.py +++ b/sklearn/ensemble/tests/test_gradient_boosting.py @@ -1050,6 +1050,9 @@ def check_sparse_input(EstimatorC...
GradientBoostingClassifier.fit accepts sparse X, but .predict does not I have a sparse dataset that is too large for main memory if I call `X.todense()`. If I understand correctly, `GradientBoostingClassifier.fit` will accept my sparse `X`, but it is not currently possible to use `GradientBoostingClassifier.predict` o...
Confirmed. I think small rework of https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_gradient_boosting.pyx#L39 is needed. Looks like function _predict_regression_tree_inplace_fast was written to somehow optimize prediction speed. But I see some problems in it; its main loop https://github.com/s...
1,452,013,011,000
[ "Waiting for Reviewer" ]
Feature Request
[ "sklearn/ensemble/gradient_boosting.py:BaseGradientBoosting._staged_decision_function", "sklearn/ensemble/gradient_boosting.py:GradientBoostingClassifier.decision_function", "sklearn/ensemble/gradient_boosting.py:GradientBoostingClassifier.staged_decision_function", "sklearn/ensemble/gradient_boosting.py:Grad...
[]
10
99
duncanscanga/VDRS-Solutions
duncanscanga__VDRS-Solutions-73
c97e24d072fe1bffe3dc9e3b363d40b725a70be9
null
diff --git a/.gitignore b/.gitignore index c5586c4..cc6983a 100644 --- a/.gitignore +++ b/.gitignore @@ -23,6 +23,8 @@ docs/_build/ .vscode db.sqlite **/__pycache__/ +**/latest_logs +**/archived_logs # Coverage reports htmlcov/ diff --git a/app/models.py b/app/models.py index 5ae4e11..562268e 100644 --- a/app/mo...
diff --git a/app_test/frontend/test_listing.py b/app_test/frontend/test_listing.py index bc47c57..c6a75da 100644 --- a/app_test/frontend/test_listing.py +++ b/app_test/frontend/test_listing.py @@ -1,5 +1,4 @@ from seleniumbase import BaseCase - from app_test.conftest import base_url from app.models import Booking, L...
Test SQL Injection: Register Function (name, email)
1,668,638,393,000
[]
Security Vulnerability
[ "app/models.py:register" ]
[]
1
100
scipy/scipy
scipy__scipy-5647
ccb338fa0521070107200a9ebcab67905b286a7a
null
diff --git a/benchmarks/benchmarks/spatial.py b/benchmarks/benchmarks/spatial.py index f3005f6d70bb..0825bd5dcfa8 100644 --- a/benchmarks/benchmarks/spatial.py +++ b/benchmarks/benchmarks/spatial.py @@ -106,27 +106,74 @@ class Neighbors(Benchmark): [1, 2, np.inf], [0.2, 0.5], BOX_SIZES, LEAF_...
diff --git a/scipy/spatial/tests/test_kdtree.py b/scipy/spatial/tests/test_kdtree.py index e5f85440fa86..6c1a51772e20 100644 --- a/scipy/spatial/tests/test_kdtree.py +++ b/scipy/spatial/tests/test_kdtree.py @@ -4,13 +4,17 @@ from __future__ import division, print_function, absolute_import from numpy.testing import ...
Massive performance regression in cKDTree.query with L_inf distance in scipy 0.17 Hi, While running some of my own code, I noticed a major slowdown in scipy 0.17 which I tracked to the cKDtree.query call with L_inf distance. I did a git bisect and tracked the commit that caused the issue -- the commit is : 58a10c26883...
1,451,630,231,000
[ "enhancement", "scipy.spatial" ]
Performance Issue
[ "benchmarks/benchmarks/spatial.py:Neighbors.setup", "benchmarks/benchmarks/spatial.py:Neighbors.time_sparse_distance_matrix", "benchmarks/benchmarks/spatial.py:Neighbors.time_count_neighbors" ]
[ "benchmarks/benchmarks/spatial.py:CNeighbors.setup", "benchmarks/benchmarks/spatial.py:CNeighbors.time_count_neighbors_deep", "benchmarks/benchmarks/spatial.py:CNeighbors.time_count_neighbors_shallow" ]
3
101
plone/plone.restapi
plone__plone.restapi-859
32a80ab4850252f1d5b9d1b3c847e9e32d3aa45e
null
diff --git a/news/857.bugfix b/news/857.bugfix new file mode 100644 index 0000000000..acfdabf048 --- /dev/null +++ b/news/857.bugfix @@ -0,0 +1,2 @@ +Sharing POST: Limit roles to ones the user is allowed to delegate. +[lgraf] \ No newline at end of file diff --git a/src/plone/restapi/deserializer/local_roles.py b/src/p...
diff --git a/src/plone/restapi/tests/test_content_local_roles.py b/src/plone/restapi/tests/test_content_local_roles.py index a26e6f3877..bd27f9f48a 100644 --- a/src/plone/restapi/tests/test_content_local_roles.py +++ b/src/plone/restapi/tests/test_content_local_roles.py @@ -6,6 +6,8 @@ from plone.app.testing import SI...
Privilege escalation in @sharing endpoint (PloneHotfix20200121) - https://github.com/plone/Products.CMFPlone/issues/3021 - https://plone.org/security/hotfix/20200121/privilege-escalation-when-plone-restapi-is-installed `plone.restapi.deserializer.local_roles.DeserializeFromJson` has a weakness. This endpoint was i...
1,579,683,301,000
[]
Security Vulnerability
[ "src/plone/restapi/deserializer/local_roles.py:DeserializeFromJson.__call__" ]
[]
1
103
openwisp/openwisp-users
openwisp__openwisp-users-286
6c083027ed7bb467351f8f05ac201fa60d9bdc24
null
diff --git a/openwisp_users/admin.py b/openwisp_users/admin.py index 8619e39d..f0cb45aa 100644 --- a/openwisp_users/admin.py +++ b/openwisp_users/admin.py @@ -318,7 +318,7 @@ def get_readonly_fields(self, request, obj=None): # do not allow operators to escalate their privileges if not request.user.is_...
diff --git a/openwisp_users/tests/test_admin.py b/openwisp_users/tests/test_admin.py index f6e4e767..0c4157d3 100644 --- a/openwisp_users/tests/test_admin.py +++ b/openwisp_users/tests/test_admin.py @@ -261,6 +261,21 @@ def test_admin_change_non_superuser_readonly_fields(self): html = 'class="readonly"><im...
[bug/security] Org managers can escalate themselves to superusers **Expected behavior:** org manager (non superuser) should not be able to edit permissions, nor flag themselves or others as superusers. **Actual behavior**: org manager (non superuser) are able to flag themselves or others as superusers. This is ...
1,634,301,535,000
[]
Security Vulnerability
[ "openwisp_users/admin.py:UserAdmin.get_readonly_fields" ]
[]
1
104
rucio/rucio
rucio__rucio-4930
92474868cec7d64b83b46b451ecdc4a294ef2711
null
diff --git a/lib/rucio/web/ui/flask/common/utils.py b/lib/rucio/web/ui/flask/common/utils.py index fd905830e2..fd62365f6d 100644 --- a/lib/rucio/web/ui/flask/common/utils.py +++ b/lib/rucio/web/ui/flask/common/utils.py @@ -94,8 +94,6 @@ def html_escape(s, quote=True): # catch these from the webpy input() storage objec...
Privilege escalation issue in the Rucio WebUI Motivation ---------- The move to FLASK as a webui backend introduced a cookie leak in the auth_token workflows of the webui. This potentially leaks the contents of cookies to other sessions. Impact is that Rucio authentication tokens are leaked to other users accessing th...
1,634,836,642,000
[]
Security Vulnerability
[ "lib/rucio/web/ui/flask/common/utils.py:add_cookies", "lib/rucio/web/ui/flask/common/utils.py:redirect_to_last_known_url", "lib/rucio/web/ui/flask/common/utils.py:finalize_auth", "lib/rucio/web/ui/flask/common/utils.py:authenticate" ]
[]
4
105
AzureAD/microsoft-authentication-library-for-python
AzureAD__microsoft-authentication-library-for-python-407
3062770948f1961a13767ee85dd7ba664440feb3
null
diff --git a/msal/application.py b/msal/application.py index 686cc95d..3651d216 100644 --- a/msal/application.py +++ b/msal/application.py @@ -170,6 +170,7 @@ def __init__( # This way, it holds the same positional param place for PCA, # when we would eventually want to add this feature...
Instance metadata caching This issue is inspired by an improvement made in MSAL .Net 4.1: * documented in [its release blog post here](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/msal-net-4.1#getaccounts-and-acquiretokensilent-are-now-less-network-chatty), * its original issue [this on...
1,631,732,455,000
[]
Performance Issue
[ "msal/application.py:ClientApplication.__init__" ]
[]
1
106
home-assistant/core
home-assistant__core-15182
4fbe3bb07062b81ac4562d4080550d92cbd47828
null
diff --git a/homeassistant/components/http/__init__.py b/homeassistant/components/http/__init__.py index d8c877e83a205..f769d2bc4ffba 100644 --- a/homeassistant/components/http/__init__.py +++ b/homeassistant/components/http/__init__.py @@ -180,7 +180,7 @@ def __init__(self, hass, api_password, middlewares...
diff --git a/tests/components/http/test_auth.py b/tests/components/http/test_auth.py index a44d17d513db9..dd8b2cd35c46c 100644 --- a/tests/components/http/test_auth.py +++ b/tests/components/http/test_auth.py @@ -41,7 +41,7 @@ def app(): """Fixture to setup a web.Application.""" app = web.Application() a...
Security issue, unauthorized trusted access by spoofing x-forwarded-for header **Home Assistant release with the issue:** 0.68.1 but probably older versions too **Last working Home Assistant release (if known):** N/A **Operating environment (Hass.io/Docker/Windows/etc.):** Hassbian but should be applicable to ...
This used to work (last known working version 0.62)...but after upgrading recently and checking it in the latest dev version I can confirm that it is an issue The thing is that I didnt even hacked the headers and still no password is asked when connecting from external network @rofrantz if you are not spoofing the hea...
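The core of the fix discussed in this thread is to honor `X-Forwarded-For` only when the request arrives from an explicitly trusted proxy. A minimal sketch of that rule (the function name and the networks are illustrative, not Home Assistant's actual API):

```python
from ipaddress import ip_address, ip_network
from typing import Optional

# Hypothetical trusted reverse-proxy ranges; any other peer may spoof headers.
TRUSTED_PROXIES = [ip_network("127.0.0.1/32"), ip_network("10.0.0.0/8")]

def effective_client_ip(peer_ip: str, forwarded_for: Optional[str]) -> str:
    """Return the client IP, trusting X-Forwarded-For only from known proxies."""
    peer = ip_address(peer_ip)
    if forwarded_for and any(peer in net for net in TRUSTED_PROXIES):
        # The left-most entry is the original client in a well-formed chain.
        return forwarded_for.split(",")[0].strip()
    # Untrusted peer: ignore the header entirely so it cannot be spoofed.
    return peer_ip
```

An untrusted peer claiming `X-Forwarded-For: 127.0.0.1` is then judged by its real address, so it can no longer inherit trusted-network access.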
1,530,140,910,000
[ "core", "cla-signed", "integration: http" ]
Security Vulnerability
[ "homeassistant/components/http/__init__.py:HomeAssistantHTTP.__init__", "homeassistant/components/http/real_ip.py:setup_real_ip" ]
[]
2
107
jazzband/django-two-factor-auth
jazzband__django-two-factor-auth-390
430ef08a2c6cec4e0ce6cfa4a08686d4e62f73b7
null
diff --git a/CHANGELOG.md b/CHANGELOG.md index d242f7369..45b1cc581 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,8 @@ ### Changed - The templates are now based on Bootstrap 4. +- `DisableView` now checks user has verified before disabling two-factor on + their account ## 1.12 - 2020-07-08 ### Added ...
diff --git a/tests/test_views_disable.py b/tests/test_views_disable.py index 69a74a8ff..74b963f91 100644 --- a/tests/test_views_disable.py +++ b/tests/test_views_disable.py @@ -2,7 +2,7 @@ from django.shortcuts import resolve_url from django.test import TestCase from django.urls import reverse -from django_otp impor...
DisableView doesn't require request user to be verified `DisableView` only checks that the request user *has* a valid, not that the user is verified. It is possible that if a site has been misconfigured and allows users to bypass django-two-factor-auth's login view (e.g. the admin site hasn't been configured correct...
I think this is a fairly minor security issue as it relies on things beyond our control going wrong first and no secrets are exposed. I should have PR ready by the end of the day. ...
1,601,579,910,000
[]
Security Vulnerability
[ "two_factor/views/profile.py:DisableView.get", "two_factor/views/profile.py:DisableView.form_valid" ]
[ "two_factor/views/profile.py:DisableView.dispatch" ]
2
108
netbox-community/netbox
netbox-community__netbox-7676
8f1acb700d72467ffe7ae5c8502422a1eac0693d
null
diff --git a/netbox/netbox/authentication.py b/netbox/netbox/authentication.py index 653fad3b055..a67ec451d1e 100644 --- a/netbox/netbox/authentication.py +++ b/netbox/netbox/authentication.py @@ -34,7 +34,7 @@ def get_object_permissions(self, user_obj): object_permissions = ObjectPermission.objects.filter( ...
gunicorn spends 80% of each request in get_object_permissions() ### NetBox version v3.0.3 ### Python version 3.9 ### Steps to Reproduce 1. Configure Netbox with AD authentication (a few thousands users + heavy nesting) 2. Create a bunch of devices (~500 servers in my case, ~500 cables, ~50 VLANs, ~500 I...
Nice sleuthing! This looks related to (if not the same root issue as) #6926. Do you agree? Also, can you confirm whether this is reproducible using local authentication, or only as an LDAP user? > Nice sleuthing! This looks related to (if not the same root issue as) #6926. Do you agree? I'm not sure, but looks re...
1,635,451,702,000
[]
Performance Issue
[ "netbox/netbox/authentication.py:ObjectPermissionMixin.get_object_permissions" ]
[]
1
109
Innopoints/backend
Innopoints__backend-124
c1779409130762a141710875bd1d758c8671b1d8
null
diff --git a/innopoints/blueprints.py b/innopoints/blueprints.py index dc520e8..f5d350e 100644 --- a/innopoints/blueprints.py +++ b/innopoints/blueprints.py @@ -5,6 +5,8 @@ from flask import Blueprint +from innopoints.core.helpers import csrf_protect + def _factory(partial_module_string, url_prefix='/'): "...
Implement CSRF protection
1,588,369,030,000
[]
Security Vulnerability
[ "innopoints/views/account.py:get_info", "innopoints/views/authentication.py:authorize", "innopoints/views/authentication.py:login_cheat" ]
[ "innopoints/core/helpers.py:csrf_protect", "innopoints/schemas/account.py:AccountSchema.get_csrf_token" ]
3
110
zulip/zulip
zulip__zulip-14091
cb85763c7870bd6c83e84e4e7bca0900810457e9
null
diff --git a/tools/coveragerc b/tools/coveragerc index b47d57aee05af..24b0536d71b23 100644 --- a/tools/coveragerc +++ b/tools/coveragerc @@ -15,6 +15,8 @@ exclude_lines = raise UnexpectedWebhookEventType # Don't require coverage for blocks only run when type-checking if TYPE_CHECKING: + # Don't requir...
diff --git a/tools/test-backend b/tools/test-backend index 83a1fb73a9f21..e3ce1c4daee3f 100755 --- a/tools/test-backend +++ b/tools/test-backend @@ -93,7 +93,6 @@ not_yet_fully_covered = {path for target in [ 'zerver/lib/parallel.py', 'zerver/lib/profile.py', 'zerver/lib/queue.py', - 'zerver/lib/rate_...
Optimize rate_limiter performance for get_events queries See https://chat.zulip.org/#narrow/stream/3-backend/topic/profiling.20get_events/near/816860 for profiling details, but basically, currently a get_events request spends 1.4ms/request talking to redis for our rate limiter, which is somewhere between 15% and 50% of...
Hello @zulip/server-production members, this issue was labeled with the "area: production" label, so you may want to check it out! <!-- areaLabelAddition --> @zulipbot claim Hello @zulip/server-production members, this issue was labeled with the "area: production" label, so you may want to check it out! <!-- areaLabe...
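The direction described above (moving per-request accounting off redis for the `get_events` hot path) can be illustrated with a toy in-memory sliding-window limiter; this is a sketch of the general technique, not Zulip's actual `RateLimiterBackend`:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class InMemoryRateLimiter:
    """Allow at most `max_calls` per `window` seconds for each key."""

    def __init__(self, max_calls: int, window: float) -> None:
        self.max_calls = max_calls
        self.window = window
        self._events = defaultdict(deque)  # key -> timestamps of recent calls

    def attempt(self, key: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        events = self._events[key]
        # Drop timestamps that have slid out of the window.
        while events and now - events[0] >= self.window:
            events.popleft()
        if len(events) >= self.max_calls:
            return False  # rate limited; no network round-trip involved
        events.append(now)
        return True
```

Compared with a redis-backed limiter, every `attempt` costs only a few dict and deque operations, which is exactly the per-request overhead the profiling above was chasing.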
1,583,193,587,000
[ "size: XL", "has conflicts" ]
Performance Issue
[ "zerver/lib/rate_limiter.py:RateLimitedObject.rate_limit", "zerver/lib/rate_limiter.py:RateLimitedObject.__init__", "zerver/lib/rate_limiter.py:RateLimitedObject.max_api_calls", "zerver/lib/rate_limiter.py:RateLimitedObject.max_api_window", "zerver/lib/rate_limiter.py:RateLimiterBackend.block_access", "ze...
[ "zerver/lib/rate_limiter.py:RateLimitedObject.get_rules", "zerver/lib/rate_limiter.py:TornadoInMemoryRateLimiterBackend._garbage_collect_for_rule", "zerver/lib/rate_limiter.py:TornadoInMemoryRateLimiterBackend.need_to_limit", "zerver/lib/rate_limiter.py:TornadoInMemoryRateLimiterBackend.get_api_calls_left", ...
8
111
modin-project/modin
modin-project__modin-3404
41213581b974760928ed36f05c479937dcdfea55
null
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 8dd750c75be..85cbe64b76a 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -447,6 +447,8 @@ jobs: - run: pytest -n 2 modin/experimental/xgboost/test/test_default.py - run: pytest -n 2 modin/experimental/xgboost/te...
diff --git a/modin/experimental/xgboost/test/test_dmatrix.py b/modin/experimental/xgboost/test/test_dmatrix.py new file mode 100644 index 00000000000..c9498131ef4 --- /dev/null +++ b/modin/experimental/xgboost/test/test_dmatrix.py @@ -0,0 +1,160 @@ +# Licensed to Modin Development Team under one or more contributor lic...
Expansion of DMatrix with 6 new parameters. Need to add new parameters and functions to the class DMatrix. Support parameters ``missing``, ``feature_names``, ``feature_types``, ``feature_weights``, ``silent``, ``enable_categorical``.
1,631,033,063,000
[]
Feature Request
[ "modin/experimental/xgboost/xgboost.py:DMatrix.__init__", "modin/experimental/xgboost/xgboost.py:Booster.predict", "modin/experimental/xgboost/xgboost_ray.py:ModinXGBoostActor._get_dmatrix", "modin/experimental/xgboost/xgboost_ray.py:ModinXGBoostActor.set_train_data", "modin/experimental/xgboost/xgboost_ray...
[ "modin/experimental/xgboost/xgboost.py:DMatrix.get_dmatrix_params", "modin/experimental/xgboost/xgboost.py:DMatrix.feature_names", "modin/experimental/xgboost/xgboost.py:DMatrix.feature_types", "modin/experimental/xgboost/xgboost.py:DMatrix.num_row", "modin/experimental/xgboost/xgboost.py:DMatrix.num_col", ...
8
112
pandas-dev/pandas
pandas-dev__pandas-35029
65319af6e563ccbb02fb5152949957b6aef570ef
null
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst index 6aff4f4bd41e2..b1257fe893804 100644 --- a/doc/source/whatsnew/v1.2.0.rst +++ b/doc/source/whatsnew/v1.2.0.rst @@ -8,6 +8,15 @@ including other versions of pandas. {{ header }} +.. warning:: + + Previously, the default argument ``e...
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py index c582a0fa23577..98a55ae39bd77 100644 --- a/pandas/tests/io/excel/test_readers.py +++ b/pandas/tests/io/excel/test_readers.py @@ -577,6 +577,10 @@ def test_date_conversion_overflow(self, read_ext): if pd.read_excel.k...
Deprecate using `xlrd` engine in favor of openpyxl xlrd is unmaintained and the previous maintainer has asked us to move towards openpyxl. xlrd works now, but *might* have some issues when Python 3.9 or later gets released and changes some elements of the XML parser, as default usage right now throws a `PendingDeprecat...
@WillAyd Should we start with adding `openpyxl` as an engine in [pandas.read_excel](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html#pandas-read-excel). Happy to contribute a PR for it. It is already available just need to make it default over time, so want to raise a FutureWarning w...
1,593,268,003,000
[ "IO Excel", "Blocker", "Deprecate" ]
Feature Request
[ "pandas/io/excel/_base.py:ExcelFile.__init__" ]
[]
1
113
pyca/pyopenssl
pyca__pyopenssl-578
f189de9becf14840712b01877ebb1f08c26f894c
null
diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 19cebf1e8..56c3c74c8 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -25,6 +25,10 @@ Changes: - Added ``OpenSSL.X509Store.set_time()`` to set a custom verification time when verifying certificate chains. `#567 <https://github.com/pyca/pyopenssl/pull/567>`_ +- Cha...
Poor performance on Connection.recv() with large values of bufsiz. Discovered while investigating kennethreitz/requests#3729. When allocating a buffer using CFFI's `new` method (as `Connection.recv` does via `_ffi.new("char[]", bufsiz)`, CFFI kindly zeroes that buffer for us. That means this call has a performance c...
So, Armin has pointed out that this actually already exists: `ffi.new_allocator(should_clear_after_alloc=False)("char[]", bufsiz)` should be a solution to the zeroing out of memory. I'm going to investigate it now. Excellently, that does work. I'll start working on a PR for this tomorrow.
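The allocator named in the comment above is real cffi API; a minimal demonstration of the difference (the non-zeroing allocator skips the clearing pass that made large `recv` buffers expensive), assuming cffi is installed:

```python
from cffi import FFI

ffi = FFI()

# Default allocator zero-fills the buffer: O(bufsiz) work even for a tiny read.
zeroed = ffi.new("char[]", 16)

# Allocator that skips the clearing step, as suggested for Connection.recv().
new_nonzero = ffi.new_allocator(should_clear_after_alloc=False)
buf = new_nonzero("char[]", 16)

# Both buffers behave identically afterwards; only the allocation cost differs.
ffi.buffer(buf)[0:5] = b"hello"
```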
1,480,282,277,000
[]
Performance Issue
[ "src/OpenSSL/SSL.py:Connection.recv", "src/OpenSSL/SSL.py:Connection.recv_into", "src/OpenSSL/SSL.py:Connection.bio_read", "src/OpenSSL/SSL.py:Connection.server_random", "src/OpenSSL/SSL.py:Connection.client_random", "src/OpenSSL/SSL.py:Connection.master_key", "src/OpenSSL/SSL.py:Connection._get_finishe...
[]
7
114
matchms/matchms-backup
matchms__matchms-backup-187
da8debd649798367ed29102b2965a1ab0e6f4938
null
diff --git a/environment.yml b/environment.yml index b38d68a..f46d4a2 100644 --- a/environment.yml +++ b/environment.yml @@ -7,7 +7,6 @@ dependencies: - gensim==3.8.0 - matplotlib==3.2.1 - numpy==1.18.1 - - openbabel==3.0.0 - pip==20.0.2 - pyteomics==4.2 - python==3.7 diff --git a/matchms/filtering/d...
diff --git a/tests/test_utils.py b/tests/test_utils.py index 1918816..7816846 100644 --- a/tests/test_utils.py +++ b/tests/test_utils.py @@ -1,18 +1,81 @@ -from matchms.utils import is_valid_inchikey +from matchms.utils import mol_converter +from matchms.utils import is_valid_inchi, is_valid_inchikey, is_valid_smiles +...
openbabel has contagious license https://tldrlegal.com/license/gnu-general-public-license-v2
Openbabel seems like a lot of pain. So far I didn't get the time to look at it enough. But I hope that I can soon try to look for alternatives. There are certainly several potential alternatives, but we would have to test them on a large number of conversions to find if they have a similar or better success rate. So...
1,588,796,120,000
[]
Security Vulnerability
[ "matchms/filtering/derive_inchi_from_smiles.py:derive_inchi_from_smiles", "matchms/filtering/derive_inchikey_from_inchi.py:derive_inchikey_from_inchi", "matchms/filtering/derive_smiles_from_inchi.py:derive_smiles_from_inchi", "matchms/utils.py:mol_converter", "matchms/utils.py:is_valid_inchi" ]
[ "matchms/utils.py:convert_smiles_to_inchi", "matchms/utils.py:convert_inchi_to_smiles", "matchms/utils.py:convert_inchi_to_inchikey" ]
5
115
latchset/jwcrypto
latchset__jwcrypto-195
e0249b1161cb8ef7a3a9910948103b816bc0a299
null
diff --git a/README.md b/README.md index ccd39c4..1b886cd 100644 --- a/README.md +++ b/README.md @@ -16,3 +16,23 @@ Documentation ============= http://jwcrypto.readthedocs.org + +Deprecation Notices +=================== + +2020.12.11: The RSA1_5 algorithm is now considered deprecated due to numerous +implementation...
diff --git a/jwcrypto/tests-cookbook.py b/jwcrypto/tests-cookbook.py index 40b8d36..dd6f36e 100644 --- a/jwcrypto/tests-cookbook.py +++ b/jwcrypto/tests-cookbook.py @@ -1110,7 +1110,8 @@ def test_5_1_encryption(self): plaintext = Payload_plaintext_5 protected = base64url_decode(JWE_Protected_Header_5_...
The pyca/cryptography RSA PKCS#1 v1.5 is unsafe, making the users of it vulnerable to Bleichenbacher attacks The API provided by pyca/cryptography is not secure, as documented in their docs: https://github.com/pyca/cryptography/commit/8686d524b7b890bcbe6132b774bd72a3ae37cf0d As far as I can tell, it's one of the AP...
At the very least I will add documentation about the problem. Should we also disable RSA1_5 by default ? At least until pyca provides some option ? > At the very least I will add documentation about the problem. and what users that need this mechanism for interoperability are expected to do? rewrite their appli...
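The opt-in posture discussed here (keep `RSA1_5` available for interoperability but refuse it unless explicitly enabled) can be sketched as a simple allow-list check; the names below are illustrative, not jwcrypto's API:

```python
DEFAULT_ALLOWED = {"RSA-OAEP", "RSA-OAEP-256", "A128KW", "A256KW"}

def select_key_alg(requested: str, allowed=frozenset(DEFAULT_ALLOWED)) -> str:
    """Reject the Bleichenbacher-prone RSA1_5 unless the caller opted in."""
    if requested == "RSA1_5" and "RSA1_5" not in allowed:
        raise ValueError("RSA1_5 is deprecated; enable it explicitly if required")
    if requested not in allowed:
        raise ValueError(f"algorithm {requested!r} is not allowed")
    return requested
```

Applications that genuinely need PKCS#1 v1.5 would pass `allowed=DEFAULT_ALLOWED | {"RSA1_5"}` and accept the documented risk.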
1,607,722,553,000
[]
Security Vulnerability
[ "jwcrypto/jwt.py:JWT.make_encrypted_token" ]
[]
1
116
django/django
django__django-5605
6d9c5d46e644a8ef93b0227fc710e09394a03992
null
diff --git a/django/middleware/csrf.py b/django/middleware/csrf.py index ba9f63ec3d91..276b31e10fc2 100644 --- a/django/middleware/csrf.py +++ b/django/middleware/csrf.py @@ -8,6 +8,7 @@ import logging import re +import string from django.conf import settings from django.urls import get_callable @@ -16,8 +17,10...
diff --git a/tests/csrf_tests/test_context_processor.py b/tests/csrf_tests/test_context_processor.py index 270b3e4771ab..5db0116db0e2 100644 --- a/tests/csrf_tests/test_context_processor.py +++ b/tests/csrf_tests/test_context_processor.py @@ -1,6 +1,5 @@ -import json - from django.http import HttpRequest +from django....
Prevent repetitive output to counter BREACH-type attacks Description Currently the CSRF middleware sets the cookie value to the plain value of the token. This makes it possible to try to guess the token by trying longer and longer prefix matches. An effective countermeasure would be to prevent repetitive output from...
['"Instead of delivering the credential as a 32-byte string, it should be delivered as a 64-byte string. The first 32 bytes are a one-time pad, and the second 32 bytes are encoded using the XOR algorithm between the pad and the "real" token." \u200bhttps://github.com/rails/rails/pull/11729 via \u200bArs Technica', 1375...
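The countermeasure quoted above (deliver a 32-byte secret as 64 bytes: a one-time pad plus the pad-masked secret) is easy to sketch in pure Python. Django's eventual implementation masks over an alphanumeric alphabet rather than raw bytes, so treat this only as the idea:

```python
import secrets

def salt_token(secret: bytes) -> bytes:
    """Deliver a secret as pad + (pad XOR secret), changing on every render."""
    pad = secrets.token_bytes(len(secret))
    masked = bytes(a ^ b for a, b in zip(pad, secret))
    return pad + masked

def unsalt_token(salted: bytes) -> bytes:
    """Recover the underlying secret for comparison on the server side."""
    half = len(salted) // 2
    pad, masked = salted[:half], salted[half:]
    return bytes(a ^ b for a, b in zip(pad, masked))
```

Because the transmitted value differs on every response while the underlying secret stays fixed, a BREACH-style attacker can no longer confirm guesses by prefix matching against compressed output.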
1,446,914,517,000
[]
Security Vulnerability
[ "django/middleware/csrf.py:_get_new_csrf_key", "django/middleware/csrf.py:get_token", "django/middleware/csrf.py:rotate_token", "django/middleware/csrf.py:_sanitize_token", "django/middleware/csrf.py:CsrfViewMiddleware.process_view", "django/middleware/csrf.py:CsrfViewMiddleware.process_response" ]
[ "django/middleware/csrf.py:_get_new_csrf_string", "django/middleware/csrf.py:_salt_cipher_secret", "django/middleware/csrf.py:_unsalt_cipher_token", "django/middleware/csrf.py:_get_new_csrf_token", "django/middleware/csrf.py:_compare_salted_tokens" ]
6
117
jax-ml/jax
jax-ml__jax-25114
df6758f021167b1c0b85f5d6e4986f6f0d2a1169
null
diff --git a/CHANGELOG.md b/CHANGELOG.md index ce8b040439c0..b5758d107077 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -79,6 +79,7 @@ When releasing, please add the new-release-boilerplate to docs/pallas/CHANGELOG. * {func}`jax.lax.linalg.eig` and the related `jax.numpy` functions ({func}`jax.numpy.linalg.ei...
Allow specifying `exec_time_optimization_effort` and `memory_fitting_effort` via CLI. New compiler options (see title) have recently been [exposed](https://github.com/jax-ml/jax/issues/24625) in JAX. To use them users have to change their Python code like this: ``` jax.jit(f, compiler_options={ "exec_time_opt...
1,732,629,502,000
[ "pull ready" ]
Feature Request
[ "jax/_src/compiler.py:get_compile_options" ]
[]
1
118
justin13601/ACES
justin13601__ACES-145
89291a01522d2c23e45aa0ee6a09c32645f4d1b4
null
diff --git a/.github/workflows/code-quality-main.yaml b/.github/workflows/code-quality-main.yaml index d3369699..691b47c7 100644 --- a/.github/workflows/code-quality-main.yaml +++ b/.github/workflows/code-quality-main.yaml @@ -13,10 +13,16 @@ jobs: steps: - name: Checkout - uses: actions/checkout@v...
diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index c22ef161..4e51b115 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -17,16 +17,16 @@ jobs: steps: - name: Checkout - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name...
Task definition matters in terms of memory usage As suggested by @Oufattole, this kind of config seems to use excessive amounts of memory (more than 400 GB on MIMIC-IV in my case) ![image](https://github.com/user-attachments/assets/36e87b96-d4b1-491b-b664-f1f39fd1d2ad) using regex mitigates this: ![image](https://gi...
Yep this is a limitation with the creation of the predicates as memory peaks during this process (also brought up in #89, which was closed after #90 was merged with the ability to match using regex). Basically, each time a predicate is defined in a configuration file, a column is created which corresponds to that pr...
1,729,818,802,000
[]
Performance Issue
[ "src/aces/config.py:TaskExtractorConfig.load", "src/aces/config.py:TaskExtractorConfig._initialize_predicates", "src/aces/predicates.py:get_predicates_df", "src/aces/query.py:query" ]
[]
4
119
aio-libs/aiohttp
aio-libs__aiohttp-7829
c0377bf80e72134ad7bf29aa847f0301fbdb130f
null
diff --git a/CHANGES/7829.misc b/CHANGES/7829.misc new file mode 100644 index 00000000000..9eb060f4713 --- /dev/null +++ b/CHANGES/7829.misc @@ -0,0 +1,3 @@ +Improved URL handler resolution time by indexing resources in the UrlDispatcher. +For applications with a large number of handlers, this should increase performan...
diff --git a/tests/test_urldispatch.py b/tests/test_urldispatch.py index 5f5687eacc3..adb1e52e781 100644 --- a/tests/test_urldispatch.py +++ b/tests/test_urldispatch.py @@ -1264,10 +1264,17 @@ async def test_prefixed_subapp_overlap(app: Any) -> None: subapp2.router.add_get("/b", handler2) app.add_subapp("/ss"...
UrlDispatcher improvements ### Describe the bug The UrlDispatcher currently has to do a linear search to dispatch urls which has an average time complexity of `O(n)`. This means that urls that are registered first can be found faster than urls that are registered later. ### To Reproduce Results ``` % pyt...
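The indexing idea behind the change (look up candidate resources by the first path segment instead of scanning every registered route) can be shown with a toy dispatcher; aiohttp's real resolver matches compiled patterns, so this is only a sketch:

```python
from collections import defaultdict

class IndexedDispatcher:
    """Index handlers by first path segment so resolution avoids a full scan."""

    def __init__(self):
        self._index = defaultdict(list)  # "/users" -> [(pattern, handler), ...]

    def add(self, path: str, handler) -> None:
        key = "/" + path.split("/")[1]  # e.g. "/users/{id}" -> "/users"
        self._index[key].append((path, handler))

    def resolve(self, path: str):
        key = "/" + path.split("/")[1]
        for pattern, handler in self._index.get(key, ()):
            if pattern == path:  # real matching is pattern-based, not exact
                return handler
        return None
```

Resolution time then depends on how many routes share a prefix, not on the total number of registered handlers, which removes the registration-order bias described in the report.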
1,699,830,964,000
[ "bot:chronographer:provided" ]
Performance Issue
[ "aiohttp/web_urldispatcher.py:PrefixedSubAppResource.__init__", "aiohttp/web_urldispatcher.py:PrefixedSubAppResource.add_prefix", "aiohttp/web_urldispatcher.py:PrefixedSubAppResource.resolve", "aiohttp/web_urldispatcher.py:UrlDispatcher.resolve", "aiohttp/web_urldispatcher.py:UrlDispatcher.__init__", "aio...
[ "aiohttp/web_urldispatcher.py:PrefixedSubAppResource._add_prefix_to_resources", "aiohttp/web_urldispatcher.py:UrlDispatcher._get_resource_index_key", "aiohttp/web_urldispatcher.py:UrlDispatcher.index_resource", "aiohttp/web_urldispatcher.py:UrlDispatcher.unindex_resource" ]
6
121
jupyterhub/oauthenticator
jupyterhub__oauthenticator-764
f4da2e8eedeec009f2d1ac749b6da2ee160652b0
null
diff --git a/oauthenticator/google.py b/oauthenticator/google.py index f06f6aca..28b76d48 100644 --- a/oauthenticator/google.py +++ b/oauthenticator/google.py @@ -14,6 +14,7 @@ class GoogleOAuthenticator(OAuthenticator, GoogleOAuth2Mixin): user_auth_state_key = "google_user" + _service_credentials = {} ...
diff --git a/oauthenticator/tests/test_google.py b/oauthenticator/tests/test_google.py index 4c3a9e0c..070de014 100644 --- a/oauthenticator/tests/test_google.py +++ b/oauthenticator/tests/test_google.py @@ -3,6 +3,7 @@ import logging import re from unittest import mock +from unittest.mock import AsyncMock from py...
The Google authenticator's calls to check groups are done synchronously; can they be made async? Quickly written, I can fill this in in more detail later.
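Moving a blocking SDK call off the event loop is the standard asyncio remedy for this; a generic sketch, where the function names are placeholders rather than oauthenticator's actual methods:

```python
import asyncio
from functools import partial

def fetch_member_groups_blocking(user: str) -> list:
    # Placeholder for the blocking Google Directory API call (illustrative).
    return ["group-a"]

async def fetch_member_groups(user: str) -> list:
    """Run the blocking lookup in a worker thread so the event loop keeps serving."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, partial(fetch_member_groups_blocking, user))
```

One slow Google round-trip then no longer stalls every other concurrent login handled by the Hub.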
1,727,389,390,000
[ "maintenance" ]
Performance Issue
[ "oauthenticator/google.py:GoogleOAuthenticator.update_auth_model", "oauthenticator/google.py:GoogleOAuthenticator._service_client", "oauthenticator/google.py:GoogleOAuthenticator._setup_service", "oauthenticator/google.py:GoogleOAuthenticator._fetch_member_groups" ]
[ "oauthenticator/google.py:GoogleOAuthenticator._get_service_credentials", "oauthenticator/google.py:GoogleOAuthenticator._is_token_valid", "oauthenticator/google.py:GoogleOAuthenticator._setup_service_credentials" ]
4
122
PlasmaPy/PlasmaPy
PlasmaPy__PlasmaPy-2542
db62754e6150dc841074afe2e155f21a806e7f1b
null
diff --git a/changelog/2542.internal.rst b/changelog/2542.internal.rst new file mode 100644 index 0000000000..628869c5d5 --- /dev/null +++ b/changelog/2542.internal.rst @@ -0,0 +1,1 @@ +Refactored `~plasmapy.formulary.lengths.gyroradius` to reduce the cognitive complexity of the function. diff --git a/plasmapy/formular...
Reduce cognitive complexity of `gyroradius` Right now [`plasmapy.formulary.lengths.gyroradius`](https://github.com/PlasmaPy/PlasmaPy/blob/fc12bbd286ae9db82ab313d8c380808b814a462c/plasmapy/formulary/lengths.py#LL102C5-L102C15) has several nested conditionals. As a consequence, the code for `gyroradius` has high [cognit...
Come to think of it, once PlasmaPy goes Python 3.10+ (for our first release after April 2024), we might be able to use [structural pattern matching](https://docs.python.org/3/whatsnew/3.10.html#pep-634-structural-pattern-matching). Another possibility here would be to use [guard clauses](https://medium.com/lemon-c...
1,709,064,730,000
[ "plasmapy.formulary" ]
Performance Issue
[ "plasmapy/formulary/lengths.py:gyroradius" ]
[]
1
123
scikit-learn/scikit-learn
scikit-learn__scikit-learn-24076
30bf6f39a7126a351db8971d24aa865fa5605569
null
diff --git a/.gitignore b/.gitignore index f6250b4d5f580..89600846100a8 100644 --- a/.gitignore +++ b/.gitignore @@ -90,6 +90,7 @@ sklearn/metrics/_dist_metrics.pyx sklearn/metrics/_dist_metrics.pxd sklearn/metrics/_pairwise_distances_reduction/_argkmin.pxd sklearn/metrics/_pairwise_distances_reduction/_argkmin.pyx ...
diff --git a/sklearn/metrics/tests/test_pairwise_distances_reduction.py b/sklearn/metrics/tests/test_pairwise_distances_reduction.py index ad0ddbc60e9bd..7355dfd6ba912 100644 --- a/sklearn/metrics/tests/test_pairwise_distances_reduction.py +++ b/sklearn/metrics/tests/test_pairwise_distances_reduction.py @@ -13,10 +13,1...
knn predict unreasonably slow b/c of use of scipy.stats.mode ```python import numpy as np from sklearn.datasets import make_blobs from sklearn.neighbors import KNeighborsClassifier X, y = make_blobs(centers=2, random_state=4, n_samples=30) knn = KNeighborsClassifier(algorithm='kd_tree').fit(X, y) x_min, x_max...
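For the fixed-label-set case described here, the per-row mode can be computed by plain counting instead of `scipy.stats.mode`; a minimal numpy sketch of that idea (not scikit-learn's actual `ArgKminClassMode` implementation):

```python
import numpy as np

def rowwise_mode(labels: np.ndarray, n_classes: int) -> np.ndarray:
    """Most frequent class per row of neighbor labels, via direct counting.

    Because the class labels are known to be 0..n_classes-1, no `unique`
    or sorting pass is needed, unlike a generic mode computation.
    """
    counts = np.zeros((labels.shape[0], n_classes), dtype=np.intp)
    for k in range(labels.shape[1]):
        counts[np.arange(labels.shape[0]), labels[:, k]] += 1
    return counts.argmax(axis=1)
```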
@amueller Do you mean something like this ``` max(top_k, key = list(top_k).count) ``` That isn't going to apply to every row, and involves n_classes passes over each. Basically because we know the set of class labels, we shouldn't need to be doing unique. Yes, we could construct a CSR sparse matrix and sum_duplic...
1,659,396,563,000
[ "Performance", "module:metrics", "module:neighbors", "cython" ]
Performance Issue
[ "sklearn/neighbors/_base.py:NeighborsBase._fit", "sklearn/neighbors/_classification.py:KNeighborsClassifier.predict", "sklearn/neighbors/_classification.py:KNeighborsClassifier.predict_proba" ]
[ "sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py:ArgKminClassMode.is_usable_for", "sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py:ArgKminClassMode.compute", "sklearn/neighbors/_classification.py:_adjusted_metric" ]
3
125
fractal-analytics-platform/fractal-tasks-core
fractal-analytics-platform__fractal-tasks-core-889
5c5ccaff10eb5508bae321c61cb5d0741a36a91a
null
diff --git a/CHANGELOG.md b/CHANGELOG.md index f7aea0d58..35082befe 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,9 @@ **Note**: Numbers like (\#123) point to closed Pull Requests on the fractal-tasks-core repository. +# 1.4.1 + +* Tasks: + * Remove overlap checking for output ROIs in Cellpose task to a...
diff --git a/tests/tasks/test_workflows_cellpose_segmentation.py b/tests/tasks/test_workflows_cellpose_segmentation.py index 2c3c04216..1ce511d30 100644 --- a/tests/tasks/test_workflows_cellpose_segmentation.py +++ b/tests/tasks/test_workflows_cellpose_segmentation.py @@ -602,49 +602,6 @@ def test_workflow_bounding_box...
Review/optimize use of `get_overlapping_pairs_3D` in Cellpose task Branching from #764 (fixed in principle with #778) After a fix, it'd be useful to have a rough benchmark of `get_overlapping_pairs_3D for` one of those many-labels cases (say 1000 labels for a given `i_ROI`). Since this function does nothing else tha...
Another thing to consider here: This (as well as the original implementation) will only check for potential bounding box overlaps within a given ROI that's being processed, right? A future improvement may be to check for overlaps across ROIs as well, as in theory, they could also overlap. Given that this is just fo...
1,736,344,999,000
[]
Performance Issue
[ "fractal_tasks_core/tasks/cellpose_segmentation.py:cellpose_segmentation" ]
[]
1
126
OpenEnergyPlatform/open-MaStR
OpenEnergyPlatform__open-MaStR-598
bd44ce26986a0c41b25250951a4a9a391723d690
null
diff --git a/CHANGELOG.md b/CHANGELOG.md index 42432fd8..404fb9ac 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -13,6 +13,8 @@ and the versioning aims to respect [Semantic Versioning](http://semver.org/spec/ ### Changed - Repair Header image in Readme [#587](https://github.com/OpenEnergyPlatform/open-MaStR/pull/...
diff --git a/tests/xml_download/test_utils_write_to_database.py b/tests/xml_download/test_utils_write_to_database.py index dc5aaacd..e72e5501 100644 --- a/tests/xml_download/test_utils_write_to_database.py +++ b/tests/xml_download/test_utils_write_to_database.py @@ -1,25 +1,35 @@ +import os +import sqlite3 import sys ...
Increase parsing speed This task contains several steps: 1. Search different ways that might increase parsing speed. Parsing is done right now by the `pandas.read_xml` method [here](https://github.com/OpenEnergyPlatform/open-MaStR/blob/d17a92c17912f7f96814d309859dc495f942c07a/open_mastr/xml_download/utils_write_to_d...
Hi! I started working on this task and decided to change the steps slightly: 1. Construct the benchmark - Use the Marktstammdatenregister to construct a few datasets of various size - ✅ ([link](https://github.com/AlexandraImbrisca/open-MaStR/tree/develop/benchmark)) - Create a script to automate the calcu...
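One standard way to cut both time and peak memory when parsing very large XML exports is streaming with `xml.etree.ElementTree.iterparse` and clearing each consumed element; this is a generic sketch of the technique, not necessarily the parser open-MaStR adopted:

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def iter_records(xml_bytes: bytes, tag: str):
    """Yield one record dict at a time instead of materializing the document,
    keeping memory flat even for multi-GB MaStR exports (illustrative)."""
    for _, elem in ET.iterparse(BytesIO(xml_bytes), events=("end",)):
        if elem.tag == tag:
            yield {child.tag: child.text for child in elem}
            elem.clear()  # free the subtree we just consumed
```

Records can then be batched into executemany-style database inserts without ever holding the whole file's parse tree in memory.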
1,736,704,440,000
[]
Performance Issue
[ "open_mastr/xml_download/utils_write_to_database.py:write_mastr_xml_to_database", "open_mastr/xml_download/utils_write_to_database.py:is_table_relevant", "open_mastr/xml_download/utils_write_to_database.py:create_database_table", "open_mastr/xml_download/utils_write_to_database.py:cast_date_columns_to_datetim...
[ "open_mastr/xml_download/utils_write_to_database.py:extract_xml_table_name", "open_mastr/xml_download/utils_write_to_database.py:extract_sql_table_name", "open_mastr/xml_download/utils_write_to_database.py:cast_date_columns_to_string", "open_mastr/xml_download/utils_write_to_database.py:read_xml_file", "ope...
10
127
kedro-org/kedro
kedro-org__kedro-4367
70734ce00ee46b58c85b4cf04afbe89d32c06758
null
diff --git a/RELEASE.md b/RELEASE.md index 7af26f6ca4..c19418f0ca 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -2,6 +2,7 @@ ## Major features and improvements * Implemented `KedroDataCatalog.to_config()` method that converts the catalog instance into a configuration format suitable for serialization. +* Improve Omeg...
Improve OmegaConfigLoader performance when global/variable interpolations are involved ## Description Extending on the investigation in #3893 , OmegaConfigLoader lags in resolving catalog configurations when global/variable interpolations are involved. ## Context **Some previous observations:** https://github...
During the investigation, I found that there was a slow bit in `_set_globals_value`. I didn't spent enough time to fix it, but with a quick fix it improves roughly from 1.5s -> 0.9s, but there are probably more. Particularly, there is an obvious slow path `global_oc` get created and destroyed for every reference of `$g...
1,733,266,369,000
[]
Performance Issue
[ "kedro/config/omegaconf_config.py:OmegaConfigLoader.__init__", "kedro/config/omegaconf_config.py:OmegaConfigLoader.load_and_merge_dir_config", "kedro/config/omegaconf_config.py:OmegaConfigLoader._get_globals_value", "kedro/config/omegaconf_config.py:OmegaConfigLoader._get_runtime_value" ]
[]
4
128
UCL/TLOmodel
UCL__TLOmodel-1524
d5e6d31ab27549dc5671ace98c68d11d46d40890
null
diff --git a/src/tlo/methods/contraception.py b/src/tlo/methods/contraception.py index ab6c633f4c..09cb394804 100644 --- a/src/tlo/methods/contraception.py +++ b/src/tlo/methods/contraception.py @@ -1146,12 +1146,12 @@ def __init__(self, module, person_id, new_contraceptive): self.TREATMENT_ID = "Contracepti...
Population dataframe accesses in `HSI_Contraception_FamilyPlanningAppt.EXPECTED_APPT_FOOTPRINT` are performance bottleneck Look at the most recent successful profiling run results https://github-pages.ucl.ac.uk/TLOmodel-profiling/_static/profiling_html/schedule_1226_dbf33d69edadbacdc2dad1fc32c93eb771b03f9b.html c...
@BinglingICL, I think it is okay to set the EXPECTED_APPT_FOOTPRINT with the same logic when HSI is scheduled instead of when it is run, isn't it? > [@BinglingICL](https://github.com/BinglingICL), I think it is okay to set the EXPECTED_APPT_FOOTPRINT with the same logic when HSI is scheduled instead of when it is run, ...
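The suggestion above (compute the footprint once rather than on every property read) is essentially memoization; `functools.cached_property` gives that effect directly, sketched here with hypothetical names and logic:

```python
from functools import cached_property

class FamilyPlanningAppt:
    """Sketch: compute the appointment footprint once per instance instead of
    querying the population dataframe on every property access."""

    def __init__(self, person_age: int):
        self.person_age = person_age

    @cached_property
    def expected_appt_footprint(self) -> dict:
        # The expensive lookup runs once; later reads return the cached dict.
        return {"FamPlan": 1} if self.person_age >= 15 else {"Under5OPD": 1}
```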
1,732,204,946,000
[]
Performance Issue
[ "src/tlo/methods/contraception.py:HSI_Contraception_FamilyPlanningAppt.EXPECTED_APPT_FOOTPRINT", "src/tlo/methods/contraception.py:HSI_Contraception_FamilyPlanningAppt.__init__", "src/tlo/methods/contraception.py:HSI_Contraception_FamilyPlanningAppt.apply" ]
[ "src/tlo/methods/contraception.py:HSI_Contraception_FamilyPlanningAppt._get_appt_footprint" ]
3
129
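The fix recorded above moves the footprint computation out of the repeatedly-read `EXPECTED_APPT_FOOTPRINT` property and into a helper called once at scheduling time. A minimal sketch of that compute-once pattern, with a plain dict standing in for the population dataframe (all names here are illustrative, not TLOmodel's real classes):

```python
LOOKUPS = 0  # counts the expensive population lookups

def _expensive_lookup(population: dict, person_id: int) -> str:
    """Stand-in for the population-dataframe access in the real HSI event."""
    global LOOKUPS
    LOOKUPS += 1
    return "Over5OPD" if population[person_id]["age_years"] >= 5 else "Under5OPD"

class ApptEvent:
    def __init__(self, population: dict, person_id: int) -> None:
        # compute once when the event is scheduled, instead of on every
        # access of the footprint while the event sits in the queue
        self.EXPECTED_APPT_FOOTPRINT = _expensive_lookup(population, person_id)

pop = {0: {"age_years": 3}, 1: {"age_years": 40}}
event = ApptEvent(pop, person_id=1)
footprints = [event.EXPECTED_APPT_FOOTPRINT for _ in range(100)]
```

A hundred reads of the attribute cost exactly one dataframe lookup.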
Deltares/imod-python
Deltares__imod-python-1159
4f3b71675708c5a832fb78e05f679bb89158f95d
null
diff --git a/.gitignore b/.gitignore index fd95f7d8d..28f9b76a9 100644 --- a/.gitignore +++ b/.gitignore @@ -141,5 +141,4 @@ examples/data .pixi /imod/tests/mydask.png -/imod/tests/unittest_report.xml -/imod/tests/examples_report.xml +/imod/tests/*_report.xml diff --git a/docs/api/changelog.rst b/docs/api/changelog...
diff --git a/imod/tests/test_mf6/test_mf6_chd.py b/imod/tests/test_mf6/test_mf6_chd.py index ef0a04108..c3b96b982 100644 --- a/imod/tests/test_mf6/test_mf6_chd.py +++ b/imod/tests/test_mf6/test_mf6_chd.py @@ -238,6 +238,7 @@ def test_from_imod5_shd(imod5_dataset, tmp_path): chd_shd.write("chd_shd", [1], write_cont...
Bottlenecks writing HFBs LHM https://github.com/Deltares/imod-python/pull/1157 considerably improves the speed with which the LHM is written: from 34 minutes to 12.5 minutes. This is still slower than expected, however. Based on profiling, I've identified 3 main bottlenecks: - 4.5 minutes: ``xu.DataArray.from_structu...
1,723,823,927,000
[]
Performance Issue
[ "imod/mf6/simulation.py:Modflow6Simulation.from_imod5_data", "imod/schemata.py:scalar_None", "imod/typing/grid.py:as_ugrid_dataarray" ]
[ "imod/typing/grid.py:GridCache.__init__", "imod/typing/grid.py:GridCache.get_grid", "imod/typing/grid.py:GridCache.remove_first", "imod/typing/grid.py:GridCache.clear" ]
3
130
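The `added_functions` entry above lists a `GridCache` with `get_grid`, `remove_first`, and `clear`. A plausible shape for such a bounded cache of expensive grid conversions — a sketch only, not imod-python's actual implementation — is:

```python
class GridCache:
    """Bounded cache keyed by grid identity; evicts oldest entry when full."""

    def __init__(self, convert, max_cache_size: int = 5) -> None:
        self.convert = convert                # the expensive conversion
        self.max_cache_size = max_cache_size
        self.grid_cache: dict = {}

    def get_grid(self, key, source):
        if key not in self.grid_cache:
            if len(self.grid_cache) >= self.max_cache_size:
                self.remove_first()
            self.grid_cache[key] = self.convert(source)
        return self.grid_cache[key]

    def remove_first(self) -> None:
        # dicts preserve insertion order, so this evicts the oldest entry
        del self.grid_cache[next(iter(self.grid_cache))]

    def clear(self) -> None:
        self.grid_cache.clear()

CALLS = 0
def slow_convert(x):          # stand-in for xu.Ugrid2d.from_structured(...)
    global CALLS
    CALLS += 1
    return [v * 2 for v in x]

cache = GridCache(slow_convert, max_cache_size=2)
a = cache.get_grid("a", [1, 2])
a_again = cache.get_grid("a", [1, 2])   # cache hit, no conversion
b = cache.get_grid("b", [3])
c = cache.get_grid("c", [4])            # evicts "a", the oldest entry
```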
Project-MONAI/MONAI
Project-MONAI__MONAI-8123
052dbb4439165bfc1fc3132fadb0955587e4d30e
null
diff --git a/monai/networks/nets/segresnet_ds.py b/monai/networks/nets/segresnet_ds.py index 1ac5a79ee3..098e490511 100644 --- a/monai/networks/nets/segresnet_ds.py +++ b/monai/networks/nets/segresnet_ds.py @@ -508,8 +508,10 @@ def forward( # type: ignore outputs: list[torch.Tensor] = [] outputs_au...
Optimize the VISTA3D latency **Is your feature request related to a problem? Please describe.** VISTA3D is a brilliant model for segment-everything tasks on medical images. However, it suffers from a slowdown [issue](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d/docs#inference-gpu-benchmarks) w...
1,727,686,856,000
[]
Performance Issue
[ "monai/networks/nets/segresnet_ds.py:SegResNetDS2.forward", "monai/networks/nets/vista3d.py:ClassMappingClassify.forward" ]
[]
2
131
Qiskit/qiskit
Qiskit__qiskit-13052
9898979d2947340514adca7121ed8eaea75df77f
null
diff --git a/crates/accelerate/src/filter_op_nodes.rs b/crates/accelerate/src/filter_op_nodes.rs new file mode 100644 index 000000000000..7c41391f3788 --- /dev/null +++ b/crates/accelerate/src/filter_op_nodes.rs @@ -0,0 +1,63 @@ +// This code is part of Qiskit. +// +// (C) Copyright IBM 2024 +// +// This code is licens...
Port `FilterOpNodes` to Rust Port `FilterOpNodes` to Rust
1,724,882,933,000
[ "performance", "Changelog: None", "Rust", "mod: transpiler" ]
Feature Request
[ "qiskit/transpiler/passes/utils/filter_op_nodes.py:FilterOpNodes.run" ]
[]
1
133
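`FilterOpNodes` is a transpiler pass that drops DAG op nodes failing a user-supplied predicate; the port moves that loop into Rust. The pass's observable behaviour can be sketched with plain objects (a toy model, not Qiskit's real `DAGCircuit`):

```python
from typing import Callable, List, Optional

class OpNode:
    """Toy stand-in for a DAG operation node."""
    def __init__(self, name: str, label: Optional[str] = None) -> None:
        self.name = name
        self.label = label

def filter_op_nodes(nodes: List[OpNode],
                    predicate: Callable[[OpNode], bool]) -> List[OpNode]:
    # keep only the nodes the predicate accepts, mirroring what the pass
    # does while iterating over the circuit's op nodes
    return [node for node in nodes if predicate(node)]

circuit = [OpNode("h"), OpNode("barrier", label="drop_me"), OpNode("cx")]
kept = filter_op_nodes(circuit, lambda node: node.label != "drop_me")
```

A common use of the real pass is exactly this: stripping labeled barriers or other marker ops before further optimization.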
ivadomed/ivadomed
ivadomed__ivadomed-1081
dee79e6bd90b30095267a798da99c4033cc5991b
null
diff --git a/docs/source/configuration_file.rst b/docs/source/configuration_file.rst index 8e9169961..ab31be8a5 100644 --- a/docs/source/configuration_file.rst +++ b/docs/source/configuration_file.rst @@ -2278,8 +2278,28 @@ Postprocessing Evaluation Parameters --------------------- -Dict. Parameters to get object d...
Time issue when running `--test` on large microscopy images **Disclaimer** This issue is based on preliminary observations and does not contain a lot of details/examples at the moment. ## Issue description I tried running the `--test` command on large microscopy images on `rosenberg`, and it takes a very long ti...
Update: I ran the same test again to gather a bit more detail. It turns out the `evaluate` function took ~5h30min to compute metrics on my 2 images 😬. ![eval_om_02](https://user-images.githubusercontent.com/54086142/139498300-1c13bd3f-72af-4ce2-b0d7-daa66aa4f9f5.png) I also ran the command with a cProfile (...
1,645,136,481,000
[ "bug" ]
Performance Issue
[ "ivadomed/evaluation.py:evaluate", "ivadomed/evaluation.py:Evaluation3DMetrics.__init__", "ivadomed/evaluation.py:Evaluation3DMetrics.get_ltpr", "ivadomed/evaluation.py:Evaluation3DMetrics.get_lfdr", "ivadomed/main.py:run_command", "ivadomed/metrics.py:get_metric_fns" ]
[]
6
134
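The hints above mention re-running the slow `evaluate` call under cProfile. For reference, a self-contained way to profile a function and capture the hottest calls as text (the `evaluate_metrics` body is a stand-in, not ivadomed code):

```python
import cProfile
import io
import pstats

def evaluate_metrics(n: int) -> float:
    # stand-in for the slow per-object metric computation in evaluation
    total = 0.0
    for i in range(n):
        total += (i % 7) * 0.5
    return total

profiler = cProfile.Profile()
profiler.enable()
result = evaluate_metrics(100_000)
profiler.disable()

# render the top 5 functions by cumulative time into a string
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

Sorting by `cumulative` is what surfaces wrappers like `evaluate` that spend their time in callees, which is typically where a 5h30min run hides its cost.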
zulip/zulip
zulip__zulip-31168
3d58a7ec0455ba609955c13470d31854687eff7a
null
diff --git a/zerver/lib/users.py b/zerver/lib/users.py index 3b00d4187ff2d..cb24813fb1a6a 100644 --- a/zerver/lib/users.py +++ b/zerver/lib/users.py @@ -10,6 +10,7 @@ from django.conf import settings from django.core.exceptions import ValidationError from django.db.models import Q, QuerySet +from django.db.models.fu...
diff --git a/zerver/tests/test_subs.py b/zerver/tests/test_subs.py index f4a3d0b3be9bc..da7dfe34e1660 100644 --- a/zerver/tests/test_subs.py +++ b/zerver/tests/test_subs.py @@ -2684,7 +2684,7 @@ def test_realm_admin_remove_multiple_users_from_stream(self) -> None: for name in ["cordelia", "prospero", "iago...
Make creating streams faster in large organizations In large organizations (like chat.zulip.org), creating a stream can be very slow (seconds to tens of seconds). We should fix this. It should be possible to reproduce this by creating 20K users with `manage.py populate_db --extra-users` in a development environment....
Hello @zulip/server-streams members, this issue was labeled with the "area: stream settings" label, so you may want to check it out! <!-- areaLabelAddition --> ...
1,722,342,135,000
[ "area: channel settings", "priority: high", "size: XL", "post release", "area: performance" ]
Performance Issue
[ "zerver/views/streams.py:principal_to_user_profile", "zerver/views/streams.py:remove_subscriptions_backend", "zerver/views/streams.py:add_subscriptions_backend" ]
[ "zerver/lib/users.py:bulk_access_users_by_email", "zerver/lib/users.py:bulk_access_users_by_id", "zerver/views/streams.py:bulk_principals_to_user_profiles" ]
3
135
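The zulip fix replaces per-principal lookups with `bulk_access_users_by_id` / `bulk_access_users_by_email`, i.e. one query for the whole batch instead of one per user. The pattern, sketched against a plain dict standing in for the database (not Zulip's actual Django code):

```python
QUERIES = 0  # simulated database round-trips

def bulk_access_users_by_id(db: dict, user_ids: list) -> dict:
    """Fetch all requested users in a single batch query."""
    global QUERIES
    QUERIES += 1  # one round-trip for the whole batch
    missing = [uid for uid in user_ids if uid not in db]
    if missing:
        # reject the whole request if any principal is unknown
        raise KeyError(f"unknown users: {missing}")
    return {uid: db[uid] for uid in user_ids}

users_table = {1: "iago", 2: "cordelia", 3: "prospero"}
profiles = bulk_access_users_by_id(users_table, [1, 3])
```

With 20K subscribers this turns O(n) round-trips into one, which is the difference between tens of seconds and sub-second stream creation.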
vllm-project/vllm
vllm-project__vllm-7209
9855aea21b6aec48b12cef3a1614e7796b970a73
null
diff --git a/vllm/core/evictor.py b/vllm/core/evictor.py index ed7e06cab2996..44adc4158abec 100644 --- a/vllm/core/evictor.py +++ b/vllm/core/evictor.py @@ -1,6 +1,7 @@ import enum +import heapq from abc import ABC, abstractmethod -from typing import OrderedDict, Tuple +from typing import Dict, List, Tuple class...
[Bug]: Prefix Caching in BlockSpaceManagerV1 and BlockSpaceManagerV2 Increases Time to First Token (TTFT) and Slows Down System ### Your current environment ```text PyTorch version: 2.3.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64...
I hit the same problem when enabling prefix caching in BlockSpaceManagerV2 ...
1,722,960,271,000
[ "ready" ]
Performance Issue
[ "vllm/core/evictor.py:LRUEvictor.__init__", "vllm/core/evictor.py:LRUEvictor.evict", "vllm/core/evictor.py:LRUEvictor.add" ]
[ "vllm/core/evictor.py:LRUEvictor._cleanup_if_necessary", "vllm/core/evictor.py:LRUEvictor._cleanup" ]
3
136
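The vLLM patch above swaps the `OrderedDict`-based evictor for a `heapq` priority queue with lazy deletion of stale entries: updated blocks are re-pushed rather than reordered, and outdated heap entries are skipped at eviction time. A runnable sketch of that approach (simplified; not vLLM's exact class):

```python
import heapq

class LRUEvictor:
    """LRU eviction via a min-heap of (last_accessed, block_id) entries."""

    CLEANUP_THRESHOLD = 50  # compact when the heap far exceeds the live set

    def __init__(self) -> None:
        self.free_table: dict = {}      # block_id -> latest last_accessed
        self.priority_queue: list = []  # may contain stale entries

    def add(self, block_id: int, last_accessed: float) -> None:
        self.free_table[block_id] = last_accessed
        heapq.heappush(self.priority_queue, (last_accessed, block_id))
        self._cleanup_if_necessary()

    def evict(self) -> int:
        while self.priority_queue:
            last_accessed, block_id = heapq.heappop(self.priority_queue)
            # skip entries that were updated or removed after being pushed
            if self.free_table.get(block_id) == last_accessed:
                del self.free_table[block_id]
                return block_id
        raise ValueError("no blocks to evict")

    def _cleanup_if_necessary(self) -> None:
        if len(self.priority_queue) > self.CLEANUP_THRESHOLD * max(len(self.free_table), 1):
            # rebuild the heap from the live entries only
            self.priority_queue = [(t, b) for b, t in self.free_table.items()]
            heapq.heapify(self.priority_queue)

evictor = LRUEvictor()
evictor.add(block_id=7, last_accessed=1.0)
evictor.add(block_id=9, last_accessed=2.0)
evictor.add(block_id=7, last_accessed=3.0)  # block 7 touched again
victim = evictor.evict()                    # evicts 9, the true LRU block
```

Both `add` and `evict` become O(log n) amortized, versus the O(n) scans that made TTFT degrade as the cached-block count grew.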
xCDAT/xcdat
xCDAT__xcdat-689
584fccee91f559089e4bb24c9cc45983b998bf4a
null
diff --git a/.vscode/xcdat.code-workspace b/.vscode/xcdat.code-workspace index 906eb68e..8eaa4e47 100644 --- a/.vscode/xcdat.code-workspace +++ b/.vscode/xcdat.code-workspace @@ -61,7 +61,7 @@ "configurations": [ { "name": "Python: Current File", - "type": "python",...
diff --git a/tests/test_temporal.py b/tests/test_temporal.py index 6e2d6049..e5489b1b 100644 --- a/tests/test_temporal.py +++ b/tests/test_temporal.py @@ -121,7 +121,7 @@ def test_averages_for_yearly_time_series(self): }, ) - assert result.identical(expected) + xr.testing.assert_id...
[Enhancement]: Temporal averaging performance ### Is your feature request related to a problem? This may not be a high-priority issue, but I think it is worthwhile to document here: when refactoring e3sm_diags with xcdat, I was using the temporal.climatology operation to get the annual cycle of a data stream whi...
Possible next steps: 1. Try `flox` with e3sm_diags and xCDAT temporal APIs 2. Reference #490 3. Analyze codebase for other bottlenecks besides grouping Regarding the results: Cases 2, 3, and 4 are identical, while the xcdat result from Case 1 is slightly off. ``` Case 1: [1.63433977e-08 1.73700556e-08 2.73745702e-08 3.220...
1,724,972,659,000
[ "type: enhancement" ]
Performance Issue
[ "xcdat/temporal.py:TemporalAccessor.departures", "xcdat/temporal.py:TemporalAccessor._group_average", "xcdat/temporal.py:TemporalAccessor._get_weights", "xcdat/temporal.py:TemporalAccessor._group_data", "xcdat/temporal.py:TemporalAccessor._label_time_coords", "xcdat/temporal.py:TemporalAccessor._keep_weig...
[ "xcdat/temporal.py:TemporalAccessor._calculate_departures" ]
6
137
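The xCDAT thread suggests `flox`, which replaces per-group Python loops with vectorized reductions. The core idea — computing every group's mean in one pass instead of looping over groups — can be sketched with `np.bincount` (assumes NumPy is available; this is an illustration, not xcdat's code):

```python
import numpy as np

def grouped_mean(values: np.ndarray, group_ids: np.ndarray) -> np.ndarray:
    """Mean per group in one vectorized pass over the data."""
    sums = np.bincount(group_ids, weights=values)   # per-group sums
    counts = np.bincount(group_ids)                 # per-group sizes
    return sums / counts

# twelve "months" of data, grouped into four "seasons" of three months each
values = np.arange(12, dtype=float)        # 0.0 .. 11.0
seasons = np.repeat(np.arange(4), 3)       # [0,0,0,1,1,1,2,2,2,3,3,3]
means = grouped_mean(values, seasons)
```

Group labels only have to be mapped to integer ids once; after that, every reduction is a flat array pass, which is the same trick flox applies under xarray's groupby.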