Dataset schema (column types and value ranges as reported by the dataset viewer):

- `repo`: string, 5–51 chars
- `instance_id`: string, 11–56 chars
- `base_commit`: string, 40 chars (full commit SHA)
- `patch`: string, 400–333k chars
- `test_patch`: string, 0–895k chars
- `problem_statement`: string, 27–55.6k chars
- `hints_text`: string, 0–72k chars
- `created_at`: int64, 1,447B–1,739B (millisecond epoch timestamps)
- `labels`: list, 0–7 items, nullable (⌀)
- `category`: string, 4 distinct values
- `edit_functions`: list, 1–10 items
- `added_functions`: list, 0–20 items
- `edit_functions_length`: int64, 1–10
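Each row below is one record under this schema, so records can be handled as plain dicts. A minimal sketch of grouping instances by their `category` label; the three sample records are illustrative, abbreviated from rows in this dump, and the `group_by_category` helper is mine, not part of the dataset:

```python
from collections import defaultdict

# Illustrative records mirroring the schema; values are abbreviated from
# rows in this dump, not the full dataset.
records = [
    {
        "repo": "duncanscanga/VDRS-Solutions",
        "instance_id": "duncanscanga__VDRS-Solutions-73",
        "category": "Security Vulnerability",
        "edit_functions": ["app/models.py:register"],
    },
    {
        "repo": "scipy/scipy",
        "instance_id": "scipy__scipy-5647",
        "category": "Performance Issue",
        "edit_functions": ["benchmarks/benchmarks/spatial.py:Neighbors.setup"],
    },
    {
        "repo": "modin-project/modin",
        "instance_id": "modin-project__modin-3404",
        "category": "Feature Request",
        "edit_functions": ["modin/experimental/xgboost/xgboost.py:DMatrix.__init__"],
    },
]


def group_by_category(rows):
    """Bucket instance ids under their category label."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row["category"]].append(row["instance_id"])
    return dict(buckets)


print(group_by_category(records))
```

The grouping reads only `category` and `instance_id`, so it works unchanged on full records carrying all thirteen columns.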
duncanscanga/VDRS-Solutions | duncanscanga__VDRS-Solutions-73 | c97e24d072fe1bffe3dc9e3b363d40b725a70be9 | diff --git a/.gitignore b/.gitignore
index c5586c4..cc6983a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -23,6 +23,8 @@ docs/_build/
.vscode
db.sqlite
**/__pycache__/
+**/latest_logs
+**/archived_logs
# Coverage reports
htmlcov/
diff --git a/app/models.py b/app/models.py
index 5ae4e11..562268e 100644
--- a/app/mo... | diff --git a/app_test/frontend/test_listing.py b/app_test/frontend/test_listing.py
index bc47c57..c6a75da 100644
--- a/app_test/frontend/test_listing.py
+++ b/app_test/frontend/test_listing.py
@@ -1,5 +1,4 @@
from seleniumbase import BaseCase
-
from app_test.conftest import base_url
from app.models import Booking, L... | Test SQL Injection: Register Function (name, email)
| 1,668,638,393,000 | [] | Security Vulnerability | [
"app/models.py:register"
] | [] | 1 | |
scipy/scipy | scipy__scipy-5647 | ccb338fa0521070107200a9ebcab67905b286a7a | diff --git a/benchmarks/benchmarks/spatial.py b/benchmarks/benchmarks/spatial.py
index f3005f6d70bb..0825bd5dcfa8 100644
--- a/benchmarks/benchmarks/spatial.py
+++ b/benchmarks/benchmarks/spatial.py
@@ -106,27 +106,74 @@ class Neighbors(Benchmark):
[1, 2, np.inf],
[0.2, 0.5],
BOX_SIZES, LEAF_... | diff --git a/scipy/spatial/tests/test_kdtree.py b/scipy/spatial/tests/test_kdtree.py
index e5f85440fa86..6c1a51772e20 100644
--- a/scipy/spatial/tests/test_kdtree.py
+++ b/scipy/spatial/tests/test_kdtree.py
@@ -4,13 +4,17 @@
from __future__ import division, print_function, absolute_import
from numpy.testing import ... | Massive performance regression in cKDTree.query with L_inf distance in scipy 0.17
Hi,
While running some of my own code, I noticed a major slowdown in scipy 0.17 which I tracked to the cKDtree.query call with L_inf distance.
I did a git bisect and tracked the commit that caused the issue -- the commit is :
58a10c26883... | 1,451,630,231,000 | [
"enhancement",
"scipy.spatial"
] | Performance Issue | [
"benchmarks/benchmarks/spatial.py:Neighbors.setup",
"benchmarks/benchmarks/spatial.py:Neighbors.time_sparse_distance_matrix",
"benchmarks/benchmarks/spatial.py:Neighbors.time_count_neighbors"
] | [
"benchmarks/benchmarks/spatial.py:CNeighbors.setup",
"benchmarks/benchmarks/spatial.py:CNeighbors.time_count_neighbors_deep",
"benchmarks/benchmarks/spatial.py:CNeighbors.time_count_neighbors_shallow"
] | 3 | |
isledecomp/reccmp | isledecomp__reccmp-29 | 2bc2a068cfa33098699bf838217a4b02114a3eb0 | diff --git a/reccmp/isledecomp/compare/db.py b/reccmp/isledecomp/compare/db.py
index fff032e3..894f2316 100644
--- a/reccmp/isledecomp/compare/db.py
+++ b/reccmp/isledecomp/compare/db.py
@@ -24,8 +24,6 @@
value text,
primary key (addr, name)
) without rowid;
-
- CREATE INDEX `symbols_na` ON `s... | Alert to ambiguous name match
Re: isledecomp/isle#1192
When matching a comment annotation by its name[^1], we pick whichever entry is available first based on insertion order. [^2] This allows us to match multiple entries that have the same name. For example: `_Construct`. The order of match _attempts_ depends on th... | 1,733,601,445,000 | [] | Feature Request | [
"reccmp/isledecomp/compare/db.py:MatchInfo.get",
"reccmp/isledecomp/compare/db.py:CompareDb.__init__",
"reccmp/isledecomp/compare/db.py:CompareDb._find_potential_match",
"reccmp/isledecomp/compare/db.py:CompareDb._match_on",
"reccmp/isledecomp/compare/db.py:CompareDb.match_vtable"
] | [
"reccmp/isledecomp/compare/db.py:CompareDb.search_symbol",
"reccmp/isledecomp/compare/db.py:CompareDb.search_name"
] | 5 | ||
plone/plone.restapi | plone__plone.restapi-859 | 32a80ab4850252f1d5b9d1b3c847e9e32d3aa45e | diff --git a/news/857.bugfix b/news/857.bugfix
new file mode 100644
index 0000000000..acfdabf048
--- /dev/null
+++ b/news/857.bugfix
@@ -0,0 +1,2 @@
+Sharing POST: Limit roles to ones the user is allowed to delegate.
+[lgraf]
\ No newline at end of file
diff --git a/src/plone/restapi/deserializer/local_roles.py b/src/p... | diff --git a/src/plone/restapi/tests/test_content_local_roles.py b/src/plone/restapi/tests/test_content_local_roles.py
index a26e6f3877..bd27f9f48a 100644
--- a/src/plone/restapi/tests/test_content_local_roles.py
+++ b/src/plone/restapi/tests/test_content_local_roles.py
@@ -6,6 +6,8 @@
from plone.app.testing import SI... | Privilege escalation in @sharing endpoint (PloneHotfix20200121)
- https://github.com/plone/Products.CMFPlone/issues/3021
- https://plone.org/security/hotfix/20200121/privilege-escalation-when-plone-restapi-is-installed
`plone.restapi.deserializer.local_roles.DeserializeFromJson` has a weakness.
This endpoint was i... | 1,579,683,301,000 | [] | Security Vulnerability | [
"src/plone/restapi/deserializer/local_roles.py:DeserializeFromJson.__call__"
] | [] | 1 | |
openwisp/openwisp-users | openwisp__openwisp-users-286 | 6c083027ed7bb467351f8f05ac201fa60d9bdc24 | diff --git a/openwisp_users/admin.py b/openwisp_users/admin.py
index 8619e39d..f0cb45aa 100644
--- a/openwisp_users/admin.py
+++ b/openwisp_users/admin.py
@@ -318,7 +318,7 @@ def get_readonly_fields(self, request, obj=None):
# do not allow operators to escalate their privileges
if not request.user.is_... | diff --git a/openwisp_users/tests/test_admin.py b/openwisp_users/tests/test_admin.py
index f6e4e767..0c4157d3 100644
--- a/openwisp_users/tests/test_admin.py
+++ b/openwisp_users/tests/test_admin.py
@@ -261,6 +261,21 @@ def test_admin_change_non_superuser_readonly_fields(self):
html = 'class="readonly"><im... | [bug/security] Org managers can escalate themselves to superusers
**Expected behavior:** org manager (non superuser) should not be able to edit permissions, nor flag themselves or others are superusers.
**Actual behavior**: org manager (non superuser) are able to flag themselves or others are superusers.
This is ... | 1,634,301,535,000 | [] | Security Vulnerability | [
"openwisp_users/admin.py:UserAdmin.get_readonly_fields"
] | [] | 1 | |
rucio/rucio | rucio__rucio-4930 | 92474868cec7d64b83b46b451ecdc4a294ef2711 | diff --git a/lib/rucio/web/ui/flask/common/utils.py b/lib/rucio/web/ui/flask/common/utils.py
index fd905830e2..fd62365f6d 100644
--- a/lib/rucio/web/ui/flask/common/utils.py
+++ b/lib/rucio/web/ui/flask/common/utils.py
@@ -94,8 +94,6 @@ def html_escape(s, quote=True):
# catch these from the webpy input() storage objec... | Privilege escalation issue in the Rucio WebUI
Motivation
----------
The move to FLASK as a webui backend introduced a cookie leak in the auth_token workflows of the webui. This potentially leak the contents of cookies to other sessions. Impact is that Rucio authentication tokens are leaked to other users accessing th... | 1,634,836,642,000 | [] | Security Vulnerability | [
"lib/rucio/web/ui/flask/common/utils.py:add_cookies",
"lib/rucio/web/ui/flask/common/utils.py:redirect_to_last_known_url",
"lib/rucio/web/ui/flask/common/utils.py:finalize_auth",
"lib/rucio/web/ui/flask/common/utils.py:authenticate"
] | [] | 4 | ||
AzureAD/microsoft-authentication-library-for-python | AzureAD__microsoft-authentication-library-for-python-407 | 3062770948f1961a13767ee85dd7ba664440feb3 | diff --git a/msal/application.py b/msal/application.py
index 686cc95d..3651d216 100644
--- a/msal/application.py
+++ b/msal/application.py
@@ -170,6 +170,7 @@ def __init__(
# This way, it holds the same positional param place for PCA,
# when we would eventually want to add this feature... | Instance metadata caching
This issue is inspired by an improvement made in MSAL .Net 4.1:
* documented in [its release blog post here](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/msal-net-4.1#getaccounts-and-acquiretokensilent-are-now-less-network-chatty),
* its original issue [this on... | 1,631,732,455,000 | [] | Performance Issue | [
"msal/application.py:ClientApplication.__init__"
] | [] | 1 | ||
home-assistant/core | home-assistant__core-15182 | 4fbe3bb07062b81ac4562d4080550d92cbd47828 | diff --git a/homeassistant/components/http/__init__.py b/homeassistant/components/http/__init__.py
index d8c877e83a205..f769d2bc4ffba 100644
--- a/homeassistant/components/http/__init__.py
+++ b/homeassistant/components/http/__init__.py
@@ -180,7 +180,7 @@ def __init__(self, hass, api_password,
middlewares... | diff --git a/tests/components/http/test_auth.py b/tests/components/http/test_auth.py
index a44d17d513db9..dd8b2cd35c46c 100644
--- a/tests/components/http/test_auth.py
+++ b/tests/components/http/test_auth.py
@@ -41,7 +41,7 @@ def app():
"""Fixture to setup a web.Application."""
app = web.Application()
a... | Security issue, unauthorized trusted access by spoofing x-forwarded-for header
**Home Assistant release with the issue:**
0.68.1 but probably older versions too
**Last working Home Assistant release (if known):**
N/A
**Operating environment (Hass.io/Docker/Windows/etc.):**
Hassbian but should be applicable to ... | This used to work (last known working version 0.62)...but after upgrading recently and checking it in the latest dev version I can confirm that it is an issue
The thing is that I didnt even hacked the headers and still no password is asked when connecting from external network
@rofrantz if you are not spoofing the hea... | 1,530,140,910,000 | [
"core",
"cla-signed",
"integration: http"
] | Security Vulnerability | [
"homeassistant/components/http/__init__.py:HomeAssistantHTTP.__init__",
"homeassistant/components/http/real_ip.py:setup_real_ip"
] | [] | 2 |
jazzband/django-two-factor-auth | jazzband__django-two-factor-auth-390 | 430ef08a2c6cec4e0ce6cfa4a08686d4e62f73b7 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index d242f7369..45b1cc581 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,8 @@
### Changed
- The templates are now based on Bootstrap 4.
+- `DisableView` now checks user has verified before disabling two-factor on
+ their account
## 1.12 - 2020-07-08
### Added
... | diff --git a/tests/test_views_disable.py b/tests/test_views_disable.py
index 69a74a8ff..74b963f91 100644
--- a/tests/test_views_disable.py
+++ b/tests/test_views_disable.py
@@ -2,7 +2,7 @@
from django.shortcuts import resolve_url
from django.test import TestCase
from django.urls import reverse
-from django_otp impor... | DisableView doesn't require request user to be verified
`DisableView` only checks that the request user *has* a valid, not that the user is verified.
It is possible that if a site has been misconfigured and allows users to bypass django-two-factor-auth's login view (e.g. the admin site hasn't been configured correct... | I think this is a fairly minor security issue as it relies on things beyond our control going wrong first and no secrets are exposed. I should have PR ready by the end of the day.
I think this is a fairly minor security issue as it relies on things beyond our control going wrong first and no secrets are exposed. I shou... | 1,601,579,910,000 | [] | Security Vulnerability | [
"two_factor/views/profile.py:DisableView.get",
"two_factor/views/profile.py:DisableView.form_valid"
] | [
"two_factor/views/profile.py:DisableView.dispatch"
] | 2 |
netbox-community/netbox | netbox-community__netbox-7676 | 8f1acb700d72467ffe7ae5c8502422a1eac0693d | diff --git a/netbox/netbox/authentication.py b/netbox/netbox/authentication.py
index 653fad3b055..a67ec451d1e 100644
--- a/netbox/netbox/authentication.py
+++ b/netbox/netbox/authentication.py
@@ -34,7 +34,7 @@ def get_object_permissions(self, user_obj):
object_permissions = ObjectPermission.objects.filter(
... | gunicorn spends 80% of each request in get_object_permissions()
### NetBox version
v3.0.3
### Python version
3.9
### Steps to Reproduce
1. Configure Netbox with AD authentication (a few thousands users + heavy nesting)
2. Create a bunch of devices (~500 servers in my case, ~500 cables, ~50 VLANs, ~500 I... | Nice sleuthing! This looks related to (if not the same root issue as) #6926. Do you agree?
Also, can you confirm whether this is reproducible using local authentication, or only as an LDAP user?
> Nice sleuthing! This looks related to (if not the same root issue as) #6926. Do you agree?
I'm not sure, but looks re... | 1,635,451,702,000 | [] | Performance Issue | [
"netbox/netbox/authentication.py:ObjectPermissionMixin.get_object_permissions"
] | [] | 1 | |
Innopoints/backend | Innopoints__backend-124 | c1779409130762a141710875bd1d758c8671b1d8 | diff --git a/innopoints/blueprints.py b/innopoints/blueprints.py
index dc520e8..f5d350e 100644
--- a/innopoints/blueprints.py
+++ b/innopoints/blueprints.py
@@ -5,6 +5,8 @@
from flask import Blueprint
+from innopoints.core.helpers import csrf_protect
+
def _factory(partial_module_string, url_prefix='/'):
"... | Implement CSRF protection
| 1,588,369,030,000 | [] | Security Vulnerability | [
"innopoints/views/account.py:get_info",
"innopoints/views/authentication.py:authorize",
"innopoints/views/authentication.py:login_cheat"
] | [
"innopoints/core/helpers.py:csrf_protect",
"innopoints/schemas/account.py:AccountSchema.get_csrf_token"
] | 3 | ||
zulip/zulip | zulip__zulip-14091 | cb85763c7870bd6c83e84e4e7bca0900810457e9 | diff --git a/tools/coveragerc b/tools/coveragerc
index b47d57aee05af..24b0536d71b23 100644
--- a/tools/coveragerc
+++ b/tools/coveragerc
@@ -15,6 +15,8 @@ exclude_lines =
raise UnexpectedWebhookEventType
# Don't require coverage for blocks only run when type-checking
if TYPE_CHECKING:
+ # Don't requir... | diff --git a/tools/test-backend b/tools/test-backend
index 83a1fb73a9f21..e3ce1c4daee3f 100755
--- a/tools/test-backend
+++ b/tools/test-backend
@@ -93,7 +93,6 @@ not_yet_fully_covered = {path for target in [
'zerver/lib/parallel.py',
'zerver/lib/profile.py',
'zerver/lib/queue.py',
- 'zerver/lib/rate_... | Optimize rate_limiter performance for get_events queries
See https://chat.zulip.org/#narrow/stream/3-backend/topic/profiling.20get_events/near/816860 for profiling details, but basically, currently a get_events request spends 1.4ms/request talking to redis for our rate limiter, which is somewhere between 15% and 50% of... | Hello @zulip/server-production members, this issue was labeled with the "area: production" label, so you may want to check it out!
<!-- areaLabelAddition -->
@zulipbot claim
Hello @zulip/server-production members, this issue was labeled with the "area: production" label, so you may want to check it out!
<!-- areaLabe... | 1,583,193,587,000 | [
"size: XL",
"has conflicts"
] | Performance Issue | [
"zerver/lib/rate_limiter.py:RateLimitedObject.rate_limit",
"zerver/lib/rate_limiter.py:RateLimitedObject.__init__",
"zerver/lib/rate_limiter.py:RateLimitedObject.max_api_calls",
"zerver/lib/rate_limiter.py:RateLimitedObject.max_api_window",
"zerver/lib/rate_limiter.py:RateLimiterBackend.block_access",
"ze... | [
"zerver/lib/rate_limiter.py:RateLimitedObject.get_rules",
"zerver/lib/rate_limiter.py:TornadoInMemoryRateLimiterBackend._garbage_collect_for_rule",
"zerver/lib/rate_limiter.py:TornadoInMemoryRateLimiterBackend.need_to_limit",
"zerver/lib/rate_limiter.py:TornadoInMemoryRateLimiterBackend.get_api_calls_left",
... | 8 |
modin-project/modin | modin-project__modin-3404 | 41213581b974760928ed36f05c479937dcdfea55 | diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 8dd750c75be..85cbe64b76a 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -447,6 +447,8 @@ jobs:
- run: pytest -n 2 modin/experimental/xgboost/test/test_default.py
- run: pytest -n 2 modin/experimental/xgboost/te... | diff --git a/modin/experimental/xgboost/test/test_dmatrix.py b/modin/experimental/xgboost/test/test_dmatrix.py
new file mode 100644
index 00000000000..c9498131ef4
--- /dev/null
+++ b/modin/experimental/xgboost/test/test_dmatrix.py
@@ -0,0 +1,160 @@
+# Licensed to Modin Development Team under one or more contributor lic... | Expansion DMatrix on 6 new parameters
Need add new parameters and functions in class DMatrix
Support parameters ``missing``, ``feature_names``, ``feature_types``, ``feature_weights``, ``silent``, ``enable_categorical``.
| 1,631,033,063,000 | [] | Feature Request | [
"modin/experimental/xgboost/xgboost.py:DMatrix.__init__",
"modin/experimental/xgboost/xgboost.py:Booster.predict",
"modin/experimental/xgboost/xgboost_ray.py:ModinXGBoostActor._get_dmatrix",
"modin/experimental/xgboost/xgboost_ray.py:ModinXGBoostActor.set_train_data",
"modin/experimental/xgboost/xgboost_ray... | [
"modin/experimental/xgboost/xgboost.py:DMatrix.get_dmatrix_params",
"modin/experimental/xgboost/xgboost.py:DMatrix.feature_names",
"modin/experimental/xgboost/xgboost.py:DMatrix.feature_types",
"modin/experimental/xgboost/xgboost.py:DMatrix.num_row",
"modin/experimental/xgboost/xgboost.py:DMatrix.num_col",
... | 8 | |
pandas-dev/pandas | pandas-dev__pandas-35029 | 65319af6e563ccbb02fb5152949957b6aef570ef | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 6aff4f4bd41e2..b1257fe893804 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -8,6 +8,15 @@ including other versions of pandas.
{{ header }}
+.. warning::
+
+ Previously, the default argument ``e... | diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index c582a0fa23577..98a55ae39bd77 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -577,6 +577,10 @@ def test_date_conversion_overflow(self, read_ext):
if pd.read_excel.k... | Deprecate using `xlrd` engine in favor of openpyxl
xlrd is unmaintained and the previous maintainer has asked us to move towards openpyxl. xlrd works now, but *might* have some issues when Python 3.9 or later gets released and changes some elements of the XML parser, as default usage right now throws a `PendingDeprecat... | @WillAyd Should we start with adding `openpyxl` as an engine in [pandas.read_excel](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html#pandas-read-excel).
Happy to contribute a PR for it.
It is already available just need to make it default over time, so want to raise a FutureWarning w... | 1,593,268,003,000 | [
"IO Excel",
"Blocker",
"Deprecate"
] | Feature Request | [
"pandas/io/excel/_base.py:ExcelFile.__init__"
] | [] | 1 |
pyca/pyopenssl | pyca__pyopenssl-578 | f189de9becf14840712b01877ebb1f08c26f894c | diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 19cebf1e8..56c3c74c8 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -25,6 +25,10 @@ Changes:
- Added ``OpenSSL.X509Store.set_time()`` to set a custom verification time when verifying certificate chains.
`#567 <https://github.com/pyca/pyopenssl/pull/567>`_
+- Cha... | Poor performance on Connection.recv() with large values of bufsiz.
Discovered while investigating kennethreitz/requests#3729.
When allocating a buffer using CFFI's `new` method (as `Connection.recv` does via `_ffi.new("char[]", bufsiz)`, CFFI kindly zeroes that buffer for us. That means this call has a performance c... | So, Armin has pointed out that this actually already exists: `ffi.new_allocator(should_clear_after_alloc=False)("char[]", bufsiz)` should be a solution to the zeroing out of memory. I'm going to investigate it now.
Excellently, that does work. I'll start working on a PR for this tomorrow. | 1,480,282,277,000 | [] | Performance Issue | [
"src/OpenSSL/SSL.py:Connection.recv",
"src/OpenSSL/SSL.py:Connection.recv_into",
"src/OpenSSL/SSL.py:Connection.bio_read",
"src/OpenSSL/SSL.py:Connection.server_random",
"src/OpenSSL/SSL.py:Connection.client_random",
"src/OpenSSL/SSL.py:Connection.master_key",
"src/OpenSSL/SSL.py:Connection._get_finishe... | [] | 7 | |
matchms/matchms-backup | matchms__matchms-backup-187 | da8debd649798367ed29102b2965a1ab0e6f4938 | diff --git a/environment.yml b/environment.yml
index b38d68a..f46d4a2 100644
--- a/environment.yml
+++ b/environment.yml
@@ -7,7 +7,6 @@ dependencies:
- gensim==3.8.0
- matplotlib==3.2.1
- numpy==1.18.1
- - openbabel==3.0.0
- pip==20.0.2
- pyteomics==4.2
- python==3.7
diff --git a/matchms/filtering/d... | diff --git a/tests/test_utils.py b/tests/test_utils.py
index 1918816..7816846 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -1,18 +1,81 @@
-from matchms.utils import is_valid_inchikey
+from matchms.utils import mol_converter
+from matchms.utils import is_valid_inchi, is_valid_inchikey, is_valid_smiles
+... | openbabel has contagious license
https://tldrlegal.com/license/gnu-general-public-license-v2
| Openbabel seems like a lot of pain. So far I didn't get the time to look at it enough. But I hope that I can soon try to look for alternatives. There are certainly several potential alternatives, but we would have to test them on a large number of conversions to find if they have a similar or better success rate.
So... | 1,588,796,120,000 | [] | Security Vulnerability | [
"matchms/filtering/derive_inchi_from_smiles.py:derive_inchi_from_smiles",
"matchms/filtering/derive_inchikey_from_inchi.py:derive_inchikey_from_inchi",
"matchms/filtering/derive_smiles_from_inchi.py:derive_smiles_from_inchi",
"matchms/utils.py:mol_converter",
"matchms/utils.py:is_valid_inchi"
] | [
"matchms/utils.py:convert_smiles_to_inchi",
"matchms/utils.py:convert_inchi_to_smiles",
"matchms/utils.py:convert_inchi_to_inchikey"
] | 5 |
latchset/jwcrypto | latchset__jwcrypto-195 | e0249b1161cb8ef7a3a9910948103b816bc0a299 | diff --git a/README.md b/README.md
index ccd39c4..1b886cd 100644
--- a/README.md
+++ b/README.md
@@ -16,3 +16,23 @@ Documentation
=============
http://jwcrypto.readthedocs.org
+
+Deprecation Notices
+===================
+
+2020.12.11: The RSA1_5 algorithm is now considered deprecated due to numerous
+implementation... | diff --git a/jwcrypto/tests-cookbook.py b/jwcrypto/tests-cookbook.py
index 40b8d36..dd6f36e 100644
--- a/jwcrypto/tests-cookbook.py
+++ b/jwcrypto/tests-cookbook.py
@@ -1110,7 +1110,8 @@ def test_5_1_encryption(self):
plaintext = Payload_plaintext_5
protected = base64url_decode(JWE_Protected_Header_5_... | The pyca/cryptography RSA PKCS#1 v1.5 is unsafe, making the users of it vulnerable to Bleichenbacher attacks
The API provided by pyca/cryptography is not secure, as documented in their docs:
https://github.com/pyca/cryptography/commit/8686d524b7b890bcbe6132b774bd72a3ae37cf0d
As far as I can tell, it's one of the AP... | At the very least I will add documentation about the problem.
Should we also disable RSA1_5 by default ? At least until pyca provides some option ?
> At the very least I will add documentation about the problem.
and what users that need this mechanism for interoperability are expected to do? rewrite their appli... | 1,607,722,553,000 | [] | Security Vulnerability | [
"jwcrypto/jwt.py:JWT.make_encrypted_token"
] | [] | 1 |
django/django | django__django-5605 | 6d9c5d46e644a8ef93b0227fc710e09394a03992 | diff --git a/django/middleware/csrf.py b/django/middleware/csrf.py
index ba9f63ec3d91..276b31e10fc2 100644
--- a/django/middleware/csrf.py
+++ b/django/middleware/csrf.py
@@ -8,6 +8,7 @@
import logging
import re
+import string
from django.conf import settings
from django.urls import get_callable
@@ -16,8 +17,10... | diff --git a/tests/csrf_tests/test_context_processor.py b/tests/csrf_tests/test_context_processor.py
index 270b3e4771ab..5db0116db0e2 100644
--- a/tests/csrf_tests/test_context_processor.py
+++ b/tests/csrf_tests/test_context_processor.py
@@ -1,6 +1,5 @@
-import json
-
from django.http import HttpRequest
+from django.... | Prevent repetitive output to counter BREACH-type attacks
Description
Currently the CSRF middleware sets the cookie value to the plain value of the token. This makes it possible to try to guess the token by trying longer and longer prefix matches. An effective countermeasure would be to prevent repetitive output from... | ['"Instead of delivering the credential as a 32-byte string, it should be delivered as a 64-byte string. The first 32 bytes are a one-time pad, and the second 32 bytes are encoded using the XOR algorithm between the pad and the "real" token." \u200bhttps://github.com/rails/rails/pull/11729 via \u200bArs Technica', 1375... | 1,446,914,517,000 | [] | Security Vulnerability | [
"django/middleware/csrf.py:_get_new_csrf_key",
"django/middleware/csrf.py:get_token",
"django/middleware/csrf.py:rotate_token",
"django/middleware/csrf.py:_sanitize_token",
"django/middleware/csrf.py:CsrfViewMiddleware.process_view",
"django/middleware/csrf.py:CsrfViewMiddleware.process_response"
] | [
"django/middleware/csrf.py:_get_new_csrf_string",
"django/middleware/csrf.py:_salt_cipher_secret",
"django/middleware/csrf.py:_unsalt_cipher_token",
"django/middleware/csrf.py:_get_new_csrf_token",
"django/middleware/csrf.py:_compare_salted_tokens"
] | 6 |
jax-ml/jax | jax-ml__jax-25114 | df6758f021167b1c0b85f5d6e4986f6f0d2a1169 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index ce8b040439c0..b5758d107077 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -79,6 +79,7 @@ When releasing, please add the new-release-boilerplate to docs/pallas/CHANGELOG.
* {func}`jax.lax.linalg.eig` and the related `jax.numpy` functions
({func}`jax.numpy.linalg.ei... | Allow specifying `exec_time_optimization_effort` and `memory_fitting_effort` via CLI.
New compiler options (see title) have recently been [exposed](https://github.com/jax-ml/jax/issues/24625) in JAX.
To use them users have to change their Python code like this:
```
jax.jit(f, compiler_options={
"exec_time_opt... | 1,732,629,502,000 | [
"pull ready"
] | Feature Request | [
"jax/_src/compiler.py:get_compile_options"
] | [] | 1 | ||
justin13601/ACES | justin13601__ACES-145 | 89291a01522d2c23e45aa0ee6a09c32645f4d1b4 | diff --git a/.github/workflows/code-quality-main.yaml b/.github/workflows/code-quality-main.yaml
index d3369699..691b47c7 100644
--- a/.github/workflows/code-quality-main.yaml
+++ b/.github/workflows/code-quality-main.yaml
@@ -13,10 +13,16 @@ jobs:
steps:
- name: Checkout
- uses: actions/checkout@v... | diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index c22ef161..4e51b115 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -17,16 +17,16 @@ jobs:
steps:
- name: Checkout
- uses: actions/checkout@v3
+ uses: actions/checkout@v4
- name... | Task definition matters in terms of memory usage
As suggested by @Oufattole, this kind of config seems to use excessive amounts of memory (more than 400 GB on MIMIC-IV in my case)

using regex mitigates this:
Basically, each time a predicate is defined in a configuration file, a column is created which corresponds to that pr... | 1,729,818,802,000 | [] | Performance Issue | [
"src/aces/config.py:TaskExtractorConfig.load",
"src/aces/config.py:TaskExtractorConfig._initialize_predicates",
"src/aces/predicates.py:get_predicates_df",
"src/aces/query.py:query"
] | [] | 4 |
cogent3/cogent3 | cogent3__cogent3-2138 | f816fab7fd1f72cb9cbf499b04b237f86f61b66e | diff --git a/src/cogent3/app/align.py b/src/cogent3/app/align.py
index 35d379900..0ba0df9e7 100644
--- a/src/cogent3/app/align.py
+++ b/src/cogent3/app/align.py
@@ -828,7 +828,7 @@ def cogent3_score(aln: AlignedSeqsType) -> float:
if aln.num_seqs == 1 or len(aln) == 0:
msg = "zero length" if len(aln) == 0... | diff --git a/tests/test_app/test_sample.py b/tests/test_app/test_sample.py
index eb1ca31dc..f0768b699 100644
--- a/tests/test_app/test_sample.py
+++ b/tests/test_app/test_sample.py
@@ -293,40 +293,6 @@ def test_fixedlength(self):
for name, seq in got.to_dict().items():
self.assertIn(seq, expect[na... | ENH: add a SequenceCollection.count_ambiguous_per_seq method
This is really valuable for identifying low quality sequences. It should also be added to the `drop_bad_seqs()` method.
This should be implemented with `cogent3.core.new_sequence.Sequence.count_ambiguous()` and `cogent3.core.new_alignment.SequenceCollectio... | 1,732,600,498,000 | [] | Feature Request | [
"src/cogent3/app/align.py:cogent3_score",
"src/cogent3/app/sample.py:omit_bad_seqs.__init__",
"src/cogent3/app/sample.py:omit_bad_seqs.main",
"src/cogent3/core/new_alignment.py:Alignment.count_gaps_per_seq"
] | [
"src/cogent3/core/new_alignment.py:SequenceCollection.count_ambiguous_per_seq",
"src/cogent3/core/new_alignment.py:Alignment.count_ambiguous_per_seq",
"src/cogent3/core/new_sequence.py:Sequence.count_ambiguous"
] | 4 | |
aio-libs/aiohttp | aio-libs__aiohttp-7829 | c0377bf80e72134ad7bf29aa847f0301fbdb130f | diff --git a/CHANGES/7829.misc b/CHANGES/7829.misc
new file mode 100644
index 00000000000..9eb060f4713
--- /dev/null
+++ b/CHANGES/7829.misc
@@ -0,0 +1,3 @@
+Improved URL handler resolution time by indexing resources in the UrlDispatcher.
+For applications with a large number of handlers, this should increase performan... | diff --git a/tests/test_urldispatch.py b/tests/test_urldispatch.py
index 5f5687eacc3..adb1e52e781 100644
--- a/tests/test_urldispatch.py
+++ b/tests/test_urldispatch.py
@@ -1264,10 +1264,17 @@ async def test_prefixed_subapp_overlap(app: Any) -> None:
subapp2.router.add_get("/b", handler2)
app.add_subapp("/ss"... | UrlDispatcher improvements
### Describe the bug
The UrlDispatcher currently has to do a linear search to dispatch urls which has an average time complexity of `O(n)`. This means that urls that are registered first can be found faster than urls that are registered later.
### To Reproduce
Results
```
% pyt... | 1,699,830,964,000 | [
"bot:chronographer:provided"
] | Performance Issue | [
"aiohttp/web_urldispatcher.py:PrefixedSubAppResource.__init__",
"aiohttp/web_urldispatcher.py:PrefixedSubAppResource.add_prefix",
"aiohttp/web_urldispatcher.py:PrefixedSubAppResource.resolve",
"aiohttp/web_urldispatcher.py:UrlDispatcher.resolve",
"aiohttp/web_urldispatcher.py:UrlDispatcher.__init__",
"aio... | [
"aiohttp/web_urldispatcher.py:PrefixedSubAppResource._add_prefix_to_resources",
"aiohttp/web_urldispatcher.py:UrlDispatcher._get_resource_index_key",
"aiohttp/web_urldispatcher.py:UrlDispatcher.index_resource",
"aiohttp/web_urldispatcher.py:UrlDispatcher.unindex_resource"
] | 6 | |
jupyterhub/oauthenticator | jupyterhub__oauthenticator-764 | f4da2e8eedeec009f2d1ac749b6da2ee160652b0 | diff --git a/oauthenticator/google.py b/oauthenticator/google.py
index f06f6aca..28b76d48 100644
--- a/oauthenticator/google.py
+++ b/oauthenticator/google.py
@@ -14,6 +14,7 @@
class GoogleOAuthenticator(OAuthenticator, GoogleOAuth2Mixin):
user_auth_state_key = "google_user"
+ _service_credentials = {}
... | diff --git a/oauthenticator/tests/test_google.py b/oauthenticator/tests/test_google.py
index 4c3a9e0c..070de014 100644
--- a/oauthenticator/tests/test_google.py
+++ b/oauthenticator/tests/test_google.py
@@ -3,6 +3,7 @@
import logging
import re
from unittest import mock
+from unittest.mock import AsyncMock
from py... | The google authenticator's calls to check groups is done sync, can it be made async?
Quickly written, I can fill this in in more detail later.
| 1,727,389,390,000 | [
"maintenance"
] | Performance Issue | [
"oauthenticator/google.py:GoogleOAuthenticator.update_auth_model",
"oauthenticator/google.py:GoogleOAuthenticator._service_client",
"oauthenticator/google.py:GoogleOAuthenticator._setup_service",
"oauthenticator/google.py:GoogleOAuthenticator._fetch_member_groups"
] | [
"oauthenticator/google.py:GoogleOAuthenticator._get_service_credentials",
"oauthenticator/google.py:GoogleOAuthenticator._is_token_valid",
"oauthenticator/google.py:GoogleOAuthenticator._setup_service_credentials"
] | 4 | |
PlasmaPy/PlasmaPy | PlasmaPy__PlasmaPy-2542 | db62754e6150dc841074afe2e155f21a806e7f1b | diff --git a/changelog/2542.internal.rst b/changelog/2542.internal.rst
new file mode 100644
index 0000000000..628869c5d5
--- /dev/null
+++ b/changelog/2542.internal.rst
@@ -0,0 +1,1 @@
+Refactored `~plasmapy.formulary.lengths.gyroradius` to reduce the cognitive complexity of the function.
diff --git a/plasmapy/formular... | Reduce cognitive complexity of `gyroradius`
Right now [`plasmapy.formulary.lengths.gyroradius`](https://github.com/PlasmaPy/PlasmaPy/blob/fc12bbd286ae9db82ab313d8c380808b814a462c/plasmapy/formulary/lengths.py#LL102C5-L102C15) has several nested conditionals. As a consequence, the code for `gyroradius` has high [cognit... | Come to think of it, once PlasmaPy goes Python 3.10+ (for our first release after April 2024), we might be able to use [structural pattern matching](https://docs.python.org/3/whatsnew/3.10.html#pep-634-structural-pattern-matching).
Another possibility here would be to use [guard clauses](https://medium.com/lemon-c... | 1,709,064,730,000 | [
"plasmapy.formulary"
] | Performance Issue | [
"plasmapy/formulary/lengths.py:gyroradius"
] | [] | 1 | |
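The guard-clause idea mentioned in the hints above flattens nested conditionals by rejecting invalid inputs early. A toy sketch under invented validation rules — not the real `gyroradius` logic:

```python
def pick_speed(Vperp=None, T=None):
    # Guard clauses: reject invalid argument combinations up front,
    # so the main computation is not buried in nested if/else blocks.
    if Vperp is None and T is None:
        raise ValueError("give Vperp or T")
    if Vperp is not None and T is not None:
        raise ValueError("give only one of Vperp and T")
    if Vperp is not None:
        return ("from_Vperp", Vperp)
    return ("from_T", 2 * T)

choice = pick_speed(T=21)
```

Each early `return`/`raise` removes one level of nesting, which is exactly what cognitive-complexity metrics reward.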
cleder/crepr | cleder__crepr-64 | 7ca6f2377ce5a29ab5b81d7e2c960bfa26b04d12 | diff --git a/crepr/crepr.py b/crepr/crepr.py
index f6d3f98..c3a4109 100644
--- a/crepr/crepr.py
+++ b/crepr/crepr.py
@@ -12,6 +12,7 @@
import inspect
import pathlib
import uuid
+from collections.abc import Callable
from collections.abc import Iterable
from collections.abc import Iterator
from types import Mapping... | diff --git a/tests/run_test.py b/tests/run_test.py
index f1582b5..307aced 100644
--- a/tests/run_test.py
+++ b/tests/run_test.py
@@ -188,8 +188,9 @@ def test_show() -> None:
result = runner.invoke(crepr.app, ["add", "tests/classes/kw_only_test.py"])
assert result.exit_code == 0
+ assert "__repr__ generat... | Sparse output of the changes.
For the `add` and `remove` commands:
Without the `--diff` or `--inline` switch, only the proposed changes should be presented on the screen, with the name of the class the `__repr__` is generated or removed from.
| 1,728,055,882,000 | [
"enhancement",
"Tests",
"Review effort [1-5]: 2",
"hacktoberfest-accepted"
] | Feature Request | [
"crepr/crepr.py:add",
"crepr/crepr.py:remove"
] | [
"crepr/crepr.py:print_changes",
"crepr/crepr.py:apply_changes"
] | 2 | |
scikit-learn/scikit-learn | scikit-learn__scikit-learn-24076 | 30bf6f39a7126a351db8971d24aa865fa5605569 | diff --git a/.gitignore b/.gitignore
index f6250b4d5f580..89600846100a8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -90,6 +90,7 @@ sklearn/metrics/_dist_metrics.pyx
sklearn/metrics/_dist_metrics.pxd
sklearn/metrics/_pairwise_distances_reduction/_argkmin.pxd
sklearn/metrics/_pairwise_distances_reduction/_argkmin.pyx
... | diff --git a/sklearn/metrics/tests/test_pairwise_distances_reduction.py b/sklearn/metrics/tests/test_pairwise_distances_reduction.py
index ad0ddbc60e9bd..7355dfd6ba912 100644
--- a/sklearn/metrics/tests/test_pairwise_distances_reduction.py
+++ b/sklearn/metrics/tests/test_pairwise_distances_reduction.py
@@ -13,10 +13,1... | knn predict unreasonably slow b/c of use of scipy.stats.mode
```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
X, y = make_blobs(centers=2, random_state=4, n_samples=30)
knn = KNeighborsClassifier(algorithm='kd_tree').fit(X, y)
x_min, x_max... | @amueller Do you mean something like this
```
max(top_k, key = list(top_k).count)
```
That isn't going to apply to every row, and involves n_classes passes over
each. Basically because we know the set of class labels, we shouldn't need
to be doing unique.
Yes, we could construct a CSR sparse matrix and sum_duplic... | 1,659,396,563,000 | [
"Performance",
"module:metrics",
"module:neighbors",
"cython"
] | Performance Issue | [
"sklearn/neighbors/_base.py:NeighborsBase._fit",
"sklearn/neighbors/_classification.py:KNeighborsClassifier.predict",
"sklearn/neighbors/_classification.py:KNeighborsClassifier.predict_proba"
] | [
"sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py:ArgKminClassMode.is_usable_for",
"sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py:ArgKminClassMode.compute",
"sklearn/neighbors/_classification.py:_adjusted_metric"
] | 3 |
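A minimal illustration of the idea in the hints above: since the set of class labels is known up front, a vectorized `bincount` per row replaces a per-row `mode` call. This is a sketch of the technique only, not scikit-learn's actual implementation:

```python
import numpy as np

def mode_per_row(labels, n_classes):
    """Per-row majority vote without scipy.stats.mode.

    `labels` has shape (n_rows, k): the classes of the k nearest neighbors.
    """
    n_rows, k = labels.shape
    # Offset each row into its own bin range, then count all votes at once.
    offsets = np.arange(n_rows)[:, None] * n_classes
    counts = np.bincount((labels + offsets).ravel(), minlength=n_rows * n_classes)
    counts = counts.reshape(n_rows, n_classes)
    return counts.argmax(axis=1)

neighbor_labels = np.array([[0, 1, 1], [2, 2, 0], [1, 0, 0]])
votes = mode_per_row(neighbor_labels, n_classes=3)
```

One `bincount` over the offset-shifted labels does the counting for every row in a single pass, instead of n_classes (or per-row) passes.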
fractal-analytics-platform/fractal-tasks-core | fractal-analytics-platform__fractal-tasks-core-889 | 5c5ccaff10eb5508bae321c61cb5d0741a36a91a | diff --git a/CHANGELOG.md b/CHANGELOG.md
index f7aea0d58..35082befe 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,9 @@
**Note**: Numbers like (\#123) point to closed Pull Requests on the fractal-tasks-core repository.
+# 1.4.1
+
+* Tasks:
+ * Remove overlap checking for output ROIs in Cellpose task to a... | diff --git a/tests/tasks/test_workflows_cellpose_segmentation.py b/tests/tasks/test_workflows_cellpose_segmentation.py
index 2c3c04216..1ce511d30 100644
--- a/tests/tasks/test_workflows_cellpose_segmentation.py
+++ b/tests/tasks/test_workflows_cellpose_segmentation.py
@@ -602,49 +602,6 @@ def test_workflow_bounding_box... | Review/optimize use of `get_overlapping_pairs_3D` in Cellpose task
Branching from #764 (fixed in principle with #778)
After a fix, it'd be useful to have a rough benchmark of `get_overlapping_pairs_3D` for one of those many-labels cases (say 1000 labels for a given `i_ROI`). Since this function does nothing else tha... | Another thing to consider here:
This (as well as the original implementation) will only check for potential bounding box overlaps within a given ROI that's being processed, right?
A future improvement may be to check for overlaps across ROIs as well, as in theory, they could also overlap. Given that this is just fo... | 1,736,344,999,000 | [] | Performance Issue | [
"fractal_tasks_core/tasks/cellpose_segmentation.py:cellpose_segmentation"
] | [] | 1 |
OpenEnergyPlatform/open-MaStR | OpenEnergyPlatform__open-MaStR-598 | bd44ce26986a0c41b25250951a4a9a391723d690 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 42432fd8..404fb9ac 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -13,6 +13,8 @@ and the versioning aims to respect [Semantic Versioning](http://semver.org/spec/
### Changed
- Repair Header image in Readme
[#587](https://github.com/OpenEnergyPlatform/open-MaStR/pull/... | diff --git a/tests/xml_download/test_utils_write_to_database.py b/tests/xml_download/test_utils_write_to_database.py
index dc5aaacd..e72e5501 100644
--- a/tests/xml_download/test_utils_write_to_database.py
+++ b/tests/xml_download/test_utils_write_to_database.py
@@ -1,25 +1,35 @@
+import os
+import sqlite3
import sys
... | Increase parsing speed
This task contains several steps:
1. Search different ways that might increase parsing speed. Parsing is done right now by the `pandas.read_xml` method [here](https://github.com/OpenEnergyPlatform/open-MaStR/blob/d17a92c17912f7f96814d309859dc495f942c07a/open_mastr/xml_download/utils_write_to_d... | Hi! I started working on this task and decided to change the steps slightly:
1. Construct the benchmark
- Use the Marktstammdatenregister to construct a few datasets of various sizes - ✅ ([link](https://github.com/AlexandraImbrisca/open-MaStR/tree/develop/benchmark))
- Create a script to automate the calcu... | 1,736,704,440,000 | [] | Performance Issue | [
"open_mastr/xml_download/utils_write_to_database.py:write_mastr_xml_to_database",
"open_mastr/xml_download/utils_write_to_database.py:is_table_relevant",
"open_mastr/xml_download/utils_write_to_database.py:create_database_table",
"open_mastr/xml_download/utils_write_to_database.py:cast_date_columns_to_datetim... | [
"open_mastr/xml_download/utils_write_to_database.py:extract_xml_table_name",
"open_mastr/xml_download/utils_write_to_database.py:extract_sql_table_name",
"open_mastr/xml_download/utils_write_to_database.py:cast_date_columns_to_string",
"open_mastr/xml_download/utils_write_to_database.py:read_xml_file",
"ope... | 10 |
kedro-org/kedro | kedro-org__kedro-4367 | 70734ce00ee46b58c85b4cf04afbe89d32c06758 | diff --git a/RELEASE.md b/RELEASE.md
index 7af26f6ca4..c19418f0ca 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -2,6 +2,7 @@
## Major features and improvements
* Implemented `KedroDataCatalog.to_config()` method that converts the catalog instance into a configuration format suitable for serialization.
+* Improve Omeg... | Improve OmegaConfigLoader performance when global/variable interpolations are involved
## Description
Extending on the investigation in #3893 , OmegaConfigLoader lags in resolving catalog configurations when global/variable interpolations are involved.
## Context
**Some previous observations:**
https://github... | During the investigation, I found that there was a slow bit in `_set_globals_value`. I didn't spend enough time to fix it, but with a quick fix it improves roughly from 1.5s -> 0.9s; there are probably more.
Particularly, there is an obvious slow path: `global_oc` gets created and destroyed for every reference of `$g... | 1,733,266,369,000 | [] | Performance Issue | [
"kedro/config/omegaconf_config.py:OmegaConfigLoader.__init__",
"kedro/config/omegaconf_config.py:OmegaConfigLoader.load_and_merge_dir_config",
"kedro/config/omegaconf_config.py:OmegaConfigLoader._get_globals_value",
"kedro/config/omegaconf_config.py:OmegaConfigLoader._get_runtime_value"
] | [] | 4 | |
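The slow path described in the hints above — rebuilding an expensive container for every interpolation reference — maps onto a general fix: build once, cache lookups. A simplified sketch, not kedro's actual code; `load_globals` is a hypothetical stand-in for loading and merging the globals files:

```python
from functools import lru_cache

BUILDS = {"n": 0}

@lru_cache(maxsize=1)
def load_globals():
    # Hypothetical stand-in for loading and merging the globals files;
    # pretend this is expensive, so it should run once, not per reference.
    BUILDS["n"] += 1
    return {"base_location": "/data", "raw_folder": "01_raw"}

def resolve(key):
    # Every "${globals:...}"-style reference hits the cached container.
    return load_globals()[key]

values = [resolve("base_location"), resolve("raw_folder"), resolve("base_location")]
```

The counter makes the effect observable: three resolutions, one build.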
UCL/TLOmodel | UCL__TLOmodel-1524 | d5e6d31ab27549dc5671ace98c68d11d46d40890 | diff --git a/src/tlo/methods/contraception.py b/src/tlo/methods/contraception.py
index ab6c633f4c..09cb394804 100644
--- a/src/tlo/methods/contraception.py
+++ b/src/tlo/methods/contraception.py
@@ -1146,12 +1146,12 @@ def __init__(self, module, person_id, new_contraceptive):
self.TREATMENT_ID = "Contracepti... | Population dataframe accesses in `HSI_Contraception_FamilyPlanningAppt.EXPECTED_APPT_FOOTPRINT` are performance bottleneck
Look at the most recent successful profiling run results
https://github-pages.ucl.ac.uk/TLOmodel-profiling/_static/profiling_html/schedule_1226_dbf33d69edadbacdc2dad1fc32c93eb771b03f9b.html
c... | @BinglingICL, I think it is okay to set the EXPECTED_APPT_FOOTPRINT with the same logic when HSI is scheduled instead of when it is run, isn't it?
> [@BinglingICL](https://github.com/BinglingICL), I think it is okay to set the EXPECTED_APPT_FOOTPRINT with the same logic when HSI is scheduled instead of when it is run, ... | 1,732,204,946,000 | [] | Performance Issue | [
"src/tlo/methods/contraception.py:HSI_Contraception_FamilyPlanningAppt.EXPECTED_APPT_FOOTPRINT",
"src/tlo/methods/contraception.py:HSI_Contraception_FamilyPlanningAppt.__init__",
"src/tlo/methods/contraception.py:HSI_Contraception_FamilyPlanningAppt.apply"
] | [
"src/tlo/methods/contraception.py:HSI_Contraception_FamilyPlanningAppt._get_appt_footprint"
] | 3 | |
Deltares/imod-python | Deltares__imod-python-1159 | 4f3b71675708c5a832fb78e05f679bb89158f95d | diff --git a/.gitignore b/.gitignore
index fd95f7d8d..28f9b76a9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -141,5 +141,4 @@ examples/data
.pixi
/imod/tests/mydask.png
-/imod/tests/unittest_report.xml
-/imod/tests/examples_report.xml
+/imod/tests/*_report.xml
diff --git a/docs/api/changelog.rst b/docs/api/changelog... | diff --git a/imod/tests/test_mf6/test_mf6_chd.py b/imod/tests/test_mf6/test_mf6_chd.py
index ef0a04108..c3b96b982 100644
--- a/imod/tests/test_mf6/test_mf6_chd.py
+++ b/imod/tests/test_mf6/test_mf6_chd.py
@@ -238,6 +238,7 @@ def test_from_imod5_shd(imod5_dataset, tmp_path):
chd_shd.write("chd_shd", [1], write_cont... | Bottlenecks writing HFBs LHM
https://github.com/Deltares/imod-python/pull/1157 considerably improves the speed with which the LHM is written: from 34 minutes to 12.5 minutes. This is still slower than expected, however. Based on profiling, I've identified 3 main bottlenecks:
- 4.5 minutes: ``xu.DataArray.from_structu... | 1,723,823,927,000 | [] | Performance Issue | [
"imod/mf6/simulation.py:Modflow6Simulation.from_imod5_data",
"imod/schemata.py:scalar_None",
"imod/typing/grid.py:as_ugrid_dataarray"
] | [
"imod/typing/grid.py:GridCache.__init__",
"imod/typing/grid.py:GridCache.get_grid",
"imod/typing/grid.py:GridCache.remove_first",
"imod/typing/grid.py:GridCache.clear"
] | 3 | |
Project-MONAI/MONAI | Project-MONAI__MONAI-8123 | 052dbb4439165bfc1fc3132fadb0955587e4d30e | diff --git a/monai/networks/nets/segresnet_ds.py b/monai/networks/nets/segresnet_ds.py
index 1ac5a79ee3..098e490511 100644
--- a/monai/networks/nets/segresnet_ds.py
+++ b/monai/networks/nets/segresnet_ds.py
@@ -508,8 +508,10 @@ def forward( # type: ignore
outputs: list[torch.Tensor] = []
outputs_au... | Optimize the VISTA3D latency
**Is your feature request related to a problem? Please describe.**
The VISTA3D is a very brilliant model for everything segmentation in a medical image. However it suffers a slowdown [issue](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d/docs#inference-gpu-benchmarks) w... | 1,727,686,856,000 | [] | Performance Issue | [
"monai/networks/nets/segresnet_ds.py:SegResNetDS2.forward",
"monai/networks/nets/vista3d.py:ClassMappingClassify.forward"
] | [] | 2 | ||
NNPDF/eko | NNPDF__eko-415 | ed3d22b742798e6d94c052a8fa4950efb977ec19 | diff --git a/benchmarks/eko/benchmark_inverse_matching.py b/benchmarks/eko/benchmark_inverse_matching.py
index d63b776f0..1004a5481 100644
--- a/benchmarks/eko/benchmark_inverse_matching.py
+++ b/benchmarks/eko/benchmark_inverse_matching.py
@@ -90,14 +90,10 @@ def benchmark_inverse_matching():
with pytest.raises(A... | diff --git a/tests/ekobox/test_apply.py b/tests/ekobox/test_apply.py
index 70eb74adc..522c4dc6d 100644
--- a/tests/ekobox/test_apply.py
+++ b/tests/ekobox/test_apply.py
@@ -5,34 +5,31 @@
from tests.conftest import EKOFactory
-class TestApply:
- def test_apply(self, eko_factory: EKOFactory, fake_pdf):
- e... | Allow for parallelization in `evolven3fit_new`
Currently `evolven3fit_new` evolves the PDFs sequentially both in Q2 and replicas. For many replicas this means that evolution can take up to several hours while even for the usual 100 replica fits it generally will take over 20 minutes for the evolution to complete.
W... | As per the discussion w @felixhekhorn, the necessary change here is for eko to allow for more than one replica at once. To first approximation it should be trivial (the EKO x PDF contraction should have one extra axis for the number of replicas).
Then of course the changes will need to be propagated to evolven3fit a... | 1,728,993,708,000 | [
"enhancement",
"refactor"
] | Feature Request | [
"benchmarks/eko/benchmark_inverse_matching.py:benchmark_inverse_matching",
"src/ekobox/apply.py:apply_pdf",
"src/ekobox/apply.py:apply_pdf_flavor",
"src/ekobox/evol_pdf.py:evolve_pdfs",
"src/ekobox/evol_pdf.py:collect_blocks",
"src/ekomark/benchmark/runner.py:Runner.log"
] | [
"src/ekobox/apply.py:rotate_result",
"src/ekobox/apply.py:apply_grids"
] | 6 |
Qiskit/qiskit | Qiskit__qiskit-13052 | 9898979d2947340514adca7121ed8eaea75df77f | diff --git a/crates/accelerate/src/filter_op_nodes.rs b/crates/accelerate/src/filter_op_nodes.rs
new file mode 100644
index 000000000000..7c41391f3788
--- /dev/null
+++ b/crates/accelerate/src/filter_op_nodes.rs
@@ -0,0 +1,63 @@
+// This code is part of Qiskit.
+//
+// (C) Copyright IBM 2024
+//
+// This code is licens... | Port `FilterOpNodes` to Rust
Port `FilterOpNodes` to Rust
| 1,724,882,933,000 | [
"performance",
"Changelog: None",
"Rust",
"mod: transpiler"
] | Feature Request | [
"qiskit/transpiler/passes/utils/filter_op_nodes.py:FilterOpNodes.run"
] | [] | 1 | ||
ivadomed/ivadomed | ivadomed__ivadomed-1081 | dee79e6bd90b30095267a798da99c4033cc5991b | diff --git a/docs/source/configuration_file.rst b/docs/source/configuration_file.rst
index 8e9169961..ab31be8a5 100644
--- a/docs/source/configuration_file.rst
+++ b/docs/source/configuration_file.rst
@@ -2278,8 +2278,28 @@ Postprocessing
Evaluation Parameters
---------------------
-Dict. Parameters to get object d... | Time issue when running `--test` on large microscopy images
**Disclaimer**
This issue is based on preliminary observations and does not contain a lot of details/examples at the moment.
## Issue description
I tried running the `--test` command on large microscopy images on `rosenberg`, and it takes a very long ti... | Update: I ran the same test again to have a little bit more details.
Turns out, the `evaluate` function took ~5h30min to compute metrics on my 2 images 😬.

I also ran the command with a cProfile (... | 1,645,136,481,000 | [
"bug"
] | Performance Issue | [
"ivadomed/evaluation.py:evaluate",
"ivadomed/evaluation.py:Evaluation3DMetrics.__init__",
"ivadomed/evaluation.py:Evaluation3DMetrics.get_ltpr",
"ivadomed/evaluation.py:Evaluation3DMetrics.get_lfdr",
"ivadomed/main.py:run_command",
"ivadomed/metrics.py:get_metric_fns"
] | [] | 6 | |
zulip/zulip | zulip__zulip-31168 | 3d58a7ec0455ba609955c13470d31854687eff7a | diff --git a/zerver/lib/users.py b/zerver/lib/users.py
index 3b00d4187ff2d..cb24813fb1a6a 100644
--- a/zerver/lib/users.py
+++ b/zerver/lib/users.py
@@ -10,6 +10,7 @@
from django.conf import settings
from django.core.exceptions import ValidationError
from django.db.models import Q, QuerySet
+from django.db.models.fu... | diff --git a/zerver/tests/test_subs.py b/zerver/tests/test_subs.py
index f4a3d0b3be9bc..da7dfe34e1660 100644
--- a/zerver/tests/test_subs.py
+++ b/zerver/tests/test_subs.py
@@ -2684,7 +2684,7 @@ def test_realm_admin_remove_multiple_users_from_stream(self) -> None:
for name in ["cordelia", "prospero", "iago... | Make creating streams faster in large organizations
In large organizations (like chat.zulip.org), creating a stream can be very slow (seconds to tens of seconds). We should fix this.
It should be possible to reproduce this by creating 20K users with `manage.py populate_db --extra-users` in a development environment.... | Hello @zulip/server-streams members, this issue was labeled with the "area: stream settings" label, so you may want to check it out!
<!-- areaLabelAddition -->
| 1,722,342,135,000 | [
"area: channel settings",
"priority: high",
"size: XL",
"post release",
"area: performance"
] | Performance Issue | [
"zerver/views/streams.py:principal_to_user_profile",
"zerver/views/streams.py:remove_subscriptions_backend",
"zerver/views/streams.py:add_subscriptions_backend"
] | [
"zerver/lib/users.py:bulk_access_users_by_email",
"zerver/lib/users.py:bulk_access_users_by_id",
"zerver/views/streams.py:bulk_principals_to_user_profiles"
] | 3 |
vllm-project/vllm | vllm-project__vllm-7209 | 9855aea21b6aec48b12cef3a1614e7796b970a73 | diff --git a/vllm/core/evictor.py b/vllm/core/evictor.py
index ed7e06cab2996..44adc4158abec 100644
--- a/vllm/core/evictor.py
+++ b/vllm/core/evictor.py
@@ -1,6 +1,7 @@
import enum
+import heapq
from abc import ABC, abstractmethod
-from typing import OrderedDict, Tuple
+from typing import Dict, List, Tuple
class... | [Bug]: Prefix Caching in BlockSpaceManagerV1 and BlockSpaceManagerV2 Increases Time to First Token(TTFT) and Slows Down System
### Your current environment
```text
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64... | I ran into the problem when enabling prefix caching in BlockSpaceManagerV2 ... | 1,722,960,271,000 | [
"ready"
] | Performance Issue | [
"vllm/core/evictor.py:LRUEvictor.__init__",
"vllm/core/evictor.py:LRUEvictor.evict",
"vllm/core/evictor.py:LRUEvictor.add"
] | [
"vllm/core/evictor.py:LRUEvictor._cleanup_if_necessary",
"vllm/core/evictor.py:LRUEvictor._cleanup"
] | 3 | |
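The direction of the fix in the row above (the patch swaps an `OrderedDict` scan for `heapq`) can be sketched as a min-heap with lazy invalidation. This is my simplified version of the pattern, not vLLM's actual evictor:

```python
import heapq

class HeapLRU:
    """Pick the least recently used block in O(log n) instead of a full scan."""

    def __init__(self):
        self._heap = []   # (last_access_time, block_id) entries, possibly stale
        self._last = {}   # block_id -> authoritative last access time

    def touch(self, block_id, now):
        self._last[block_id] = now
        heapq.heappush(self._heap, (now, block_id))

    def evict(self):
        while self._heap:
            ts, block_id = heapq.heappop(self._heap)
            # Lazy invalidation: skip heap entries superseded by a later touch.
            if self._last.get(block_id) == ts:
                del self._last[block_id]
                return block_id
        raise ValueError("nothing to evict")

lru = HeapLRU()
lru.touch(1, now=1.0)
lru.touch(2, now=2.0)
lru.touch(1, now=3.0)  # block 1 becomes most recently used
victim = lru.evict()
```

Stale entries are discarded only when they surface at the top of the heap, which keeps `touch` cheap at the cost of a slightly larger heap.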
xCDAT/xcdat | xCDAT__xcdat-689 | 584fccee91f559089e4bb24c9cc45983b998bf4a | diff --git a/.vscode/xcdat.code-workspace b/.vscode/xcdat.code-workspace
index 906eb68e..8eaa4e47 100644
--- a/.vscode/xcdat.code-workspace
+++ b/.vscode/xcdat.code-workspace
@@ -61,7 +61,7 @@
"configurations": [
{
"name": "Python: Current File",
- "type": "python",... | diff --git a/tests/test_temporal.py b/tests/test_temporal.py
index 6e2d6049..e5489b1b 100644
--- a/tests/test_temporal.py
+++ b/tests/test_temporal.py
@@ -121,7 +121,7 @@ def test_averages_for_yearly_time_series(self):
},
)
- assert result.identical(expected)
+ xr.testing.assert_id... | [Enhancement]: Temporal averaging performance
### Is your feature request related to a problem?
This may not be a high priority issue, but I think it is worthwhile to document here:
When refactoring e3sm_diags with xcdat, I was working on using temporal.climatology operation to get annual cycle of a data stream whi... | Possible next steps:
1. Try `flox` with e3sm_diags and xCDAT temporal APIs
2. Reference #490
3. Analyze codebase for other bottlenecks besides grouping
Regarding the results: Cases 2, 3, and 4 are identical, while the xcdat result from Case 1 is slightly off.
```
Case 1: [1.63433977e-08 1.73700556e-08 2.73745702e-08 3.220... | 1,724,972,659,000 | [
"type: enhancement"
] | Performance Issue | [
"xcdat/temporal.py:TemporalAccessor.departures",
"xcdat/temporal.py:TemporalAccessor._group_average",
"xcdat/temporal.py:TemporalAccessor._get_weights",
"xcdat/temporal.py:TemporalAccessor._group_data",
"xcdat/temporal.py:TemporalAccessor._label_time_coords",
"xcdat/temporal.py:TemporalAccessor._keep_weig... | [
"xcdat/temporal.py:TemporalAccessor._calculate_departures"
] | 6 |
ansys/pymapdl | ansys__pymapdl-3705 | 7e86b2a110e37383797654ee4e338d53c00df296 | diff --git a/doc/changelog.d/3705.added.md b/doc/changelog.d/3705.added.md
new file mode 100644
index 0000000000..9bc4540620
--- /dev/null
+++ b/doc/changelog.d/3705.added.md
@@ -0,0 +1,1 @@
+feat: speed up `requires_package` using caching
\ No newline at end of file
diff --git a/src/ansys/mapdl/core/misc.py b/src/ansy... | diff --git a/tests/common.py b/tests/common.py
index 142e0a681a..9a7581f843 100644
--- a/tests/common.py
+++ b/tests/common.py
@@ -26,6 +26,7 @@
import subprocess
import time
from typing import Dict, List
+from warnings import warn
import psutil
diff --git a/tests/test_mapdl.py b/tests/test_mapdl.py
index 7d902... | `requires_package` is very slow
It seems that `requires_package` is very slow:

_Originally posted by @germa89 in https://github.com/ansys/pymapdl/issues/3703#issuecomment-2615615323_
| 1,737,985,959,000 | [
"new feature",
"enhancement"
] | Performance Issue | [
"src/ansys/mapdl/core/misc.py:requires_package"
] | [
"src/ansys/mapdl/core/misc.py:is_package_installed_cached"
] | 1 | |
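The speed-up named in the changelog entry above — caching the package lookup — plausibly has this shape (my guess at the sketch, not PyMAPDL's exact code):

```python
from functools import lru_cache
from importlib import util

@lru_cache(maxsize=None)
def is_package_installed(package_name):
    # find_spec searches sys.path on every call, so cache the answer;
    # repeated requires_package() checks then cost a dict lookup.
    return util.find_spec(package_name) is not None

def requires_package(package_name):
    if not is_package_installed(package_name):
        raise ModuleNotFoundError(package_name + " is required for this feature")

requires_package("json")  # stdlib, so this passes
found = is_package_installed("json")
missing = is_package_installed("surely_not_a_real_package_xyz")
```

Caching is safe here as long as packages are not installed or removed mid-session, which matches the decorator's typical use.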
CrossGL/crosstl | CrossGL__crosstl-229 | 0ddc76be73115c830de6f7e64c975b78443251db | diff --git a/crosstl/translator/lexer.py b/crosstl/translator/lexer.py
index e6540499..ac89f534 100644
--- a/crosstl/translator/lexer.py
+++ b/crosstl/translator/lexer.py
@@ -1,81 +1,82 @@
import re
+from collections import OrderedDict
-TOKENS = [
- ("COMMENT_SINGLE", r"//.*"),
- ("COMMENT_MULTI", r"/\*[\s\S]*... | diff --git a/tests/test_translator/test_lexer.py b/tests/test_translator/test_lexer.py
index abff042f..35c778a1 100644
--- a/tests/test_translator/test_lexer.py
+++ b/tests/test_translator/test_lexer.py
@@ -1,6 +1,6 @@
-from crosstl.translator.lexer import Lexer
import pytest
from typing import List
+from crosstl.tra... | Optimize Lexer Performance with Token Caching and Pattern Combining
## Description
The lexer can be optimized to improve performance and reduce memory usage. This task focuses on implementing token caching, combining similar regex patterns, and caching compiled patterns.
### Proposed Changes
1. **Token Cachi... | @Swish78 can you assign me this task I think I can complete this
@NripeshN Hii I would like to work on this issue
To assign issues to yourself please read this documentation: https://github.com/CrossGL/crosstl/blob/main/CONTRIBUTING.md#-assigning-or-creating-issues
@CrossGL-issue-bot assign me
@CrossGL-issue-bot ass... | 1,735,129,841,000 | [] | Performance Issue | [
"crosstl/translator/lexer.py:Lexer.tokenize",
"crosstl/translator/lexer.py:Lexer.__init__"
] | [
"crosstl/translator/lexer.py:Lexer._compile_patterns",
"crosstl/translator/lexer.py:Lexer._get_cached_token"
] | 2 |
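Combining per-token regexes into one master pattern, as the task above describes, is the classic `re` scanner idiom. A minimal sketch with an invented token set, not CrossGL's real grammar:

```python
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID", r"[A-Za-z_]\w*"),
    ("OP", r"[+\-*/=]"),
    ("SKIP", r"\s+"),
]
# One compiled master pattern with named groups replaces trying each
# token regex in turn against the remaining input.
MASTER = re.compile("|".join("(?P<%s>%s)" % (name, pat) for name, pat in TOKEN_SPEC))

def tokenize(code):
    tokens = []
    for match in MASTER.finditer(code):
        kind = match.lastgroup
        if kind != "SKIP":
            tokens.append((kind, match.group()))
    return tokens

toks = tokenize("x = 42 + y")
```

`Match.lastgroup` identifies which alternative fired, so one linear scan classifies every token.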
kiasambrook/tfl-live-tracker | kiasambrook__tfl-live-tracker-7 | b38f2a397075ec42d8f2f14eaffcdc5661a6a20b | diff --git a/app/api/tfl_api.py b/app/api/tfl_api.py
index 232b626..b688263 100644
--- a/app/api/tfl_api.py
+++ b/app/api/tfl_api.py
@@ -1,4 +1,5 @@
import requests
+import requests_cache
class TfLAPI:
def __init__(self, api_key):
@@ -7,9 +8,16 @@ def __init__(self, api_key):
def fetch_all_stop_points(s... | Cache Station List
### Summary
Currently the station list is refreshed every time the program is launched; it would probably be best to cache this to improve the performance of the app and prevent being rate limited by the API.
### Tasks
- Cache the list of stations
- Data stored should include ID, name, and location
| 1,734,612,485,000 | [] | Performance Issue | [
"app/api/tfl_api.py:TfLAPI.fetch_all_stop_points",
"app/ui/main_window.py:MainWindow.build"
] | [] | 2 | ||
bennettoxford/bennettbot | bennettoxford__bennettbot-698 | cf65e68f5ef9d07359f1af8c90a8358c31b2fddf | diff --git a/workspace/dependabot/jobs.py b/workspace/dependabot/jobs.py
index ad92d5b4..602c491a 100644
--- a/workspace/dependabot/jobs.py
+++ b/workspace/dependabot/jobs.py
@@ -1,7 +1,7 @@
from datetime import date, timedelta
from itertools import cycle, islice
-from workspace.utils.people import TEAM_REX, People... | diff --git a/tests/workspace/test_people.py b/tests/workspace/test_people.py
index db1b11f6..748591c9 100644
--- a/tests/workspace/test_people.py
+++ b/tests/workspace/test_people.py
@@ -1,24 +1,25 @@
-from workspace.utils.people import (
- People,
- get_formatted_slack_username,
- get_person_from_github_usern... | Refactor Team REX Standup Rota
#674 introduced the concept of non-spreadsheet based `RotaReporter`s and some improvements to how staff slack/Github usernames are handled. The Team REX standup rota could be refactored to use these new structures, for clarity and such that users are notified on Slack when they are on du... | Following on from #674, would like to also address if quick:
- re-name the dependabot rota as it's no longer using dependabot...
- add a note in `people.py` about the consequences of adding to or reordering `TEAM_REX`. | 1,736,954,346,000 | [] | Feature Request | [
"workspace/dependabot/jobs.py:DependabotRotaReporter.get_rota_text_for_week",
"workspace/report/generate_report.py:get_status_and_summary",
"workspace/standup/jobs.py:weekly_rota",
"workspace/standup/jobs.py:daily_rota",
"workspace/utils/people.py:get_person_from_github_username",
"workspace/utils/people.... | [
"workspace/utils/people.py:Person.formatted_slack_username",
"workspace/utils/people.py:PersonCollection.__init__",
"workspace/utils/people.py:PersonCollection.__iter__",
"workspace/utils/people.py:People.by_github_username"
] | 6 |
ckan/ckan | ckan__ckan-8226 | 15f32e57e229db6436d10adddf73bcff84bdd453 | diff --git a/changes/6146.feature b/changes/6146.feature
new file mode 100644
index 00000000000..e40b88d474f
--- /dev/null
+++ b/changes/6146.feature
@@ -0,0 +1,1 @@
+Render snippets faster through better use of existing jinja2 tags. Use ``{% snippet 'path/to/snippet.html', arg1=test %}`` instead of ``{{ h.snippet('pat... | snippet performance issue
**CKAN version**
all
**Describe the bug**
Heavy use of template `{% snippet %}` slows page rendering times compared to alternatives like macros or `{% include %}`
**Steps to reproduce**
Compare the rendering time of a page using snippets with one that uses macros or `{% include %}` t... | @smotornyuk mentioned that current jinja2 now has an `{% include … without context %}` that we could have our snippet tag emit instead of calling render and inserting the contents. Should be an easy win. | 1,715,738,892,000 | [
"Performance",
"Backport dev-v2.11"
] | Performance Issue | [
"ckan/config/middleware/flask_app.py:_ungettext_alias",
"ckan/config/middleware/flask_app.py:make_flask_stack",
"ckan/config/middleware/flask_app.py:helper_functions",
"ckan/config/middleware/flask_app.py:c_object",
"ckan/lib/helpers.py:snippet",
"ckan/lib/jinja_extensions.py:CkanFileSystemLoader.get_sour... | [
"ckan/lib/jinja_extensions.py:SnippetExtension.parse"
] | 7 | |
fulcrumgenomics/prymer | fulcrumgenomics__prymer-99 | f62c4b52b6753ed070cea2269f2fffa36591de69 | diff --git a/prymer/offtarget/offtarget_detector.py b/prymer/offtarget/offtarget_detector.py
index d1c7868..71d3201 100644
--- a/prymer/offtarget/offtarget_detector.py
+++ b/prymer/offtarget/offtarget_detector.py
@@ -75,6 +75,7 @@
""" # noqa: E501
import itertools
+from collections import defaultdict
from context... | diff --git a/tests/offtarget/test_offtarget.py b/tests/offtarget/test_offtarget.py
index f8f5f8c..a2f3a46 100644
--- a/tests/offtarget/test_offtarget.py
+++ b/tests/offtarget/test_offtarget.py
@@ -11,6 +11,7 @@
from prymer.offtarget.bwa import BWA_EXECUTABLE_NAME
from prymer.offtarget.bwa import BwaHit
from prymer.o... | Speed up from O(n^2) iteration over all left/right primer hits in `OffTargetDetector._to_amplicons`
`OffTargetDetector._to_amplicons` uses `itertools.product` over the left and right primer hits and evaluates each of them in a loop to identify valid left/right hit combinations for the pair.
Suggestion:
1. Get hit... | I think this sounds like a great idea. I would make a couple of suggestions also:
1. You could _probably_ do the splitting into left/right/+/- and by refname all in one pass ... if you created a simple dataclass that held four lists (one for each strand and orientation). You could then have a `dict[refname, new_cl... | 1,733,420,754,000 | [] | Performance Issue | [
"prymer/offtarget/offtarget_detector.py:OffTargetDetector._build_off_target_result",
"prymer/offtarget/offtarget_detector.py:OffTargetDetector._to_amplicons"
] | [] | 2 |
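The restructuring suggested above — bucket hits by contig (and strand) first, then only pair within a bucket — can be sketched with a `defaultdict`. Simplified for illustration; real BWA hits carry more fields than a position:

```python
from collections import defaultdict

def pair_hits(left_hits, right_hits, max_len=250):
    """Pair left/right primer hits on the same contig within max_len.

    Each hit is a (refname, position) tuple; grouping by refname avoids
    the all-pairs itertools.product scan over every combination.
    """
    by_ref = defaultdict(list)
    for ref, pos in right_hits:
        by_ref[ref].append(pos)
    amplicons = []
    for ref, lpos in left_hits:
        for rpos in by_ref.get(ref, ()):
            if 0 < rpos - lpos <= max_len:
                amplicons.append((ref, lpos, rpos))
    return amplicons

amps = pair_hits(
    left_hits=[("chr1", 100), ("chr2", 500)],
    right_hits=[("chr1", 260), ("chr1", 900), ("chr2", 600)],
)
```

Sorting each bucket's positions would further allow a binary search for the valid window, as the issue's follow-up suggestion proposes.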
thunderbird/appointment | thunderbird__appointment-707 | 80dfb37920008d6aa45e60321d040163766a7824 | diff --git a/backend/src/appointment/controller/apis/google_client.py b/backend/src/appointment/controller/apis/google_client.py
index 88386a14a..31de7bb7b 100644
--- a/backend/src/appointment/controller/apis/google_client.py
+++ b/backend/src/appointment/controller/apis/google_client.py
@@ -1,12 +1,18 @@
import loggi... | diff --git a/backend/test/integration/test_schedule.py b/backend/test/integration/test_schedule.py
index d5e7ca2dc..1f67250dd 100644
--- a/backend/test/integration/test_schedule.py
+++ b/backend/test/integration/test_schedule.py
@@ -91,8 +91,6 @@ def test_create_schedule_with_end_time_before_start_time(
assert... | Use FreeBusy API to generate scheduling time
https://developers.google.com/calendar/api/v3/reference/freebusy/query
I somehow missed this api / component for caldav apis and google api. We can use this in-place for generating availability times.
| Testing this with a couple of calendars and it seems like it significantly speeds up queries (I mean duh). From 3 seconds with 5 calendars of varying degree to about 500ms with sort of the same output. I'll need to dig up what qualifies as "busy" time as we might be counting "busy" different or wrong. | 1,727,902,924,000 | [] | Feature Request | [
"backend/src/appointment/controller/calendar.py:Tools.existing_events_for_schedule"
] | [
"backend/src/appointment/controller/apis/google_client.py:GoogleClient.get_free_busy",
"backend/src/appointment/controller/calendar.py:GoogleConnector.get_busy_time",
"backend/src/appointment/controller/calendar.py:CalDavConnector.get_busy_time",
"backend/src/appointment/utils.py:chunk_list"
] | 1 |
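One of the helpers added in the row above is a `chunk_list` utility — a natural fit when batching many calendars into size-limited API requests. A plausible shape (my sketch, not the project's exact code):

```python
def chunk_list(items, chunk_size):
    """Yield successive chunk_size-sized slices of items."""
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]

chunks = list(chunk_list(["cal1", "cal2", "cal3", "cal4", "cal5"], chunk_size=2))
```

The final chunk may be shorter than `chunk_size`, which callers batching API requests generally want.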
spack/spack | spack__spack-47213 | c8bebff7f5c5d805decfc340a4ff5791eb90ecc9 | diff --git a/lib/spack/docs/conf.py b/lib/spack/docs/conf.py
index 4873e3e104d6b3..0251387c2b71c0 100644
--- a/lib/spack/docs/conf.py
+++ b/lib/spack/docs/conf.py
@@ -220,6 +220,7 @@ def setup(sphinx):
("py:class", "spack.filesystem_view.SimpleFilesystemView"),
("py:class", "spack.traverse.EdgeAndDepth"),
... | diff --git a/lib/spack/spack/test/compilers/basics.py b/lib/spack/spack/test/compilers/basics.py
index ee31e50f53893e..75e79d497c6c65 100644
--- a/lib/spack/spack/test/compilers/basics.py
+++ b/lib/spack/spack/test/compilers/basics.py
@@ -3,8 +3,10 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Test basic be... | Allow skipping compiler checking during concretization.
### Summary
During the concretization process, all compilers undergo testing, which is run multiple times and is particularly time-consuming for certain compilers that require lengthy startup times, such as Intel ICC. This has become a significant time-drain duri... | My suggestion is to make yet another file system cache of the form `hash(compiler entry) => compiler_verbose_output`. | 1,729,849,798,000 | [
"compilers",
"documentation",
"tests",
"utilities",
"core"
] | Feature Request | [
"lib/spack/spack/compiler.py:_get_compiler_version_output",
"lib/spack/spack/compiler.py:get_compiler_version_output",
"lib/spack/spack/compiler.py:Compiler.__init__",
"lib/spack/spack/compiler.py:Compiler.real_version",
"lib/spack/spack/compiler.py:Compiler.compiler_verbose_output",
"lib/spack/spack/comp... | [
"lib/spack/spack/compiler.py:CompilerCacheEntry.__init__",
"lib/spack/spack/compiler.py:CompilerCacheEntry.from_dict",
"lib/spack/spack/compiler.py:CompilerCache.value",
"lib/spack/spack/compiler.py:CompilerCache.get",
"lib/spack/spack/compiler.py:FileCompilerCache.__init__",
"lib/spack/spack/compiler.py:... | 9 |
una-auxme/paf | una-auxme__paf-619 | 4977a44250a626969c42552f22c27d69f393e605 | diff --git a/code/mapping/ext_modules/mapping_common/entity.py b/code/mapping/ext_modules/mapping_common/entity.py
index 5c84f42f..19a59dd6 100644
--- a/code/mapping/ext_modules/mapping_common/entity.py
+++ b/code/mapping/ext_modules/mapping_common/entity.py
@@ -2,6 +2,8 @@
from enum import Enum
from dataclasses impo... | diff --git a/code/mapping/tests/mapping_common/test_entity.py b/code/mapping/tests/mapping_common/test_entity.py
index 7c1f1aa7..c308c911 100644
--- a/code/mapping/tests/mapping_common/test_entity.py
+++ b/code/mapping/tests/mapping_common/test_entity.py
@@ -10,6 +10,7 @@ def get_transform():
def get_car():
+ f... | [Feature]: Intermediate layer: integrate shapely into Shape2D
### Feature Description
Shapely description: Manipulation and analysis of geometric objects in the Cartesian plane
Might be a basic building block to "easily" develop algorithms that extract useful data from the intermediate layer
### Definition of ... | 1,736,952,829,000 | [
"planning",
"acting",
"system"
] | Feature Request | [
"code/mapping/ext_modules/mapping_common/transform.py:Vector2.__add__"
] | [
"code/mapping/ext_modules/mapping_common/entity.py:Entity.to_shapely",
"code/mapping/ext_modules/mapping_common/map.py:Map.build_tree",
"code/mapping/ext_modules/mapping_common/map.py:Map.filtered",
"code/mapping/ext_modules/mapping_common/map.py:MapTree.__init__",
"code/mapping/ext_modules/mapping_common/m... | 1 | |
nautobot/nautobot | nautobot__nautobot-6364 | c86be9c46f5f0ae8f9786fe059175d81c0463e89 | diff --git a/changes/6297.fixed b/changes/6297.fixed
index f889dcbefda..e6e279aba67 100644
--- a/changes/6297.fixed
+++ b/changes/6297.fixed
@@ -1,2 +1,4 @@
Fixed paginator widget to display the current selected `per_page` value even if it's not one of the `PER_PAGE_DEFAULTS` options.
Added pagination of related-obje... | diff --git a/nautobot/extras/tests/test_dynamicgroups.py b/nautobot/extras/tests/test_dynamicgroups.py
index ccf7993186b..c2cd6974f77 100644
--- a/nautobot/extras/tests/test_dynamicgroups.py
+++ b/nautobot/extras/tests/test_dynamicgroups.py
@@ -304,6 +304,10 @@ def test_static_member_operations(self):
sg.add_m... | Resources that are "too large" cause the nautobot process to crash
### Environment
* Nautobot version (Docker tag too if applicable): 2.3.4 (Docker tag 'latest')
* Python version: 3.10.15
* Database platform, version: PostgreSQL 15.1
* Middleware(s): None
### Steps to Reproduce
1. Create 200k devices
2. Create... | Thanks for the report and the investigation.
I suspect that in the DynamicGroup case it's the update to the members cache that's causing an out-of-memory condition, specifically `_set_members()`:
```py
to_remove = self.members.exclude(pk__in=value.values_list("pk", flat=True))
self._remo... | 1,728,660,712,000 | [] | Performance Issue | [
"nautobot/core/signals.py:invalidate_max_depth_cache",
"nautobot/extras/models/groups.py:DynamicGroup._set_members",
"nautobot/extras/models/groups.py:DynamicGroup.add_members",
"nautobot/extras/models/groups.py:DynamicGroup._add_members",
"nautobot/extras/models/groups.py:DynamicGroup.remove_members",
"n... | [
"nautobot/core/models/tree_queries.py:TreeModel.__init_subclass__"
] | 8 |
intelowlproject/GreedyBear | intelowlproject__GreedyBear-397 | a7912a75e28a43e1d5c10751c111948a96faaf5b | diff --git a/api/serializers.py b/api/serializers.py
index 937ce0f..4d2d676 100644
--- a/api/serializers.py
+++ b/api/serializers.py
@@ -47,14 +47,10 @@ def validate(self, data):
return data
-def feed_type_validation(feed_type):
- feed_choices = ["log4j", "cowrie", "all"]
- generalHoneypots = General... | diff --git a/tests/test_serializers.py b/tests/test_serializers.py
index b8e2dbc..c538a1b 100644
--- a/tests/test_serializers.py
+++ b/tests/test_serializers.py
@@ -31,13 +31,20 @@ def test_valid_fields(self):
for element in valid_data_choices:
data_ = {"feed_type": element[0], "attack_type": el... | Feed API slow
Querying the API, e.g. `/api/feeds/all/all/recent.json`, takes a while. Requesting the feed on my GreedyBear instances with `curl -so /dev/null -w '%{time_total}\n' http://HOSTNAME/api/feeds/all/all/recent.json` takes about 6 seconds. This also produces significant CPU load on the server. However the Hon... | 1,733,767,958,000 | [] | Performance Issue | [
"api/serializers.py:feed_type_validation",
"api/serializers.py:FeedsSerializer.validate_feed_type",
"api/serializers.py:FeedsResponseSerializer.validate_feed_type",
"api/views.py:feeds",
"api/views.py:feeds_pagination",
"api/views.py:get_queryset",
"api/views.py:feeds_response"
] | [] | 7 | |
tjorim/pyrail | tjorim__pyrail-23 | 26c45e20827f647fe52196255b3edb6aa1f7a9a3 | diff --git a/examples/composition_from_docs_not_real.json b/examples/composition_from_docs_not_real.json
new file mode 100644
index 0000000..3fe3f1c
--- /dev/null
+++ b/examples/composition_from_docs_not_real.json
@@ -0,0 +1,131 @@
+{
+ "version": "1.1",
+ "timestamp": "1581856899",
+ "composition": {
+ ... | diff --git a/tests/test_irail.py b/tests/test_irail.py
index e108f34..b172338 100644
--- a/tests/test_irail.py
+++ b/tests/test_irail.py
@@ -1,11 +1,12 @@
"""Unit tests for the iRail API wrapper."""
+
from datetime import datetime, timedelta
from unittest.mock import AsyncMock, patch
from aiohttp import ClientSes... | Add dataclasses
We will be using mashumaro.
| First part was done in #22. | 1,736,199,500,000 | [] | Feature Request | [
"pyrail/irail.py:iRail._add_etag_header",
"pyrail/irail.py:iRail._validate_date",
"pyrail/irail.py:iRail._validate_time",
"pyrail/irail.py:iRail._validate_params",
"pyrail/irail.py:iRail._handle_response",
"pyrail/irail.py:iRail._do_request",
"pyrail/irail.py:iRail.get_stations",
"pyrail/irail.py:iRai... | [] | 8 |
OpenVoiceOS/ovos-core | OpenVoiceOS__ovos-core-617 | e87af007f7d223d673616eadfd056c3d614e317b | diff --git a/ovos_core/skill_installer.py b/ovos_core/skill_installer.py
index de847d258329..3f4f5304fbd3 100644
--- a/ovos_core/skill_installer.py
+++ b/ovos_core/skill_installer.py
@@ -1,15 +1,17 @@
import enum
+import shutil
import sys
from importlib import reload
from os.path import exists
from subprocess impo... | feat: constraints files
the skill installer via bus events (e.g., ggwave) should use the constraints files from [ovos-releases](https://github.com/OpenVoiceOS/ovos-releases)
it currently has a [hardcoded path](https://github.com/OpenVoiceOS/ovos-core/blob/dev/ovos_core/skill_installer.py#L25) that was inherited from m... | 1,733,171,014,000 | [
"fix"
] | Feature Request | [
"ovos_core/skill_installer.py:SkillsStore.pip_install",
"ovos_core/skill_installer.py:SkillsStore.pip_uninstall",
"ovos_core/skill_installer.py:SkillsStore.validate_skill"
] | [
"ovos_core/skill_installer.py:SkillsStore.validate_constrainsts"
] | 3 | ||
UXARRAY/uxarray | UXARRAY__uxarray-1144 | 81ac5bd50ed2a2faca24008dc261029be30c37c4 | diff --git a/uxarray/formatting_html.py b/uxarray/formatting_html.py
index 8f15978b5..733232f5f 100644
--- a/uxarray/formatting_html.py
+++ b/uxarray/formatting_html.py
@@ -26,7 +26,7 @@ def _grid_sections(grid, max_items_collapse=15):
spherical_coordinates = list(
[coord for coord in ugrid.SPHERICAL_COOR... | Slowdown in `__repr__` due to `n_nodes_per_face` construction triggering
When printing a `Grid`, there is an unnecessary trigger to the `n_nodes_per_face` variable, which has a noticeable impact on performance for larger grids.
| 1,738,359,935,000 | [] | Performance Issue | [
"uxarray/formatting_html.py:_grid_sections",
"uxarray/grid/grid.py:Grid.__repr__",
"uxarray/grid/grid.py:Grid.face_node_connectivity"
] | [] | 3 | ||
UXARRAY/uxarray | UXARRAY__uxarray-1151 | 9926057173e143e8170e3337dd1b7d39a4d1a961 | diff --git a/uxarray/grid/slice.py b/uxarray/grid/slice.py
index 94e8e0eb8..8cce19d15 100644
--- a/uxarray/grid/slice.py
+++ b/uxarray/grid/slice.py
@@ -111,18 +111,23 @@ def _slice_face_indices(
node_indices = np.unique(grid.face_node_connectivity.values[face_indices].ravel())
node_indices = node_indices[nod... | diff --git a/test/test_subset.py b/test/test_subset.py
index 252719500..71be6dff5 100644
--- a/test/test_subset.py
+++ b/test/test_subset.py
@@ -24,14 +24,20 @@ def test_grid_face_isel():
for grid_path in GRID_PATHS:
grid = ux.open_grid(grid_path)
+ grid_contains_edge_node_conn = "edge_node_conne... | `edge_node_connectivity` unnecessarily constructed when slicing
When doing `Grid.isel()` with `n_face` or `n_node`, the `edge_node_connectivity` is constructed when the grid does not currently have any edges defined, which is not necessary and impacts the execution time. See also #1138
| 1,738,693,587,000 | [] | Performance Issue | [
"uxarray/grid/slice.py:_slice_face_indices",
"uxarray/subset/dataarray_accessor.py:DataArraySubsetAccessor.bounding_circle",
"uxarray/subset/dataarray_accessor.py:DataArraySubsetAccessor.nearest_neighbor",
"uxarray/subset/grid_accessor.py:GridSubsetAccessor.bounding_circle",
"uxarray/subset/grid_accessor.py... | [] | 5 | |
nautobot/nautobot | nautobot__nautobot-6837 | 9302a6072ce3c5e97ca9a673fb7b96fc7eaccb29 | diff --git a/changes/6767.added b/changes/6767.added
new file mode 100644
index 0000000000..f9de871e1a
--- /dev/null
+++ b/changes/6767.added
@@ -0,0 +1,1 @@
+Added cacheable `CustomField.choices` property for retrieving the list of permissible values for a select/multiselect Custom Field.
diff --git a/changes/6767.fix... | diff --git a/nautobot/extras/tests/test_customfields.py b/nautobot/extras/tests/test_customfields.py
index aaa3dcc4ed..ef2d6a1a69 100644
--- a/nautobot/extras/tests/test_customfields.py
+++ b/nautobot/extras/tests/test_customfields.py
@@ -53,12 +53,12 @@ def test_immutable_fields(self):
instance.refresh_from... | Hundreds of SQL queries made when fetching device details page with ~ 20 custom fields
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, plea... | Hi @progala, thank you for submitting the issue. Do you have a high number of device components attached to the device as well (ConsoleServerPort, ConsolePort, Interface, VRFs, etc.)? I want to make sure whether querying those device components is causing the slowdown instead of the custom fields. It would be really hel... | 1,738,259,134,000 | [] | Performance Issue | [
"nautobot/core/views/utils.py:get_csv_form_fields_from_serializer_class",
"nautobot/extras/models/customfields.py:CustomFieldManager.get_for_model",
"nautobot/extras/models/customfields.py:CustomField.to_form_field",
"nautobot/extras/models/customfields.py:CustomField.validate"
] | [
"nautobot/extras/models/customfields.py:CustomField.choices_cache_key",
"nautobot/extras/models/customfields.py:CustomField.choices",
"nautobot/extras/signals.py:invalidate_choices_cache"
] | 4 |
spacetelescope/stcal | spacetelescope__stcal-337 | 9bd365de4afd442f016103e3db3b8944d22cf769 | diff --git a/changes/337.general.rst b/changes/337.general.rst
new file mode 100644
index 00000000..f73f7e3a
--- /dev/null
+++ b/changes/337.general.rst
@@ -0,0 +1,1 @@
+Performance improvements for jump step targeting both runtime and memory consumption. Results are mostly identical, but there are some differences in ... | diff --git a/tests/test_jump.py b/tests/test_jump.py
index c620b374..734b4075 100644
--- a/tests/test_jump.py
+++ b/tests/test_jump.py
@@ -10,8 +10,7 @@
flag_large_events,
point_inside_ellipse,
find_first_good_group,
- detect_jumps_data,
- find_last_grp
+ detect_jumps_data
)
DQFLAGS = {
@@ -... | Jump performance
Resolves [JP-3697](https://jira.stsci.edu/browse/JP-3697)
Closes #337
A number of changes to `twopoint_difference.py` and `jump.py` reduce memory usage by a factor of about 2 and improve runtime by a factor of anywhere from 3 to 20. Tested output files (noted below) are unchanged, except for MIR... | 1,738,693,487,000 | [
"testing",
"jump"
] | Performance Issue | [
"src/stcal/jump/jump.py:extend_saturation",
"src/stcal/jump/jump.py:extend_ellipses",
"src/stcal/jump/jump.py:find_last_grp",
"src/stcal/jump/jump.py:find_faint_extended",
"src/stcal/jump/jump.py:get_bigcontours",
"src/stcal/jump/jump.py:diff_meddiff_int",
"src/stcal/jump/jump.py:diff_meddiff_grp",
"s... | [
"src/stcal/jump/jump.py:ellipse_subim",
"src/stcal/jump/jump.py:convolve_fast",
"src/stcal/jump/twopoint_difference.py:propagate_flags"
] | 8 | |
CWorthy-ocean/roms-tools | CWorthy-ocean__roms-tools-227 | 67df5fc400b1daeb903ba9b2efe11899d8b55dfa | diff --git a/docs/releases.md b/docs/releases.md
index 587315aa..1d5430af 100644
--- a/docs/releases.md
+++ b/docs/releases.md
@@ -10,10 +10,15 @@
### Internal Changes
+* Parallelize computation of radiation correction, leading to a hugely improved memory footprint for surface forcing generation ([#227](https://gi... | diff --git a/roms_tools/tests/test_setup/test_utils.py b/roms_tools/tests/test_setup/test_utils.py
index de4faad6..1012b839 100644
--- a/roms_tools/tests/test_setup/test_utils.py
+++ b/roms_tools/tests/test_setup/test_utils.py
@@ -15,6 +15,7 @@ def test_interpolate_from_climatology(use_dask):
climatology = ERA5C... | Radiation correction is not done in parallel (even when using Dask)
## Issue
If the user chooses to apply a radiation correction for the surface forcing while using Dask via
```
SurfaceForcing(
    grid=grid,
    ...,
    correct_radiation=True,
    use_dask=True
)
```
this radiation correction is not done in parallel... | 1,738,006,985,000 | [] | Performance Issue | [
"roms_tools/setup/datasets.py:_select_relevant_times",
"roms_tools/setup/surface_forcing.py:SurfaceForcing._set_variable_info",
"roms_tools/setup/surface_forcing.py:SurfaceForcing._apply_correction",
"roms_tools/setup/surface_forcing.py:SurfaceForcing._validate",
"roms_tools/setup/utils.py:interpolate_from_... | [] | 6 | |
twisted/klein | twisted__klein-773 | 04db6d26fd6f8dc749868268518957c8238c09d6 | diff --git a/src/klein/_resource.py b/src/klein/_resource.py
index b12f711c..4107357b 100644
--- a/src/klein/_resource.py
+++ b/src/klein/_resource.py
@@ -91,12 +91,11 @@ def extractURLparts(request: IRequest) -> Tuple[str, str, int, str, str]:
server_port = request.getHost().port
else:
server_po... | Klein adds significant performance overhead over a twisted.web server
A minimal hello world benchmark, just routing "/" and returning a string, is half the speed of the equivalent minimal twisted.web server.
I will start investigating where the performance overhead is, and hopefully find some places to optimize.
| 1,726,598,482,000 | [] | Performance Issue | [
"src/klein/_resource.py:extractURLparts",
"src/klein/_resource.py:KleinResource.render"
] | [] | 2 | ||
traceloop/openllmetry | traceloop__openllmetry-2577 | 847be92eb31f6815848e70a1d2cafb77959fe21e | diff --git a/packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/utils.py b/packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/utils.py
index e283307b9..f8d750d4c 100644
--- a/packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/utils.py
+... | 🐛 Bug Report: Performance issues with opentelemetry-instrumentation-openai
### Which component is this bug for?
Anthropic Instrumentation
### 📜 Description
Hi! First, thank you for opentelemetry-instrumentation-openai. It makes my life easier :-)
But, it's computationally expensive, significantly slowing the serv... | Thanks @Nagasaki45 for reporting! We'll investigate what's causing it. Since you're using streaming I have an assumption it's related to the token count enrichment which may affect latency (but some users find useful!). Can you try disabling it and see the effect? (Setting `enrich_token_usage` to false in the initializ... | 1,738,018,114,000 | [
"python",
"size:S",
"lgtm"
] | Performance Issue | [
"packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/utils.py:model_as_dict",
"packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/shared/__init__.py:model_as_dict",
"packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/util... | [] | 3 | |
equinor/ert | equinor__ert-9942 | 978727edea92bf79848f32765a95ef6d4938e9d0 | diff --git a/src/everest/bin/config_branch_script.py b/src/everest/bin/config_branch_script.py
index 5df90cb9010..d8ad052e66f 100644
--- a/src/everest/bin/config_branch_script.py
+++ b/src/everest/bin/config_branch_script.py
@@ -66,7 +66,7 @@ def _updated_initial_guess(conf_controls, opt_controls):
conf_controls =... | diff --git a/test-data/everest/math_func/config_advanced.yml b/test-data/everest/math_func/config_advanced.yml
index b9952f51557..96a7b24015f 100644
--- a/test-data/everest/math_func/config_advanced.yml
+++ b/test-data/everest/math_func/config_advanced.yml
@@ -20,9 +20,9 @@ objective_functions:
input_constraints:
... | Make control naming consistent
**Is your feature request related to a problem? Please describe.**
The variables that are optimized (controls in Everest nomenclature) are specified in a nested fashion: We have controls, which defines groups of variables, which in turn can have multiple components labeled with an integer... | 1,738,582,801,000 | [] | Feature Request | [
"src/everest/bin/config_branch_script.py:_updated_initial_guess",
"src/everest/config/everest_config.py:EverestConfig.validate_input_constraints_weight_definition",
"src/everest/config/everest_config.py:EverestConfig.formatted_control_names",
"src/everest/optimizer/everest2ropt.py:_parse_input_constraints",
... | [
"src/everest/config/everest_config.py:EverestConfig.formatted_control_names_dotdash"
] | 5 | |
equinor/ert | equinor__ert-9915 | 15f60779e8a00be8c510b7c9b7d130c16fc3520c | diff --git a/src/ert/config/design_matrix.py b/src/ert/config/design_matrix.py
index 1ecd5995bd8..0df73886271 100644
--- a/src/ert/config/design_matrix.py
+++ b/src/ert/config/design_matrix.py
@@ -191,7 +191,7 @@ def read_design_matrix(
if error_list := DesignMatrix._validate_design_matrix(design_matrix_df):... | diff --git a/tests/ert/unit_tests/gui/ide/test_range_string_argument.py b/tests/ert/unit_tests/gui/ide/test_range_string_argument.py
index 0a342924b48..af21dfd0cad 100644
--- a/tests/ert/unit_tests/gui/ide/test_range_string_argument.py
+++ b/tests/ert/unit_tests/gui/ide/test_range_string_argument.py
@@ -1,4 +1,5 @@
-fr... | Validate that DESIGN_MATRIX is correct upon initialization
The validation should include checking whether the design sheet part handles `NUM_REALIZATIONS` correctly, which means whether all the design values combined with the default sheet will handle the total number of realizations.
- `REAL` column (if provided) sh... | Currently design2params will fail if you run more realizations than entries in design matrix. Maybe we should do the same, just ignore `NUM_REALIZATIONS` if you choose design matrix and use number of realizations specified in the design matrix.
For instance if you only specify realization 1,4,7 in the design matrix, t... | 1,738,235,554,000 | [
"release-notes:improvement"
] | Feature Request | [
"src/ert/config/design_matrix.py:DesignMatrix.read_design_matrix",
"src/ert/config/design_matrix.py:DesignMatrix._validate_design_matrix",
"src/ert/gui/simulation/ensemble_experiment_panel.py:EnsembleExperimentPanel.__init__",
"src/ert/validation/active_range.py:ActiveRange.__init__"
] | [
"src/ert/validation/active_range.py:ActiveRange.validate_range_is_subset",
"src/ert/validation/range_string_argument.py:RangeSubsetStringArgument.__init__",
"src/ert/validation/range_string_argument.py:RangeSubsetStringArgument.validate"
] | 4 |
equinor/ert | equinor__ert-9771 | 61a6776f88663e06751878285408ae250853339e | diff --git a/src/ert/gui/simulation/experiment_panel.py b/src/ert/gui/simulation/experiment_panel.py
index 7be4a058423..3de30c78a4d 100644
--- a/src/ert/gui/simulation/experiment_panel.py
+++ b/src/ert/gui/simulation/experiment_panel.py
@@ -192,7 +192,7 @@ def addExperimentConfigPanel(
experiment_type = panel.... | diff --git a/tests/ert/ui_tests/gui/conftest.py b/tests/ert/ui_tests/gui/conftest.py
index bfdddf77a97..2100ced7685 100644
--- a/tests/ert/ui_tests/gui/conftest.py
+++ b/tests/ert/ui_tests/gui/conftest.py
@@ -239,7 +239,7 @@ def func(experiment_mode, gui, click_done=True):
assert isinstance(experiment_panel, E... | Mark ES-MDA as recommended
The ES-MDA algorithm is the recommended algorithm for multi-iteration use. We should mark it as such. This should not affect the logging and usage statistics of methods.
| 1,737,019,577,000 | [
"release-notes:user-impact"
] | Feature Request | [
"src/ert/gui/simulation/experiment_panel.py:ExperimentPanel.addExperimentConfigPanel",
"src/ert/gui/simulation/experiment_panel.py:ExperimentPanel.get_current_experiment_type",
"src/ert/run_models/iterated_ensemble_smoother.py:IteratedEnsembleSmoother.description",
"src/ert/run_models/multiple_data_assimilati... | [
"src/ert/run_models/base_run_model.py:BaseRunModel.display_name",
"src/ert/run_models/multiple_data_assimilation.py:MultipleDataAssimilation.display_name"
] | 4 | |
hyeneung/tech-blog-hub-site | hyeneung__tech-blog-hub-site-49 | 940338fead4a77132a4d78b42232265301a20b83 | diff --git a/serverless/apis/recommend/lambda_function.py b/serverless/apis/recommend/lambda_function.py
index 3e72617..f61f099 100644
--- a/serverless/apis/recommend/lambda_function.py
+++ b/serverless/apis/recommend/lambda_function.py
@@ -1,32 +1,24 @@
import json
import os
from typing import Dict, Any, Tuple
-fro... | [Feature] Migrate from OpenSearch to Elasticsearch to improve Search Performance
## Work Details
- Current performance with AWS OpenSearch Free Tier shows query times averaging 1.5 seconds.
- Plan to migrate to a self-hosted Elasticsearch instance on EC2 (free tier) using Docker.
- Expected outcome: Reduce query time... | 1,736,058,281,000 | [
":star2: feature"
] | Performance Issue | [
"serverless/apis/recommend/opensearch/article_analyze.py:get_db_dataframe",
"serverless/apis/recommend/opensearch/article_analyze.py:get_contents_base_recommendations",
"serverless/apis/recommend/recommend.py:get_recommend_articles_by_url"
] | [] | 3 | ||
waterloo-rocketry/omnibus | waterloo-rocketry__omnibus-343 | 8166561752e3e3621c9ed2967b50463d68b26282 | diff --git a/sinks/dashboard/dashboard.py b/sinks/dashboard/dashboard.py
index 42f85079..05165aaf 100644
--- a/sinks/dashboard/dashboard.py
+++ b/sinks/dashboard/dashboard.py
@@ -3,6 +3,7 @@
import sys
import json
import signal
+import time
import pyqtgraph
from pyqtgraph.Qt.QtCore import Qt, QTimer
@@ -30,7 +31... | [Feature request (NT)] - Better Timing Handling
# Feature Description
- **Describe the feature**
The way that the omnibus dashboard handles async timing is currently rather bad. It relies on the assumption that a large amount of data is flowing through the bus, which can be a pain when testing and is not reliable.
I ... | 1,737,219,574,000 | [] | Feature Request | [
"sinks/dashboard/dashboard.py:Dashboard.__init__",
"sinks/dashboard/dashboard.py:Dashboard.change_detector",
"sinks/dashboard/dashboard.py:Dashboard.load",
"sinks/dashboard/dashboard.py:Dashboard.update",
"sinks/dashboard/items/periodic_can_sender.py:PeriodicCanSender.__init__",
"sinks/dashboard/items/per... | [
"sinks/dashboard/items/periodic_can_sender.py:PeriodicCanSender.on_clock_update",
"sinks/dashboard/publisher.py:Publisher.subscribe_clock",
"sinks/dashboard/publisher.py:Publisher.update_clock"
] | 9 | ||
aio-libs/aiohttp | aio-libs__aiohttp-9692 | dd0b6e37339bd91b2ba666208d0c92f314252f07 | diff --git a/CHANGES/9692.breaking.rst b/CHANGES/9692.breaking.rst
new file mode 100644
index 00000000000..e0fdae11416
--- /dev/null
+++ b/CHANGES/9692.breaking.rst
@@ -0,0 +1,1 @@
+Changed ``ClientRequest.request_info`` to be a `NamedTuple` to improve client performance -- by :user:`bdraco`.
diff --git a/aiohttp/clien... | RequestInfo is documented to be a namedtuple but it's actually a dataclass or attrs
https://docs.aiohttp.org/en/stable/client_reference.html#aiohttp.ClientResponse.request_info
It would be about 2x as fast to create if it were a namedtuple though, and although dataclasses are faster to access it's rarely accessed so n... | <img width="1500" alt="Screenshot 2024-11-05 at 8 42 07 PM" src="https://github.com/user-attachments/assets/92add3df-591f-4bfd-a138-db724e9414b6">
Production shows quite a bit of time making this one
I really doubt if we have the real bottleneck here. Attrs/dataclasses have a better human-readable interface than named... | 1,730,950,174,000 | [
"bot:chronographer:provided",
"backport-3.11"
] | Performance Issue | [
"aiohttp/client_reqrep.py:ClientRequest.request_info"
] | [] | 1 | |
home-assistant/core | home-assistant__core-136739 | a8c382566cae06c43a33c4d3d44c9bc92ef7b4d8 | diff --git a/homeassistant/components/fritzbox/climate.py b/homeassistant/components/fritzbox/climate.py
index d5a81fdef1a3d3..87a87ac691f58b 100644
--- a/homeassistant/components/fritzbox/climate.py
+++ b/homeassistant/components/fritzbox/climate.py
@@ -141,7 +141,7 @@ async def async_set_temperature(self, **kwargs: A... | diff --git a/tests/components/fritzbox/test_climate.py b/tests/components/fritzbox/test_climate.py
index 29f5742216fb84..c7896920ce9364 100644
--- a/tests/components/fritzbox/test_climate.py
+++ b/tests/components/fritzbox/test_climate.py
@@ -273,20 +273,20 @@ async def test_update_error(hass: HomeAssistant, fritz: Moc... | Fritz!box sockets switch very slowly
### The problem
If I want to switch on a Fritz!Box socket via Home Assistant, it takes a very long time until it is switched on and the entity is updated.
### What version of Home Assistant Core has the issue?
core-2024.2.5
### What was the last working version of Home Assistant... |
Hey there @mib1185, @flabbamann, mind taking a look at this issue as it has been labeled with an integration (`fritzbox`) you are listed as a [code owner](https://github.com/home-assistant/core/blob/dev/CODEOWNERS#L446) for? Thanks!
<details>
<summary>Code owner commands</summary>
Code owners of `fritzbox` can trig... | 1,738,085,460,000 | [
"cla-signed",
"integration: fritzbox",
"small-pr",
"has-tests",
"by-code-owner",
"bugfix",
"Quality Scale: No score"
] | Performance Issue | [
"homeassistant/components/fritzbox/climate.py:FritzboxThermostat.async_set_temperature",
"homeassistant/components/fritzbox/cover.py:FritzboxCover.async_open_cover",
"homeassistant/components/fritzbox/cover.py:FritzboxCover.async_close_cover",
"homeassistant/components/fritzbox/cover.py:FritzboxCover.async_se... | [] | 9 |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11795 | f6c73aad5f1a67544bea137ebd9d1e22e0e56567 | diff --git a/yt_dlp/extractor/globo.py b/yt_dlp/extractor/globo.py
index d72296be6e0c..7acbd2820c03 100644
--- a/yt_dlp/extractor/globo.py
+++ b/yt_dlp/extractor/globo.py
@@ -1,32 +1,48 @@
-import base64
-import hashlib
import json
-import random
import re
+import uuid
from .common import InfoExtractor
-from ..net... | [Globo] Unable to download JSON metadata: HTTP Error 404: Not Found
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified ... | Same thing here.
It happens also for videos that do not require an account, for example https://globoplay.globo.com/v/12458100/.
Did you find any fix? I have the same issue and don't know what to do anymore, it simply doesn't work
I don't know anything about Python or how this process worked but I tried to analyze w... | 1,734,053,056,000 | [
"site-bug"
] | Bug Report | [
"yt_dlp/extractor/globo.py:GloboIE._real_extract"
] | [] | 1 | |
vllm-project/vllm | vllm-project__vllm-9338 | 8e3e7f271326e8cdb32c8f9581b2f98013a567c7 | diff --git a/benchmarks/benchmark_serving.py b/benchmarks/benchmark_serving.py
index 68f1e221c4bfb..0d205014b15bf 100644
--- a/benchmarks/benchmark_serving.py
+++ b/benchmarks/benchmark_serving.py
@@ -53,6 +53,8 @@
except ImportError:
from argparse import ArgumentParser as FlexibleArgumentParser
+MILLISECONDS_T... | [RFC]: Add Goodput Metric to Benchmark Serving
### Motivation.
Currently, all metrics vLLM has are more from the perspectives of GenAI Service Providers.
In order to provide a measurement from the perspectives of GenAI Service Users, we, from [Hao AI Lab](https://hao-ai-lab.github.io/home/#:~:text=Welcome%20to%2... | Hey @Imss27 ! Thanks for making this RFC!
I've read the doc and overall it makes sense to me, so please feel free to implement the changes and ping me when the PR is ready for review!
One suggestion I'd make is that I see you're using `inter_token_latency:30` in the example, but you probably want to use `TPOT` in... | 1,728,901,192,000 | [
"ready"
] | Feature Request | [
"benchmarks/benchmark_serving.py:calculate_metrics",
"benchmarks/benchmark_serving.py:benchmark",
"benchmarks/benchmark_serving.py:main"
] | [
"benchmarks/benchmark_serving.py:check_goodput_args",
"benchmarks/benchmark_serving.py:parse_goodput"
] | 3 | |
celery/django-celery-beat | celery__django-celery-beat-835 | 17d87f4951d42498c9ca9aaa23ecbd95f2a7dbc3 | diff --git a/django_celery_beat/schedulers.py b/django_celery_beat/schedulers.py
index ce46b661..f2ea4bc9 100644
--- a/django_celery_beat/schedulers.py
+++ b/django_celery_beat/schedulers.py
@@ -11,14 +11,16 @@
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
from django.db impo... | diff --git a/t/unit/test_schedulers.py b/t/unit/test_schedulers.py
index 544ac4bd..ef081e77 100644
--- a/t/unit/test_schedulers.py
+++ b/t/unit/test_schedulers.py
@@ -456,22 +456,67 @@ def setup_scheduler(self, app):
self.m4.save()
self.m4.refresh_from_db()
- dt_aware = make_aware(datetime(da... | Scheduler is slow when dealing with lots of tasks
### Summary:
When there are a few thousand tasks in the config, beat becomes completely unreliable and unstable. Reconfiguration, syncing takes a lot of time, and with frequent schedule changes, hours can pass by without sending any task to the queue.
* Celery Ver... | I am open to review and accept performance related improvement
| 1,735,849,295,000 | [] | Performance Issue | [
"django_celery_beat/schedulers.py:DatabaseScheduler.all_as_schedule"
] | [
"django_celery_beat/schedulers.py:DatabaseScheduler.get_excluded_hours_for_crontab_tasks"
] | 1 |
JarbasHiveMind/HiveMind-core | JarbasHiveMind__HiveMind-core-9 | 6933f090e6498c11290815b1b0e425990ff489e2 | diff --git a/README.md b/README.md
index 49de279..31a787c 100644
--- a/README.md
+++ b/README.md
@@ -25,7 +25,7 @@ Demo videos in [youtube](https://www.youtube.com/channel/UCYoV5kxp2zrH6pnoqVZpKS
---
-## 🚀 Getting Started
+## 🚀 Quick Start
To get started, HiveMind Core provides a command-line interface (CLI) ... | Feature request: option to rename client
It would be nice if the client could be renamed afterwards.
| 1,734,835,068,000 | [
"feature"
] | Feature Request | [
"hivemind_core/scripts.py:allow_msg",
"hivemind_core/scripts.py:delete_client",
"hivemind_core/scripts.py:blacklist_skill",
"hivemind_core/scripts.py:unblacklist_skill",
"hivemind_core/scripts.py:blacklist_intent",
"hivemind_core/scripts.py:unblacklist_intent"
] | [
"hivemind_core/scripts.py:prompt_node_id",
"hivemind_core/scripts.py:rename_client",
"hivemind_core/scripts.py:blacklist_msg"
] | 6 | ||
Standard-Labs/real-intent | Standard-Labs__real-intent-27 | ed2a2cf98cbce1f571212aadd0cebf07f0bb5bd9 | diff --git a/bigdbm/validate/email.py b/bigdbm/validate/email.py
index 1afb4a7..417e45a 100644
--- a/bigdbm/validate/email.py
+++ b/bigdbm/validate/email.py
@@ -1,6 +1,8 @@
"""Validate emails using MillionVerifier."""
import requests
+from concurrent.futures import ThreadPoolExecutor
+
from bigdbm.schemas import M... | Email validation may be slow
Have a hunch that email validation with MillionVerifier is very slow.
Use threads to concurrently process emails and phones (check rate limits on Numverify and MillionVerifier).
| 1,723,076,595,000 | [] | Performance Issue | [
"bigdbm/validate/email.py:EmailValidator.__init__",
"bigdbm/validate/email.py:EmailValidator.validate",
"bigdbm/validate/phone.py:PhoneValidator.__init__",
"bigdbm/validate/phone.py:PhoneValidator.validate"
] | [] | 4 | ||
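The patch above moves the per-email MillionVerifier calls onto a `ThreadPoolExecutor` so the network latency of each blocking request overlaps. A minimal stdlib sketch of that pattern — `validate_email_stub` is a stand-in assumption, since the real validators call an HTTP API:

```python
from concurrent.futures import ThreadPoolExecutor

def validate_email_stub(email):
    # Stand-in for a blocking HTTP call to a verification service.
    return "@" in email and "." in email.rsplit("@", 1)[-1]

def validate_all(emails, validator=validate_email_stub, max_workers=10):
    # Threads overlap the network wait of each blocking request, so the
    # total time approaches the slowest single call, not the sum.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        keep = list(pool.map(validator, emails))   # order is preserved
    return [e for e, ok in zip(emails, keep) if ok]
```

`max_workers` is where the rate limits mentioned in the issue would be enforced.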
Standard-Labs/real-intent | Standard-Labs__real-intent-102 | 708f4a641280c351e6778b0172e4b8aa15ec9a49 | diff --git a/real_intent/deliver/followupboss/vanilla.py b/real_intent/deliver/followupboss/vanilla.py
index 2db5ad1..6795ebb 100644
--- a/real_intent/deliver/followupboss/vanilla.py
+++ b/real_intent/deliver/followupboss/vanilla.py
@@ -3,6 +3,7 @@
from enum import StrEnum
import base64
+from concurrent.futures imp... | Improve integration efficiency
Multithreading
| 1,727,040,247,000 | [] | Performance Issue | [
"real_intent/deliver/followupboss/vanilla.py:FollowUpBossDeliverer._deliver",
"real_intent/deliver/kvcore/__init__.py:KVCoreDeliverer._deliver"
] | [] | 2 | ||
vllm-project/vllm | vllm-project__vllm-7874 | 1248e8506a4d98b4f15cbfe729cf2af42fb4223a | diff --git a/vllm/core/scheduler.py b/vllm/core/scheduler.py
index 4c2f715820317..81c78bda3b505 100644
--- a/vllm/core/scheduler.py
+++ b/vllm/core/scheduler.py
@@ -1027,16 +1027,21 @@ def _schedule_chunked_prefill(self) -> SchedulerOutputs:
# Update waiting requests.
self.waiting.extendleft(running... | diff --git a/tests/basic_correctness/test_chunked_prefill.py b/tests/basic_correctness/test_chunked_prefill.py
index fc6f829c37b06..a63ac380e8598 100644
--- a/tests/basic_correctness/test_chunked_prefill.py
+++ b/tests/basic_correctness/test_chunked_prefill.py
@@ -116,6 +116,9 @@ def test_models_with_fp8_kv_cache(
... | [Performance]: vllm 0.5.4 with enable_chunked_prefill =True, throughput is slightly lower than 0.5.3~0.5.0.
### Your current environment
<details>
<summary>environment</summary>
# Hardware & Nvidia driver & OS
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.107.02... | update
I managed to compile vllm 0.5.4 with torch == 2.3.1 xformers == 0.0.27 vllm-flash-attn == 2.5.9.post1 (from vllm 0.5.3 requirements ), the bug is still there.
The bug only affects chunked_prefill, ever... | 1,724,682,556,000 | [
"ready"
] | Performance Issue | [
"vllm/core/scheduler.py:Scheduler._schedule_chunked_prefill"
] | [] | 1 |
NCSU-High-Powered-Rocketry-Club/AirbrakesV2 | NCSU-High-Powered-Rocketry-Club__AirbrakesV2-151 | 2d85a6c6bf946636c5e6b4aa74388b9d376289f3 | diff --git a/airbrakes/airbrakes.py b/airbrakes/airbrakes.py
index a4519053..f52d7200 100644
--- a/airbrakes/airbrakes.py
+++ b/airbrakes/airbrakes.py
@@ -1,7 +1,6 @@
"""Module which provides a high level interface to the air brakes system on the rocket."""
import time
-from collections import deque
from typing im... | diff --git a/tests/auxil/utils.py b/tests/auxil/utils.py
index 3db14c63..24b1d76f 100644
--- a/tests/auxil/utils.py
+++ b/tests/auxil/utils.py
@@ -3,6 +3,7 @@
from airbrakes.telemetry.packets.apogee_predictor_data_packet import ApogeePredictorDataPacket
from airbrakes.telemetry.packets.context_data_packet import Cont... | Refactor logging to be more clear and performant
After the changes in #146 in the logger to get #147 resolved, our logging process has slightly slowed down due to use of `msgspec.structs.asdict()` instead of `msgspec.to_builtins()`. We can still keep up with the IMU process, but the margin has reduced.
I propose the f... | 1,738,633,910,000 | [
"enhancement"
] | Performance Issue | [
"airbrakes/airbrakes.py:AirbrakesContext.__init__",
"airbrakes/airbrakes.py:AirbrakesContext.update",
"airbrakes/hardware/base_imu.py:BaseIMU.get_imu_data_packets",
"airbrakes/telemetry/logger.py:Logger.__init__",
"airbrakes/telemetry/logger.py:Logger._prepare_log_dict",
"airbrakes/telemetry/logger.py:Log... | [
"airbrakes/telemetry/logger.py:Logger._prepare_logger_packets"
] | 9 | |
Lightning-AI/lightning-thunder | Lightning-AI__lightning-thunder-1747 | 8163863787a5e2b20834f4751ba00b968c7b18dd | diff --git a/thunder/core/utils.py b/thunder/core/utils.py
index dc4e6f345d..cc5c5995cc 100644
--- a/thunder/core/utils.py
+++ b/thunder/core/utils.py
@@ -17,7 +17,7 @@
from thunder.core.proxies import Proxy, NumberProxy, TensorProxy, variableify, CONSTRAINT, Variable
from thunder.core.baseutils import *
from thunde... | diff --git a/thunder/tests/test_dynamo.py b/thunder/tests/test_dynamo.py
index cd3f0b5462..9df96dafc5 100644
--- a/thunder/tests/test_dynamo.py
+++ b/thunder/tests/test_dynamo.py
@@ -1072,7 +1072,6 @@ def foo(x):
assert "Failed to save reproducer" in captured.out
-@pytest.mark.parametrize("use_benchmark", ... | [Reporting tool] Adds the Thunder segmentation and fusion information to the FXReport
There are two immediate next options I can think of:
One option is to make thunder segmentation and fusion information programmatically available. This could be done by having a function like `analyze_with_thunder` that takes an FX... | 1,738,774,926,000 | [] | Feature Request | [
"thunder/dynamo/report.py:FXReport.__str__"
] | [
"thunder/core/utils.py:create_python_callable_from_bsym",
"thunder/dynamo/report.py:FXReport.__repr__",
"thunder/dynamo/report.py:ThunderSplitGraphReport.__init__",
"thunder/dynamo/report.py:ThunderSplitGraphReport.__repr__",
"thunder/dynamo/report.py:ThunderSplitGraphReport._create_thunder_traces",
"thun... | 1 | |
NextGenContributions/nitpick | NextGenContributions__nitpick-24 | ca76bc6fde1a7b8eae73cb82016af9b2bc7286e4 | diff --git a/docs/nitpick_section.rst b/docs/nitpick_section.rst
index 2d3b8c56..5352e1b2 100644
--- a/docs/nitpick_section.rst
+++ b/docs/nitpick_section.rst
@@ -60,6 +60,24 @@ Multiple keys can be added.
[nitpick.files.comma_separated_values]
"setup.cfg" = ["flake8.ignore", "isort.some_key", "another_sectio... | diff --git a/tests/test_ini.py b/tests/test_ini.py
index f2a11a78..31c84dcc 100644
--- a/tests/test_ini.py
+++ b/tests/test_ini.py
@@ -514,3 +514,178 @@ def test_falsy_values_should_be_reported_and_fixed(tmp_path, datadir):
]
).assert_file_contents(filename, datadir / "falsy_values/expected.ini")
pro... | Support multiline csv value in INI config
- `flake8` config typically has multiline config values such as `per-file-ignores`
- each line has a value that is usually comma-separated
We should support this type of config for INI format as it's frequently used to incrementally fine-tune flake8 rules
| 1,738,766,438,000 | [] | Feature Request | [
"src/nitpick/plugins/ini.py:IniPlugin.enforce_section",
"src/nitpick/plugins/ini.py:IniPlugin.show_missing_keys"
] | [
"src/nitpick/plugins/ini.py:IniPlugin.enforce_comma_separated_values_multiline",
"src/nitpick/plugins/ini.py:IniPlugin._add_missing_option_in_multiline_config_value",
"src/nitpick/plugins/ini.py:IniPlugin._add_missing_values_in_existing_option_of_multiline_config_value"
] | 2 | |
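Python's stdlib `configparser` already joins indented continuation lines into one value, so the multiline comma-separated case from this record can be sketched without nitpick itself — the `[flake8]`/`per-file-ignores` content is only an example:

```python
import configparser

INI = """
[flake8]
per-file-ignores =
    tests/*.py:D103,
    docs/conf.py:E501
"""

def multiline_csv_entries(raw):
    # Each continuation line may itself hold comma-separated values,
    # so split on newlines first, then on commas, dropping blanks.
    entries = []
    for line in raw.splitlines():
        entries.extend(part.strip() for part in line.split(",") if part.strip())
    return entries

parser = configparser.ConfigParser()
parser.read_string(INI)
entries = multiline_csv_entries(parser["flake8"]["per-file-ignores"])
```

With the entries flattened like this, checking for (and reporting) missing expected values is a simple set comparison.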
mllam/neural-lam | mllam__neural-lam-92 | 659f23f48b99db310e39de1ab70606236ee0b79c | diff --git a/CHANGELOG.md b/CHANGELOG.md
index b16ee732..45bb97c9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -15,6 +15,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Add support for multi-node training.
[\#103](https://github.com/mllam/neural-lam/pull/103) @simonkamuk @sa... | diff --git a/tests/datastore_examples/mdp/danra_100m_winds/config.yaml b/tests/datastore_examples/mdp/danra_100m_winds/config.yaml
index 0bb5c5ec..8b3362e0 100644
--- a/tests/datastore_examples/mdp/danra_100m_winds/config.yaml
+++ b/tests/datastore_examples/mdp/danra_100m_winds/config.yaml
@@ -7,3 +7,12 @@ training:
... | Feature Request: Add Functionality to Apply Constraints to Predictions
I am proposing the addition of a new method to our model class, designed to apply constraints to predictions to ensure that the values fall within specified bounds. This functionality would be useful for maintaining the integrity of our model's pred... | I would want to do this by constraining the model output itself, also for what is used during training (a clamp does not give gradient). Specifically, variables with constraints should be handled by rescaling a sigmoid output or a softplus (for > 0). As this applies to all models, it would be nice with it included on a... | 1,732,871,751,000 | [] | Feature Request | [
"neural_lam/models/base_graph_model.py:BaseGraphModel.__init__",
"neural_lam/models/base_graph_model.py:BaseGraphModel.predict_step"
] | [
"neural_lam/models/base_graph_model.py:BaseGraphModel.prepare_clamping_params",
"neural_lam/models/base_graph_model.py:BaseGraphModel.get_clamped_new_state",
"neural_lam/utils.py:inverse_softplus",
"neural_lam/utils.py:inverse_sigmoid"
] | 2 |
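The added `inverse_softplus`/`inverse_sigmoid` utilities fit the hint above: constrain outputs by rescaling a sigmoid (two-sided bounds) or a softplus (positive-only), rather than hard-clamping, so gradients survive. A framework-free numeric sketch of those maps (the real code operates on tensors):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def inverse_sigmoid(y):
    return math.log(y / (1.0 - y))

def softplus(x):
    return math.log1p(math.exp(x))

def inverse_softplus(y):
    return math.log(math.expm1(y))

def clamp_between(x, lower, upper):
    # A sigmoid rescaled to (lower, upper) keeps the raw model output
    # unconstrained while the mapped value stays in bounds -- and,
    # unlike a hard clamp, it is differentiable everywhere.
    return lower + (upper - lower) * sigmoid(x)
```

The inverses are what let training start from an unconstrained value that maps back onto a given physical state.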
NVIDIA/cuda-python | NVIDIA__cuda-python-360 | 27317a00c8406a542716072f1f0956d66672147b | diff --git a/cuda_core/cuda/core/experimental/_module.py b/cuda_core/cuda/core/experimental/_module.py
index 36178f5d..0274b300 100644
--- a/cuda_core/cuda/core/experimental/_module.py
+++ b/cuda_core/cuda/core/experimental/_module.py
@@ -3,6 +3,8 @@
# SPDX-License-Identifier: LicenseRef-NVIDIA-SOFTWARE-LICENSE
+f... | diff --git a/cuda_core/tests/test_module.py b/cuda_core/tests/test_module.py
index 7db017f1..9f126fa1 100644
--- a/cuda_core/tests/test_module.py
+++ b/cuda_core/tests/test_module.py
@@ -10,14 +10,75 @@
import pytest
from conftest import can_load_generated_ptx
-from cuda.core.experimental import Program, ProgramOpt... | Add `Kernel` attribute getter/setter
Need a nice way to get/set `CUfunction_attribute`. Maybe make them nice Python properties, instead of using the C enumerator names literally.
To check:
- For attributes that can be set either through launch config (ex: #204) or kernel attribute, what's the right way to go?
| User request (to CuPy) for setting `cudaFuncAttributeNonPortableClusterSizeAllowed`: https://github.com/cupy/cupy/issues/8851
I think we should provide both avenues for specifying clusters or other configs which can be done via the kernel attributes of launch configs. There are 3 reasons for this:
1) Runtime vs compil... | 1,736,205,857,000 | [
"P0",
"feature",
"cuda.core"
] | Feature Request | [
"cuda_core/cuda/core/experimental/_module.py:_lazy_init",
"cuda_core/cuda/core/experimental/_module.py:Kernel.__init__",
"cuda_core/cuda/core/experimental/_module.py:Kernel._from_obj"
] | [
"cuda_core/cuda/core/experimental/_module.py:KernelAttributes.__init__",
"cuda_core/cuda/core/experimental/_module.py:KernelAttributes._init",
"cuda_core/cuda/core/experimental/_module.py:KernelAttributes._get_cached_attribute",
"cuda_core/cuda/core/experimental/_module.py:KernelAttributes.max_threads_per_blo... | 3 |
scikit-learn/scikit-learn | scikit-learn__scikit-learn-8478 | 312f64053d7249a326a19a07fa635ef5b5c6ed99 | diff --git a/doc/modules/classes.rst b/doc/modules/classes.rst
index 2e7dcba82e846..243c63ab0c7e2 100644
--- a/doc/modules/classes.rst
+++ b/doc/modules/classes.rst
@@ -646,6 +646,7 @@ Kernels:
:template: class.rst
impute.SimpleImputer
+ impute.MICEImputer
.. _kernel_approximation_ref:
diff --git a/doc... | diff --git a/sklearn/tests/test_impute.py b/sklearn/tests/test_impute.py
index f2bf5912e2213..954a016a835bb 100644
--- a/sklearn/tests/test_impute.py
+++ b/sklearn/tests/test_impute.py
@@ -1,14 +1,19 @@
+from __future__ import division
+
+import pytest
import numpy as np
from scipy import sparse
-from sklearn.uti... | MICE imputer
Proceeding from https://github.com/scikit-learn/scikit-learn/pull/4844#issuecomment-253043238 is a suggestion that MICE imputation be included in scikit-learn. @sergeyf has implemented it [here](https://github.com/hammerlab/fancyimpute/blob/master/fancyimpute/mice.py).
Here we will discuss issues relate... | @sergeyf [wrote](https://github.com/scikit-learn/scikit-learn/pull/4844#issuecomment-259055975):
> @amueller and @jnothman do either of you have a sense of what the right structure for MICE would be? It's pretty involved, so I don't want to just add a new option to `Imputer`. What about adding a new `MICE` class to `i... | 1,488,322,557,000 | [] | Feature Request | [
"sklearn/utils/estimator_checks.py:_yield_non_meta_checks"
] | [
"examples/plot_missing_values.py:get_results",
"sklearn/impute.py:MICEImputer.__init__",
"sklearn/impute.py:MICEImputer._impute_one_feature",
"sklearn/impute.py:MICEImputer._get_neighbor_feat_idx",
"sklearn/impute.py:MICEImputer._get_ordered_idx",
"sklearn/impute.py:MICEImputer._get_abs_corr_mat",
"skle... | 1 |
cupy/cupy | cupy__cupy-3730 | 4c81aebbec5d481ddee8685d982b24372e5f272b | diff --git a/cupy/cuda/cufft.pyx b/cupy/cuda/cufft.pyx
index 8df8ba92019..166ea4993de 100644
--- a/cupy/cuda/cufft.pyx
+++ b/cupy/cuda/cufft.pyx
@@ -279,6 +279,17 @@ class Plan1d(object):
else:
self._multi_gpu_get_plan(
plan, nx, fft_type, batch, devices, out)
+ ... | diff --git a/tests/cupy_tests/fft_tests/test_cache.py b/tests/cupy_tests/fft_tests/test_cache.py
new file mode 100644
index 00000000000..fc882f882d2
--- /dev/null
+++ b/tests/cupy_tests/fft_tests/test_cache.py
@@ -0,0 +1,482 @@
+import contextlib
+import io
+import queue
+import threading
+import unittest
+
+import pyt... | Reuse cufft plan objects
Reusing a plan object has a significant impact on the overall FFT performance. I measured a ~500us time drop.
We should try to create a `PlanAllocator` or a pool of plans to reuse objects according to the requested size.
(Plans allocate gpu memory so we have to be careful when reusing t... | Previous discussion was in #1669. @grlee77 mentioned he has a cache prototype. For the purpose of #3587 to speed up correlate/convolve, we should first focus on memoizing `Plan1d`, as it has less parameters to be used as the cache key.
PyTorch refs:
- usage: https://github.com/pytorch/pytorch/blob/master/docs/source/n... | 1,596,694,236,000 | [
"cat:enhancement",
"to-be-backported"
] | Performance Issue | [
"cupy/fft/config.py:set_cufft_gpus",
"cupy/fft/fft.py:_exec_fft",
"cupy/fft/fft.py:_get_cufft_plan_nd",
"cupyx/scipy/fftpack/_fft.py:get_fft_plan"
] | [] | 4 |
scikit-learn/scikit-learn | scikit-learn__scikit-learn-17443 | c3db2cdbf5244229af2c5eb54f9216a69f77a146 | diff --git a/doc/modules/calibration.rst b/doc/modules/calibration.rst
index d0a9737dac612..1fcd1d501d100 100644
--- a/doc/modules/calibration.rst
+++ b/doc/modules/calibration.rst
@@ -30,11 +30,25 @@ approximately 80% actually belong to the positive class.
Calibration curves
------------------
-The following plot ... | diff --git a/sklearn/tests/test_calibration.py b/sklearn/tests/test_calibration.py
index 4fe08c27fb19e..8decff0cc96d5 100644
--- a/sklearn/tests/test_calibration.py
+++ b/sklearn/tests/test_calibration.py
@@ -18,7 +18,7 @@
)
from sklearn.utils.extmath import softmax
from sklearn.exceptions import NotFittedError
-fro... | Add calibration curve to plotting module
I think it would be nice to have something like
```python
def plot_calibration_curve(y_true, prob_y_pred, n_bins=5, ax=None):
prop_true, prop_pred = calibration_curve(y_true, prob_y_pred, n_bins=n_bins)
if ax is None:
ax = plt.gca()
ax.plot([0, 1], [0... | should plot functions return the artist? or the axes? hm...
Maybe adding a histogram as in http://scikit-learn.org/stable/auto_examples/calibration/plot_compare_calibration.html would also be nice
```python
def plot_calibration_curve(y_true, y_prob, n_bins=5, ax=None, hist=True, normalize=False):
prob_true, prob_... | 1,591,270,294,000 | [
"module:metrics"
] | Feature Request | [
"examples/calibration/plot_calibration_curve.py:plot_calibration_curve",
"sklearn/metrics/_plot/base.py:_get_response"
] | [
"examples/calibration/plot_calibration_curve.py:NaivelyCalibratedLinearSVC.fit",
"examples/calibration/plot_calibration_curve.py:NaivelyCalibratedLinearSVC.predict_proba",
"examples/calibration/plot_compare_calibration.py:NaivelyCalibratedLinearSVC.fit",
"examples/calibration/plot_compare_calibration.py:Naive... | 2 |
Qiskit/qiskit | Qiskit__qiskit-13141 | 90e92a46643c72a21c5852299243213907453c21 | diff --git a/crates/accelerate/src/euler_one_qubit_decomposer.rs b/crates/accelerate/src/euler_one_qubit_decomposer.rs
index 98333cad39d2..e6ca094186ff 100644
--- a/crates/accelerate/src/euler_one_qubit_decomposer.rs
+++ b/crates/accelerate/src/euler_one_qubit_decomposer.rs
@@ -579,7 +579,7 @@ pub fn generate_circuit(
... | diff --git a/test/python/transpiler/test_unitary_synthesis.py b/test/python/transpiler/test_unitary_synthesis.py
index 4abf6511d8d2..aaad7b71279b 100644
--- a/test/python/transpiler/test_unitary_synthesis.py
+++ b/test/python/transpiler/test_unitary_synthesis.py
@@ -18,6 +18,7 @@
import unittest
import numpy as np
... | Port `UnitarySynthesis` to Rust
This issue tracks porting the `UnitarySynthesis` pass to rust as part of the #12208 epic. This pass in particular will still always require a fairly large python component, as the unitary synthesis plugin interface necessitates a Python execution mode. But when the a plugin is not specif... | 1,726,135,509,000 | [
"performance",
"Changelog: None",
"Rust",
"mod: transpiler"
] | Feature Request | [
"qiskit/transpiler/passes/synthesis/unitary_synthesis.py:UnitarySynthesis.run"
] | [] | 1 | |
dask/dask | dask__dask-8040 | 4a59a6827578fdf8e105d1fb7e9d50d428c9d5fa | diff --git a/dask/local.py b/dask/local.py
index 9f035ef277a..664ae93aba2 100644
--- a/dask/local.py
+++ b/dask/local.py
@@ -181,7 +181,7 @@ def start_state_from_dask(dsk, cache=None, sortkey=None):
waiting_data = dict((k, v.copy()) for k, v in dependents.items() if v)
ready_set = set([k for k, v in waiting... | diff --git a/dask/tests/test_local.py b/dask/tests/test_local.py
index 8830a216e27..a45aab7b991 100644
--- a/dask/tests/test_local.py
+++ b/dask/tests/test_local.py
@@ -1,3 +1,5 @@
+import pytest
+
import dask
from dask.local import finish_task, get_sync, sortkey, start_state_from_dask
from dask.order import order
@... | Regression: poor memory management in threads and sync schedulers
Reopen after triage from https://github.com/pydata/xarray/issues/5165
```python
import dask.array as da
d = da.random.random((15000, 250000), chunks=(1, -1)).sum()
d.compute(optimize_graph=False)
```
Peak RAM usage:
Before #6322 (2021.3.1): 200 ... | Also cc @mrocklin @jrbourbeau . IMHO this should be treated as very high priority as it is something that is very likely to hit in the face a newbie user when he plays around with dask.array or dask.dataframe for the very first time.
Hi,
I'm also experiencing increased memory usage since ad0e5d140dfc934d94583f8ce50d... | 1,628,956,822,000 | [] | Performance Issue | [
"dask/local.py:start_state_from_dask",
"dask/local.py:finish_task",
"dask/local.py:get_async"
] | [] | 3 |
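The regression here is about the order tasks run in and how long intermediate results stay cached. A toy synchronous scheduler shows the refcounting idea that keeps peak memory low — each cached value is dropped after its last dependent runs (dask's real machinery is far more involved):

```python
def execute(dsk, result_key):
    """Run a {key: (func, *dep_keys)} task graph, freeing intermediates.

    Toy sketch of the memory idea only: every cached value is deleted
    as soon as its last dependent has consumed it.
    """
    refcount = {}
    for task in dsk.values():
        for dep in task[1:]:
            refcount[dep] = refcount.get(dep, 0) + 1
    cache = {}

    def compute(key):
        if key in cache:
            return cache[key]
        func, *deps = dsk[key]
        args = [compute(d) for d in deps]
        for d in deps:
            refcount[d] -= 1
            if refcount[d] == 0:
                del cache[d]        # last use: release the intermediate
        cache[key] = func(*args)
        return cache[key]

    return compute(result_key)
```

With an unfortunate execution order, values whose refcount never reaches zero early pile up in the cache — which is the shape of the memory blow-up reported above.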
django/django | django__django-18158 | 8c118c0e00846091c261b97dbed9a5b89ceb79bf | diff --git a/django/core/management/commands/shell.py b/django/core/management/commands/shell.py
index f55b346406aa..1619561fea2e 100644
--- a/django/core/management/commands/shell.py
+++ b/django/core/management/commands/shell.py
@@ -2,7 +2,9 @@
import select
import sys
import traceback
+from collections import def... | diff --git a/tests/shell/models.py b/tests/shell/models.py
new file mode 100644
index 000000000000..85b40bf2058e
--- /dev/null
+++ b/tests/shell/models.py
@@ -0,0 +1,9 @@
+from django.db import models
+
+
+class Marker(models.Model):
+ pass
+
+
+class Phone(models.Model):
+ name = models.CharField(max_length=50)
... | Auto-importing models feature for shell-command
Description
This would be an update of the existing Django shell that auto-imports models for you from your app/project. Also, the goal would be to allow the user to subclass this shell to customize its behavior and import extra things.
wiki
proposal
Auto-importing mo... | ['In dfac15d5: Fixed #35517, Refs #35515 -- Improved test coverage of shell command.', 1719481381.0]
['Discussion to decide on whether to have additional imports (and which ones): \u200bhttps://forum.djangoproject.com/t/default-automatic-imports-in-the-shell/33708', 1723261632.0]
['Based off the discussion, it looks li... | 1,715,436,304,000 | [] | Feature Request | [
"django/core/management/commands/shell.py:Command.add_arguments",
"django/core/management/commands/shell.py:Command.ipython",
"django/core/management/commands/shell.py:Command.bpython",
"django/core/management/commands/shell.py:Command.python",
"django/core/management/commands/shell.py:Command.handle"
] | [
"django/core/management/commands/shell.py:Command.get_and_report_namespace",
"django/core/management/commands/shell.py:Command.get_namespace"
] | 5 |
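The added `get_namespace` builds the dict of auto-imported objects handed to the shell. A Django-free sketch of that idea using plain modules — the real command walks `django.apps` for model classes rather than arbitrary modules:

```python
import types

def get_namespace(modules):
    """Collect public classes from modules into one namespace dict.

    Sketch of the auto-import idea: plain modules stand in for
    Django apps here.
    """
    namespace = {}
    for mod in modules:
        for name, obj in vars(mod).items():
            if isinstance(obj, type) and not name.startswith("_"):
                namespace[name] = obj
    return namespace
```

Passing the resulting dict as the interpreter's local namespace is what makes `Marker` or `Phone` usable without an explicit import.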
Happy-Algorithms-League/hal-cgp | Happy-Algorithms-League__hal-cgp-180 | 5421d9cdf0812ab3098d54c201ee115fa3129bce | diff --git a/cgp/genome.py b/cgp/genome.py
index 3c18a87f..4e8f035d 100644
--- a/cgp/genome.py
+++ b/cgp/genome.py
@@ -107,7 +107,7 @@ def dna(self) -> List[int]:
def dna(self, value: List[int]) -> None:
self._validate_dna(value)
self._dna = value
- self._initialize_unkown_parameters()
+ ... | diff --git a/test/test_ea_mu_plus_lambda.py b/test/test_ea_mu_plus_lambda.py
index cf0e1b2b..8f0521ea 100644
--- a/test/test_ea_mu_plus_lambda.py
+++ b/test/test_ea_mu_plus_lambda.py
@@ -196,6 +196,8 @@ def objective(individual):
individual.fitness = float(individual.idx)
return individual
+ popu... | Issue warning when `mutation_rate` is high and `mutate` potentially takes long
#157 made sure that exactly the correct number of mutations occurs in a genome by checking that the mutated genome differs from the original one in `n_mutations` position. when using a high mutation rate and small number of primitives, this ... | btw, an alternative solution to this problem would be to have a maximum number iterations in the while loop (https://github.com/Happy-Algorithms-League/hal-cgp/blob/master/cgp/genome.py#L377) and return even if the desired number of differences was not reached while issuing a warning to the user
What about keeping trac... | 1,594,391,036,000 | [] | Performance Issue | [
"cgp/genome.py:Genome.dna",
"cgp/genome.py:Genome.mutate",
"cgp/genome.py:Genome._mutate_output_region",
"cgp/genome.py:Genome._mutate_hidden_region",
"cgp/population.py:Population.__init__"
] | [
"cgp/genome.py:Genome._is_hidden_input_gene",
"cgp/genome.py:Genome._select_gene_indices_for_mutation",
"cgp/genome.py:Genome._determine_alternative_permissible_values",
"cgp/genome.py:Genome._determine_alternative_permissible_values_hidden_gene",
"cgp/genome.py:Genome._determine_alternative_permissible_val... | 5 |
sgkit-dev/sgkit | sgkit-dev__sgkit-447 | 9150392b3b38f575d12b5f555877fc059ee44591 | diff --git a/sgkit/distance/api.py b/sgkit/distance/api.py
index 70141b598..7df70eb16 100644
--- a/sgkit/distance/api.py
+++ b/sgkit/distance/api.py
@@ -1,14 +1,20 @@
+import typing
+
import dask.array as da
import numpy as np
+from typing_extensions import Literal
from sgkit.distance import metrics
from sgkit.ty... | diff --git a/sgkit/tests/test_distance.py b/sgkit/tests/test_distance.py
index 3432b6421..f5bd44861 100644
--- a/sgkit/tests/test_distance.py
+++ b/sgkit/tests/test_distance.py
@@ -10,7 +10,7 @@
squareform,
)
-from sgkit.distance.api import pairwise_distance
+from sgkit.distance.api import MetricTypes, pairwise... | Pairwise distance scalability
Raising this issue to revisit the scalability of our pairwise distance calculation and whether it's worth returning to a map-reduce style implementation that would allow chunking along both dimensions.
In the work that @aktech is doing on early scalability demonstrations (#345) there ar... | > Secondly, do we ever really need to run pairwise distance on arrays that are large in the variants dimension? I.e., do we care about scaling this up to large numbers of variants? xref #306 (comment)
I think you mean in the *samples* dimension? (If so, I agree - ~10K is as many samples as we could hope to support ... | 1,611,845,371,000 | [] | Performance Issue | [
"sgkit/distance/api.py:pairwise_distance",
"sgkit/distance/metrics.py:correlation",
"sgkit/distance/metrics.py:euclidean"
] | [
"sgkit/distance/metrics.py:euclidean_map",
"sgkit/distance/metrics.py:euclidean_reduce",
"sgkit/distance/metrics.py:correlation_map",
"sgkit/distance/metrics.py:correlation_reduce"
] | 3 |
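A map-reduce Euclidean distance splits the feature axis into chunks, sums squared differences per chunk, and combines the partials before one final square root — which is what allows chunking along both dimensions. A scalar pure-Python sketch of the `euclidean_map`/`euclidean_reduce` split (the real functions are compiled and operate on arrays):

```python
import math

def euclidean_map(x_chunk, y_chunk):
    # Map step: partial sum of squared differences for one chunk
    # of the feature axis.
    return sum((a - b) ** 2 for a, b in zip(x_chunk, y_chunk))

def euclidean_reduce(partials):
    # Reduce step: combine chunk results, then one final sqrt.
    return math.sqrt(sum(partials))

def chunked_euclidean(x, y, chunk=2):
    parts = [euclidean_map(x[i:i + chunk], y[i:i + chunk])
             for i in range(0, len(x), chunk)]
    return euclidean_reduce(parts)
```

Because the partial sums are associative, each chunk's map step can run on a different worker and the reduce is cheap.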
numpy/numpy | numpy__numpy-17394 | 43683b3256a86659f230dcadbcde1f8020398bfa | diff --git a/doc/release/upcoming_changes/17394.new_function.rst b/doc/release/upcoming_changes/17394.new_function.rst
new file mode 100644
index 000000000000..50c9d1db310d
--- /dev/null
+++ b/doc/release/upcoming_changes/17394.new_function.rst
@@ -0,0 +1,5 @@
+``sliding_window_view`` provides a sliding window view for... | diff --git a/numpy/lib/tests/test_stride_tricks.py b/numpy/lib/tests/test_stride_tricks.py
index 10d7a19abec0..efec5d24dad4 100644
--- a/numpy/lib/tests/test_stride_tricks.py
+++ b/numpy/lib/tests/test_stride_tricks.py
@@ -6,8 +6,10 @@
)
from numpy.lib.stride_tricks import (
as_strided, broadcast_arrays, _br... | Suggestion: Sliding Window Function
Using `np.lib.stride_tricks.as_strided` one can very efficiently create a sliding window that segments an array as a preprocessing step for vectorized applications. For example a moving average with window length `3`, step size `1`:
```
a = numpy.arange(10)
a_strided = numpy.lib.strid... | 1,601,380,131,000 | [
"01 - Enhancement",
"62 - Python API"
] | Feature Request | [
"numpy/lib/stride_tricks.py:as_strided"
] | [
"numpy/lib/stride_tricks.py:_sliding_window_view_dispatcher",
"numpy/lib/stride_tricks.py:sliding_window_view"
] | 1 | |
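The zero-copy trick in the issue builds windows from strides; the shape it produces can be sketched (with copying, unlike numpy's stride-based view) in plain Python:

```python
def sliding_window(seq, width):
    # len(seq) - width + 1 windows of length ``width`` -- the same
    # shape numpy's sliding_window_view produces, but these windows
    # are copies rather than a zero-copy strided view.
    return [seq[i:i + width] for i in range(len(seq) - width + 1)]

def moving_average(seq, width):
    return [sum(w) / width for w in sliding_window(seq, width)]
```

This reproduces the moving-average example from the issue: window length 3, step size 1 over `range(10)`.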
Ouranosinc/xclim | Ouranosinc__xclim-477 | cc17fde69b757443602872d1156caf00b34ba161 | diff --git a/HISTORY.rst b/HISTORY.rst
index 1d6a6ebc5..0bd3055e1 100644
--- a/HISTORY.rst
+++ b/HISTORY.rst
@@ -4,6 +4,7 @@ History
0.18.x
------
+* Optimization options for `xclim.sdba` : different grouping for the normalization steps of DQM and save training or fitting datasets to temporary files.
* `xclim.sdba... | diff --git a/tests/test_sdba/test_adjustment.py b/tests/test_sdba/test_adjustment.py
index c897ea3ca..532f8a67f 100644
--- a/tests/test_sdba/test_adjustment.py
+++ b/tests/test_sdba/test_adjustment.py
@@ -190,7 +190,7 @@ def test_mon_U(self, mon_series, series, mon_triangular, kind, name, spatial_dim
if spatia... | sdba - improve the speed of DetrendedQuantileMapping
### Description
Doing a few preliminary tests **on a single pixel and without dask**, I obtained the following times :
DetrendedQuantileMapping, groups='month', 150 years of daily data : ~15 seconds
DetrendedQuantileMapping, groups='day', window=31, 150 years o... | To clear out my thoughts:
- The normalization process, as all non-grouping grouped operations, has additional overhead because it needs to resort the data along the time axis. And to rechunk if dask is used.
- `interp_on_quantiles` currently calls griddata through a _vectorized_ `xr.apply_ufunc`, which is far from ... | 1,592,246,670,000 | [] | Performance Issue | [
"xclim/sdba/adjustment.py:DetrendedQuantileMapping._train",
"xclim/sdba/adjustment.py:DetrendedQuantileMapping._adjust",
"xclim/sdba/base.py:Grouper.get_index",
"xclim/sdba/detrending.py:BaseDetrend.fit",
"xclim/sdba/detrending.py:BaseDetrend.get_trend",
"xclim/sdba/detrending.py:BaseDetrend._set_fitds",
... | [
"xclim/sdba/adjustment.py:DetrendedQuantileMapping.__init__",
"xclim/sdba/detrending.py:BaseDetrend._set_ds"
] | 9 |
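The grouping options discussed here boil down to: compute a climatological mean per group (month, or day-of-year with a window) and normalize against it. A minimal month-grouping sketch on plain lists — xclim works on labeled xarray data with dask, so this only shows the arithmetic:

```python
from collections import defaultdict

def monthly_normalize(months, values):
    """Subtract each month's climatological mean (a 'month' grouping).

    Sketch of the normalization step only: a day-of-year grouping
    with a +/-15-day window works the same way, just with more groups
    and overlapping membership.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for m, v in zip(months, values):
        sums[m] += v
        counts[m] += 1
    means = {m: sums[m] / counts[m] for m in sums}
    return [v - means[m] for m, v in zip(months, values)]
```

The speed difference in the record comes from how many groups exist: 12 monthly groups resort far less data than 365 windowed day-of-year groups.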
scikit-learn/scikit-learn | scikit-learn__scikit-learn-10280 | 1557cb8a1910bce6dc33cd5cd3a08c380bdec566 | diff --git a/doc/modules/classes.rst b/doc/modules/classes.rst
index 68494051041be..99731c7fda599 100644
--- a/doc/modules/classes.rst
+++ b/doc/modules/classes.rst
@@ -955,6 +955,7 @@ See the :ref:`metrics` section of the user guide for further details.
metrics.pairwise_distances
metrics.pairwise_distances_arg... | diff --git a/sklearn/metrics/tests/test_pairwise.py b/sklearn/metrics/tests/test_pairwise.py
index 799b3e4fe9bf7..0ef089c7a3619 100644
--- a/sklearn/metrics/tests/test_pairwise.py
+++ b/sklearn/metrics/tests/test_pairwise.py
@@ -1,11 +1,15 @@
+from types import GeneratorType
+
import numpy as np
from numpy import lin... | [MRG] ENH: Added block_size parameter for lesser memory consumption
#### Reference Issue
Fixes #7287
#### What does this implement/fix? Explain your changes.
This intends to add a function `pairwise_distances_blockwise` with an additional block_size parameter to avoid storing all the O(n^2) pairs' distances. Thi... | 1,512,901,651,000 | [] | Feature Request | [
"sklearn/_config.py:set_config",
"sklearn/_config.py:config_context",
"sklearn/metrics/pairwise.py:pairwise_distances_argmin_min",
"sklearn/metrics/pairwise.py:pairwise_distances_argmin",
"sklearn/metrics/pairwise.py:cosine_similarity",
"sklearn/metrics/pairwise.py:pairwise_distances"
] | [
"sklearn/metrics/pairwise.py:_argmin_min_reduce",
"sklearn/metrics/pairwise.py:_check_chunk_size",
"sklearn/metrics/pairwise.py:pairwise_distances_chunked",
"sklearn/utils/__init__.py:get_chunk_n_rows"
] | 6 | |
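`pairwise_distances_chunked` avoids materializing the O(n²) matrix by yielding row blocks from a generator. A pure-Python sketch of that shape (the real function also supports reduce callbacks and a configurable working-memory budget):

```python
import math

def pairwise_distances_chunked(X, Y, chunk_rows=2):
    """Yield the distance matrix one block of rows at a time.

    Sketch of the generator idea: only O(chunk_rows * len(Y))
    distances are in memory at once instead of O(len(X) * len(Y)).
    """
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    for start in range(0, len(X), chunk_rows):
        yield [[dist(x, y) for y in Y] for x in X[start:start + chunk_rows]]
```

A consumer that only needs per-row argmins or sums can process each block and discard it, which is the whole point of the chunked API.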
scikit-learn/scikit-learn | scikit-learn__scikit-learn-10471 | 52aaf8269235d4965022b8ec970243bdcb59c9a7 | diff --git a/doc/whats_new/v0.20.rst b/doc/whats_new/v0.20.rst
index a7ae9f7415243..b4a8046d02956 100644
--- a/doc/whats_new/v0.20.rst
+++ b/doc/whats_new/v0.20.rst
@@ -125,6 +125,13 @@ Classifiers and regressors
only require X to be an object with finite length or shape.
:issue:`9832` by :user:`Vrishank Bhardwaj... | diff --git a/sklearn/cluster/tests/test_k_means.py b/sklearn/cluster/tests/test_k_means.py
index f0f2b56bd591c..de8772d761e22 100644
--- a/sklearn/cluster/tests/test_k_means.py
+++ b/sklearn/cluster/tests/test_k_means.py
@@ -169,7 +169,8 @@ def _check_fitted_model(km):
assert_greater(km.inertia_, 0.0)
# che... | KMeans optimisation for array C/F contiguity (was: Make sure that the output of PCA.fit_transform is C contiguous)
otherwise, I would rather use:
``` Python
pca.fit(X)
X_new = pca.transform(X)
```
Because inner products on Fortran-ordered data are very slow (for example, in KMeans).
KMeans optimisation for array C/F c... | Can you give an example?
Maybe we should then rather change the behavior in KMeans.
I wouldn't change the output format, since we don't know what the user wants to do next.
The output format of PCA.fit_transform should be the same as its input format, shouldn't it? That is how PCA.transform behaves.
In _k_means._assign_label... | 1,515,928,846,000 | [] | Performance Issue | [
"sklearn/cluster/k_means_.py:k_means",
"sklearn/cluster/k_means_.py:_kmeans_single_elkan",
"sklearn/cluster/k_means_.py:KMeans._check_fit_data",
"sklearn/cluster/k_means_.py:KMeans.fit",
"sklearn/cluster/k_means_.py:KMeans.fit_transform",
"sklearn/cluster/k_means_.py:MiniBatchKMeans.fit",
"sklearn/clust... | [] | 7 |
pandas-dev/pandas | pandas-dev__pandas-22762 | c8ce3d01e9ffafc24c6f9dd568cd9eb7e42c610c | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 3d82dd042da20..29b766e616b3b 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -48,7 +48,7 @@ Pandas has gained the ability to hold integer dtypes with missing values. This l
Here is an example of t... | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 621de3ffd4b12..e84657a79b51a 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -131,6 +131,30 @@ def all_arithmetic_operators(request):
return request.param
+_all_numeric_reductions = ['sum', 'max', 'min',
+ 'mean', ... | ENH: add reduce op (and groupby reduce) support to EA
after #21160
Reductions for ExtensionArray
Creeping up to this in https://github.com/pandas-dev/pandas/pull/22345
A few questions
1. What should this look like for EA authors? What helpers can / should we provide?
2. How does this affect users? Specifically,... | 1,537,363,868,000 | [
"Numeric Operations",
"ExtensionArray"
] | Feature Request | [
"pandas/core/arrays/categorical.py:Categorical._reduce",
"pandas/core/series.py:Series._reduce"
] | [
"pandas/core/arrays/base.py:ExtensionArray._reduce",
"pandas/core/arrays/integer.py:IntegerArray._reduce"
] | 2 | |
tarantool/ansible-cartridge | tarantool__ansible-cartridge-172 | 087f6126c1cccc83e67c36ceab6868b43251e04b | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 978b7e7a..a6396c0b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -26,6 +26,9 @@ to use the newest tag with new release
to expel. It can be called second time to set up failover priority for
replicasets where new instances were joined.
As a result, `replicaset_he... | diff --git a/unit/test_tags.py b/unit/test_tags.py
index 01e1f68e..74281892 100644
--- a/unit/test_tags.py
+++ b/unit/test_tags.py
@@ -48,6 +48,7 @@ def test_without_tags(self):
'Set remote_user for delegated tasks',
'Validate config',
'Set instance facts',
+ 'Get one i... | [2pt] Too long "Install package" task
I have 100 Tarantool instances in a cluster. When I deploy them via ansible-cartridge, it takes a very long time to complete the "Install package" task, since it is performed for each instance. Moreover, this task is performed even despite the fact that it is written in the logs th... | 1,612,778,877,000 | [] | Performance Issue | [
"filter_plugins/filter_hosts.py:get_machine_identifier",
"filter_plugins/filter_hosts.py:get_one_not_expelled_instance_for_machine"
] | [
"filter_plugins/filter_hosts.py:get_machine_hostname"
] | 2 | |
scikit-learn/scikit-learn | scikit-learn__scikit-learn-23197 | 652a6278c45f8fedfe55581503941ffc8efffaca | diff --git a/doc/whats_new/v1.2.rst b/doc/whats_new/v1.2.rst
index 832a4e4389b19..22aecf64a85b8 100644
--- a/doc/whats_new/v1.2.rst
+++ b/doc/whats_new/v1.2.rst
@@ -44,6 +44,11 @@ Changes impacting all modules
NumPy's SIMD optimized primitives.
:pr:`23446` by :user:`Meekail Zain <micky774>`
+- |Enhancement| Fin... | Optimizing assert_all_finite check
Have done some work to come up with a Cython implementation of [`_assert_all_finite`]( https://github.com/scikit-learn/scikit-learn/blob/6d15840432c08b2c628babe62d7bef6d6a01fcf6/sklearn/utils/validation.py#L40 ) as the current check is rather slow. Have come up with [this implementati... | Because we are have check by default, and it can be relatively slow (https://github.com/scikit-learn/scikit-learn/pull/11487#issuecomment-405034867) in some cases, it would be definitely nice to make it faster.
Ideally, it would be great to have such an improvement in numpy (with possible temporary backport in scik... | 1,650,671,054,000 | [
"module:utils",
"cython"
] | Performance Issue | [
"sklearn/utils/setup.py:configuration",
"sklearn/utils/validation.py:_assert_all_finite"
] | [] | 2 | |
numpy/numpy | numpy__numpy-8206 | 6feb18ce79bacdac09945cf2ca0ddf85f8294298 | diff --git a/doc/release/1.16.0-notes.rst b/doc/release/1.16.0-notes.rst
index 2b84bb90a311..6a73ed3548da 100644
--- a/doc/release/1.16.0-notes.rst
+++ b/doc/release/1.16.0-notes.rst
@@ -107,6 +107,12 @@ Previously, a ``LinAlgError`` would be raised when an empty matrix/empty
matrices (with zero rows and/or columns) i... | diff --git a/numpy/lib/tests/test_function_base.py b/numpy/lib/tests/test_function_base.py
index d5faed6aea49..40cca1dbb4bb 100644
--- a/numpy/lib/tests/test_function_base.py
+++ b/numpy/lib/tests/test_function_base.py
@@ -734,6 +734,58 @@ def test_subclass(self):
assert_array_equal(out3.mask, [[], [], [], [],... | Add "insert_zero" option for np.diff
I often find myself wanting the difference of an array along some axis, where the 0th element should just be the first entry of the array along that axis, so that the shape of the output matches that of the input. This is a pretty common requirement for people working with finite ... | The inverse of cumsum problem would also be solved by adding `include_identity` to cumsum (#6044), which is something I've felt a stronger need for.
I prefer the ediff1d arguments to_begin and to_end, allowing for non zero and/or multiple elements to be inserted on either side.
Maybe the best solution is to add the p... | 1,477,311,409,000 | [
"01 - Enhancement",
"component: numpy.lib",
"56 - Needs Release Note."
] | Feature Request | [
"numpy/lib/function_base.py:diff"
] | [] | 1 |
scikit-learn/scikit-learn | scikit-learn__scikit-learn-14800 | acb8ac5145cfd88fdbd2d381b34883b2c212c8c5 | diff --git a/doc/whats_new/v0.24.rst b/doc/whats_new/v0.24.rst
index a647de6dac628..ba137fb76698f 100644
--- a/doc/whats_new/v0.24.rst
+++ b/doc/whats_new/v0.24.rst
@@ -224,6 +224,11 @@ Changelog
:meth:`tree.DecisionTreeRegressor.fit`, and has not effect.
:pr:`17614` by :user:`Juan Carlos Alfaro Jiménez <alfaro96... | diff --git a/sklearn/datasets/tests/data/openml/1/api-v1-json-data-1.json.gz b/sklearn/datasets/tests/data/openml/1/api-v1-json-data-1.json.gz
index f75912bf2def7..ba544db491637 100644
Binary files a/sklearn/datasets/tests/data/openml/1/api-v1-json-data-1.json.gz and b/sklearn/datasets/tests/data/openml/1/api-v1-json-d... | Validate MD5 checksum of downloaded ARFF in fetch_openml
`fetch_openml` downloads the `data_set_description` from openml.org which includes an `md5_checksum` field. This should correspond to the MD5 checksum of the downloaded ARFF data. We should (optionally?) use `hashlib` to verify that the MD5 matches.
| I'll give this a go.
thanks. you may need to hack the md5s currently in
sklearn/datasets/tests/data/openml to match the truncated ARFF data there
Picking this to work on while in NYC WiMLDS Sprint Aug 24, 2019
I think this check should be optional. What do other core developers think? Forcing the verification ensures ... | 1,566,681,065,000 | [
"module:datasets"
] | Feature Request | [
"sklearn/datasets/_openml.py:_load_arff_response",
"sklearn/datasets/_openml.py:_download_data_to_bunch",
"sklearn/datasets/_openml.py:fetch_openml"
] | [] | 3 |
pandas-dev/pandas | pandas-dev__pandas-18330 | d421a09e382109c1bbe064107c4024b065839de2 | diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst
index 1209c4a8d6be8..1b81d83bb76c7 100644
--- a/doc/source/reshaping.rst
+++ b/doc/source/reshaping.rst
@@ -240,7 +240,7 @@ values will be set to ``NaN``.
df3
df3.unstack()
-.. versionadded: 0.18.0
+.. versionadded:: 0.18.0
Alternatively, uns... | diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index 2722c3e92d85a..5d4aa048ae303 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -218,35 +218,51 @@ def test_multiindex(self):
class TestGetDummies(object):
- sparse = False... | ENH: allow get_dummies to accept dtype argument
- [x] closes #18330 (there's no issue for this one)
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Update in version 0.19.0 made `get_dummies` return uint8 values instead of floats (#8725). While ... | 1,510,867,741,000 | [
"Reshaping",
"Dtype Conversions"
] | Feature Request | [
"pandas/core/generic.py:NDFrame._set_axis_name",
"pandas/core/reshape/reshape.py:get_dummies",
"pandas/core/reshape/reshape.py:_get_dummies_1d"
] | [] | 3 | |
scikit-learn/scikit-learn | scikit-learn__scikit-learn-11179 | a1fabce6531bc75964588ef9f7f95527cae899bc | diff --git a/doc/modules/classes.rst b/doc/modules/classes.rst
index fb050fd87e88c..473ea1c6a3539 100644
--- a/doc/modules/classes.rst
+++ b/doc/modules/classes.rst
@@ -846,6 +846,7 @@ details.
metrics.jaccard_similarity_score
metrics.log_loss
metrics.matthews_corrcoef
+ metrics.multilabel_confusion_matri... | diff --git a/sklearn/metrics/tests/test_classification.py b/sklearn/metrics/tests/test_classification.py
index 8e18af7128350..3152521f23b77 100644
--- a/sklearn/metrics/tests/test_classification.py
+++ b/sklearn/metrics/tests/test_classification.py
@@ -46,6 +46,7 @@
from sklearn.metrics import recall_score
from sklea... | [WIP] ENH Multilabel confusion matrix
This PR considers a helper for multilabel/set-wise evaluation metrics such as precision, recall, fbeta, jaccard (#10083), fall-out, miss rate and specificity (#5516). It also incorporates suggestions from #8126 regarding efficiency of multilabel true positives calculation (but does... |
What does confusion matrix mean in a multilabel context?
On 20 July 2014 20:34, Arnaud Joly notifications@github.com wrote:
> Currently the confusion_matrix support binary and multi-class
> classification, but not multi-label data yet.
>
> —
> Reply to this email directly or view it on GitHub
> https://github.com/... | 1,527,811,029,000 | [] | Feature Request | [
"sklearn/metrics/classification.py:f1_score",
"sklearn/metrics/classification.py:fbeta_score",
"sklearn/metrics/classification.py:precision_recall_fscore_support",
"sklearn/metrics/classification.py:precision_score",
"sklearn/metrics/classification.py:recall_score",
"sklearn/metrics/classification.py:clas... | [
"sklearn/metrics/classification.py:multilabel_confusion_matrix"
] | 6 |
pandas-dev/pandas | pandas-dev__pandas-19074 | 24d95098789c8685e7c0ce6a57982d0372ffc3e0 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 246eab386b2ab..efb4707649f08 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -380,6 +380,7 @@ Performance Improvements
- Improved performance of ``DatetimeIndex`` and ``Series`` arithmetic operati... | diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 9664d73651185..aedc957ec67da 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -1258,6 +1258,17 @@ def test_get_loc_level(self):
assert result == expected
assert new_ind... | New engine for MultiIndex?
Currently, ``MultiIndex.get_loc()`` and ``MultiIndex.get_indexer()`` both rely on an ``_engine`` which is either a ``MultiIndexObjectEngine`` or a ``MultiIndexHashEngine``: but both of these are thin layers over the flat ``ObjectEngine``. This means that the actual structure of labels and lev... | See also previous discussion in #1752.
Aha, and #16324, which was a response to #16319, which was a reaction to #15245, which was a response to #13904, which was a follow-up to the issue @chris-b1 mentioned.
And while the ``MultiIndexHashEngine`` is not what I described above (it's probably better), indeed only now ... | 1,515,062,485,000 | [
"Indexing",
"MultiIndex"
] | Feature Request | [
"pandas/core/indexes/multi.py:MultiIndex._engine",
"pandas/core/indexes/multi.py:MultiIndex.get_indexer",
"pandas/core/indexes/multi.py:MultiIndex.get_loc",
"pandas/core/indexes/multi.py:MultiIndex.get_loc_level"
] | [
"pandas/core/indexes/multi.py:MultiIndexUIntEngine._codes_to_ints",
"pandas/core/indexes/multi.py:MultiIndexPyIntEngine._codes_to_ints"
] | 4 |
rapidsai/dask-cuda | rapidsai__dask-cuda-98 | f4ffc5782330ac1d83d196cba78142ffd761fe1c | diff --git a/dask_cuda/device_host_file.py b/dask_cuda/device_host_file.py
index 02c826f7..ac0eff00 100644
--- a/dask_cuda/device_host_file.py
+++ b/dask_cuda/device_host_file.py
@@ -1,28 +1,94 @@
from zict import Buffer, File, Func
from zict.common import ZictBase
-from distributed.protocol import deserialize_bytes,... | diff --git a/dask_cuda/tests/test_device_host_file.py b/dask_cuda/tests/test_device_host_file.py
index d53cbc3c..4562a00e 100644
--- a/dask_cuda/tests/test_device_host_file.py
+++ b/dask_cuda/tests/test_device_host_file.py
@@ -1,10 +1,11 @@
import numpy as np
import cupy
-from dask_cuda.device_host_file import Device... | Spill quickly from device to host
I'm curious how fast we are at moving data back and forth between device and host memory. I suspect that currently we end up serializing with standard Dask serialization, which may not be aware of device memory. We should take a look at using the `"cuda"` serialization family within ... | Yes, I haven't done any profiling, and I'm sure this is something that could be improved. Thanks for pointing me to the CUDA serialization, I wasn't aware of that specialization.
If the is not of utmost urgency, I can take a look at this next week, this week I'm mostly busy with EuroPython presentation and at the co... | 1,564,067,150,000 | [] | Performance Issue | [
"dask_cuda/device_host_file.py:_serialize_if_device",
"dask_cuda/device_host_file.py:_deserialize_if_device",
"dask_cuda/device_host_file.py:DeviceHostFile.__init__",
"dask_cuda/device_host_file.py:DeviceHostFile.__getitem__",
"dask_cuda/device_host_file.py:DeviceHostFile.__setitem__",
"dask_cuda/device_h... | [
"dask_cuda/device_host_file.py:register_numba",
"dask_cuda/device_host_file.py:DeviceSerialized.__init__",
"dask_cuda/device_host_file.py:DeviceSerialized.__sizeof__",
"dask_cuda/device_host_file.py:_",
"dask_cuda/device_host_file.py:device_to_host",
"dask_cuda/device_host_file.py:host_to_device"
] | 6 |
django/django | django__django-6478 | 84c1826ded17b2d74f66717fb745fc36e37949fd | diff --git a/django/db/backends/oracle/compiler.py b/django/db/backends/oracle/compiler.py
index 3ae567669f1b..9aa4acc0fe57 100644
--- a/django/db/backends/oracle/compiler.py
+++ b/django/db/backends/oracle/compiler.py
@@ -31,10 +31,17 @@ def as_sql(self, with_limits=True, with_col_aliases=False):
high_whe... | diff --git a/tests/expressions/tests.py b/tests/expressions/tests.py
index 18d003d57d74..8399e3c0a92c 100644
--- a/tests/expressions/tests.py
+++ b/tests/expressions/tests.py
@@ -12,8 +12,8 @@
Avg, Count, Max, Min, StdDev, Sum, Variance,
)
from django.db.models.expressions import (
- Case, Col, ExpressionWrap... | Allow using a subquery in QuerySet.filter()
Description
(last modified by MikiSoft)
The following function is used for filtering by generic relation (and also by one column in the model where it is) which isn't natively supported by Django.
APP_LABEL = os.path.basename(os.path.dirname(__file__))
def generic_rel_fi... | 1,461,135,635,000 | [] | Feature Request | [
"django/db/backends/oracle/compiler.py:SQLCompiler.as_sql"
] | [
"django/db/models/expressions.py:ResolvedOuterRef.as_sql",
"django/db/models/expressions.py:ResolvedOuterRef._prepare",
"django/db/models/expressions.py:OuterRef.resolve_expression",
"django/db/models/expressions.py:OuterRef._prepare",
"django/db/models/expressions.py:Subquery.__init__",
"django/db/models... | 1 | |
oppia/oppia | oppia__oppia-14784 | 16da6ea6778a7b54b601eb6b6a73abff28855766 | diff --git a/core/domain/opportunity_services.py b/core/domain/opportunity_services.py
index 71a8c378ca4c..5a0be0a9da4c 100644
--- a/core/domain/opportunity_services.py
+++ b/core/domain/opportunity_services.py
@@ -18,6 +18,7 @@
from __future__ import annotations
+import collections
import logging
from core.co... | diff --git a/core/domain/opportunity_services_test.py b/core/domain/opportunity_services_test.py
index e09248ae769b..680026817561 100644
--- a/core/domain/opportunity_services_test.py
+++ b/core/domain/opportunity_services_test.py
@@ -191,35 +191,33 @@ def test_get_translation_opportunities_with_translations_in_review(... | Slow load times for translation Opportunities
<!--
- Thanks for taking the time to report a bug in the Oppia project.
- Before filing a new issue, please do a quick search to check that it hasn't
- already been filed on the [issue tracker](https://github.com/oppia/oppia/issues)._
-->
**Describe the bug**... | Seeing the same issue for displaying reviewable suggestions, which is probably due to fetching all available reviewable suggestions up to the query limit of 1000 in core/storage/suggestion/gae_models.py.
@sagangwee Hmm, It's strange when i tested this change I had about 200 suggestions, and it seemed to display pretty... | 1,643,236,262,000 | [
"PR: LGTM"
] | Performance Issue | [
"core/domain/opportunity_services.py:get_exp_opportunity_summary_with_in_review_translations_from_model",
"core/domain/opportunity_services.py:get_translation_opportunities"
] | [
"core/domain/opportunity_services.py:_build_exp_id_to_translation_suggestion_in_review_count",
"core/domain/suggestion_services.py:get_translation_suggestions_in_review_by_exp_ids",
"core/storage/suggestion/gae_models.py:GeneralSuggestionModel.get_in_review_translation_suggestions_by_exp_ids"
] | 2 |