| repo (string, len 5–51) | instance_id (string, len 11–56) | base_commit (string, len 40) | fixed_commit (string, 20 classes) | patch (string, len 400–56.6k) | test_patch (string, len 0–895k) | problem_statement (string, len 27–55.6k) | hints_text (string, len 0–72k) | created_at (int64, 1,447B–1,739B) | labels (list, len 0–7, nullable) | category (string, 4 classes) | edit_functions (list, len 1–10) | added_functions (list, len 0–19) | edit_functions_length (int64, 1–10) | __index_level_0__ (int64, 1–659) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Zulko/moviepy | Zulko__moviepy-2253 | c88852f6f3753469d4aeed677dd0b772764ccf42 | null | diff --git a/moviepy/video/io/ffmpeg_reader.py b/moviepy/video/io/ffmpeg_reader.py
index f871bd8fd..536024371 100644
--- a/moviepy/video/io/ffmpeg_reader.py
+++ b/moviepy/video/io/ffmpeg_reader.py
@@ -35,8 +35,10 @@ def __init__(
decode_file=decode_file,
print_infos=print_infos,
)
- ... | MoviePy 2.0 throws exception on loading video previous version worked with
#### Expected Behavior
MoviePy should continue to work with the same videos it did previously, even if those videos aren't fully compliant (e.g. are missing some metadata).
#### Actual Behavior
The same video crashes on MoviePy 2.0 bu... | 1,732,332,301,000 | null | Bug Report | [
"moviepy/video/io/ffmpeg_reader.py:FFMPEG_VideoReader.__init__",
"moviepy/video/io/ffmpeg_reader.py:FFmpegInfosParser.parse"
] | [] | 2 | 484 | ||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11827 | 2037a6414f81db8080ca724dca506fde91974c5d | null | diff --git a/yt_dlp/update.py b/yt_dlp/update.py
index ca2ec5f376a0..dfab132afdfe 100644
--- a/yt_dlp/update.py
+++ b/yt_dlp/update.py
@@ -525,11 +525,16 @@ def filename(self):
@functools.cached_property
def cmd(self):
"""The command-line to run the executable, if known"""
+ argv = None
... | noise downloads
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update in... | 
Using `--audio-format mp3` alongside `--extract-audio` instructs yt-dlp to convert the audio track to mp3. This is lossy. To get the files as streamed by the site don't pass `--audio-format`.
Do note that nearly all sites don't... | 1,734,290,043,000 | null | Bug Report | [
"yt_dlp/update.py:Updater.cmd"
] | [] | 1 | 485 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11821 | 2037a6414f81db8080ca724dca506fde91974c5d | null | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index fd9c7107c7f7..b12a22d852ab 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -1495,7 +1495,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
},
# Age-gate videos. See https://github.com/yt-dlp/yt-dlp... | [youtube] Age-restricted videos now always require sign-in
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified th... | please provide a log without the plugin. run `set YTDLP_NO_PLUGINS=1` and then your download command again
Log without the AGP plugin
```
C:\Users\Casey>yt-dlp https://www.youtube.com/watch?v=7Do70nztRNE -vU
[debug] Command-line config: ['https://www.youtube.com/watch?v=7Do70nztRNE', '-vU']
[debug] Encodings: loc... | 1,734,231,296,000 | null | Bug Report | [
"yt_dlp/extractor/youtube.py:YoutubeIE._extract_player_responses"
] | [] | 1 | 486 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11819 | 2037a6414f81db8080ca724dca506fde91974c5d | null | diff --git a/yt_dlp/update.py b/yt_dlp/update.py
index ca2ec5f376a0..9ccd44b5e77d 100644
--- a/yt_dlp/update.py
+++ b/yt_dlp/update.py
@@ -65,9 +65,14 @@ def _get_variant_and_executable_path():
machine = '_legacy' if version_tuple(platform.mac_ver()[0]) < (10, 15) else ''
else:
machin... | --update flag updates to the wrong software architecture
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated ... | Is the verbose log you've provided the armv7l binary updating to the aarch64 binary?
If so, the armv7l binary is detecting itself as being the aarch64
>Is the verbose log you've provided the armv7l binary updating to the aarch64 binary?
Yes.
>If so, the armv7l binary is detecting itself as being the aarch64
... | 1,734,229,904,000 | null | Bug Report | [
"yt_dlp/update.py:_get_variant_and_executable_path"
] | [] | 1 | 487 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11818 | 2037a6414f81db8080ca724dca506fde91974c5d | null | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index fd9c7107c7f7..e12f728ea323 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -518,11 +518,12 @@ def ucid_or_none(self, ucid):
return self._search_regex(rf'^({self._YT_CHANNEL_UCID_RE})$', ucid, 'UC-id', def... | The `uploader_id` template does not print asian characters or letters with diacritical marks on the yt site.
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on ... | 1,734,228,450,000 | null | Bug Report | [
"yt_dlp/extractor/youtube.py:YoutubeBaseInfoExtractor.handle_or_none",
"yt_dlp/extractor/youtube.py:YoutubeBaseInfoExtractor.handle_from_url"
] | [] | 2 | 488 | ||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11782 | 6fef824025b3c2f0ca8af7ac9fa04b10d09a3591 | null | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index e69373ba2f42..0814d0a0621b 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -5282,6 +5282,7 @@ def _extract_entries(self, parent_renderer, continuation_list):
'channelRenderer': lambda x: self.... | [Youtube] Playlist search broken
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nig... | did you update to nightly/master like the issue template told you to, though?
yes, i am on master and compiled it myself.
> yes, i am on master and compiled it myself.
that verbose log tells me that you are on yt-dlp stable branch and not nightly/master branch
how can i be on stable if i clone the master bra... | 1,733,842,458,000 | null | Bug Report | [
"yt_dlp/extractor/youtube.py:YoutubeTabBaseInfoExtractor._extract_entries"
] | [] | 1 | 489 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11734 | 354cb4026cf2191e1a130ec2a627b95cabfbc60a | null | diff --git a/yt_dlp/extractor/bilibili.py b/yt_dlp/extractor/bilibili.py
index 91619d9d5ca9..2db951a6084d 100644
--- a/yt_dlp/extractor/bilibili.py
+++ b/yt_dlp/extractor/bilibili.py
@@ -681,12 +681,6 @@ def _real_extract(self, url):
old_video_id = format_field(aid, None, f'%s_part{part_id or 1}')
cid... | [BiliBili] extract 720p/1080p format without logging in by passing `'try_look': 1` to the api
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I'... | With `'try_look': 1` passed to the api, it gives:
```
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1fK4y1t7hj/?spm_id_from=333.337.search-card.all.click&vd_source=...c145ee572cfa536d2947
[BiliBili] 1fK4y1t7hj: Downloading webpage
[BiliBili] BV1fK4y1t7hj: Extracting videos in anthology
[BiliBili] BV1... | 1,733,343,965,000 | null | Feature Request | [
"yt_dlp/extractor/bilibili.py:BiliBiliIE._real_extract"
] | [] | 1 | 490 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11711 | d8fb3490863653182864d2a53522f350d67a9ff8 | null | diff --git a/yt_dlp/extractor/bilibili.py b/yt_dlp/extractor/bilibili.py
index 72d5f20cf36b..e538e5308946 100644
--- a/yt_dlp/extractor/bilibili.py
+++ b/yt_dlp/extractor/bilibili.py
@@ -652,13 +652,6 @@ def _real_extract(self, url):
else:
video_data = initial_state['videoData']
- if vide... | [bilibili] supporter-only videos broken after 239f5f3
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I ... | Well, there's the problem. If you don't have a supporter account, of course it can't download a video meant for supporters only. | 1,733,169,578,000 | null | Bug Report | [
"yt_dlp/extractor/bilibili.py:BiliBiliIE._real_extract"
] | [] | 1 | 492 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11683 | 00dcde728635633eee969ad4d498b9f233c4a94e | null | diff --git a/yt_dlp/extractor/mitele.py b/yt_dlp/extractor/mitele.py
index 3573a2a3fd72..76fef337a2ea 100644
--- a/yt_dlp/extractor/mitele.py
+++ b/yt_dlp/extractor/mitele.py
@@ -80,9 +80,9 @@ class MiTeleIE(TelecincoBaseIE):
def _real_extract(self, url):
display_id = self._match_id(url)
webpage ... | [MiTele]: Failed to parse JSON (caused by JSONDecodeError('Extra data in \'d":false}}</script> \': line 1 column 9378 (char 9377)'));
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting ... | 1,732,917,815,000 | null | Bug Report | [
"yt_dlp/extractor/mitele.py:MiTeleIE._real_extract"
] | [] | 1 | 493 | ||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11667 | 00dcde728635633eee969ad4d498b9f233c4a94e | null | diff --git a/yt_dlp/extractor/bilibili.py b/yt_dlp/extractor/bilibili.py
index 02ea67707fcd..f01befcc0b6f 100644
--- a/yt_dlp/extractor/bilibili.py
+++ b/yt_dlp/extractor/bilibili.py
@@ -18,7 +18,6 @@
InAdvancePagedList,
OnDemandPagedList,
bool_or_none,
- clean_html,
determine_ext,
filter_di... | [BiliBili] unable to extract play info
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp ... | I also encountered the same problem
> I also encountered the same problem
My cue is this.
```
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1ALzVYUEZf
[BiliBili] 1ALzVYUEZf: Downloading webpage
WARNING: [BiliBili] unable to extract play info; please report this issue on https://github.com/yt-dlp/y... | 1,732,786,310,000 | null | Bug Report | [
"yt_dlp/extractor/bilibili.py:BiliBiliIE._real_extract"
] | [] | 1 | 494 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11645 | 4b5eec0aaa7c02627f27a386591b735b90e681a8 | null | diff --git a/yt_dlp/extractor/tiktok.py b/yt_dlp/extractor/tiktok.py
index ba15f08b6d85..9e53b3407220 100644
--- a/yt_dlp/extractor/tiktok.py
+++ b/yt_dlp/extractor/tiktok.py
@@ -413,15 +413,6 @@ def extract_addr(addr, add_meta={}):
for f in formats:
self._set_cookie(urllib.parse.urlparse(... | [TikTok] ERROR: Postprocessing: Conversion failed! when embedding thumbnail
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've v... | The thumbnail that yt-dlp is attempting to embed is an animated webp, and ffmpeg is choking on it.
We could deprioritize them like this:
```diff
diff --git a/yt_dlp/extractor/tiktok.py b/yt_dlp/extractor/tiktok.py
index ba15f08b6..721d36e49 100644
--- a/yt_dlp/extractor/tiktok.py
+++ b/yt_dlp/extractor/tiktok.p... | 1,732,592,483,000 | null | Bug Report | [
"yt_dlp/extractor/tiktok.py:TikTokBaseIE._parse_aweme_video_app",
"yt_dlp/extractor/tiktok.py:TikTokBaseIE._parse_aweme_video_web"
] | [] | 2 | 495 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11644 | 4b5eec0aaa7c02627f27a386591b735b90e681a8 | null | diff --git a/yt_dlp/extractor/dacast.py b/yt_dlp/extractor/dacast.py
index 4e81aa4a7bca..537352e5f78b 100644
--- a/yt_dlp/extractor/dacast.py
+++ b/yt_dlp/extractor/dacast.py
@@ -1,3 +1,4 @@
+import functools
import hashlib
import re
import time
@@ -51,6 +52,15 @@ class DacastVODIE(DacastBaseIE):
'thumb... | DacastVOD - ERROR: Strings must be encoded before hashing
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I hav... | Looks like a combination of a coding mistake plus an outdated key (so it would've needed a fix even without the mistake):
```diff
diff --git a/yt_dlp/extractor/dacast.py b/yt_dlp/extractor/dacast.py
index 4e81aa4a7..537352e5f 100644
--- a/yt_dlp/extractor/dacast.py
+++ b/yt_dlp/extractor/dacast.py
@@ -1,3 +1,4 ... | 1,732,592,344,000 | null | Bug Report | [
"yt_dlp/extractor/dacast.py:DacastVODIE._real_extract"
] | [
"yt_dlp/extractor/dacast.py:DacastVODIE._usp_signing_secret"
] | 1 | 496 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11636 | 4b5eec0aaa7c02627f27a386591b735b90e681a8 | null | diff --git a/yt_dlp/extractor/dropbox.py b/yt_dlp/extractor/dropbox.py
index c122096230be..2bfeebc7cbba 100644
--- a/yt_dlp/extractor/dropbox.py
+++ b/yt_dlp/extractor/dropbox.py
@@ -48,32 +48,30 @@ def _real_extract(self, url):
webpage = self._download_webpage(url, video_id)
fn = urllib.parse.unquote... | Dropbox "No video formats found!" Error for password protected videos
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verifie... | 1,732,577,785,000 | null | Bug Report | [
"yt_dlp/extractor/dropbox.py:DropboxIE._real_extract"
] | [] | 1 | 497 | ||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11624 | fe70f20aedf528fdee332131bc9b6710e54e6f10 | null | diff --git a/yt_dlp/extractor/chaturbate.py b/yt_dlp/extractor/chaturbate.py
index a40b7d39c7f4..d031d3985e33 100644
--- a/yt_dlp/extractor/chaturbate.py
+++ b/yt_dlp/extractor/chaturbate.py
@@ -59,17 +59,16 @@ def _extract_from_api(self, video_id, tld):
'Accept': 'application/json',
}, fa... | [chaturbate] Support downloading non-public rooms again
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to n... | 1,732,485,081,000 | null | Feature Request | [
"yt_dlp/extractor/chaturbate.py:ChaturbateIE._extract_from_api"
] | [] | 1 | 498 | ||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11615 | e0f1ae813b36e783e2348ba2a1566e12f5cd8f6e | null | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
index a02a2428ab05..7a9133466d9b 100644
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -4986,6 +4986,10 @@ def _grid_entries(self, grid_renderer):
for item in grid_renderer['items']:
if not isinstance(it... | [youtube:tab] Tab/playlist extraction intermittently yielding 0 items
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've ... | My first thought is that either the YouTube API is actually returning no data for some reason (which seems unlikely, but possible), or yt-dlp is silently failing somewhere and the error is being ignored.
YT is rolling out changes; with the new response, yt-dlp's extractor is looking in the wrong place for entry and con... | 1,732,392,313,000 | null | Bug Report | [
"yt_dlp/extractor/youtube.py:YoutubeTabBaseInfoExtractor._grid_entries",
"yt_dlp/extractor/youtube.py:YoutubeTabBaseInfoExtractor._rich_entries"
] | [
"yt_dlp/extractor/youtube.py:YoutubeTabBaseInfoExtractor._extract_lockup_view_model"
] | 2 | 499 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11596 | f9197295388b44ee0a8992cb00f361c7ef42acdb | null | diff --git a/yt_dlp/extractor/stripchat.py b/yt_dlp/extractor/stripchat.py
index 31c8afbc6268..84846042f38f 100644
--- a/yt_dlp/extractor/stripchat.py
+++ b/yt_dlp/extractor/stripchat.py
@@ -28,24 +28,21 @@ class StripchatIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
... | stripchat extractor not working: "No active stream found"
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I hav... | Looks like they're using .live as a host now
https://edge-hls.doppiocdn.com/hls/164812713/master/164812713_auto.m3u8 doesn't work but https://edge-hls.doppiocdn.live/hls/164812713/master/164812713_auto.m3u8 does so Stripchat extractor needs .live as a fallback I think. Also is there a way to also use xHamsterLive as i... | 1,732,141,453,000 | null | Bug Report | [
"yt_dlp/extractor/stripchat.py:StripchatIE._real_extract"
] | [] | 1 | 500 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11555 | f2a4983df7a64c4e93b56f79dbd16a781bd90206 | null | diff --git a/yt_dlp/extractor/chaturbate.py b/yt_dlp/extractor/chaturbate.py
index 864d61f9c2b8..aa70f26a1bcb 100644
--- a/yt_dlp/extractor/chaturbate.py
+++ b/yt_dlp/extractor/chaturbate.py
@@ -5,6 +5,7 @@
ExtractorError,
lowercase_escape,
url_or_none,
+ urlencode_postdata,
)
@@ -40,14 +41,48 @@... | [Chaturbate] Consider using the API
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or mas... | 1,731,692,715,000 | null | Feature Request | [
"yt_dlp/extractor/chaturbate.py:ChaturbateIE._real_extract"
] | [
"yt_dlp/extractor/chaturbate.py:ChaturbateIE._extract_from_api",
"yt_dlp/extractor/chaturbate.py:ChaturbateIE._extract_from_webpage"
] | 1 | 501 | ||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11542 | f2a4983df7a64c4e93b56f79dbd16a781bd90206 | null | diff --git a/yt_dlp/extractor/spankbang.py b/yt_dlp/extractor/spankbang.py
index 6805a72deb7b..05f0bb1468ed 100644
--- a/yt_dlp/extractor/spankbang.py
+++ b/yt_dlp/extractor/spankbang.py
@@ -71,9 +71,11 @@ class SpankBangIE(InfoExtractor):
def _real_extract(self, url):
mobj = self._match_valid_url(url)
... | spankbang - 403 Forbidden errors
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that a **supported** site is broken
- [X] I've verified that I'm running yt-dlp version **2023... | > My yt-dlp version is completely up to date.
> [debug] yt-dlp version **2022.02.04** [c1653e9ef] (zip)
> Latest version: **2023.03.04**, Current version: **2022.02.04**
> Updating to version 2023.03.04 ...
> **ERROR: Unable to write to /usr/local/bin/yt-dlp; Try running as administrator**
I see in the logs t... | 1,731,609,440,000 | null | Bug Report | [
"yt_dlp/extractor/spankbang.py:SpankBangIE._real_extract"
] | [] | 1 | 503 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11534 | f9d98509a898737c12977b2e2117277bada2c196 | null | diff --git a/yt_dlp/extractor/ctvnews.py b/yt_dlp/extractor/ctvnews.py
index 08d76d303b04..c3ddcdbee4ba 100644
--- a/yt_dlp/extractor/ctvnews.py
+++ b/yt_dlp/extractor/ctvnews.py
@@ -1,11 +1,24 @@
+import json
import re
+import urllib.parse
from .common import InfoExtractor
-from ..utils import orderedSet
+from .ni... | [CTVNews] Does not find video on page
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp t... | This patch gets the problem video.
```diff
--- old/yt_dlp/extractor/ctvnews.py
+++ new/yt_dlp/extractor/ctvnews.py
if 'getAuthStates("' in webpage:
entries = [ninecninemedia_url_result(clip_id) for clip_id in
self._search_regex(r'getAuthStates\... | 1,731,543,182,000 | null | Bug Report | [
"yt_dlp/extractor/ctvnews.py:CTVNewsIE._real_extract"
] | [
"yt_dlp/extractor/ctvnews.py:CTVNewsIE._ninecninemedia_url_result"
] | 1 | 504 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11530 | f2a4983df7a64c4e93b56f79dbd16a781bd90206 | null | diff --git a/yt_dlp/extractor/patreon.py b/yt_dlp/extractor/patreon.py
index 4d668cd37dc0..6bdeaf15710d 100644
--- a/yt_dlp/extractor/patreon.py
+++ b/yt_dlp/extractor/patreon.py
@@ -16,10 +16,10 @@
parse_iso8601,
smuggle_url,
str_or_none,
- traverse_obj,
url_or_none,
urljoin,
)
+from ..uti... | Patreon: --write-comments is broken
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to ... | 1,731,521,166,000 | null | Bug Report | [
"yt_dlp/extractor/patreon.py:PatreonIE._get_comments"
] | [] | 1 | 505 | ||
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11527 | a9f85670d03ab993dc589f21a9ffffcad61392d5 | null | diff --git a/yt_dlp/extractor/archiveorg.py b/yt_dlp/extractor/archiveorg.py
index f5a55efc4ff1..2849d9fd5b0d 100644
--- a/yt_dlp/extractor/archiveorg.py
+++ b/yt_dlp/extractor/archiveorg.py
@@ -205,6 +205,26 @@ class ArchiveOrgIE(InfoExtractor):
},
},
],
+ }, {
+ # The ... | [archive.org] ERROR: can only concatenate str (not "NoneType") to str - sporadic, only on certain URLs
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **su... | ideally, `join_nonempty` would've been used here
```diff
diff --git a/yt_dlp/extractor/archiveorg.py b/yt_dlp/extractor/archiveorg.py
index f5a55efc4..52fd02acc 100644
--- a/yt_dlp/extractor/archiveorg.py
+++ b/yt_dlp/extractor/archiveorg.py
@@ -335,7 +335,7 @@ def _real_extract(self, url):
i... | 1,731,452,423,000 | null | Bug Report | [
"yt_dlp/extractor/archiveorg.py:ArchiveOrgIE._real_extract"
] | [] | 1 | 506 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11513 | a9f85670d03ab993dc589f21a9ffffcad61392d5 | null | diff --git a/yt_dlp/extractor/facebook.py b/yt_dlp/extractor/facebook.py
index 2bcb5a8411f1..91e2f3489cea 100644
--- a/yt_dlp/extractor/facebook.py
+++ b/yt_dlp/extractor/facebook.py
@@ -563,13 +563,13 @@ def extract_from_jsmods_instances(js_data):
return extract_video_data(try_get(
... | [facebook] ERROR: No video formats found (on >= 2024.11.04)
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I h... | The debug output during cookies extraction is a bit concerning; are you sure the facebook cookies are being successfully extracted/passed? Have you tried with `--cookies` instead?
I didn't, but here's the debug output pulling cookies from chrome giving the same end result without all the cookie parsing output:
```
[d... | 1,731,379,665,000 | null | Bug Report | [
"yt_dlp/extractor/facebook.py:FacebookIE._extract_from_url"
] | [] | 1 | 507 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11478 | be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8 | null | diff --git a/yt_dlp/extractor/cloudflarestream.py b/yt_dlp/extractor/cloudflarestream.py
index 8a409461a8bc..9e9e89a801fa 100644
--- a/yt_dlp/extractor/cloudflarestream.py
+++ b/yt_dlp/extractor/cloudflarestream.py
@@ -8,7 +8,7 @@ class CloudflareStreamIE(InfoExtractor):
_DOMAIN_RE = r'(?:cloudflarestream\.com|(?:... | CloudFlareStream "No video formats found!"
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **upda... | i still meet this issue too
I am able to manually download the video.mpd file with
https://videodelivery.net/eaef9dea5159cf968be84241b5cedfe7/manifest/video.mpd
So I'm not sure what's going wrong, maybe the extractor is malforming the url?
When running the command with "--no-check-certificate" I get a 404 error wh... | 1,731,071,079,000 | null | Bug Report | [
"yt_dlp/extractor/cloudflarestream.py:CloudflareStreamIE._real_extract"
] | [] | 1 | 509 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11472 | 282e19db827f0951c783ac946429f662bcf2200c | null | diff --git a/yt_dlp/extractor/adobepass.py b/yt_dlp/extractor/adobepass.py
index 7cc15ec7b6f2..f1b87792713f 100644
--- a/yt_dlp/extractor/adobepass.py
+++ b/yt_dlp/extractor/adobepass.py
@@ -1362,7 +1362,7 @@ class AdobePassIE(InfoExtractor): # XXX: Conventionally, base classes should en
def _download_webpage_h... | [NBC]/[adobepass] ERROR: 'NoneType' object is not iterable
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I ha... | Regression introduced in dcfeea4dd5e5686821350baa6c7767a011944867
This should be the fix:
```diff
diff --git a/yt_dlp/extractor/adobepass.py b/yt_dlp/extractor/adobepass.py
index 7cc15ec7b..f1b877927 100644
--- a/yt_dlp/extractor/adobepass.py
+++ b/yt_dlp/extractor/adobepass.py
@@ -1362,7 +1362,7 @@ class Adob... | 1,730,927,829,000 | null | Bug Report | [
"yt_dlp/extractor/adobepass.py:AdobePassIE._download_webpage_handle"
] | [] | 1 | 510 | |
yt-dlp/yt-dlp | yt-dlp__yt-dlp-11466 | 282e19db827f0951c783ac946429f662bcf2200c | null | diff --git a/yt_dlp/extractor/goplay.py b/yt_dlp/extractor/goplay.py
index dfe5afe63514..32300f75c2f5 100644
--- a/yt_dlp/extractor/goplay.py
+++ b/yt_dlp/extractor/goplay.py
@@ -5,56 +5,63 @@
import hmac
import json
import os
+import re
+import urllib.parse
from .common import InfoExtractor
from ..utils import ... | [GoPlay] ERROR: [GoPlay] Unable to extract video_data
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I ... | Do **not** download the above spam links, they are malware
> Do not download these spam links, they are malware
The respond was too quick, so I didn't downloaded them. Thx :-)
Maybe some more information:
The video has many advertising at the begin and in the middle of the video.
> The video is protected by DRM
... | 1,730,837,991,000 | null | Bug Report | [
"yt_dlp/extractor/goplay.py:GoPlayIE._real_extract"
] | [
"yt_dlp/extractor/goplay.py:GoPlayIE._find_json"
] | 1 | 511 | |
gaogaotiantian/viztracer | gaogaotiantian__viztracer-528 | 2ed22b5b16dc232f966235a6a89fa678515a50a4 | null | diff --git a/src/viztracer/main.py b/src/viztracer/main.py
index 7cbf972c..eb996124 100644
--- a/src/viztracer/main.py
+++ b/src/viztracer/main.py
@@ -676,7 +676,7 @@ def exit_routine(self) -> None:
self.save()
if self.options.open: # pragma: no cover
import subpr... | Cannot import name 'viewer_main' from 'viztracer' in 1.0.0
### Phenomenon:
I've been using viztracer through the viztracer plugin in vscode, but after upgrading to 1.0.0 ,viztracer doesn't work.
### Error message:
```powershell
C:\ProgramData\anaconda3\python.exe -m viztracer --ignore_frozen --open --log_print --... | You have multiple versions of viztracers. The `vizviewer` viztracer tried to use is a different version. `viztracer` is from conda but seems like `vizviewer` used the version from your system Python.
But this is still partially my fault, `viztracer` should always use the same version `vizviewer`. For now you can either... | 1,733,202,811,000 | null | Bug Report | [
"src/viztracer/main.py:VizUI.exit_routine"
] | [] | 1 | 512 | |
locustio/locust | locustio__locust-2976 | a8510a466dd358a5d2956079cf10f25dc9beb380 | null | diff --git a/locust/runners.py b/locust/runners.py
index 9552d519c7..a4165cfa40 100644
--- a/locust/runners.py
+++ b/locust/runners.py
@@ -1025,7 +1025,9 @@ def client_listener(self) -> NoReturn:
# if abs(time() - msg.data["time"]) > 5.0:
# warnings.warn("The worker node's clock see... | master crash with different version worker
### Prerequisites
- [X] I am using [the latest version of Locust](https://github.com/locustio/locust/releases/)
- [X] I am reporting a bug, not asking a question
### Description
I ran distributed locust with master node locust version 2.32.2 and some worker node locust vers... | 1,731,139,675,000 | null | Bug Report | [
"locust/runners.py:MasterRunner.client_listener"
] | [] | 1 | 513 | ||
ranaroussi/yfinance | ranaroussi__yfinance-2173 | 3ac85397cbaee4b28baea8e900e1de6e7b2fbe52 | null | diff --git a/yfinance/base.py b/yfinance/base.py
index 81733ba9..c3150759 100644
--- a/yfinance/base.py
+++ b/yfinance/base.py
@@ -30,7 +30,7 @@
import pandas as pd
import requests
-from . import utils, cache, Search
+from . import utils, cache
from .data import YfData
from .exceptions import YFEarningsDateMissin... | Any way to get more news?
`ticker.news` seems to return 8 to 10 news articles.
However, Yahoo Finance can offer many more than 8 to 10 news articles per ticker: https://finance.yahoo.com/quote/MSFT/news/ (keep scrolling down).
Is there a way to get more than 8 to 10 news articles with yfinance?
| Someone began working on a solution but abandoned it: #1949 | 1,733,699,514,000 | null | Feature Request | [
"yfinance/base.py:TickerBase.get_news"
] | [] | 1 | 514 | |
ranaroussi/yfinance | ranaroussi__yfinance-2122 | f05f99c2b8101576911b35cbd3129afb04fb140d | null | diff --git a/yfinance/utils.py b/yfinance/utils.py
index 0968f9d1..ebc8b99a 100644
--- a/yfinance/utils.py
+++ b/yfinance/utils.py
@@ -613,7 +613,7 @@ def fix_Yahoo_returning_live_separate(quotes, interval, tz_exchange, repair=Fals
# - exception is volume, *slightly* greater on final row (and matches websi... | 0.2.42 and onwards fails to pull most recent trading days data for ASX stocks
### Describe bug
Pulling stock data using versions 0.2.42 and onwards fails to pull the last trading days data for ASX stocks. This could be related to timezones but the issue doesn't exist in 0.2.41.
### Simple code that reproduces your p... | 1,731,237,392,000 | null | Bug Report | [
"yfinance/utils.py:fix_Yahoo_returning_live_separate"
] | [] | 1 | 515 | ||
scipy/scipy | scipy__scipy-22106 | 15d6284e5a0f3333394ca4498eb56bce14a6245b | null | diff --git a/scipy/sparse/_construct.py b/scipy/sparse/_construct.py
index 0326c9963f0b..f483976badb7 100644
--- a/scipy/sparse/_construct.py
+++ b/scipy/sparse/_construct.py
@@ -349,7 +349,7 @@ def eye_array(m, n=None, *, k=0, dtype=float, format=None):
Parameters
----------
- m : int or tuple of ints
+... | DOC: sparse: `sparse.eye_array` does not accept `tuple[int, int]` as the docs say that it should
### Describe your issue.
`scipy.sparse.eye_array` does not accept `m: tuple[int, int]` as the docs suggest is should:
https://github.com/scipy/scipy/blob/964f0bb6701dc17b51b842382ced0fa2ee318377/scipy/sparse/_construct.... | Thank you for pointing this out!!
We should be using the [array_api specification](https://data-apis.org/array-api/latest/API_specification) for the [`eye` function](https://data-apis.org/array-api/latest/API_specification/generated/array_api.eye.html). That should also align us with the numpy interface. The functio... | 1,734,439,741,000 | null | Bug Report | [
"scipy/sparse/_construct.py:eye_array"
] | [] | 1 | 516 | |
scipy/scipy | scipy__scipy-22103 | caa7e2ab245a808a1c55a20fb5d5b49daf8bad93 | null | diff --git a/scipy/stats/_stats_py.py b/scipy/stats/_stats_py.py
index de7be104289b..71ae19acabc2 100644
--- a/scipy/stats/_stats_py.py
+++ b/scipy/stats/_stats_py.py
@@ -4298,7 +4298,7 @@ def pearsonr(x, y, *, alternative='two-sided', method=None, axis=0):
Axis along which to perform the calculation. Default ... | DOC: stats.pearsonr: incorrect `versionadded` for `axis` param
### Issue with current documentation:
Regarding the documentation of function scipy.stats.pearsonr. Typo in the version reference. The axis option is not in v1.13.0. It first appears in v1.14.0
### Idea or request for content:
Correct the version referen... | Thanks @biopzhang, agreed that this is a typo. Would you like to submit a PR to fix this? | 1,734,406,832,000 | null | Bug Report | [
"scipy/stats/_stats_py.py:pearsonr"
] | [] | 1 | 517 | |
scipy/scipy | scipy__scipy-22052 | 7f03fbaf30c400ff4bb14020f7f284ec2703c4d1 | null | diff --git a/scipy/sparse/linalg/_dsolve/linsolve.py b/scipy/sparse/linalg/_dsolve/linsolve.py
index d1ab77883163..560cb75bbf99 100644
--- a/scipy/sparse/linalg/_dsolve/linsolve.py
+++ b/scipy/sparse/linalg/_dsolve/linsolve.py
@@ -371,6 +371,10 @@ def splu(A, permc_spec=None, diag_pivot_thresh=None,
Notes
-... | sparse LU decomposition does not solve with complex right-hand side
The `solve` method of the sparse LU-decomposition `splu` or `spilu` throws a `TypeError` if called with a `numpy.array` of type `numpy.complex`. I am actually using `spilu` for preconditioning a gmres-solver required to perform a linear solve in a non-... | if you cast your A matrix as complex, then it works in both cases. So probably when the LHS is real it selects a real-typed solver and complains.
Thank you, you are right. Maybe some comments regarding this issue should be added in the documentation.
Good first issue, depending on familiarity with the math.
Hi I'm work... | 1,733,917,709,000 | null | Bug Report | [
"scipy/sparse/linalg/_dsolve/linsolve.py:splu",
"scipy/sparse/linalg/_dsolve/linsolve.py:spilu"
] | [] | 2 | 518 | |
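The hints for this record note that casting `A` to complex makes the real-typed `splu`/`spilu` factorization accept a complex right-hand side (the merged patch only documents this). A small sketch of that workaround on a toy 2x2 system:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# The factorization dtype follows A, so a real A rejects a complex b with a
# TypeError; casting A to complex up front (the workaround from the hints)
# selects the complex SuperLU solver.
A = csc_matrix(np.array([[4.0, 1.0], [1.0, 3.0]], dtype=np.complex128))
b = np.array([1.0 + 2.0j, 2.0 - 1.0j])

lu = splu(A)      # complex factorization
x = lu.solve(b)   # complex solve now succeeds
```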
DS4SD/docling | DS4SD__docling-528 | c830b92b2e043ea63d216f65b3f9d88d2a8c33f7 | null | diff --git a/docling/backend/msword_backend.py b/docling/backend/msword_backend.py
index 05508712..bab956a7 100644
--- a/docling/backend/msword_backend.py
+++ b/docling/backend/msword_backend.py
@@ -133,7 +133,6 @@ def get_level(self) -> int:
def walk_linear(self, body, docx_obj, doc) -> DoclingDocument:
... | What is the meaning of `missing-text`?
### Question
When exporting docx documents as text, I always seem to get some `missing-text` in the output. I was not able to find this string in the project repository, `python-docx`, or documentation.
Snippet:
```py
doc_converter = DocumentConverter(allowed_formats=[I... | @Belval, thanks for sharing with sample documents, I will check this! | 1,733,475,107,000 | null | Bug Report | [
"docling/backend/msword_backend.py:MsWordDocumentBackend.handle_tables"
] | [] | 1 | 519 | |
DS4SD/docling | DS4SD__docling-472 | cc46c938b66b2d24f601acc9646782dc83326e1f | null | diff --git a/docling/models/tesseract_ocr_cli_model.py b/docling/models/tesseract_ocr_cli_model.py
index 9a50eee0..a6b2f7fb 100644
--- a/docling/models/tesseract_ocr_cli_model.py
+++ b/docling/models/tesseract_ocr_cli_model.py
@@ -1,3 +1,4 @@
+import csv
import io
import logging
import tempfile
@@ -95,7 +96,7 @@ def... | pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 656
### Bug
Trying to convert a PDF I get the following error, the same options works on other PDFs.
**Seems related to `pandas.read_csv()` on the TSV output of Tesseract.**
```
Encountered an error during conver... | 1,732,897,993,000 | null | Bug Report | [
"docling/models/tesseract_ocr_cli_model.py:TesseractOcrCliModel._run_tesseract"
] | [] | 1 | 520 | ||
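The patch for this record adds `import csv` so Tesseract's TSV output can be parsed with quoting disabled, which is what the "EOF inside string" parser error points to. A stdlib-only sketch of the idea (the sample TSV text here is invented for illustration):

```python
import csv
import io

# Tesseract TSV can contain stray double quotes in recognized text; with
# quoting=csv.QUOTE_NONE they stay literal instead of opening an
# unterminated quoted field that runs to end-of-file.
tsv = 'level\ttext\n1\t"unbalanced\n'

reader = csv.reader(io.StringIO(tsv), delimiter="\t", quoting=csv.QUOTE_NONE)
rows = list(reader)
```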
DS4SD/docling | DS4SD__docling-442 | 6666d9ec070650df35a8b156643a78c32dcfefb5 | null | diff --git a/docling/backend/msword_backend.py b/docling/backend/msword_backend.py
index 496bdb7b..05508712 100644
--- a/docling/backend/msword_backend.py
+++ b/docling/backend/msword_backend.py
@@ -507,18 +507,19 @@ def get_docx_image(element, drawing_blip):
image_data = get_docx_image(element, drawing_blip... | Image location in Word Document is wrong
### Bug
The image placeholder in parsed docx documents is wrong. An incorrect index is used resulting in a wrong location for images in downstream export formats like markdown.
### Steps to reproduce
Parsing a simple .docx with docling
[image_within_text.docx](https://gi... | 1,732,630,531,000 | null | Bug Report | [
"docling/backend/msword_backend.py:MsWordDocumentBackend.handle_pictures"
] | [] | 1 | 521 | ||
DS4SD/docling | DS4SD__docling-322 | 2c0c439a4417d87aa712964acadb8618ea96ee65 | null | diff --git a/docling/models/ds_glm_model.py b/docling/models/ds_glm_model.py
index e63bad3a..0a066bfa 100644
--- a/docling/models/ds_glm_model.py
+++ b/docling/models/ds_glm_model.py
@@ -43,7 +43,8 @@ class GlmModel:
def __init__(self, options: GlmOptions):
self.options = options
- load_pretraine... | Unable to run.
### Bug
<!-- Describe the buggy behavior you have observed. -->
PS C:\Users\genco> & C:/ProgramData/anaconda3/envs/docling/python.exe c:/Users/genco/OneDrive/Documents/marker_new/docling_convertor_testing.py
Fetching 9 files: 100%|███████████████████████████████████████████████████████████████████████... | @ashunaveed Can you please tell us the exact version. There should be no need to download `crf_pos_model_en.bin`.
Please run,
```
docling --version
```
We suspect that you have by chance an older version, but we want to be 100% sure.
I'm trying to run Docling on a server without internet connection so I hav... | 1,731,480,648,000 | null | Bug Report | [
"docling/models/ds_glm_model.py:GlmModel.__init__"
] | [] | 1 | 523 | |
DS4SD/docling | DS4SD__docling-307 | 1239ade2750349d13d4e865d88449b232bbad944 | null | diff --git a/docling/backend/mspowerpoint_backend.py b/docling/backend/mspowerpoint_backend.py
index cbec761c..b71cd859 100644
--- a/docling/backend/mspowerpoint_backend.py
+++ b/docling/backend/mspowerpoint_backend.py
@@ -358,41 +358,36 @@ def walk_linear(self, pptx_obj, doc) -> DoclingDocument:
size = ... | In a specific PowerPoint, an issue with missing text occurred during parsing.
### Bug
<!-- In a specific PowerPoint, an issue with missing text occurred during parsing. -->
...
[specific PowerPoint]
[powerpoint_sample.pptx](https://github.com/user-attachments/files/17694015/powerpoint_sample.pptx)
...
### P... | @Crespo522 I'm working on the fix, in short - we need to handle grouped elements correctly. | 1,731,333,112,000 | null | Bug Report | [
"docling/backend/mspowerpoint_backend.py:MsPowerpointDocumentBackend.walk_linear"
] | [] | 1 | 524 | |
DS4SD/docling | DS4SD__docling-302 | 97f214efddcf66f0734a95c17c08936f6111d113 | null | diff --git a/docling/backend/html_backend.py b/docling/backend/html_backend.py
index 7d14c2eb..9cd1e29b 100644
--- a/docling/backend/html_backend.py
+++ b/docling/backend/html_backend.py
@@ -120,6 +120,8 @@ def analyse_element(self, element, idx, doc):
self.handle_header(element, idx, doc)
elif el... | Unable to extract code block in HTML page
When I try to extract the content from a webpage using ```docling```, I found it cannot extract **code blocks** in the webpage.
# Reproduce steps
HTML URL: https://requests.readthedocs.io/en/latest/user/quickstart/
```python
from docling.document_converter import Do... | 1,731,328,071,000 | null | Bug Report | [
"docling/backend/html_backend.py:HTMLDocumentBackend.analyse_element"
] | [
"docling/backend/html_backend.py:HTMLDocumentBackend.handle_code"
] | 1 | 525 | ||
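This record adds handling for code blocks that the HTML walk previously skipped. A stdlib sketch of the underlying idea, collecting the text of `<pre>`/`<code>` elements with `html.parser` (a simplified stand-in, not docling's actual backend):

```python
from html.parser import HTMLParser

class CodeBlockExtractor(HTMLParser):
    """Collect the text content of <pre>/<code> blocks."""

    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting level inside pre/code
        self.blocks = []  # extracted code block texts

    def handle_starttag(self, tag, attrs):
        if tag in ("pre", "code"):
            if self.depth == 0:
                self.blocks.append("")  # start a new block
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in ("pre", "code") and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:  # only keep text that sits inside a code block
            self.blocks[-1] += data

parser = CodeBlockExtractor()
parser.feed("<p>Run:</p><pre><code>import requests</code></pre>")
```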
certbot/certbot | certbot__certbot-10043 | 0e225dcba293441e7b8d420c9a210480f8c707d8 | null | diff --git a/tools/finish_release.py b/tools/finish_release.py
index 958d7672bc..56b92d2a1d 100755
--- a/tools/finish_release.py
+++ b/tools/finish_release.py
@@ -111,7 +111,7 @@ def get_snap_revisions(snap, channel, version):
print('Getting revision numbers for', snap, version)
cmd = ['snapcraft', 'status', ... | Fix regex in finish_release.py
```
(venv) certbot [3.0.0] » python3 tools/finish_release.py
certbot/tools/finish_release.py:114: SyntaxWarning: invalid escape sequence '\s'
pattern = f'^\s+{channel}\s+{version}\s+(\d+)\s*'
certbot/tools/finish_release.py:114: SyntaxWarning: invalid escape sequence '\s'
patter... | 1,730,849,552,000 | null | Bug Report | [
"tools/finish_release.py:get_snap_revisions"
] | [] | 1 | 526 | ||
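The SyntaxWarnings quoted in this record come from `\s`/`\d` escapes in a plain f-string; the usual fix is a raw f-string. A sketch of the corrected pattern (the `re.escape` call is an extra precaution added here, not necessarily part of the original script):

```python
import re

channel, version = "stable", "3.0.0"

# rf'...' keeps \s and \d as regex escapes; a bare f'...' triggers
# "invalid escape sequence" SyntaxWarnings on newer Pythons.
pattern = rf'^\s+{channel}\s+{re.escape(version)}\s+(\d+)\s*'

line = "   stable   3.0.0   1234  "
m = re.match(pattern, line)
```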
vitalik/django-ninja | vitalik__django-ninja-1349 | 97ef2914a7fffd058a311394a25af1fe489df722 | null | diff --git a/ninja/responses.py b/ninja/responses.py
index babd366e..6a0fd4ca 100644
--- a/ninja/responses.py
+++ b/ninja/responses.py
@@ -1,10 +1,11 @@
from enum import Enum
-from ipaddress import IPv4Address, IPv6Address
+from ipaddress import IPv4Address, IPv4Network, IPv6Address, IPv6Network
from typing import An... | [BUG] Object of type Url is not JSON serializable
**Describe the bug**
django-ninja = "^1.3.0"
Using `HttpUrl` (or, I suspect, any *Url class) for a schema used in a response results in json serialization error. This is the same type of issue as #717.
```pytb
Traceback (most recent call last):
File "/home/ad... | 1,733,135,333,000 | null | Bug Report | [
"ninja/responses.py:NinjaJSONEncoder.default"
] | [] | 1 | 527 | ||
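The patch for this record extends the JSON encoder's fallback to cover `ipaddress` network types alongside URL types. A stdlib sketch of that encoder-fallback pattern; the `ipaddress` types are the ones the patch imports, while the name-based `Url` branch here is a stand-in assumption, not Ninja's actual check:

```python
import json
from ipaddress import IPv4Address, IPv4Network, IPv6Address, IPv6Network

class UrlFriendlyEncoder(json.JSONEncoder):
    """Serialize address/network (and URL-like) objects via str()."""

    def default(self, o):
        if isinstance(o, (IPv4Address, IPv4Network, IPv6Address, IPv6Network)):
            return str(o)
        # Pydantic's Url types stringify sensibly too (assumed, name-based check).
        if type(o).__name__ in ("Url", "AnyUrl", "HttpUrl"):
            return str(o)
        return super().default(o)

payload = {"net": IPv4Network("10.0.0.0/24"), "addr": IPv6Address("::1")}
encoded = json.dumps(payload, cls=UrlFriendlyEncoder)
```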
pandas-dev/pandas | pandas-dev__pandas-60577 | b0192c70610a9db593968374ea60d189daaaccc7 | null | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 3c0c5cc64c24c..5652d7fab0c7c 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -241,7 +241,7 @@ def read_sql_table( # pyright: ignore[reportOverlappingOverload]
schema=...,
index_col: str | list[str] | None = ...,
coerce_float=...,
- parse_... | BUG: Type Annotation Inconsistency in read_sql_* Functions
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exist... | Thanks for the report!
> This problem is not always visible because the corresponding `pandas-stubs` already does this. The inconsistency appears however in some type checkers when additional stubs are not available or configured though.
It seems to me this is not appropriate. PEP 561 makes this quite clear I thi... | 1,734,286,166,000 | null | Bug Report | [
"pandas/io/sql.py:read_sql_table",
"pandas/io/sql.py:read_sql_query"
] | [] | 2 | 528 | |
pandas-dev/pandas | pandas-dev__pandas-60543 | 659eecf22a2e4c4a8f023c655a75a7135614a409 | null | diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 6fa21d9410187..b0c8ec1ffc083 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -430,7 +430,7 @@ def is_period_dtype(arr_or_dtype) -> bool:
Check whether an array-like or dtype is of the Period dtype.
... | DOC: Incorrect deprecation example for `is_period_dtype`
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/pa... | 1,733,944,385,000 | null | Bug Report | [
"pandas/core/dtypes/common.py:is_period_dtype"
] | [] | 1 | 529 | ||
pandas-dev/pandas | pandas-dev__pandas-60526 | 8a286fa16f3160e939b192cbe8e218992a84e6fc | e6e1987b988857bb511d3797400b4d1873e86760 | diff --git a/pandas/core/computation/expressions.py b/pandas/core/computation/expressions.py
index e2acd9a2c97c2..a2c3a706ae29c 100644
--- a/pandas/core/computation/expressions.py
+++ b/pandas/core/computation/expressions.py
@@ -65,23 +65,23 @@ def set_numexpr_threads(n=None) -> None:
ne.set_num_threads(n)
... | DOC: Update variables a and b to names consistent with comment documentation
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob... | 1,733,658,054,000 | null | Feature Request | [
"pandas/core/computation/expressions.py:_evaluate_standard",
"pandas/core/computation/expressions.py:_can_use_numexpr",
"pandas/core/computation/expressions.py:_evaluate_numexpr",
"pandas/core/computation/expressions.py:_where_standard",
"pandas/core/computation/expressions.py:_where_numexpr",
"pandas/cor... | [] | 8 | 530 | ||
pandas-dev/pandas | pandas-dev__pandas-60518 | 8a286fa16f3160e939b192cbe8e218992a84e6fc | 59f947ff40308bcfb6ecb65eb23b391d6f031c03 | diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index fe7e27f537b01..4a75acce46632 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -205,7 +205,7 @@ def generate(self, v) -> str:
val = v.tostring(self.encoding)
return ... | DOC: Convert v to conv_val in function for pytables.py
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
pandas\pandas\core\computation\pytables.py
### Documentation p... | 1,733,558,382,000 | null | Feature Request | [
"pandas/core/computation/pytables.py:BinOp.convert_value"
] | [] | 1 | 531 | ||
pandas-dev/pandas | pandas-dev__pandas-60512 | 659eecf22a2e4c4a8f023c655a75a7135614a409 | null | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d1aa20501b060..de7fb3682fb4f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -665,7 +665,7 @@ def size(self) -> int:
See Also
--------
- ndarray.size : Number of elements in the array.
+ numpy.ndarra... | DOC: methods in see also section in the pandas.DataFrame.size are not hyperlinks
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/refe... | take | 1,733,537,109,000 | null | Bug Report | [
"pandas/core/generic.py:NDFrame.size"
] | [] | 1 | 532 | |
pandas-dev/pandas | pandas-dev__pandas-60461 | a4fc97e92ed938260728e3f6c2b92df5ffb57b7f | null | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 137a49c4487f6..02b9291da9b31 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -87,8 +87,8 @@
if TYPE_CHECKING:
from collections.abc import (
+ Collection,
Sequence,
- Sized,
)
... | PERF: Melt 2x slower when future.infer_string option enabled
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this issue ... | @maver1ck Thanks for the report!
On main (and on my laptop), I see:
```
In [20]: pd.options.future.infer_string = False
In [21]: df = pd.DataFrame(data, columns=[f"column_name_{i}" for i in range(1, n_cols + 1)])
In [22]: df.insert(0, 'Id', ids)
In [23]: %timeit df_melted = df.melt(id_vars=['Id'], var_n... | 1,733,057,561,000 | null | Performance Issue | [
"pandas/core/dtypes/cast.py:construct_1d_object_array_from_listlike"
] | [] | 1 | 533 | |
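The hot path flagged in this melt() report runs through `construct_1d_object_array_from_listlike`. A sketch of the pre-size-and-assign idiom that function is built around (an illustration of the technique, not the merged patch, which mainly adjusts typing):

```python
import numpy as np

def construct_1d_object_array(values):
    # Pre-sizing an object array and assigning into it skips NumPy's
    # per-element shape inference, which chokes on mixed list-likes.
    result = np.empty(len(values), dtype=object)
    result[:] = values
    return result

arr = construct_1d_object_array(["a", "b", ("c", 1)])
```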
pandas-dev/pandas | pandas-dev__pandas-60457 | 844b3191bd45b95cbaae341048bf7f367f086f2f | cfd0d3f010217939e412efdcfb7e669567e4d189 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a6be17a654aa7..3a48cc8a66076 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3878,6 +3878,14 @@ def to_csv(
>>> import os # doctest: +SKIP
>>> os.makedirs("folder/subfolder", exist_ok=True) # doctest: +SKIP
... | DOC: Add examples for float_format in to_csv documentation
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/referen... | take | 1,733,028,703,000 | null | Feature Request | [
"pandas/core/generic.py:NDFrame.to_csv"
] | [] | 1 | 534 | |
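This record asks for `float_format` examples in the `to_csv` docs. A stdlib sketch that mimics what the printf-style form of `float_format` does to each float before it is written:

```python
import csv
import io

rows = [("x", 0.123456), ("y", 2.5)]

# DataFrame.to_csv accepts a printf-style string (or a callable) as
# float_format; applying it per value with the csv module shows the effect.
float_format = "%.2f"

buf = io.StringIO()
writer = csv.writer(buf)
for label, value in rows:
    writer.writerow([label, float_format % value])

output = buf.getvalue()
```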
pandas-dev/pandas | pandas-dev__pandas-60415 | 98f7e4deeff26a5ef993ee27104387a1a6e0d3d3 | 106f33cfce16f4e08f6ca5bd0e6e440ec9a94867 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 039bdf9c36ee7..a6be17a654aa7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -838,7 +838,7 @@ def pop(self, item: Hashable) -> Series | Any:
return result
@final
- def squeeze(self, axis: Axis | None = None):
+ ... | DOC: Missing type hint for squeeze method
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/core/generic.py
### D... | Can confirm, specifically this line: https://github.com/pandas-dev/pandas/blob/1c986d6213904fd7d9acc5622dc91d029d3f1218/pandas/core/generic.py#L841 | 1,732,555,390,000 | null | Feature Request | [
"pandas/core/generic.py:NDFrame.squeeze"
] | [] | 1 | 535 | |
pandas-dev/pandas | pandas-dev__pandas-60398 | e62fcb15a70dfb6f4c408cf801f83b216578335b | null | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 35b576da87ed7..4fa8b86fa4c16 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -567,7 +567,7 @@ def __arrow_c_stream__(self, requested_schema=None):
Export the pandas Series as an Arrow C stream PyCapsule.
This relies o... | DOC: Fix docstring typo
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/core/series.py
### Documentation proble... | take | 1,732,301,626,000 | null | Bug Report | [
"pandas/core/series.py:Series.__arrow_c_stream__",
"pandas/core/series.py:Series.drop_duplicates",
"pandas/core/series.py:Series.sort_values",
"pandas/core/series.py:Series.swaplevel"
] | [] | 4 | 536 | |
pandas-dev/pandas | pandas-dev__pandas-60310 | 61f800d7b69efa632c5f93b4be4b1e4154c698d7 | null | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b35e2c8497fb7..34eb198b4b4da 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2115,8 +2115,8 @@ def from_records(
"""
Convert structured or record ndarray to DataFrame.
- Creates a DataFrame object from a structure... | DOC: Dataframe.from_records should not say that passing in a DataFrame for data is allowed
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/d... | Thanks for the report, PRs to fix are welcome!
take | 1,731,578,353,000 | null | Bug Report | [
"pandas/core/frame.py:DataFrame.from_records"
] | [] | 1 | 537 | |
pandas-dev/pandas | pandas-dev__pandas-60277 | 4fcee0e431135bf6fa97440d4d7e17a96630fe6e | 61f800d7b69efa632c5f93b4be4b1e4154c698d7 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 35014674565ff..3a83a3997f881 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2211,8 +2211,9 @@ def to_excel(
via the options ``io.excel.xlsx.writer`` or
``io.excel.xlsm.writer``.
- merge_cells : bo... | DOC: Document merge_cells="columns" in to_excel
https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.to_excel.html
The `merge_cells` argument can also take `"columns"` due to #35384. This should be added to the docstring.
| take | 1,731,306,243,000 | null | Feature Request | [
"pandas/core/generic.py:NDFrame.to_excel"
] | [] | 1 | 538 | |
pandas-dev/pandas | pandas-dev__pandas-60247 | 5f23aced2f97f2ed481deda4eaeeb049d6c7debe | 73da90c14b124aab05b20422b066794738024a4d | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7c2cc5d33a5db..56031f20faa16 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7668,8 +7668,12 @@ def interpolate(
* 'linear': Ignore the index and treat the values as equally
spaced. This is the only metho... | DOC: Improve documentation df.interpolate() for methods ‘time’, ‘index’ and ‘values’
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/... | Thanks for the report, agreed this could use clarification. PRs to improve are welcome!
take | 1,731,082,540,000 | null | Feature Request | [
"pandas/core/generic.py:NDFrame.interpolate"
] | [] | 1 | 539 | |
pandas-dev/pandas | pandas-dev__pandas-60187 | dbeeb1f05bca199b3c1aed979e6ae72074a82243 | cbf6e420854e6bfba9d4b8896f879dd24997223f | diff --git a/pandas/core/series.py b/pandas/core/series.py
index fe2bb0b5aa5c3..d83d9715878f8 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2482,6 +2482,7 @@ def round(self, decimals: int = 0, *args, **kwargs) -> Series:
--------
numpy.around : Round values of an np.array.
... | DOC: Distinguish between Series.round and Series.dt.round
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/p... | I think it worth changing, can I take it?
take | 1,730,742,670,000 | null | Feature Request | [
"pandas/core/series.py:Series.round"
] | [] | 1 | 540 | |
huggingface/accelerate | huggingface__accelerate-3279 | cb8b7c637a8588668c52bd306f9b2828f69d9585 | null | diff --git a/src/accelerate/utils/modeling.py b/src/accelerate/utils/modeling.py
index 5f88e54e3c9..806f930acaa 100644
--- a/src/accelerate/utils/modeling.py
+++ b/src/accelerate/utils/modeling.py
@@ -1101,6 +1101,7 @@ def _init_infer_auto_device_map(
special_dtypes: Optional[Dict[str, Union[str, torch.device]]] =... | Calling infer_auto_device_map() with max_memory=None throws an error in version 1.2.0
### System Info
```Shell
accelerate==1.2.0
```
### Reproduction
Bug is from this commit:
https://github.com/huggingface/accelerate/commit/d7b1b368e9f484a18636a71600566b757d5cf87e
`max_memory` initialization was moved in... | @Nech-C
Sorry for the oversight. I will fix it ASAP. Thanks for pointing it out! | 1,733,630,086,000 | null | Bug Report | [
"src/accelerate/utils/modeling.py:_init_infer_auto_device_map",
"src/accelerate/utils/modeling.py:infer_auto_device_map"
] | [] | 2 | 541 | |
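Per this record, moving the `max_memory` initialization into a helper broke callers that pass `max_memory=None`. A stdlib sketch of the general fix pattern, defaulting inside the helper; all names and the CPU-only default are hypothetical simplifications, not Accelerate's real logic:

```python
def init_device_map(module_sizes, max_memory=None):
    """Resolve a max_memory mapping, tolerating max_memory=None."""
    if max_memory is None:
        # Derive a default inside the helper so a None argument no longer
        # reaches the placement logic uninitialized (hypothetical default).
        max_memory = {"cpu": sum(module_sizes.values())}
    return max_memory

limits = init_device_map({"encoder": 10, "decoder": 14})
```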
huggingface/accelerate | huggingface__accelerate-3261 | 29be4788629b772a3b722076e433b5b3b5c85da3 | null | diff --git a/examples/by_feature/megatron_lm_gpt_pretraining.py b/examples/by_feature/megatron_lm_gpt_pretraining.py
index 18488ec41e2..c9d4787ed83 100644
--- a/examples/by_feature/megatron_lm_gpt_pretraining.py
+++ b/examples/by_feature/megatron_lm_gpt_pretraining.py
@@ -252,7 +252,7 @@ def main():
if args.with... | [BUG] Accelerator.__init__() got an unexpected keyword argument 'logging_dir'
### System Info
```Shell
accelerate version: main
python version: 3.11
torch version: 2.4
numpy version: 1.26.4
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] One of the scripts i... | Thanks for pointing this out. I think it should be `project_dir` instead. Are you interested in submitting a PR to fix this?
For clarity, the file is at `https://github.com/huggingface/accelerate/blob/main/examples/by_feature/megatron_lm_gpt_pretraining.py` :)
of course
Thanks for pointing this out. I think it should b... | 1,732,582,927,000 | null | Bug Report | [
"examples/by_feature/megatron_lm_gpt_pretraining.py:main"
] | [] | 1 | 542 | |
huggingface/trl | huggingface__trl-2433 | 9ff79a65e3d1c28b7ee8bc0912b2fbdceb3dbeec | null | diff --git a/trl/trainer/rloo_trainer.py b/trl/trainer/rloo_trainer.py
index 106426073f..f2e3eb9674 100644
--- a/trl/trainer/rloo_trainer.py
+++ b/trl/trainer/rloo_trainer.py
@@ -279,7 +279,7 @@ def repeat_generator():
# trainer state initialization
self.state.global_step = 0
self.state.episo... | RLOO Trainer Stopping After 1 Epoch
### System Info
- Platform: Linux-3.10.0-693.11.6.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.5
- PyTorch version: 2.4.0
- CUDA device(s): not available
- Transformers version: 4.46.2
- Accelerate version: 1.1.1
- Accelerate config: not found
- Datasets version: 3.... | 1,733,253,459,000 | null | Bug Report | [
"trl/trainer/rloo_trainer.py:RLOOTrainer.train"
] | [] | 1 | 543 | ||
huggingface/trl | huggingface__trl-2417 | 9c5388b69e0842f76edc46a2ff9d0b51e1db4337 | null | diff --git a/trl/trainer/online_dpo_trainer.py b/trl/trainer/online_dpo_trainer.py
index 7830d3fe64..56edd22be5 100644
--- a/trl/trainer/online_dpo_trainer.py
+++ b/trl/trainer/online_dpo_trainer.py
@@ -284,7 +284,10 @@ def __init__(
self.reward_model = prepare_deepspeed(
self.rewa... | Online DPO Meets Error When Using Deepspeed for Speed Up.
### System Info
!pip install git+https://github.com/huggingface/trl.git
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give... | Sorry, I use "deepspeed_zero2.yaml" and it should be
!ACCELERATE_LOG_LEVEL=info accelerate launch --config_file deepspeed_zero2.yaml
online_dpo.py
--model_name_or_path mistralai/Mistral-7B-v0.1
--reward_model_path Ray2333/GRM-Llama3.2-3B-rewardmodel-ft
--dataset_name nvidia/HelpSteer2
--learning_rate 5.0e-6
-... | 1,732,904,159,000 | null | Bug Report | [
"trl/trainer/online_dpo_trainer.py:OnlineDPOTrainer.__init__"
] | [] | 1 | 544 | |
huggingface/trl | huggingface__trl-2332 | 74e20cbbbcbac7ac8d426df09eda5f310c637def | null | diff --git a/trl/trainer/dpo_trainer.py b/trl/trainer/dpo_trainer.py
index b563cab2f5..0c9883387a 100644
--- a/trl/trainer/dpo_trainer.py
+++ b/trl/trainer/dpo_trainer.py
@@ -1086,10 +1086,10 @@ def concatenated_forward(self, model: nn.Module, batch: Dict[str, Union[List, to
# Get the first column idx th... | Wrong tensor index for roll and truncate in DPOTrainer fn concatenated_forward( ).
### System Info
it is a tensor index error
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give det... | Good catch! Thanks! Do you mind opening a PR to fix that? | 1,730,897,529,000 | null | Bug Report | [
"trl/trainer/dpo_trainer.py:DPOTrainer.concatenated_forward"
] | [] | 1 | 545 | |
huggingface/trl | huggingface__trl-2325 | 74e20cbbbcbac7ac8d426df09eda5f310c637def | null | diff --git a/trl/trainer/rloo_trainer.py b/trl/trainer/rloo_trainer.py
index 7bbd39264d..e33899f5d9 100644
--- a/trl/trainer/rloo_trainer.py
+++ b/trl/trainer/rloo_trainer.py
@@ -263,7 +263,6 @@ def repeat_generator():
approxkl_stats = torch.zeros(stats_shape, device=device)
pg_clipfrac_stats = torch.... | Several problems in RLOOTrainer
### System Info
main
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
1. metrics["loss/value_avg"] = self.accele... | 1,730,747,016,000 | null | Bug Report | [
"trl/trainer/rloo_trainer.py:RLOOTrainer.train"
] | [] | 1 | 546 | ||
sympy/sympy | sympy__sympy-27301 | a7719e719c0b43ec1dbb964b01b57c4f3783be8d | null | diff --git a/sympy/plotting/plot.py b/sympy/plotting/plot.py
index 63da0440dabb..50029392a1ac 100644
--- a/sympy/plotting/plot.py
+++ b/sympy/plotting/plot.py
@@ -301,8 +301,8 @@ def plot(*args, show=True, **kwargs):
:external:meth:`~matplotlib.axes.Axes.fill_between` method.
adaptive : bool, optional
-... | DOC: outdated information about adaptive sampling in plot() function
I have recently learned (https://github.com/mgeier/python-audio/issues/4) that SymPy doesn't use adaptive sampling by default anymore.
Therefore, this documentation is outdated:
https://github.com/sympy/sympy/blob/a7719e719c0b43ec1dbb964b01b57c4f378... | 1,732,293,434,000 | null | Bug Report | [
"sympy/plotting/plot.py:plot"
] | [] | 1 | 547 | ||
SYSTRAN/faster-whisper | SYSTRAN__faster-whisper-1198 | b568faec40eef1fee88f8aeb27ac3f9d6e006ba4 | null | diff --git a/faster_whisper/vad.py b/faster_whisper/vad.py
index 9605931c..1f7d2057 100644
--- a/faster_whisper/vad.py
+++ b/faster_whisper/vad.py
@@ -260,8 +260,9 @@ def __init__(self, encoder_path, decoder_path):
) from e
opts = onnxruntime.SessionOptions()
- opts.inter_op_num_threads =... | OOM when using VAD
Hi, does somebody else experience issues with memory consumption when transcribing audio files containing a lot of speech (~ 4 hours long)? I am running the latest version of faster-whisper in a Kubernetes pod on a g4dn AWS instance. The server has 4 cores, 1 GPU, and 16GB RAM, but the pod is limited... | 1,733,855,723,000 | null | Performance Issue | [
"faster_whisper/vad.py:SileroVADModel.__init__",
"faster_whisper/vad.py:SileroVADModel.__call__"
] | [] | 2 | 548 | ||
SYSTRAN/faster-whisper | SYSTRAN__faster-whisper-1157 | bcd8ce0fc72d1fa4e42bdf5fd34d5d17bae680c2 | null | diff --git a/faster_whisper/transcribe.py b/faster_whisper/transcribe.py
index 067527f1..763d64ac 100644
--- a/faster_whisper/transcribe.py
+++ b/faster_whisper/transcribe.py
@@ -1699,12 +1699,14 @@ def find_alignment(
# array([0.])
# This results in crashes when we lookup jump_times w... | IndexError: list index out of range in add_word_timestamps function
Hi,
I found a rare condition, with a specific wav file, specific language and prompt, when I try to transcribe with word_timestamps=True, there is a list index out of range error in add_word_timestamps function:
```
File "/usr/local/src/transcr... | I'm aware that this error exists but I had no luck in reproducing it, can you write the exact steps to reproduce and upload the audio file?
Yes. The sample python code that generates the issue:
```
import torch
from faster_whisper import WhisperModel
asr_model = WhisperModel("large-v3-turbo", device="cuda", compu... | 1,732,098,639,000 | null | Bug Report | [
"faster_whisper/transcribe.py:WhisperModel.find_alignment",
"faster_whisper/transcribe.py:merge_punctuations"
] | [] | 2 | 549 | |
SYSTRAN/faster-whisper | SYSTRAN__faster-whisper-1141 | 85e61ea11173dce3f10ce05e4b4bc1a2939d9e4e | null | diff --git a/faster_whisper/transcribe.py b/faster_whisper/transcribe.py
index 6d18a173..80e5d92c 100644
--- a/faster_whisper/transcribe.py
+++ b/faster_whisper/transcribe.py
@@ -174,6 +174,9 @@ def forward(self, features, chunks_metadata, **forward_params):
compression_ratio=get_compression_ra... | Some segment has a 1 second shifted after PR #856
appreciate your hard work
---
audio (2 minutes): [01.aac.zip](https://github.com/user-attachments/files/17751633/01.aac.zip)
The correct SRT result (using commit fbcf58b, which is before the huge PR #856): [01.old.srt.zip](https://github.com/user-attachments/fi... | 1,731,607,572,000 | null | Bug Report | [
"faster_whisper/transcribe.py:BatchedInferencePipeline.forward",
"faster_whisper/transcribe.py:BatchedInferencePipeline._batched_segments_generator",
"faster_whisper/transcribe.py:WhisperModel.generate_segments",
"faster_whisper/transcribe.py:WhisperModel.add_word_timestamps"
] | [] | 4 | 550 | ||
mlflow/mlflow | mlflow__mlflow-13821 | 15dbca59de6974d1ed9ce1e801edefd86b6a87ef | null | diff --git a/mlflow/models/model.py b/mlflow/models/model.py
index 2326c3df57402..7ae1fbede42db 100644
--- a/mlflow/models/model.py
+++ b/mlflow/models/model.py
@@ -1116,9 +1116,20 @@ def update_model_requirements(
def _validate_langchain_model(model):
- from mlflow.langchain import _validate_and_prepare_lc_mod... | [BUG] MLflow langchain does not support logging RunnableWithMessageHistory
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Databricks
###... | @VarunUllanat The workaround is to use `models from code` for saving the langchain model https://mlflow.org/docs/latest/models.html#models-from-code. This will be the recommended way for saving langchain models.
Thanks for the response, when I set that:
`mlflow.models.set_model(model=conversational_rag_chain)`
I ... | 1,731,987,688,000 | null | Bug Report | [
"mlflow/models/model.py:_validate_langchain_model"
] | [] | 1 | 551 | |
jax-ml/jax | jax-ml__jax-25487 | c73f3060997ac3b1c6de4f075111b684ea20b6ac | null | diff --git a/jax/_src/random.py b/jax/_src/random.py
index 13c4ab4dbce4..12aa5b93efbf 100644
--- a/jax/_src/random.py
+++ b/jax/_src/random.py
@@ -291,15 +291,18 @@ def split(key: ArrayLike, num: int | tuple[int, ...] = 2) -> Array:
return _return_prng_keys(wrapped, _split(typed_key, num))
-def _key_impl(keys: A... | `jax.random.beta` 3 orders of magnitude slower from 0.4.36 on GPU
### Description
My code runs substantially slower than one month ago, and I figured out a key bottleneck: sampling from the beta distribution has gotten around 1000 times slower on GPU.
On Colab, I run the following code on different versions of jax
``... | I can reproduce this, but I'm not totally sure where this would be coming from. Perhaps @jakevdp or @froystig could take a look re: recent changes to PRNGs?
My bisection points to https://github.com/jax-ml/jax/pull/24593 | 1,734,133,002,000 | null | Performance Issue | [
"jax/_src/random.py:_key_impl",
"jax/_src/random.py:key_impl"
] | [
"jax/_src/random.py:_key_spec"
] | 2 | 552 | |
jax-ml/jax | jax-ml__jax-24733 | 4b4fb9dae9eb7e2740d70de5b4a610f979530382 | null | diff --git a/jax/_src/numpy/reductions.py b/jax/_src/numpy/reductions.py
index fa8d73361e2b..be1e55675079 100644
--- a/jax/_src/numpy/reductions.py
+++ b/jax/_src/numpy/reductions.py
@@ -2360,7 +2360,8 @@ def _quantile(a: Array, q: Array, axis: int | tuple[int, ...] | None,
index[axis] = high
high_value = a[t... | median FloatingPointError: invalid value (nan) encountered in jit(convert_element_type)
### Description
Hello,
I got this error in jnp.median when I set JAX_DISABLE_JIT=True and JAX_DEBUG_NANS=True.
```
Traceback (most recent call last):
File "/data1/home/hhu17/zyl/PINE/H2+/3/test.py", line 29, in <module>
... | Looks like it's coming from the NaN introduced on this line:
https://github.com/jax-ml/jax/blob/4b4fb9dae9eb7e2740d70de5b4a610f979530382/jax/_src/numpy/reductions.py#L2363
@jakevdp Can I tag you here since you wrote the implementation for _quantile? | 1,730,849,863,000 | null | Bug Report | [
"jax/_src/numpy/reductions.py:_quantile"
] | [] | 1 | 553 | |
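The patched line in `_quantile` interpolates toward a neighbor value that can be NaN, and `0 * nan` is still `nan` — exactly what `JAX_DEBUG_NANS` flags. A plain-Python sketch (not the JAX code) of the failure and a guarded alternative:

```python
# Plain-Python illustration, not JAX: interpolating toward a NaN neighbor
# poisons the result even when its weight is zero, because 0 * nan is nan.
import math

def lerp_naive(low, high, frac):
    return low + frac * (high - low)

def lerp_guarded(low, high, frac):
    # skip the high neighbor entirely when its weight is zero
    return low if frac == 0.0 else low + frac * (high - low)

print(math.isnan(lerp_naive(1.0, float("nan"), 0.0)))  # True
print(lerp_guarded(1.0, float("nan"), 0.0))            # 1.0
```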
jax-ml/jax | jax-ml__jax-24717 | 34b4787e2eff9edbd8eca242a74f1c165388b871 | null | diff --git a/jax/_src/scipy/stats/_core.py b/jax/_src/scipy/stats/_core.py
index 08d1c0b6b538..f7b28d3ac301 100644
--- a/jax/_src/scipy/stats/_core.py
+++ b/jax/_src/scipy/stats/_core.py
@@ -198,13 +198,12 @@ def rankdata(
return jnp.apply_along_axis(rankdata, axis, a, method)
arr = jnp.ravel(a)
- sorter = j... | scipy.stats.rankdata causes constant folding warning for method='dense' but not method='ordinal'
### Description
[`scipy.stats.rankdata`](https://jax.readthedocs.io/en/latest/_autosummary/jax.scipy.stats.rankdata.html) causes a constant folding warning for `method='dense'` but not `method='ordinal'`:
```
$ py -c "... | 1,730,812,512,000 | null | Performance Issue | [
"jax/_src/scipy/stats/_core.py:rankdata"
] | [] | 1 | 554 | ||
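For context, "dense" ranking is just each value's 1-based position among the sorted unique values. A pure-Python sketch (not the traceable JAX implementation):

```python
# Pure-Python sketch of scipy's method='dense' ranking semantics.
def dense_rank(xs):
    pos = {v: i + 1 for i, v in enumerate(sorted(set(xs)))}
    return [pos[x] for x in xs]

print(dense_rank([30, 10, 10, 20]))  # [3, 1, 1, 2]
```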
phidatahq/phidata | phidatahq__phidata-1589 | 2c18b480f349eee62e16a794a250ed8549558cb1 | null | diff --git a/phi/document/chunking/recursive.py b/phi/document/chunking/recursive.py
index 662a9218c..47c552294 100644
--- a/phi/document/chunking/recursive.py
+++ b/phi/document/chunking/recursive.py
@@ -38,6 +38,7 @@ def chunk(self, document: Document) -> List[Document]:
chunk_id = None
if d... | Duplicate key value violates unique constraint with recursive chunking
When using `RecursiveChunking` with large files, some errors happen:
```
ERROR Error with batch starting at index 0: (psycopg.errors.UniqueViolation) duplicate key value violates unique constraint "recipes_agentic_recursive_chunking_pkey"
... | 1,734,420,482,000 | null | Bug Report | [
"phi/document/chunking/recursive.py:RecursiveChunking.chunk"
] | [] | 1 | 555 | ||
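A minimal sketch of one way to avoid such a collision (a hypothetical ID scheme, not phidata's actual one): derive each chunk's primary key from the document name plus a running index, so recursive splits never reuse a key.

```python
# Hypothetical chunk-id scheme: document name plus a running 1-based index.
def assign_chunk_ids(doc_name, chunks):
    return [(f"{doc_name}_chunk_{i}", text) for i, text in enumerate(chunks, 1)]

rows = assign_chunk_ids("recipes", ["a", "b", "a"])  # duplicate text is fine
ids = [rid for rid, _ in rows]
assert len(set(ids)) == len(ids)  # no primary-key collisions
```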
phidatahq/phidata | phidatahq__phidata-1583 | 54f7a22970f66c32409607e2f1e3474a7a11a395 | null | diff --git a/phi/memory/agent.py b/phi/memory/agent.py
index 6bfd6c185..5f3a7dea1 100644
--- a/phi/memory/agent.py
+++ b/phi/memory/agent.py
@@ -1,5 +1,6 @@
from enum import Enum
from typing import Dict, List, Any, Optional, Tuple
+from copy import deepcopy
from pydantic import BaseModel, ConfigDict
@@ -357,8 +3... | Agents with memory dont work in playground
Repro Steps
```
memory_db = SqliteMemoryDb(table_name="memories", db_file="tmp/agents.db")
agent = Agent(
name="my_agent",
agent_id="my_agent",
model=models["gpt-4o"],
debug_mode=True,
memory=AgentMemory(
db=memory_db,
create_use... | Hey @nikhil-pandey, did you push a fix for this in your PR? Or are you still encountering this issue?
@manthanguptaa I have the same issue
```
File "/Users/fireharp/.pyenv/versions/3.11.9/lib/python3.11/copy.py", line 161, in deepcopy
rv = reductor(4)
^^^^^^^^^^^
TypeError: cannot pickle 'module' object... | 1,734,372,194,000 | null | Bug Report | [
"phi/memory/agent.py:AgentMemory.deep_copy"
] | [] | 1 | 556 | |
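The `cannot pickle 'module' object` traceback above is what `deepcopy` raises when it reaches an unpicklable attribute. A generic sketch (illustrative, not phidata's code) of the usual cure: deep-copy the plain state but share handles such as DB connections.

```python
# Illustrative custom deep_copy: copy plain state, share unpicklable handles.
from copy import deepcopy

class Memory:
    def __init__(self, db):
        self.db = db             # stand-in for an unpicklable handle
        self.runs = [{"id": 1}]

    def deep_copy(self):
        new = Memory.__new__(Memory)
        new.db = self.db               # shared, never deep-copied
        new.runs = deepcopy(self.runs)
        return new

m = Memory(db=object())
c = m.deep_copy()
print(c.db is m.db, c.runs == m.runs, c.runs is not m.runs)  # True True True
```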
phidatahq/phidata | phidatahq__phidata-1582 | 54f7a22970f66c32409607e2f1e3474a7a11a395 | null | diff --git a/phi/tools/function.py b/phi/tools/function.py
index 24d103165..89520833e 100644
--- a/phi/tools/function.py
+++ b/phi/tools/function.py
@@ -175,7 +175,7 @@ def process_entrypoint(self, strict: bool = False):
except Exception as e:
logger.warning(f"Could not parse args for {self.name}:... | Bedrock - Claude 3.5 Sonnet not working for Multi Agent Team
**When trying to run a Multi-Agent Team using Amazon Bedrock Claude 3.5 Sonnet, I get the following error.**
Traceback (most recent call last):
File "/Users/RyanBlake/Desktop/Source Control/PhiData Agents/FinanceAgentTeam.py", line 34, in <module>
... | hey @billybobpersonal, I am going to try to replicate the issue today. Allow me some time
@manthanguptaa thanks.
Were you able to replicate it?
Or would you like me to send more info.
Hey @billybobpersonal, I was able to replicate it. I am working on a fix for it | 1,734,369,612,000 | null | Bug Report | [
"phi/tools/function.py:Function.process_entrypoint"
] | [] | 1 | 557 | |
phidatahq/phidata | phidatahq__phidata-1563 | 8f55f8b1d3fc13d46ad840666225ff2f9885cb68 | null | diff --git a/phi/tools/crawl4ai_tools.py b/phi/tools/crawl4ai_tools.py
index a7ca95c78..172953744 100644
--- a/phi/tools/crawl4ai_tools.py
+++ b/phi/tools/crawl4ai_tools.py
@@ -1,9 +1,10 @@
+import asyncio
from typing import Optional
from phi.tools import Toolkit
try:
- from crawl4ai import WebCrawler
+ fr... | Crawl4AI tool has error
I tweaked example code from here:
https://docs.phidata.com/tools/crawl4ai
and used this code:
```
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.crawl4ai_tools import Crawl4aiTools
from dotenv import load_dotenv
load_dotenv()
agent = Agent(
m... | Hey @vanetreg, I am able to replicate this error. Allow me some time to fix this issue. | 1,734,095,142,000 | null | Bug Report | [
"phi/tools/crawl4ai_tools.py:Crawl4aiTools.web_crawler"
] | [
"phi/tools/crawl4ai_tools.py:Crawl4aiTools._async_web_crawler"
] | 1 | 558 | |
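The diff above swaps the synchronous `WebCrawler` for `AsyncWebCrawler`, which forces a sync-over-async wrapper. A minimal sketch of that pattern (the crawler call here is a stand-in, not crawl4ai's real API):

```python
# Sync-over-async wrapper: toolkit entrypoints stay blocking while the
# underlying crawler is a coroutine.
import asyncio

async def _async_web_crawler(url: str) -> str:
    await asyncio.sleep(0)  # stands in for the real async crawl call
    return f"markdown for {url}"

def web_crawler(url: str) -> str:
    # drive the coroutine to completion on a fresh event loop
    return asyncio.run(_async_web_crawler(url))

print(web_crawler("https://example.com"))  # markdown for https://example.com
```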
phidatahq/phidata | phidatahq__phidata-1562 | bd734bc8528aec12d1387064ab9cac571508fc7f | null | diff --git a/phi/model/google/gemini.py b/phi/model/google/gemini.py
index 4a11c1c43..263d3afb0 100644
--- a/phi/model/google/gemini.py
+++ b/phi/model/google/gemini.py
@@ -23,7 +23,7 @@
GenerateContentResponse as ResultGenerateContentResponse,
)
from google.protobuf.struct_pb2 import Struct
-except ... | ToolKit functions with no arguments cause an error when using Gemini models.
phidata version: 2.7.2
**To reproduce**: Use a Gemini model and provide a toolkit with a registered method that takes no arguments.
**Expected behaviour**: Model can successfully use the tool.
**Actual behaviour**: The gemini library ret... | 1,734,091,646,000 | null | Bug Report | [
"phi/model/google/gemini.py:Gemini.add_tool"
] | [
"phi/model/google/gemini.py:Gemini._build_function_declaration"
] | 1 | 559 | ||
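A generic sketch of the workaround's shape (not the actual google-generativeai schema): for zero-argument tools, omit the parameters field entirely rather than emitting an empty object schema the API rejects.

```python
# Illustrative declaration builder: drop "parameters" for no-arg tools.
def build_function_declaration(name, params):
    decl = {"name": name}
    if params:
        decl["parameters"] = {"type": "object", "properties": params}
    return decl

print(build_function_declaration("get_time", {}))  # {'name': 'get_time'}
print("parameters" in build_function_declaration("add", {"x": {"type": "number"}}))  # True
```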
nltk/nltk | nltk__nltk-3335 | 9a5622f8a5b228df9499cd03181d9f8491e39f17 | null | diff --git a/nltk/app/wordnet_app.py b/nltk/app/wordnet_app.py
index 48fe1e30f6..437eb0f755 100644
--- a/nltk/app/wordnet_app.py
+++ b/nltk/app/wordnet_app.py
@@ -414,7 +414,7 @@ def get_relations_data(word, synset):
),
),
)
- elif synset.pos() == wn.ADJ or synset.pos == wn.ADJ... | Missing procedure call in line 417
Line 417 of the file "nltk/app/wordnet_app.py" should look like this:
elif synset.pos() == wn.ADJ or synset.pos() == wn.ADJ_SAT:
but instead looks like this:
elif synset.pos() == wn.ADJ or synset.pos == wn.ADJ_SAT:
which will generate this error (complete with spel... | Thanks @drewvid, would you consider correcting both spelling errors in a PR?
Sure | 1,729,499,882,000 | null | Bug Report | [
"nltk/app/wordnet_app.py:get_relations_data"
] | [] | 1 | 560 | |
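The one-character bug quoted above is a classic Python pitfall: without parentheses, `synset.pos` is the bound-method object itself, so comparing it to a constant is always False. A minimal reproduction:

```python
# Bound method vs. method call: the comparison silently goes the wrong way.
class Synset:
    def pos(self):
        return "a"

s = Synset()
print(s.pos == "a")    # False: the method object is compared to a string
print(s.pos() == "a")  # True: the method's return value is compared
```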
kedro-org/kedro | kedro-org__kedro-4299 | 84b71b1436942d70f181a083991806cf75d5cd6d | null | diff --git a/kedro/framework/cli/cli.py b/kedro/framework/cli/cli.py
index f5917e1b87..6ad4e24e97 100644
--- a/kedro/framework/cli/cli.py
+++ b/kedro/framework/cli/cli.py
@@ -217,7 +217,7 @@ def global_groups(self) -> Sequence[click.MultiCommand]:
combines them with the built-in ones (eventually overriding the... | `kedro --version` isn't working
## Description
Reported by @noklam, since adding lazy loading of Kedro subcommands, the `--version`/`-V` option isn't working.
## Context
This bug is originating in Kedro 0.19.7 -> https://github.com/kedro-org/kedro/pull/3883
| > Usage: kedro [OPTIONS] COMMAND [ARGS]...
> Try 'kedro -h' for help.
>
> Error: No such option: -v
>
This is the stack trace when run `kedro -V`, `kedro -v ` or `kedro --version`
While investgating this issue, I think it's worth checking why CI didn't catch this error, we have this test inplace.
```python... | 1,730,797,930,000 | null | Bug Report | [
"kedro/framework/cli/cli.py:KedroCLI.global_groups"
] | [] | 1 | 561 | |
dask/dask | dask__dask-11608 | 24c492095a791696ce6611e9d2294274f4592911 | null | diff --git a/dask/_task_spec.py b/dask/_task_spec.py
index 316f1805aa6..c108bbb5b6b 100644
--- a/dask/_task_spec.py
+++ b/dask/_task_spec.py
@@ -799,6 +799,7 @@ def __init__(
None,
self.to_container,
*args,
+ klass=self.klass,
_dependencies=_dependencies,
... | `NestedContainer.to_container` method gets tracked individually per NestedContainer object
Looking into https://github.com/dask/distributed/issues/8958, I've noticed that for each `NestedContainer` object, its bound `to_container` method is tracked individually by the GC. This accounts for ~500k of 9MM objects in my wo... | On top, that is very likely a self referencing cycle so breaking this will benefit GC in more than one way | 1,734,442,914,000 | null | Performance Issue | [
"dask/_task_spec.py:NestedContainer.__init__",
"dask/_task_spec.py:NestedContainer.to_container"
] | [] | 2 | 562 | |
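The per-object tracking described above comes from storing a bound method on each instance, which also creates a self-referencing cycle. A generic Python demonstration (simplified from the dask case):

```python
# A bound method stored on its own instance forms a cycle:
# obj -> callback (bound method) -> __self__ -> obj.
import gc
import weakref

class Container:
    def to_container(self):
        return []

gc.disable()                     # make the demonstration deterministic
obj = Container()
obj.callback = obj.to_container  # creates the cycle
ref = weakref.ref(obj)
del obj
alive_after_del = ref() is not None  # refcounting alone can't free it
gc.collect()                          # manual collection works while disabled
collected = ref() is None             # only the cyclic collector reclaims it
gc.enable()
print(alive_after_del, collected)     # True True
```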
dask/dask | dask__dask-11539 | 5b115c4360fec6a4aa6e0edf8ad1d89a87c986dd | null | diff --git a/dask/array/core.py b/dask/array/core.py
index 10736af6f9d..0a7ebeb1b7c 100644
--- a/dask/array/core.py
+++ b/dask/array/core.py
@@ -3754,9 +3754,9 @@ def from_zarr(
store = zarr.storage.FSStore(url, **storage_options)
else:
store = url
- z = zarr.open_array(sto... | Warning raised with default `from_zarr` settings
**Describe the issue**:
Reading a zarr array with `dask.array.from_zarr` raises a `UserWarning`, but I'm not doing anything wrong.
**Minimal Complete Verifiable Example**:
```python
import dask.array
import zarr
zarr_arr = zarr.open(shape=(6, 6, 6), store=... | 1,732,053,620,000 | null | Bug Report | [
"dask/array/core.py:from_zarr"
] | [] | 1 | 563 | ||
dask/dask | dask__dask-11491 | fa8fecf10a94971f2f31df57d504d25bef4dd57e | null | diff --git a/dask/array/core.py b/dask/array/core.py
index fdf65bd24a4..3065406a922 100644
--- a/dask/array/core.py
+++ b/dask/array/core.py
@@ -562,7 +562,9 @@ def map_blocks(
Dimensions lost by the function.
new_axis : number or iterable, optional
New dimensions created by the function. Note th... | `map_blocks()` with `new_axis` output has incorrect shape
<!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2... | I don't think that we can guess the output shape with a high degree of fidelity. We should probably either set all chunks to NaN or force the specification of chunks.
Being able to specify the size of new output dimensions if known would be nice. e.g., in the above toy example we know the size of the new dimension is g... | 1,730,759,801,000 | null | Bug Report | [
"dask/array/core.py:map_blocks"
] | [] | 1 | 564 | |
feast-dev/feast | feast-dev__feast-4727 | e9cd3733f041da99bb1e84843ffe5af697085c34 | null | diff --git a/sdk/python/feast/feature_server.py b/sdk/python/feast/feature_server.py
index 26ee604e79..1f4918fe7a 100644
--- a/sdk/python/feast/feature_server.py
+++ b/sdk/python/feast/feature_server.py
@@ -24,6 +24,7 @@
FeastError,
FeatureViewNotFoundException,
)
+from feast.feast_object import FeastObject
... | Wrong permission asserts on materialize endpoints
## Expected Behavior
The `assert_permissions` function expects a `resources` argument of type `FeastObject`.
## Current Behavior
Materialization endpoints in `feature_server` module receive instead a `str`, as in [/materialize](https://github.com/feast-dev/feast/blob/60fb... | 1,730,404,565,000 | null | Bug Report | [
"sdk/python/feast/feature_server.py:get_app"
] | [] | 1 | 565 | ||
python/mypy | python__mypy-18292 | c4f5056d6c43db556b5215cb3c330fcde25a77cd | null | diff --git a/mypy/main.py b/mypy/main.py
index e1c9f20400bc..d2a28a18c6a8 100644
--- a/mypy/main.py
+++ b/mypy/main.py
@@ -9,6 +9,7 @@
import time
from collections import defaultdict
from gettext import gettext
+from io import TextIOWrapper
from typing import IO, Any, Final, NoReturn, Sequence, TextIO
from mypy ... | Error when displaying error that contains unicode characters in Windows
<!--
If you're new to mypy and you're not sure whether what you're experiencing is a mypy bug, please see the "Question and Help" form
instead.
Please also consider:
- checking our common issues page: https://mypy.readthedocs.io/en/st... | My 'fix' doesn't really work perfectly. Something in Windows+emacs+flycheck doesn't decode the mypy output as unicode, and what I see in Emacs is `file.py:1:5: error: Name "γ" is not defined`. But that's probably not a mypy issue.
Update: I tested this with updated mypy 0.950 in Windows and Ubuntu, and couldn't reprod... | 1,734,121,592,000 | null | Bug Report | [
"mypy/main.py:main"
] | [] | 1 | 566 | |
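The patch imports `TextIOWrapper`, which suggests re-wrapping the output stream. A stdlib-only sketch (an assumption about the approach, not mypy's exact code) showing how a replacement error handler lets a message containing "γ" survive a legacy Windows codepage:

```python
# Wrapping a byte stream with an error handler instead of letting the
# default strict codec raise UnicodeEncodeError on "γ".
import io

raw = io.BytesIO()
out = io.TextIOWrapper(raw, encoding="cp1252", errors="backslashreplace")
out.write('Name "γ" is not defined\n')
out.flush()
print(raw.getvalue())  # contains b'\\u03b3' instead of raising
```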
albumentations-team/albumentations | albumentations-team__albumentations-2183 | 47c24503e0636f258e2af2b18e552d52271308bf | null | diff --git a/albumentations/augmentations/functional.py b/albumentations/augmentations/functional.py
index 52adf80df..2dc1dd07f 100644
--- a/albumentations/augmentations/functional.py
+++ b/albumentations/augmentations/functional.py
@@ -925,7 +925,12 @@ def add_sun_flare_overlay(
overlay = img.copy()
output =... | [RandomSunFlare] Add transparency to RandomSunFlare

Sunflare obscures the object
| Can I assume explore.albumentations.ai hosts latest commit on main?
Typically yes, unless I forget to update the explore.albumentations.ai
Right now it is the latest. | 1,733,844,294,000 | null | Feature Request | [
"albumentations/augmentations/functional.py:add_sun_flare_overlay"
] | [] | 1 | 567 | |
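A plain-Python sketch of the requested transparency (standing in for an OpenCV-style weighted blend): mix the flare into the image with an alpha weight instead of overwriting pixels outright.

```python
# Per-channel alpha blend: alpha * flare + (1 - alpha) * base.
def blend(base_px, flare_px, alpha):
    return tuple(round(alpha * f + (1 - alpha) * b) for b, f in zip(base_px, flare_px))

print(blend((100, 100, 100), (255, 255, 255), 0.2))  # (131, 131, 131)
```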
bridgecrewio/checkov | bridgecrewio__checkov-6826 | 24535627d7315014328ec034daa3362a72948d09 | null | diff --git a/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py b/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py
index 563798a01d0..d2011578ec6 100644
--- a/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py
+++ b/checkov/terraform/checks/resource/aws/EKSPlatformVersion.py
@@ -24,7 +24,8 ... | Add EKS 1.31 as a supported version
**Describe the issue**
EKS 1.31 has been released. However `CKV_AWS_339` fails as this is not listed as a supported version.
**Examples**
```
resource "aws_eks_cluster" "eks_cluster" {
...
version = "1.31"
```
**Version (please complete the following ... | @zvickery thanks for the comment, please feel free to contribute as this is the fastest way our checks could be updated :) | 1,731,355,205,000 | null | Feature Request | [
"checkov/terraform/checks/resource/aws/EKSPlatformVersion.py:EKSPlatformVersion.get_expected_values"
] | [] | 1 | 568 | |
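One way to keep such a check from going stale (illustrative, not checkov's implementation) is to derive the accepted versions from the newest supported minor:

```python
# Generate the accepted EKS versions as a sliding window of recent minors.
def supported_eks_versions(latest_minor=31, window=4):
    return [f"1.{m}" for m in range(latest_minor - window + 1, latest_minor + 1)]

print(supported_eks_versions())  # ['1.28', '1.29', '1.30', '1.31']
```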
spotify/luigi | spotify__luigi-3324 | 80549f6b6f8c143effb81f3cf4a411b6068d9e2c | null | diff --git a/luigi/contrib/postgres.py b/luigi/contrib/postgres.py
index 719b80a4d7..19e96e8180 100644
--- a/luigi/contrib/postgres.py
+++ b/luigi/contrib/postgres.py
@@ -356,16 +356,15 @@ def copy(self, cursor, file):
else:
raise Exception('columns must consist of column strings or (column string... | [contrib.postgres] copy_from does not accept schema.table notation in most recent psycopg2 versions
<!---
We use GitHub issues mainly for tracking bugs and feature requests.
Questions for how to use luigi can be sent to the mailing list.
Currently, there are no strict procedures or guidelines for submitting issues... | 1,732,801,325,000 | null | Bug Report | [
"luigi/contrib/postgres.py:CopyToTable.copy"
] | [] | 1 | 569 | ||
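The underlying change is that newer psycopg2 quotes the `copy_from` table argument as a single identifier, so `schema.table` becomes `"schema.table"`. A generic sketch of quoting a qualified name part by part (an assumption about the workaround's shape, not luigi's exact code):

```python
# Quote each dotted component separately, doubling embedded quotes.
def quote_qualified(name):
    return ".".join('"{}"'.format(part.replace('"', '""')) for part in name.split("."))

print(quote_qualified("myschema.mytable"))  # "myschema"."mytable"
```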
robotframework/robotframework | robotframework__robotframework-5265 | 6f58c00b10bd0b755657eb2a615b9a29a063f6ce | null | diff --git a/src/robot/output/pyloggingconf.py b/src/robot/output/pyloggingconf.py
index fdccb16329d..b2300a5ad21 100644
--- a/src/robot/output/pyloggingconf.py
+++ b/src/robot/output/pyloggingconf.py
@@ -36,6 +36,7 @@ def robot_handler_enabled(level):
return
handler = RobotHandler()
old_raise = logg... | `logging` module log level is not restored after execution
Hi,
It seems like that the robot handler is changing the root logger log level via ``set_level`` function (``robot.output.pyloggingconf``) but the original root logger level is not restored back after the end of the ``robot.running.model.TestSuite.run`` meth... | Restoring old configuration sounds good to me. Interested to create a PR?
Definitely! Thank you @pekkaklarck ! | 1,731,601,814,000 | null | Bug Report | [
"src/robot/output/pyloggingconf.py:robot_handler_enabled"
] | [] | 1 | 570 | |
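The shape of the fix is a save/restore around the handler's lifetime. A greatly simplified sketch, reusing the `robot_handler_enabled` name from `pyloggingconf` but handling only the level:

```python
# Capture the root logger level on entry and restore it on exit,
# so execution does not leak logging configuration.
import contextlib
import logging

@contextlib.contextmanager
def robot_handler_enabled(level):
    root = logging.getLogger()
    old_level = root.level
    root.setLevel(level)
    try:
        yield
    finally:
        root.setLevel(old_level)

logging.getLogger().setLevel(logging.WARNING)
with robot_handler_enabled(logging.DEBUG):
    print(logging.getLogger().level == logging.DEBUG)  # True
print(logging.getLogger().level == logging.WARNING)    # True
```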
ShishirPatil/gorilla | ShishirPatil__gorilla-754 | 3b240551fe7ecb57ddd2c415b40872ce17dfb784 | null | diff --git a/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py b/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py
index c58812641..c3fc3c8e5 100644
--- a/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py
+++ b/berkele... | [BFCL] bugs in function def _multi_threaded_inference(self, test_case, include_input_log: bool, include_state_log: bool):
**Describe the issue**
I encountered an error while running bfcl generate. The error occurred in the file gorilla/berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py... | 1,731,442,886,000 | null | Bug Report | [
"berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py:OSSHandler._multi_threaded_inference"
] | [] | 1 | 571 | ||
Netflix/metaflow | Netflix__metaflow-2141 | 0bc4a9683ba67eedd756a8dc777916020587d5f7 | null | diff --git a/metaflow/cli.py b/metaflow/cli.py
index 1fc6a14953..a318b84a3e 100644
--- a/metaflow/cli.py
+++ b/metaflow/cli.py
@@ -282,31 +282,21 @@ def dump(obj, input_path, private=None, max_value_size=None, include=None, file=
else:
ds_list = list(datastore_set) # get all tasks
- tasks_processed ... | BUG: Data store error - AWS batch/step execution
**Environment:**
metaflow version: 2.12.29
Python 3.11 (Docker Image from public.ecr.aws/docker/library/python:3.11)
Running on AWS Batch
**Description:**
Tested with version 2.12.28 and it runs successfully; with this latest version we get:
Data store error: No ... | I also got this error when running on argo workflows. My flow does not use `IncludeFile` but just usual parameters.
I can also confirm it happens for `2.12.29` but not `2.12.28`
And another confirmation with step on batch. 2.12.29 displays the error, 2.12.28 does not.
I also got this error on Argo Workflows. Same p... | 1,731,502,413,000 | null | Bug Report | [
"metaflow/cli.py:dump"
] | [] | 1 | 572 | |
ray-project/ray | ray-project__ray-49071 | f498afc76dfafcf447106471e8df33578a6293be | null | diff --git a/rllib/examples/rl_modules/classes/action_masking_rlm.py b/rllib/examples/rl_modules/classes/action_masking_rlm.py
index 992802ebb13a..626554a6434c 100644
--- a/rllib/examples/rl_modules/classes/action_masking_rlm.py
+++ b/rllib/examples/rl_modules/classes/action_masking_rlm.py
@@ -1,10 +1,11 @@
import gym... | [RLlib] action_masking_example.py fails - RLModule build fails with "unexpected keyword argument 'observation_space'"
### What happened + What you expected to happen
Running the `action_masking_rl_module.py` example, which is shipped with 2.39 release, fails at RLModule instantiation.
> File "C:\Users\Philipp\ana... | 1,733,320,563,000 | null | Bug Report | [
"rllib/examples/rl_modules/classes/action_masking_rlm.py:ActionMaskingRLModule.__init__",
"rllib/examples/rl_modules/classes/action_masking_rlm.py:ActionMaskingTorchRLModule.compute_values"
] | [] | 2 | 573 | ||
ray-project/ray | ray-project__ray-48891 | 37aa0c66110fc235762c29612b90f1c73869e6cf | null | diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py
index 1f26a483a7aa..eed702bb7438 100644
--- a/python/ray/scripts/scripts.py
+++ b/python/ray/scripts/scripts.py
@@ -622,6 +622,15 @@ def debug(address: str, verbose: bool):
type=str,
help="a JSON serialized dictionary mapping label nam... | [Core] Logs are duplicated if multiple nodes are running on same machine
### What happened + What you expected to happen
I encountered this https://github.com/ray-project/ray/issues/10392 issue when I was experimenting with ray.
This issue was closed due to the inability to provide a reproducible example.
### ... | thank you for reporting the issue! | 1,732,341,280,000 | null | Bug Report | [
"python/ray/scripts/scripts.py:start"
] | [] | 1 | 574 | |
ray-project/ray | ray-project__ray-48793 | 4b4f3c669bc71027cbae99d5b12ec750b70d96d4 | null | diff --git a/python/ray/setup-dev.py b/python/ray/setup-dev.py
index 31d722b89984..d26d377a65f5 100755
--- a/python/ray/setup-dev.py
+++ b/python/ray/setup-dev.py
@@ -73,9 +73,27 @@ def do_link(package, force=False, skip_list=None, local_path=None):
print("You don't have write permission " f"to {package_ho... | ray/serve/generated file is missing after running setup-dev.py
### What happened + What you expected to happen
When running `python setup-dev.py`, it creates a softlink for each Python package. However, since the generated folder is not part of the repository, creating the symbolic link for the `serve` package inadverte...
"python/ray/setup-dev.py:do_link"
] | [] | 1 | 575 | ||
ray-project/ray | ray-project__ray-48790 | e70b37a435122609f88e02ce3377b8dd7f780e6b | null | diff --git a/python/ray/serve/api.py b/python/ray/serve/api.py
index 182795889d47..13b92c7fcaae 100644
--- a/python/ray/serve/api.py
+++ b/python/ray/serve/api.py
@@ -474,6 +474,7 @@ def _run(
else:
client = _private_api.serve_start(
http_options={"location": "EveryNode"},
+ global... | [serve] logging_config specified in `serve.run` is not propagated cluster-wide
### Description
Specifying `logging_config` in `serve.run(..., logging_config={...})` does not configure logging for the cluster, as is expected. This is because we don't propagate `logging_config` to `.serve_start(...)` here:
https://gi... | 1,731,975,189,000 | null | Bug Report | [
"python/ray/serve/api.py:_run"
] | [] | 1 | 576 | ||
ray-project/ray | ray-project__ray-48786 | 5cd8967f1c0c16d3ae5fedb8449d0d25dd4f9f3e | null | diff --git a/python/ray/autoscaler/_private/commands.py b/python/ray/autoscaler/_private/commands.py
index 3c03738854f7..9a9b9d91cc2f 100644
--- a/python/ray/autoscaler/_private/commands.py
+++ b/python/ray/autoscaler/_private/commands.py
@@ -1153,16 +1153,15 @@ def exec_cluster(
},
docker_config=conf... | [Ray Clusters] `ray exec ... --stop --tmux ...` doesn't work with both `--stop` and `--tmux` specified
### What happened + What you expected to happen
When running `ray exec ...` with both `--stop` and `--tmux` flags, the `sudo shutdown -h now` command gets incorrectly left outside the tmux command and thus the mach... | @hartikainen do you want to create a PR to fix it? We are happy to review the PR. | 1,731,968,780,000 | null | Bug Report | [
"python/ray/autoscaler/_private/commands.py:exec_cluster"
] | [] | 1 | 577 | |
ray-project/ray | ray-project__ray-48756 | e70b37a435122609f88e02ce3377b8dd7f780e6b | null | diff --git a/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py b/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py
index a65050212950..cf7cb31c3607 100644
--- a/python/ray/dashboard/modules/metrics/install_and_start_prometheus.py
+++ b/python/ray/dashboard/modules/metrics/install_a... | [ray metrics launch-prometheus] Incorrect download URL generation for aarch64 architecture
### What happened + What you expected to happen
<img width="802" alt="image" src="https://github.com/user-attachments/assets/e370ab29-db28-432b-b2c5-4c50e8e2dcf6">
- When executing the "ray metrics launch-prometheus" command ... | 1,731,661,282,000 | null | Bug Report | [
"python/ray/dashboard/modules/metrics/install_and_start_prometheus.py:get_system_info"
] | [] | 1 | 578 | ||
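The core of the fix is a mapping from Python's `platform.machine()` values to the architecture names Prometheus uses in its release-asset URLs. A sketch of just that mapping (the real code also handles OS names):

```python
# Translate machine names so aarch64 hosts download the arm64 build.
def prometheus_arch(machine):
    mapping = {"x86_64": "amd64", "amd64": "amd64", "aarch64": "arm64", "arm64": "arm64"}
    return mapping.get(machine.lower(), machine)

print(prometheus_arch("aarch64"))  # arm64
print(prometheus_arch("x86_64"))   # amd64
```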
optuna/optuna | optuna__optuna-5828 | 81d1d36cce68e7de0384951689cdbcd4ae8b6866 | null | diff --git a/optuna/cli.py b/optuna/cli.py
index 16fa3a6df1..7246a86e21 100644
--- a/optuna/cli.py
+++ b/optuna/cli.py
@@ -215,7 +215,10 @@ def _dump_table(records: list[dict[str, Any]], header: list[str]) -> str:
for t in value_types:
if t == ValueType.STRING:
value_type = ValueT... | CLI for empty DB raises `ValueError`
### Expected behavior
CLI for an empty DB should output an empty result, but the current implementation raises `ValueError`.
### Environment
- Optuna version:4.2.0.dev
- Python version:3.13.0
- OS:macOS-15.1-x86_64-i386-64bit-Mach-O
- (Optional) Other libraries and their versions:
... | 1,733,375,129,000 | null | Bug Report | [
"optuna/cli.py:_dump_table"
] | [] | 1 | 579 | ||
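A minimal sketch of the failure mode (illustrative, not optuna's code): folding an empty sequence of per-row value types must yield an explicit default instead of leaving the result undefined.

```python
# With no rows, the loop body never runs, so the default must be set up front.
def column_value_type(value_types):
    value_type = "NUMERIC"  # default when no rows constrain the column
    for t in value_types:
        if t == "STRING":
            value_type = "STRING"
    return value_type

print(column_value_type([]))                     # NUMERIC
print(column_value_type(["NUMERIC", "STRING"]))  # STRING
```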
BerriAI/litellm | BerriAI__litellm-6915 | fd2d4254bcd01e924ca4dded36ee4714c33734af | null | diff --git a/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py b/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py
index 4d5b2d6eb3ba..10d8a5913328 100644
--- a/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py
+++ b/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py
@@ -2... | [Bug]: supported params are out of date for fireworks AI
### What happened?
when calling fireworks models, litellm is complaining that logprobs is not supported, but it's actually supported by fireworks ai.
ref: https://docs.fireworks.ai/api-reference/post-completions
### Relevant log output
_No response_
### Twitter /... | 1,732,617,202,000 | null | Bug Report | [
"litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py:FireworksAIConfig.__init__",
"litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py:FireworksAIConfig.get_supported_openai_params"
] | [] | 2 | 580 | ||
matplotlib/matplotlib | matplotlib__matplotlib-29265 | 0406a56b051a371ccf81d2946126580651a645f2 | cc82d4f8138529f562f5589dd68c363783664199 | diff --git a/lib/matplotlib/collections.py b/lib/matplotlib/collections.py
index a78f1838357e..f18d5a4c3a8c 100644
--- a/lib/matplotlib/collections.py
+++ b/lib/matplotlib/collections.py
@@ -1612,14 +1612,13 @@ def __init__(self, segments, # Can be None.
"""
Parameters
----------
- se... | Improve LineCollection docstring further
(M, 2)
I would perhaps completely drop the "list of points" and just write
```
A sequence ``[line0, line1, ...]`` where each line is a (N, 2)-shape
array-like of points::
line0 = [(x0, y0), (x1, y1), ...]
Each line can...
```
_Originally posted ... | 1,733,753,596,000 | null | Feature Request | [
"lib/matplotlib/collections.py:LineCollection.__init__"
] | [] | 1 | 581 | ||
matplotlib/matplotlib | matplotlib__matplotlib-29254 | 671177c08613136fd5004092b8b56449d419c12a | null | diff --git a/lib/matplotlib/figure.py b/lib/matplotlib/figure.py
index e5cf88131178..3d6f9a7f4c16 100644
--- a/lib/matplotlib/figure.py
+++ b/lib/matplotlib/figure.py
@@ -1382,8 +1382,8 @@ def align_xlabels(self, axs=None):
Notes
-----
- This assumes that ``axs`` are from the same `.GridSpec`... | [Bug]: Figure.align_labels() confused by GridSpecFromSubplotSpec
### Bug summary
In a composite figure with nested gridspecs, `Figure.align_labels()` (and `align_xlabels()`, `align_ylabels()`) can end up aligning labels that should not intuitively be. Likewise with `align_titles()`.
### Code for reproduction
```Pyth... | This is definitely an issue, but not sure we would prioritize or accept a complicated fix for this. Note the docs say
> Align the xlabels of subplots in the same subplot row if label alignment is being done automatically (i.e. the label position is not manually set).
This issue with subgridspecs not having a cl... | 1,733,617,371,000 | null | Bug Report | [
"lib/matplotlib/figure.py:FigureBase.align_xlabels",
"lib/matplotlib/figure.py:FigureBase.align_ylabels",
"lib/matplotlib/figure.py:FigureBase.align_titles",
"lib/matplotlib/figure.py:FigureBase.align_labels"
] | [] | 4 | 582 | |
matplotlib/matplotlib | matplotlib__matplotlib-29236 | 84fbae8eea3bb791ae9175dbe77bf5dee3368275 | null | diff --git a/lib/matplotlib/animation.py b/lib/matplotlib/animation.py
index 47f2f0f9515b..2be61284073a 100644
--- a/lib/matplotlib/animation.py
+++ b/lib/matplotlib/animation.py
@@ -492,8 +492,15 @@ def grab_frame(self, **savefig_kwargs):
buf = BytesIO()
self.fig.savefig(
buf, **{**savef... | [Bug]: inconsistent ‘animation.FuncAnimation’ between display and save
### Bug summary
when I want to save images to a GIF, the result is inconsistent between display and save;
It seems that the color information has been lost:

... | Do you mind also including the data points that you plotted?
I updated the code and uploaded the data file:
[example.zip](https://github.com/user-attachments/files/17945028/example.zip)
Thank you. I was able to reproduce the behavior now. It does seem like a bug.
It may be because the PillowWriter is renormalizin... | 1,733,387,401,000 | null | Bug Report | [
"lib/matplotlib/animation.py:PillowWriter.grab_frame"
] | [] | 1 | 583 | |
tobymao/sqlglot | tobymao__sqlglot-4526 | 946cd4234a2ca403785b7c6a026a39ef604e8754 | null | diff --git a/sqlglot/planner.py b/sqlglot/planner.py
index 2e42b32c4..687bffb9f 100644
--- a/sqlglot/planner.py
+++ b/sqlglot/planner.py
@@ -201,11 +201,13 @@ def set_ops_and_aggs(step):
aggregate.add_dependency(step)
step = aggregate
+ else:
+ aggregate = None
o... | getting UnboundLocalError: cannot access local variable 'aggregate' where it is not associated with a value when running sqlglot.planner.Plan
**Before you file an issue**
- Make sure you specify the "read" dialect eg. `parse_one(sql, read="spark")`
- Make sure you specify the "write" dialect eg. `ast.sql(dialect="duc... | You need to run the optimizer first:
```python
>>> import sqlglot
>>> import sqlglot.planner
>>>
>>> r = 'select suma from ( select sum(a) as suma from table1) order by suma'
>>> parsed = sqlglot.parse_one(r, dialect='snowflake')
>>> p = sqlglot.planner.Plan(parsed)
Traceback (most recent call last):
File ... | 1,734,394,253,000 | null | Bug Report | [
"sqlglot/planner.py:Step.from_expression"
] | [] | 1 | 584 | |
tobymao/sqlglot | tobymao__sqlglot-4369 | a665030323b200f3bed241bb928993b9807c4100 | null | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index f04cece117..b0c2a7f560 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -767,6 +767,7 @@ def and_(
*expressions: t.Optional[ExpOrStr],
dialect: DialectType = None,
copy: bool = True,
+ wrap: bool = Tr... | Excessive Recursion in Query Optimization with Multiple OR Clauses
## Context
We are encountering an issue where a query with a high number of OR operators is causing excessive recursion during the optimization phase. The resulting recursion depth leads to stack overflow errors. As a temporary workaround, we increas... | this is because to_s is showing the full nested tree. if you do is_equal.sql() it should be ok | 1,731,329,118,000 | null | Bug Report | [
"sqlglot/expressions.py:Expression.and_",
"sqlglot/expressions.py:Expression.or_",
"sqlglot/expressions.py:_combine",
"sqlglot/expressions.py:xor"
] | [] | 4 | 585 | |
flet-dev/flet | flet-dev__flet-4554 | be58db6a4120596c45172933432678105785d94a | null | diff --git a/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py b/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py
index 218705576..f39561bfc 100644
--- a/sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py
+++ b/sdk/python/packages/flet-cli/src/flet_cli/u... | `flet build` fails to parse file and git dependencies from `tool.poetry.dependencies` in `pyproject.toml`
### Discussed in https://github.com/flet-dev/flet/discussions/4546
<div type='discussions-op-text'>
<sup>Originally posted by **amcraig** December 11, 2024</sup>
### Question
Hi all,
I've tried includin... | 1,734,034,325,000 | null | Bug Report | [
"sdk/python/packages/flet-cli/src/flet_cli/utils/project_dependencies.py:get_poetry_dependencies"
] | [] | 1 | 586 | ||
flet-dev/flet | flet-dev__flet-4452 | f62b5066ab79f3b99241e9c234baeac71fd60f95 | null | diff --git a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
index 0dcd8539a..212157549 100644
--- a/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
+++ b/sdk/python/packages/flet-cli/src/flet_cli/commands/build.py
@@ -1271,6 +1271,7 ... | `flet build` creates bundle but running it gives `ImportError: No module named main` error
### Duplicate Check
- [X] I have searched the [opened issues](https://github.com/flet-dev/flet/issues) and there are no duplicates
### Describe the bug
Traceback (most recent call last):
File "<string>", line 47, in <module... | What do you have in pyproject.toml and what is the file structure of your project?
### `pyproject.toml`
```toml
[project]
name = "weather-app"
version = "0.1.0"
description = ""
readme = "README.md"
requires-python = ">=3.8"
dependencies = [
"flet"
]
[tool.flet]
# org name in reverse domain name n... | 1,732,904,343,000 | null | Bug Report | [
"sdk/python/packages/flet-cli/src/flet_cli/commands/build.py:Command.package_python_app"
] | [] | 1 | 587 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.