| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
Avaiga/taipy | automation | 2,455 | Add new calendar component for selecting week/month | ### Description
I'm building an app where users want to see their sales data at a daily, weekly or monthly aggregated frequency. Having a component per the title would help with a nicer UI for the user to select their desired week/month. Thanks!
### Solution Proposed
The solution could look [something like this](https://mui.com/x/react-date-pickers/date-calendar/#week-picker):

Add a parameter to specify the starting day of the week. In my case, it should be Monday.
### Impact of Solution
Maybe a new feature of the date control, rather than a new control.
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by a unit test.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2025-02-21T08:57:22Z | 2025-03-06T11:05:35Z | https://github.com/Avaiga/taipy/issues/2455 | [
"🖰 GUI",
"🆘 Help wanted",
"🟧 Priority: High",
"✨New feature"
] | arcanaxion | 2 |
PedroBern/django-graphql-auth | graphql | 146 | Upgrade compatibility with django-graphql-jwt>=0.3.2 | Can you change the lib dependency requirement to be compatible with django-graphql-jwt>=0.3.2?
| open | 2022-03-12T11:31:13Z | 2022-03-16T12:06:38Z | https://github.com/PedroBern/django-graphql-auth/issues/146 | [] | Miguiz | 2 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 105 | Does the transformer-based gradio_demo have an input example? My own testing doesn't seem right. | ### The following items must be checked before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues for this problem without finding a similar issue or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding projects
### Issue type
Model inference
### Base model
Alpaca-2-7B
### Operating system
Linux
### Describe the problem in detail
```
# Paste the code you ran here (delete this code block if not applicable)
```
### Dependencies (must be provided for code-related issues)
```
# Paste your dependency information here
```
### Run logs or screenshots
![Uploading 2.png…]()
| closed | 2023-08-09T06:20:06Z | 2023-08-21T01:04:46Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/105 | [
"stale"
] | xq25478 | 3 |
sigmavirus24/github3.py | rest-api | 399 | issue.closed_by returns None instead of an actual user object | So I have this code:
``` python
closer = issue.closed_by
# We have to iterate over all events to find the closed event and get the actor for that event
for event in issue.iter_events():
    if event.event == 'closed':
        closer = event.actor
        # print(event)
print("Closed by", closer)
```
Basically, the two approaches should return the same value, but simply asking for issue.closed_by returns None (a NoneType object), whereas with the second approach (looping over all the issue's events) I get the result I would expect from the first.
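The fallback pattern in the report can be sketched without a live API, using stand-in objects in place of github3.py's issue/event types (the `User` and `Event` classes here are illustrative, not library types):

```python
# Editorial sketch of the workaround described above: scan the issue's
# event stream and keep the actor of the last 'closed' event.
class User:
    def __init__(self, login):
        self.login = login

class Event:
    def __init__(self, event, actor):
        self.event = event
        self.actor = actor

def closer_from_events(events, default=None):
    closer = default  # mirrors `closer = issue.closed_by`
    for event in events:  # mirrors `for event in issue.iter_events():`
        if event.event == 'closed':
            closer = event.actor
    return closer

events = [Event('labeled', User('alice')), Event('closed', User('bob'))]
print(closer_from_events(events).login)  # bob
```

The same scan works against real github3.py event objects, since only the `.event` and `.actor` attributes are used.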
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/22764713-issue-closed_by-returns-none-instead-of-an-actual-user-object?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| closed | 2015-06-22T19:48:03Z | 2018-03-22T16:10:49Z | https://github.com/sigmavirus24/github3.py/issues/399 | [] | jonathanwcrane | 6 |
deepfakes/faceswap | deep-learning | 570 | OriginalHighResRC4fix Report errors | 1/02/2019 11:01:07 ERROR Failed to convert image: 'C:\faceswap_rc4\dataA\video-frame-1.png'. Reason: Error when checking input: expected input_5 to have shape (128, 128, 3) but got array with shape (192, 192, 3)
How can I synthesize with a 192 model?
| closed | 2019-01-02T03:16:39Z | 2019-01-08T13:59:53Z | https://github.com/deepfakes/faceswap/issues/570 | [] | shuai700 | 1 |
yt-dlp/yt-dlp | python | 11,833 | Error postprocessing stream copy YOUTUBE | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Brazil
### Provide a description that is worded well enough to be understood
Using the nightly version (2024.12.15.232913) shows this error: ERROR: Postprocessing: Stream #1:0 -> #0:1 (copy)
I think it happens when the formats are being merged?
Please see the output below.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
\> yt-dlp.exe -vU https://www.youtube.com/watch?v=HqKu4aGF3d4
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=HqKu4aGF3d4']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.12.15.232913 from yt-dlp/yt-dlp-nightly-builds [d298693b1] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-91510-gd134b8d85f, ffprobe N-91510-gd134b8d85f
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.12.15.232913 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.12.15.232913 from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=HqKu4aGF3d4
[youtube] HqKu4aGF3d4: Downloading webpage
[youtube] HqKu4aGF3d4: Downloading ios player API JSON
[youtube] HqKu4aGF3d4: Downloading mweb player API JSON
[debug] Loading youtube-nsig.2f1832d2 from cache
[debug] [youtube] Decrypted nsig -mQvyTItm0dkN7rcoGMn0 => WkEE6h7PSyozQQ
[debug] Loading youtube-nsig.2f1832d2 from cache
[debug] [youtube] Decrypted nsig zib6ExGEuxHbpnXYOeg3x => GHz4XTcgibRbVA
[youtube] HqKu4aGF3d4: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] HqKu4aGF3d4: Downloading 1 format(s): 396+251
[debug] Invoking http downloader on "https://rr2---sn-oxvgxqx-4aoe.googlevideo.com/videoplayback?expire=1734395892&ei=lHNgZ826G9KWobIP3qeZsA8&ip=2804%3A1530%3A106%3A22ca%3Acdf5%3A38b6%3A58d5%3A2d77&id=o-ADo4ya_Nr5DcPZ_TUeBgjxDBV9HGsKMZnNKg4puo8cgv&itag=396&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1734374292%2C&mh=DI&mm=31%2C29&mn=sn-oxvgxqx-4aoe%2Csn-bg0ezne7&ms=au%2Crdu&mv=m&mvi=2&pl=40&rms=au%2Cau&pcm2=yes&initcwndbps=2431250&bui=AfMhrI8-_fCNAHqlW5XcRLvCYT-9LU3zgnTKAqg93dVs2Tb1RTdCXI2VgTNjjrzGIyZIZRIiRUKkVhqZ&spc=x-caUHKREsm1Ol05Rh7HavpR9g8Oc181_QpiCrn63wnTxVNqLg&vprv=1&svpuc=1&mime=video%2Fmp4&rqh=1&gir=yes&clen=107665992&dur=6536.162&lmt=1706420040127155&mt=1734373788&fvip=4&keepalive=yes&fexp=51326932%2C51335594%2C51347747&c=IOS&txp=543G434&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cpcm2%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRQIhAJ6mE3mZPa8ssAPpuBd8txhJcYFB8SkYeQRjMNOrTkjRAiA_iGeRQJJBXxWJOI3tVeuOzkk6sx6YjjiuD9tB4PlxNA%3D%3D&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRQIgOASmMQpaNHezWeJkDe7ecklQH1oRsP5TjPRq9kLLrCsCIQCNVZldh6Hu8lekO_0_e-cPoJ1BRCh8j0lEjSFAO6cDpQ%3D%3D"
[debug] File locking is not supported. Proceeding without locking
[download] Destination: Operação Prato, entrevista Uyrangê Hollanda [HqKu4aGF3d4].f396.mp4
[download] 100% of 102.68MiB in 00:00:02 at 35.49MiB/s
[debug] Invoking http downloader on "https://rr2---sn-oxvgxqx-4aoe.googlevideo.com/videoplayback?expire=1734395892&ei=lHNgZ826G9KWobIP3qeZsA8&ip=2804%3A1530%3A106%3A22ca%3Acdf5%3A38b6%3A58d5%3A2d77&id=o-ADo4ya_Nr5DcPZ_TUeBgjxDBV9HGsKMZnNKg4puo8cgv&itag=251&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1734374292%2C&mh=DI&mm=31%2C29&mn=sn-oxvgxqx-4aoe%2Csn-bg0ezne7&ms=au%2Crdu&mv=m&mvi=2&pl=40&rms=au%2Cau&pcm2=yes&initcwndbps=2431250&bui=AfMhrI8-_fCNAHqlW5XcRLvCYT-9LU3zgnTKAqg93dVs2Tb1RTdCXI2VgTNjjrzGIyZIZRIiRUKkVhqZ&spc=x-caUHKREsm1Ol05Rh7HavpR9g8Oc181_QpiCrn63wnTxVNqLg&vprv=1&svpuc=1&mime=audio%2Fwebm&rqh=1&gir=yes&clen=110385905&dur=6536.221&lmt=1706419948751974&mt=1734373788&fvip=4&keepalive=yes&fexp=51326932%2C51335594%2C51347747&c=IOS&txp=5432434&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cpcm2%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRQIgVzzRyAxeeUpIBbQUCFDccCr1B9i6dEOWSmEDJYbToTUCIQD731qIsFs4-vnXrEVNUuRtv346S7tMSgOzuGUIgIA83g%3D%3D&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRQIgOASmMQpaNHezWeJkDe7ecklQH1oRsP5TjPRq9kLLrCsCIQCNVZldh6Hu8lekO_0_e-cPoJ1BRCh8j0lEjSFAO6cDpQ%3D%3D"
[download] Destination: Operação Prato, entrevista Uyrangê Hollanda [HqKu4aGF3d4].f251.webm
[download] 100% of 105.27MiB in 00:00:02 at 41.57MiB/s
[Merger] Merging formats into "Operação Prato, entrevista Uyrangê Hollanda [HqKu4aGF3d4].webm"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i "file:Operação Prato, entrevista Uyrangê Hollanda [HqKu4aGF3d4].f396.mp4" -i "file:Operação Prato, entrevista Uyrangê Hollanda [HqKu4aGF3d4].f251.webm" -c copy -map 0:v:0 -map 1:a:0 -movflags +faststart "file:Operação Prato, entrevista Uyrangê Hollanda [HqKu4aGF3d4].temp.webm"
[debug] ffmpeg version N-91510-gd134b8d85f Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7.3.1 (GCC) 20180722
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
libavutil 56. 18.102 / 56. 18.102
libavcodec 58. 21.106 / 58. 21.106
libavformat 58. 17.101 / 58. 17.101
libavdevice 58. 4.101 / 58. 4.101
libavfilter 7. 26.100 / 7. 26.100
libswscale 5. 2.100 / 5. 2.100
libswresample 3. 2.100 / 3. 2.100
libpostproc 55. 2.100 / 55. 2.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001c0be05ae00] Unknown AV1 Codec Configuration Box version 129
[libaom-av1 @ 000001c0be06d700] 1.0.0-183-gee1205ada
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'file:Operação Prato, entrevista Uyrangê Hollanda [HqKu4aGF3d4].f396.mp4':
Metadata:
major_brand : dash
minor_version : 0
compatible_brands: iso6av01mp41
creation_time : 2024-01-28T05:03:12.000000Z
Duration: 01:48:56.16, start: 0.000000, bitrate: 131 kb/s
Stream #0:0(und): Video: av1 (Main) (av01 / 0x31307661), yuv420p(tv, bt709), 640x360, 0 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 30k tbc (default)
Metadata:
creation_time : 2024-01-28T05:03:12.000000Z
handler_name : ISO Media file produced by Google Inc.
Input #1, matroska,webm, from 'file:Operação Prato, entrevista Uyrangê Hollanda [HqKu4aGF3d4].f251.webm':
Metadata:
encoder : google/video-file
Duration: 01:48:56.22, start: -0.007000, bitrate: 135 kb/s
Stream #1:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
[webm @ 000001c0be09a640] Only VP8 or VP9 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (copy)
ERROR: Postprocessing: Stream #1:0 -> #0:1 (copy)
Traceback (most recent call last):
  File "yt_dlp\YoutubeDL.py", line 3557, in process_info
  File "yt_dlp\YoutubeDL.py", line 3741, in post_process
  File "yt_dlp\YoutubeDL.py", line 3723, in run_all_pps
  File "yt_dlp\YoutubeDL.py", line 3701, in run_pp
  File "yt_dlp\postprocessor\common.py", line 22, in run
  File "yt_dlp\postprocessor\common.py", line 127, in wrapper
  File "yt_dlp\postprocessor\ffmpeg.py", line 839, in run
  File "yt_dlp\postprocessor\ffmpeg.py", line 329, in run_ffmpeg_multiple_files
  File "yt_dlp\postprocessor\ffmpeg.py", line 367, in real_run_ffmpeg
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Stream #1:0 -> #0:1 (copy)
```
| closed | 2024-12-16T18:39:07Z | 2024-12-21T03:41:20Z | https://github.com/yt-dlp/yt-dlp/issues/11833 | [
"question",
"external issue"
] | terremoth | 7 |
apache/airflow | data-science | 47,844 | Airflow API for 'airflow dags reserialize' | ### Description
An Airflow API endpoint to programmatically reserialize dags, aligned with the `airflow dags reserialize` CLI command:
https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#reserialize
### Use case/motivation
Currently, I must exec into my Airflow webserver in order to run the CLI command to refresh dags.
However, I already have an Airflow client programmatically communicating with the Airflow webserver API. It would be great to be able to ask the API to refresh dags when required.
As a temporary solution, I have set a low interval of 1s for:
- min_serialized_dag_update_interval
- min_serialized_dag_fetch_interval
But have found high constant resource usage as a result to the Airflow services.
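The temporary workaround above can be expressed as an `airflow.cfg` excerpt (a hypothetical sketch; in recent Airflow 2.x releases both options live in the `[core]` section, and the values are in seconds):

```ini
[core]
# how often the scheduler re-serializes DAGs into the metadata DB
min_serialized_dag_update_interval = 1
# how often the webserver re-fetches serialized DAGs from the DB
min_serialized_dag_fetch_interval = 1
```

Lowering both to 1s keeps the UI fresh at the cost of the constant DB load the report describes.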
### Related issues
An extension of https://github.com/apache/airflow/issues/19432
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-17T06:16:40Z | 2025-03-17T20:30:19Z | https://github.com/apache/airflow/issues/47844 | [
"kind:feature",
"area:API"
] | BenjaminYong | 3 |
Anjok07/ultimatevocalremovergui | pytorch | 872 | I'd like to split my already existing vocal-only inputs into lead and backing vocals | Is that possible? I tried to _not_ select any MDX-NET model by setting it to the "Choose Model" option while turning on vocal splitter, but that gave me the "You must select an model to continue" error. | open | 2023-10-08T02:10:34Z | 2023-10-12T22:38:36Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/872 | [] | HaneulCheong | 4 |
FlareSolverr/FlareSolverr | api | 1,387 | Using docker image 21hsmw/flaresolverr, container is 1.2G RAM after 24h | Try with this image, no need to modify the file anymore
21hsmw/flaresolverr:nodriver
;)
_Originally posted by @nanarCss in https://github.com/FlareSolverr/FlareSolverr/issues/1385#issuecomment-2408545656_
| closed | 2024-10-13T16:22:49Z | 2024-10-14T05:34:01Z | https://github.com/FlareSolverr/FlareSolverr/issues/1387 | [
"duplicate"
] | caublet | 1 |
tiangolo/uwsgi-nginx-flask-docker | flask | 91 | Wrong interpreter used | I was trying out this Dockerfile (tiangolo/uwsgi-nginx-flask:python3.7-alpine3.7) and it turns out it's running flask in python 3.6.5 instead of 3.7.0.
I changed the app to look like this and actually report the version of the interpreter:
```python
@app.route("/")
def hello():
    return "Hello World from Flask in a uWSGI Nginx Docker \
        container with Python {}".format(sys.version)
```
which results in:

If you run from an interactive shell in the container, `python -V` reports the expected version

I have tried to track where it installs the wrong version and it seems it does so when uwsgi is installed (in the parent Dockerfile)

I guess there is a way to tell uwsgi which interpreter to use?
| closed | 2018-10-10T18:26:46Z | 2019-01-12T19:44:48Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/91 | [] | ChacheGS | 2 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 344 | Error when running demo_cli.py. Please Help! | When I run demo_cli.py I get this error:
```
Traceback (most recent call last):
  File ".\demo_cli.py", line 96, in <module>
    mels = synthesizer.synthesize_spectrograms(texts, embeds)
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\inference.py", line 77, in synthesize_spectrograms
    self.load()
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\inference.py", line 58, in load
    self._model = Tacotron2(self.checkpoint_fpath, hparams)
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\tacotron2.py", line 28, in __init__
    split_infos=split_infos)
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\tacotron.py", line 146, in initialize
    zoneout=hp.tacotron_zoneout_rate, scope="encoder_LSTM"))
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\modules.py", line 221, in __init__
    name="encoder_fw_LSTM")
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\modules.py", line 114, in __init__
    self._cell = tf.contrib.cudnn_rnn.CudnnLSTM(num_units, name=name)
TypeError: __init__() missing 1 required positional argument: 'num_units'
```
Can someone help fix it? | closed | 2020-05-14T14:18:02Z | 2020-06-24T08:39:02Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/344 | [] | CosmonautNinja | 2 |
gradio-app/gradio | data-visualization | 10,281 | Dragging in an image a second time will not replace the original image, it will open in a new tab | ### Describe the bug
Dragging in an image a second time will not replace the original image, it will open in a new tab
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
    with gr.Row():
        input_image = gr.Image(
            label="输入图像",
            type="pil",
            height=600,
            width=400,
            interactive=True
        )

if __name__ == "__main__":
    demo.launch(
        server_name="0.0.0.0",
        server_port=37865
    )
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio 5.6.0
gradio_client 1.4.3
```
### Severity
I can work around it | open | 2025-01-03T02:42:32Z | 2025-01-05T06:32:07Z | https://github.com/gradio-app/gradio/issues/10281 | [
"bug"
] | Dazidingo | 2 |
sammchardy/python-binance | api | 1,257 | limit parameter is ignored on start_futures_depth_socket | **Describe the bug**
When creating a ThreadedDepthCacheManager and starting a future depth socket (`start_futures_depth_socket`), the `limit` parameter is not being passed deep enough. So the subscription message to Binance uses the default `"10"` value.
This isn't happening if starting a spot depth socket.
**To Reproduce**
```python
symbol = 'BTCUSDT'
dcm = ThreadedDepthCacheManager(api_key=api_key, api_secret=api_secret)
dcm.start()

def handle_dcm_message(depth_cache):
    print("Hi GitHub")

dcm.start_futures_depth_socket(callback=handle_dcm_message, symbol=symbol, limit=20)
dcm.join()
```
**Expected behavior**
The limit parameter should be taken into consideration when subscribing to futures book updates.
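For illustration only (an editorial sketch, not python-binance source code): Binance's partial book depth subscriptions encode the requested level count in the stream name, which is why a dropped `limit` silently falls back to the default depth.

```python
# Sketch of how a depth-stream name embeds the limit. Binance partial
# book depth streams look like "<symbol>@depth<levels>", where the
# documented level counts are 5, 10 and 20.
def depth_stream_name(symbol: str, limit: int = 10) -> str:
    return f"{symbol.lower()}@depth{limit}"

print(depth_stream_name("BTCUSDT", limit=20))  # btcusdt@depth20
```

With the bug described above, the futures subscription is built as if `limit` were always the default of 10.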
**Environment (please complete the following information):**
- Python version: [e.g. 3.5]
- Virtual Env: [e.g. virtualenv, conda]
- OS: [e.g. Mac, Ubuntu]
- python-binance version 1.0.16
| open | 2022-10-08T11:12:42Z | 2022-10-08T11:12:42Z | https://github.com/sammchardy/python-binance/issues/1257 | [] | yarimiz | 0 |
horovod/horovod | deep-learning | 3,324 | Process Sets performance issues | I‘m glad to see that horovod supports the Process Sets.
But there is a logic that needs to be confirmed:
In the RunLoopOnce function, a series of logic such as negotiation and wait_for_data is executed for each process_set, so could executing each process_set in sequence be a performance problem?
<img width="899" alt="屏幕快照 2021-12-15 下午7 20 23" src="https://user-images.githubusercontent.com/41471499/146177908-2b477d43-826a-4968-90f6-6e7b50af26f9.png">
Suppose I have two Process_sets: [0,2] [1,3]
The first iteration: the [0,2] process_set will perform negotiation, wait_for_data and other logic in turn;
the second iteration: the [1,3] process_set will perform negotiation, wait_for_data and other logic in turn.
The logic of the [1,3] process_set will not be executed until the [0,2] process_set has finished executing on the host side.
In theory, such sequential execution should have an impact on performance. Especially in model parallelism, more process_sets may be required, and the impact of this ordering will be greater.
Is my understanding correct? If so, can this part of the logic be optimized? | closed | 2021-12-15T11:36:42Z | 2021-12-22T03:18:19Z | https://github.com/horovod/horovod/issues/3324 | [
"question"
] | Richie-yan | 5 |
JaidedAI/EasyOCR | pytorch | 1,122 | output multiple possible results | Hi, the current code provides the most probable output for each input image and doesn't inherently offer multiple possible outputs. Is it possible to give multiple possible outputs sorted by confidence? It will be very useful and helpful. Thanks. | open | 2023-08-23T23:20:08Z | 2023-08-23T23:20:08Z | https://github.com/JaidedAI/EasyOCR/issues/1122 | [] | zzzghm | 0 |
streamlit/streamlit | python | 10,102 | on_change callback is triggered without user-interaction on in-script delays | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
My end-users noticed that checkboxes were occasionally being checked automatically or inverted from their initial state.
Further investigation showed that some checkboxes' `on_change` callbacks are triggered without user interaction.
I found that adding a delay to the script's execution and setting `runner.fastReruns = false` allowed me to reproduce this problem consistently; see `Current Behavior`. Note that this issue also occurs when `runner.fastReruns = true`, but only sporadically.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10102)
```Python
import streamlit as st
from time import sleep
sleep(0.2)
def check_on_change(id):
    print(f'on_change {id}')
st.text_input('text')
st.checkbox('check 1', key=f'check_1_input', on_change=check_on_change, args=(1,))
st.checkbox('check 2', key=f'check_2_input', on_change=check_on_change, args=(2,))
st.checkbox('check 3', key=f'check_3_input', on_change=check_on_change, args=(3,))
```
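As a framework-free illustration of one possible mitigation (an editorial sketch, not from the report): a guard can remember the last value seen for a widget key and suppress `on_change` invocations whose value did not actually change. A plain dict stands in for `st.session_state` here:

```python
# Hypothetical guard against spurious on_change calls: the wrapped
# callback only fires when the tracked value really changed.
def make_guarded(callback, state, key):
    def wrapper(*args):
        last_key = f"_last_{key}"
        if state.get(last_key) == state.get(key):
            return  # value unchanged -> treat the trigger as spurious
        state[last_key] = state.get(key)
        callback(*args)
    return wrapper

calls = []
state = {"check_1_input": True}
guarded = make_guarded(lambda i: calls.append(i), state, "check_1_input")
guarded(1)   # real change: fires
guarded(1)   # same value again: suppressed
print(calls)  # [1]
```

In a real app the dict would be `st.session_state` and `guarded` would be passed as the widget's `on_change`.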
### Steps To Reproduce
1. Start streamlit with '--runner.fastReruns false'
2. Type something in the text_input
3. Rather quickly after typing, check the first checkbox
### Expected Behavior
Given the code example, I expect only the on_change of the checkbox that I clicked to be triggered.
#### Without delay or first clicking outside of any element:
https://github.com/user-attachments/assets/0339737f-81ee-4db0-a5f4-5cee44d46cd9
### Current Behavior
https://github.com/user-attachments/assets/d85d80f8-039b-4634-8676-1c00370c7408
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.1
- Python version: 3.12
- Operating System: MacOS 15.2 (24C101)
- Browser: Chrome/Safari 131.0.6778.140 (Official Build) (arm64)
### Additional Information
_No response_ | open | 2025-01-03T14:58:06Z | 2025-01-13T14:46:30Z | https://github.com/streamlit/streamlit/issues/10102 | [
"type:bug",
"type:regression",
"status:confirmed",
"priority:P3",
"feature:st.checkbox",
"feature:callbacks",
"feature:fast-reruns"
] | MaxvandenBoom | 2 |
vitalik/django-ninja | rest-api | 518 | how to use django-filter in ninja | closed | 2022-07-29T09:54:46Z | 2022-08-01T02:24:08Z | https://github.com/vitalik/django-ninja/issues/518 | [] | afghanistanyn | 1 | |
google-research/bert | nlp | 1,087 | module 'tensorflow_core._api.v2.train' has no attribute 'Saver' | train(model, x_train, y_train, x_test, y_test)
export_model(tf.train.Saver(), model, ["conv2d_1_input"], "dense_2/Softmax")
AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'Saver' | closed | 2020-05-18T19:49:42Z | 2021-12-09T14:36:37Z | https://github.com/google-research/bert/issues/1087 | [] | adarshrana205 | 5 |
TencentARC/GFPGAN | pytorch | 51 | GAN training | Hello, there is a question puzzling me. In GAN training, referring to https://github.com/rosinality/stylegan2-pytorch/blob/master/train.py:
when training the Discriminator, rosinality turns off gradient updates for the Generator:
```
requires_grad(generator, False)
requires_grad(discriminator, True)
```
Likewise, when training the Generator, gradient updates for the Discriminator are also turned off:
```
requires_grad(generator, True)
requires_grad(discriminator, False)
```
In your code I only found gradient control for the Discriminator, and no corresponding adjustment of the Generator's gradients:
```
for p in self.net_d.parameters():
    p.requires_grad = False
```
&
```
for p in self.net_d.parameters():
    p.requires_grad = True
```
1. Does this mean that the Generator always receives gradient updates (even while the Discriminator is being trained)? If so, is that equivalent to every batch of data being propagated through the Generator twice?
2. If the Generator's gradient updates are also regulated, where in the code is this implemented?
| closed | 2021-08-25T12:10:02Z | 2021-08-30T03:15:59Z | https://github.com/TencentARC/GFPGAN/issues/51 | [] | SimKarras | 2 |
pytest-dev/pytest-cov | pytest | 403 | fail_under setting with precision is not working | # Summary
I have `report: precision` set to 2 and `fail_under` set to 97.47, and my test coverage total is reading as 97.47, but I'm getting a failure message and failure code (exit code 2).
## Expected vs actual result
Expected: test coverage passes
Actual: `FAIL Required test coverage of 97.47% not reached. Total coverage: 97.47%`
I even tried modifying `fail_under` to 97.469, in which case I got this even more nonsensical message:
`FAIL Required test coverage of 97.469% not reached. Total coverage: 97.47%`
# Reproducer
## Versions
Output of relevant packages `pip list`, `python --version`, `pytest --version` etc.
Make sure you include complete output of `tox` if you use it (it will show versions of various things).
```
Python 3.7.5
pipenv, version 2018.11.26
pytest version 5.4.1
pytest-cov 2.8.1
```
## Config
Include your `tox.ini`, `pytest.ini`, `.coveragerc`, `setup.cfg` or any relevant configuration.
```
# .coveragerc
[report]
fail_under = 97.47
precision = 2
skip_covered = true
show_missing = true
```
## Code
Link to your repository, gist, pastebin or just paste raw code that illustrates the issue.
If you paste raw code make sure you quote it, eg:
https://github.com/votingworks/arlo/pull/447/commits/89c50e43216963f06af6e4c5104b67fd33e4ff36
| closed | 2020-04-24T00:27:28Z | 2024-09-17T22:53:01Z | https://github.com/pytest-dev/pytest-cov/issues/403 | [] | jonahkagan | 7 |
serengil/deepface | deep-learning | 1093 | About find function | I hope the find function can also return the source_regions data, to make it easier to enrich the datastore.
I know that I can call the represent function to get the source_region data, but that requires one extra computation. | closed | 2024-03-11T01:44:35Z | 2024-03-11T09:16:02Z | https://github.com/serengil/deepface/issues/1093 | [
"question"
] | ZeeLyn | 2 |
OpenInterpreter/open-interpreter | python | 664 | Failed to fetch latest release from GitHub API | ### Describe the bug
```
  File "/opt/homebrew/Caskroom/miniforge/base/envs/interpreter/bin/interpreter", line 5, in <module>
    from interpreter import cli
  File "/opt/homebrew/Caskroom/miniforge/base/envs/interpreter/lib/python3.11/site-packages/interpreter/__init__.py", line 1, in <module>
    from .core.core import Interpreter
  File "/opt/homebrew/Caskroom/miniforge/base/envs/interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 6, in <module>
    from ..cli.cli import cli
  File "/opt/homebrew/Caskroom/miniforge/base/envs/interpreter/lib/python3.11/site-packages/interpreter/cli/cli.py", line 6, in <module>
    import ooba
  File "/opt/homebrew/Caskroom/miniforge/base/envs/interpreter/lib/python3.11/site-packages/ooba/__init__.py", line 1, in <module>
    from .download import download
  File "/opt/homebrew/Caskroom/miniforge/base/envs/interpreter/lib/python3.11/site-packages/ooba/download.py", line 2, in <module>
    from .utils.ensure_repo_exists import ensure_repo_exists
  File "/opt/homebrew/Caskroom/miniforge/base/envs/interpreter/lib/python3.11/site-packages/ooba/utils/ensure_repo_exists.py", line 8, in <module>
    TAG = get_latest_release()
          ^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniforge/base/envs/interpreter/lib/python3.11/site-packages/ooba/utils/get_latest_release.py", line 8, in get_latest_release
    raise Exception("Failed to fetch latest release from GitHub API.")
Exception: Failed to fetch latest release from GitHub API.
```
### Reproduce
interpreter --model gpt-3.5-turbo
### Expected behavior
open interpreter cmd
### Screenshots
_No response_
### Open Interpreter version
0.4.14
### Python version
3.11
### Operating System name and version
Macos 13
### Additional context
_No response_ | closed | 2023-10-20T03:05:03Z | 2023-10-26T07:46:38Z | https://github.com/OpenInterpreter/open-interpreter/issues/664 | [
"Bug"
] | neoalvin | 3 |
collerek/ormar | pydantic | 724 | ForeignKeyViolationError in CASCADE clear() method | **Describe the bug:**
When I try to CASCADE-delete related models.
**To Reproduce:**
```python
# Models
import ormar
from uuid import UUID, uuid4
class A(ormar.Model):
    id: UUID = ormar.UUID(uuid_format="string", primary_key=True, default=uuid4)
    name: str = ormar.String(max_length=64, nullable=False)

class B(ormar.Model):
    id: UUID = ormar.UUID(uuid_format="string", primary_key=True, default=uuid4)
    name: str = ormar.String(max_length=64, nullable=False)
    a: A = ormar.ForeignKey(to=A)

class C(ormar.Model):
    id: UUID = ormar.UUID(uuid_format="string", primary_key=True, default=uuid4)
    name: str = ormar.String(max_length=64, nullable=False)
    b: B = ormar.ForeignKey(to=B)
# QuerySet
# (a is an object from A model class.)
>>> await a.bs.clear(keep_reversed=False)
~/Desktop/MyProject/venv/lib/python3.10/site-packages/asyncpg/protocol/protocol.pyx in bind_execute()
ForeignKeyViolationError: update or delete on table "bs" violates foreign key constraint "fk_cs_bs_id_b" on table "cs"
DETAIL: Key (id)=(592c8bd6-6796-48d0-9158-fe1fcd0cfd57) is still referenced from table "cs".
```
**Versions:**
- Database backend: PostgreSQL
- Python version: 3.10.4
- `ormar` version: 0.11.2
- `pydantic` version: 1.9.0 | closed | 2022-07-02T07:24:39Z | 2022-07-04T18:43:58Z | https://github.com/collerek/ormar/issues/724 | [
"bug"
] | SepehrBazyar | 4 |
Netflix/metaflow | data-science | 1,647 | How can I progress `@step` based on some condition? | For example, the following code does not seem to work:
```python
from metaflow import FlowSpec, Parameter, step


class SomeFlow(FlowSpec):
some_condition = Parameter( ... )
@step
def start(self):
if self.some_condition == ... :
self.next(self.a)
else:
self.next(self.b)
@step
def a(self): ...
@step
def b(self): ....
```
How can I do this with metaflow `@step`? | open | 2023-11-30T08:54:41Z | 2024-07-23T11:50:52Z | https://github.com/Netflix/metaflow/issues/1647 | [] | sangwoo-joh | 2 |
jonaswinkler/paperless-ng | django | 122 | Save port number for Cookies / Auth problems on multiple instances | tldr: It would be great if paperless-ng were to set cookies for host:**port**. I have two instances running on separate ports and therefore have to reauth constantly.
______
Hey, first of all, thank you very much. I really like paperless-ng and use it quite a lot.
As of yesterday, I have two instances of paperless running to separate my documents from those of my wife (It gets confusing with archive numbers if everything is on one instance.)
But there is a problem with cookie / auth management when working simultaneously on both instances. I constantly get logged out on one or the other instance. paperless-ng sets cookies for domain, discarding the port number.
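For context, this is standard RFC 6265 behaviour: a cookie's `Domain` attribute has no port component, so two instances on the same host share one cookie jar. A quick illustration with Python's standard library (the hostname is made up):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "abc123"
# There is no attribute to scope this to :8000 vs :8001 -- only host and path exist.
cookie["sessionid"]["domain"] = "paperless.example"
print(cookie.output())
```

Because browsers key cookies on (domain, path) only, the workarounds today are distinct hostnames per instance (e.g. `paperless1.local` / `paperless2.local`) or distinct cookie names per instance.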
A fix would be highly appreciated. Thank you
| closed | 2020-12-11T07:45:59Z | 2020-12-11T16:51:27Z | https://github.com/jonaswinkler/paperless-ng/issues/122 | [
"feature request"
] | praul | 3 |
amidaware/tacticalrmm | django | 1,387 | Prevent Users from Uninstalling the Agent | **Is your feature request related to a problem? Please describe.**
I'm trying out the RMM and it's very good so far, but one small concern: since most of my end users have admin access, they are able to uninstall the agent.
**Describe the solution you'd like**
Is there any way to prevent this from happening? What I'm asking for: to uninstall, the user would need to provide a maintenance or uninstall token obtained from the RMM; without a valid token, the uninstall fails.
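To illustrate the kind of check I mean, roughly this (purely a sketch, not Tactical RMM code; the names and the secret handling are made up):

```python
import hashlib
import hmac

SECRET = b"per-site-secret"  # would come from the RMM server, not be hardcoded

def make_uninstall_token(agent_id: str) -> str:
    """Server-side: issue a one-off token for a specific agent."""
    return hmac.new(SECRET, agent_id.encode(), hashlib.sha256).hexdigest()

def may_uninstall(agent_id: str, presented: str) -> bool:
    """Agent-side: refuse to uninstall unless a valid token is presented."""
    return hmac.compare_digest(make_uninstall_token(agent_id), presented)
```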
| closed | 2022-12-29T11:00:54Z | 2022-12-29T16:39:40Z | https://github.com/amidaware/tacticalrmm/issues/1387 | [] | kakalpa | 0 |
X-PLUG/MobileAgent | automation | 84 | Error when running PC-Agent: OSError: cannot open resource | Traceback (most recent call last):
File "D:\MobileAgent\PC-Agent\run.py", line 507, in <module>
perception_infos, width, height = get_perception_infos(screenshot_file, screenshot_som_file, font_path=args.font_path)
File "D:\MobileAgent\PC-Agent\run.py", line 383, in get_perception_infos
draw_coordinates_boxes_on_image(screenshot_file, copy.deepcopy(merged_icon_coordinates), screenshot_som_file, font_path)
File "D:\MobileAgent\PC-Agent\run.py", line 69, in draw_coordinates_boxes_on_image
font = ImageFont.truetype(font_path, int(height * 0.012))
File "C:\ProgramData\anaconda3\envs\GUIagent\lib\site-packages\PIL\ImageFont.py", line 807, in truetype
return freetype(font)
File "C:\ProgramData\anaconda3\envs\GUIagent\lib\site-packages\PIL\ImageFont.py", line 804, in freetype
return FreeTypeFont(font, size, index, encoding, layout_engine)
File "C:\ProgramData\anaconda3\envs\GUIagent\lib\site-packages\PIL\ImageFont.py", line 244, in __init__
self.font = core.getfont(
OSError: cannot open resource | closed | 2025-01-23T08:39:14Z | 2025-02-05T06:45:34Z | https://github.com/X-PLUG/MobileAgent/issues/84 | [] | wenwend1122 | 1 |
autogluon/autogluon | data-science | 4,521 | [github] Add issue templates for bug reports per module | Add issue templates for bug reports per module so it is easier to track for developers and lowers ambiguity. | open | 2024-10-04T03:15:18Z | 2024-11-25T22:47:16Z | https://github.com/autogluon/autogluon/issues/4521 | [
"API & Doc"
] | Innixma | 0 |
lanpa/tensorboardX | numpy | 205 | Add manual override to walltime | Hi, I'm using tensorboardX for a slightly unusual purpose: I'm using it to visualize data made from *simulations*, not for neural network training. TensorboardX + Tensorboard is very close to what I currently need for this purpose, however one feature I'd like is to manually override the "walltime" so that I can view the time in my simulation, instead of the actual time.
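To make the request concrete, what I have in mind is roughly this (the `walltime=` keyword is hypothetical here, not the current tensorboardX API):

```python
# Hypothetical call shape (not current tensorboardX API):
#   writer.add_scalar("temperature", value, step, walltime=sim_time)
#
# TensorBoard stores walltime as a plain float of seconds, so simulation
# time could stand in for it directly:
def sim_walltime(sim_seconds, epoch_offset=0.0):
    """Map simulation time to the seconds-since-epoch float TensorBoard expects."""
    return epoch_offset + float(sim_seconds)
```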
I'm looking at the code and don't think it'd be too hard, if I made a Pull Request would you be willing to review? | closed | 2018-08-03T15:01:19Z | 2018-08-08T13:07:36Z | https://github.com/lanpa/tensorboardX/issues/205 | [] | andrewkho | 2 |
graphistry/pygraphistry | pandas | 473 | [BUG] hackernews demo fails on merge branch | On `http://localhost/notebook/lab/tree/demos/ai/Introduction/Ask-HackerNews-Demo.ipynb`:
```
File /opt/conda/envs/rapids/lib/python3.8/site-packages/graphistry/feature_utils.py:652, in impute_and_scale_df(df, use_scaler, impute, n_quantiles, output_distribution, quantile_range, n_bins, encode, strategy, keep_n_decimals)
629 def impute_and_scale_df(
630 df: pd.DataFrame,
631 use_scaler: str = "robust",
(...)
639 keep_n_decimals: int = 5,
640 ) -> Tuple[pd.DataFrame, Pipeline]:
642 transformer = get_preprocessing_pipeline(
643 impute=impute,
644 use_scaler=use_scaler,
(...)
650 strategy=strategy,
651 )
--> 652 res = fit_pipeline(df, transformer, keep_n_decimals=keep_n_decimals)
654 return res, transformer
File /opt/conda/envs/rapids/lib/python3.8/site-packages/graphistry/feature_utils.py:622, in fit_pipeline(X, transformer, keep_n_decimals)
619 columns = X.columns
620 index = X.index
--> 622 X = transformer.fit_transform(X)
623 if keep_n_decimals:
624 X = np.round(X, decimals=keep_n_decimals) # type: ignore # noqa
File /opt/conda/envs/rapids/lib/python3.8/site-packages/sklearn/pipeline.py:437, in Pipeline.fit_transform(self, X, y, **fit_params)
410 """Fit the model and transform with the final estimator.
411
412 Fits all the transformers one after the other and transform the
(...)
434 Transformed samples.
435 """
436 fit_params_steps = self._check_fit_params(**fit_params)
--> 437 Xt = self._fit(X, y, **fit_params_steps)
439 last_step = self._final_estimator
440 with _print_elapsed_time("Pipeline", self._log_message(len(self.steps) - 1)):
File /opt/conda/envs/rapids/lib/python3.8/site-packages/sklearn/pipeline.py:339, in Pipeline._fit(self, X, y, **fit_params_steps)
336 def _fit(self, X, y=None, **fit_params_steps):
337 # shallow copy of steps - this should really be steps_
338 self.steps = list(self.steps)
--> 339 self._validate_steps()
340 # Setup the memory
341 memory = check_memory(self.memory)
File /opt/conda/envs/rapids/lib/python3.8/site-packages/sklearn/pipeline.py:243, in Pipeline._validate_steps(self)
237 # We allow last estimator to be None as an identity transformation
238 if (
239 estimator is not None
240 and estimator != "passthrough"
241 and not hasattr(estimator, "fit")
242 ):
--> 243 raise TypeError(
244 "Last step of Pipeline should implement fit "
245 "or be the string 'passthrough'. "
246 "'%s' (type %s) doesn't" % (estimator, type(estimator))
247 )
TypeError: Last step of Pipeline should implement fit or be the string 'passthrough'. '<function identity at 0x7fc7b4870430>' (type <class 'function'>) doesn't
``` | open | 2023-05-01T06:13:05Z | 2023-05-26T23:48:30Z | https://github.com/graphistry/pygraphistry/issues/473 | [
"bug"
] | lmeyerov | 4 |
graphql-python/graphene-django | graphql | 764 | Is it possible to translate the validation errors that graphene-django provides? |
```json
{
  "errors": [
    {
      "message": "Authentication credentials were not provided",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ]
    }
  ],
  "data": {
    "viewer": null
  }
}
```
| closed | 2019-09-05T14:43:37Z | 2019-10-21T11:42:59Z | https://github.com/graphql-python/graphene-django/issues/764 | [] | claudio-evocorp | 1
cupy/cupy | numpy | 8,144 | Scipy stats hypergeom function implement | ### Description
Hello cupy develop team,
I would like to request that SciPy's `scipy.stats.hypergeom` function be incorporated into CuPy. I would like to speed up my pipeline, and I use the hypergeom function frequently.
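For reference, the CPU-side behaviour I'd like mirrored, sketched with only the standard library (the signature mimics `scipy.stats.hypergeom.pmf(k, M, n, N)`):

```python
from math import comb

def hypergeom_pmf(k, M, n, N):
    """P(X = k) when drawing N items from a population of M containing n successes."""
    return comb(n, k) * comb(M - n, N - k) / comb(M, N)

# e.g. probability of 1 defective in a sample of 2, from 10 items with 3 defectives:
p = hypergeom_pmf(1, 10, 3, 2)  # 21/45
```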
Thank you!
### Additional Information
_No response_ | open | 2024-01-25T21:29:33Z | 2024-03-17T13:17:47Z | https://github.com/cupy/cupy/issues/8144 | [
"contribution welcome",
"cat:feature"
] | yihan1119 | 12 |
jazzband/django-oauth-toolkit | django | 796 | Undo commented-out PEP8 tests | Some flake8 tests were commented-out/disabled (with `--exit-zero`) from tox.ini (see #749 for example). Restore these tests and fix the code that caused them to fail. | closed | 2020-03-01T19:11:42Z | 2020-03-02T01:49:28Z | https://github.com/jazzband/django-oauth-toolkit/issues/796 | [] | n2ygk | 0 |
Buuntu/fastapi-react | fastapi | 87 | Nginx container sometimes errors on load | This has to do with not waiting for frontend and backend containers to be started before the Nginx container loads. Using something like `depends_on` in Docker won't work because the frontend container has technically started even before Webpack is built. Backend can be an issue too but Webpack is the one that takes a significant amount of time in my experience.
Example error:
```
nginx_1 | 2020/07/08 06:13:44 [emerg] 1#1: host not found in upstream "backend" in /etc/nginx/conf.d/default.conf:24
nginx_1 | nginx: [emerg] host not found in upstream "backend" in /etc/nginx/conf.d/default.conf:24
```
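For what it's worth, the gist of such a readiness check is just retrying a TCP connect until the upstream answers; a minimal Python sketch (illustrative only, not this template's code):

```python
import socket
import time

def wait_for(host, port, timeout=30.0, interval=0.5):
    """Block until a TCP connection to host:port succeeds, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")
            time.sleep(interval)
```

Note this only proves the port accepts connections; a real health check (like Docker's `healthcheck`) should also verify the service responds sensibly, e.g. to an HTTP request.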
What we need is something like Docker's [healthcheck](https://docs.docker.com/engine/reference/builder/#healthcheck) or the [wait-for-it](https://github.com/vishnubob/wait-for-it) script to check that these containers are responding to requests. It would also be good if Nginx can report an error message that makes sense (like that it's starting up) during this time, to avoid confusion. | open | 2020-07-15T20:32:22Z | 2020-07-16T03:19:16Z | https://github.com/Buuntu/fastapi-react/issues/87 | [] | Buuntu | 2 |
psf/requests | python | 6,747 | Check for codes | Please refer to our [Stack Overflow tag](https://stackoverflow.com/questions/tagged/python-requests) for guidance. | closed | 2024-06-21T13:59:08Z | 2024-06-21T13:59:20Z | https://github.com/psf/requests/issues/6747 | [
"Question/Not a bug",
"actions/autoclose-qa"
] | Gostqa | 1 |
custom-components/pyscript | jupyter | 244 | [Feature Request] Throttle Decorator | I've found ways to implement throttle myself in a few ways but I think it'd be nice if the framework supplied its own.
I could also see it taking in a string for uniqueness like `task.unique`.
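For reference, here is the rough shape of what I've been hand-rolling (plain-Python sketch; how it would integrate with pyscript's trigger decorators is the open question):

```python
import time
from functools import wraps

def throttle(seconds):
    """Drop calls that arrive within `seconds` of the last accepted call."""
    def decorator(func):
        last_accepted = [float("-inf")]
        @wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            if now - last_accepted[0] >= seconds:
                last_accepted[0] = now
                return func(*args, **kwargs)
            return None  # call suppressed
        return wrapper
    return decorator

@throttle(1.0)
def on_state_change(value):
    return f"handled {value}"
```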
Thanks! | open | 2021-09-19T18:03:40Z | 2022-01-07T03:17:15Z | https://github.com/custom-components/pyscript/issues/244 | [] | tal | 2 |
flasgger/flasgger | api | 562 | Incompatibility with flask 2.3 | The [release notes for flask 2.3](https://flask.palletsprojects.com/en/2.3.x/changes/) contain this:
> json_encoder and json_decoder attributes on app and blueprint, and the corresponding json.JSONEncoder and JSONDecoder classes, are removed.
There is [some more information in the docs for 2.2](https://flask.palletsprojects.com/en/2.2.x/api/#flask.json.JSONEncoder) about `JSONEncoder`.
Currently `JSONEncoder` is used [here in flasgger](https://github.com/flasgger/flasgger/blob/master/flasgger/base.py#L898), leading to this import error:
```
my_api/__init__.py:9: in <module>
from flasgger import Swagger
/usr/local/lib/python3.10/site-packages/flasgger/__init__.py:10: in <module>
from .base import Swagger, Flasgger, NO_SANITIZER, BR_SANITIZER, MK_SANITIZER, LazyJSONEncoder # noqa
/usr/local/lib/python3.10/site-packages/flasgger/base.py:28: in <module>
from flask.json import JSONEncoder
E ImportError: cannot import name 'JSONEncoder' from 'flask.json' (/usr/local/lib/python3.10/site-packages/flask/json/__init__.py)
```
I think the fix would be to port the serialization behavior to [JSONProvider](https://flask.palletsprojects.com/en/2.3.x/api/#flask.json.provider.JSONProvider).
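As a sketch of that direction: Flask >= 2.3 routes serialization through a `flask.json.provider.DefaultJSONProvider` subclass, whose `default` hook has the same contract as `json.dumps(..., default=...)`. So the lazy-value logic could move into a standalone hook (illustrative only, untested against flasgger's `LazyString`):

```python
import json

def lazy_default(obj):
    """Resolve lazy/callable values (e.g. a LazyString-like object) at dump time."""
    if callable(obj):
        return obj()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

# The same hook would be installed on a DefaultJSONProvider subclass in Flask >= 2.3;
# json.dumps demonstrates the contract:
payload = json.dumps({"title": lambda: "My API"}, default=lazy_default)
```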
There is some information in the [PR that introduced JSONProvider](https://github.com/pallets/flask/pull/4692) about why they changed this. | closed | 2023-04-26T08:02:05Z | 2023-05-04T02:20:48Z | https://github.com/flasgger/flasgger/issues/562 | [] | totycro | 0 |
mherrmann/helium | web-scraping | 20 | Running multiple instances in parallel | I've been using Selenium to run multiple browser windows in parallel. To do this I would simply start a new Chrome instance and place all the driver references into a list. I then send each instance to a thread to run through its instructions. However, it doesn't seem like this is possible with Helium as, as far as I can tell, there is just one global Helium instance and I need to set the driver before executing the command. However, when I run this in multiple threads, the wrong commands get called.
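A plain-Python sketch of the per-thread pattern I'm describing (the `make_driver` placeholder stands in for starting a browser; this is not Helium's API):

```python
import threading

local = threading.local()

def make_driver(name):
    # placeholder for starting a browser and returning its driver handle
    return {"name": name}

def worker(name, results):
    local.driver = make_driver(name)       # one instance per thread
    results[name] = local.driver["name"]   # commands would target local.driver

results = {}
threads = [threading.Thread(target=worker, args=(n, results)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```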
Is there a way to have multiple instances of Helium at one time? | open | 2020-04-29T16:34:57Z | 2024-01-16T10:31:26Z | https://github.com/mherrmann/helium/issues/20 | [] | TheBestMoshe | 4
Yorko/mlcourse.ai | numpy | 573 | Re-run topic9_part1_time_series_python.ipynb for English version | Plots and output are missing for the second half of the notebook when viewed via nbviewer:
https://nbviewer.jupyter.org/github/Yorko/mlcourse_open/blob/master/jupyter_english/topic09_time_series/topic9_part1_time_series_python.ipynb
Globally, an excellent resource. Thanks! | closed | 2019-02-18T01:43:25Z | 2019-02-18T16:09:37Z | https://github.com/Yorko/mlcourse.ai/issues/573 | [] | khof312 | 1 |
keras-team/keras | tensorflow | 20,556 | How to enable Flash-Attn in the PyTorch backend. | The 3.7.0 update documentation states that the PyTorch backend is optionally invoked. I now want to call the BERT model from keras_hub. How do I start Flash Attn? | closed | 2024-11-27T14:54:10Z | 2024-11-29T13:14:22Z | https://github.com/keras-team/keras/issues/20556 | [
"type:support"
] | pass-lin | 3 |
keras-team/keras | tensorflow | 20,210 | Embedding Projector using TensorBoard callback | # Environment
- Python 3.12.4
- Tensorflow v2.16.1-19-g810f233968c 2.16.2
- Keras 3.5.0
- TensorBoard 2.16.2
# How to reproduce it?
I tried visualizing data using [the embedding Projector in TensorBoard](https://github.com/tensorflow/tensorboard/blob/2.16.2/docs/tensorboard_projector_plugin.ipynb), so I added the following args to the TensorBoard callback:
```python
metadata_filename = "metadata.tsv"
os.makedirs(logs_path, exist_ok=True)
# Save Labels separately on a line-by-line manner.
with open(os.path.join(logs_path, metadata_filename), "w") as f:
for token in vectorizer.get_vocabulary():
f.write("{}\n".format(token))
keras.callbacks.TensorBoard(
log_dir=logs_path,
embeddings_freq=1,
embeddings_metadata=metadata_filename
)
```
Anyway TensorBoard embedding tab only shows [this HTML page](https://github.com/tensorflow/tensorboard/blob/4c004d4bddb5040de138815b3bec3cb2829d2878/tensorboard/plugins/projector/vz_projector/vz-projector-dashboard.ts#L23-L65).
# Issues
The above HTML page is returned because [`dataNotFound` is true](https://github.com/tensorflow/tensorboard/blob/4c004d4bddb5040de138815b3bec3cb2829d2878/tensorboard/plugins/projector/vz_projector/vz-projector-dashboard.ts#L22). This happens because [this route](https://github.com/tensorflow/tensorboard/blob/4c004d4bddb5040de138815b3bec3cb2829d2878/tensorboard/plugins/projector/vz_projector/vz-projector-dashboard.ts#L97) (`http://localhost:6006/data/plugin/projector/runs`) returns an [empty JSON](https://github.com/tensorflow/tensorboard/blob/4c004d4bddb5040de138815b3bec3cb2829d2878/tensorboard/plugins/projector/projector_plugin_test.py#L71-L72). In particular, this route is addressed by [this Python function](https://github.com/tensorflow/tensorboard/blob/4c004d4bddb5040de138815b3bec3cb2829d2878/tensorboard/plugins/projector/projector_plugin.py#L545-L549). Under the hood this function tries to [find the latest checkpoint](https://github.com/tensorflow/tensorboard/blob/4c004d4bddb5040de138815b3bec3cb2829d2878/tensorboard/plugins/projector/projector_plugin.py#L458). In particular, it gets the path of the latest checkpoint using [`tf.train.latest_checkpoint`](https://github.com/tensorflow/tensorflow/blob/810f233968cec850915324948bbbc338c97cf57f/tensorflow/python/checkpoint/checkpoint_management.py#L328-L365). Like doc string states, this TF function finds a **TensorFlow (2 or 1.x) checkpoint**. Now, TensorBoard callback [saves a checkpoint](https://github.com/keras-team/keras/blob/fa834a767bfab5d8e4180ada03fd0b7a597d6d55/keras/src/callbacks/tensorboard.py#L591-L596), at the end of the epoch, but it is a **Keras checkpoint**.
Furthermore, `projector_config.pbtxt` is written in the [wrong place](https://github.com/keras-team/keras/blob/fa834a767bfab5d8e4180ada03fd0b7a597d6d55/keras/src/callbacks/tensorboard.py#L304): TensorBoard [expects this file](https://github.com/tensorflow/tensorboard/blob/4c004d4bddb5040de138815b3bec3cb2829d2878/tensorboard/plugins/projector/projector_plugin.py#L441) in the same place where checkpoints are saved.
Finally, choosing [a fixed name](https://github.com/keras-team/keras/blob/fa834a767bfab5d8e4180ada03fd0b7a597d6d55/keras/src/callbacks/tensorboard.py#L278-L283) is a strong assumption. In my model, tensor associated to Embedding layer had a different name (obviously).
## Notes
IMO this feature stopped working when the callback updated to TF 2.0. Indeed, callback for TF 1.x should work. For example, it [saves checkpoint](https://github.com/keras-team/tf-keras/blob/c5f97730b2e495f5f56fc2267d22504075e46337/tf_keras/callbacks_v1.py#L493-L497) using TF format. But when callback was updated to be compatible with TF 2.0 it was used `tf.keras.Model.save_weights` and not `tf.train.Checkpoint`: perfectly legit like reported [here](https://github.com/tensorflow/tensorflow/blob/810f233968cec850915324948bbbc338c97cf57f/tensorflow/python/training/saver.py#L646-L650).
# Possible solution
Saving only weights from Embedding layer. [Here](https://github.com/tensorflow/tensorboard/blob/4c004d4bddb5040de138815b3bec3cb2829d2878/docs/tensorboard_projector_plugin.ipynb#L242-L249), you can find an example. To get model, you can use [`self._model`](https://github.com/keras-team/keras/blob/fa834a767bfab5d8e4180ada03fd0b7a597d6d55/keras/src/callbacks/tensorboard.py#L203). Plus it is not necessary to specify tensor name because there is only one tensor to save. The only drawback is: how to handle two or more embeddings? | open | 2024-09-04T16:58:15Z | 2024-09-19T16:17:55Z | https://github.com/keras-team/keras/issues/20210 | [
"stat:awaiting keras-eng",
"type:Bug"
] | miticollo | 4 |
Asabeneh/30-Days-Of-Python | flask | 88 | Something wrong with the sentence. | 
Double "start”?? | closed | 2020-10-11T15:57:25Z | 2021-01-28T13:05:45Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/88 | [] | Fatpandac | 0 |
deezer/spleeter | deep-learning | 673 | [Bug] Spleeter only processes the first minute of audio | - [x] I didn't find a similar issue already open.
- [x] I read the documentation (README AND Wiki)
- [x] I have installed FFMpeg
- [x] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
When trying to separate the music using any of the stem options (2/4/5 stems), Spleeter processes the command normally, but the final audio is only 1 minute long.
## Step to reproduce
1. Installed using `pip`
2. Run `python3 -m spleeter separate -p spleeter:2stems -o output audio.mp3`
3. Receive "succesfully" message
4. The final audio is just 1 minute
## Output
```bash
INFO:spleeter:File output/audio/vocals.wav written succesfully
INFO:spleeter:File output/audio/accompaniment.wav written succesfully
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Linux (Kubuntu 20.04) |
| Installation type | pip |
| RAM available | 8GB |
| Hardware spec | CPU (I5 3ª Generation) |
## Additional context
It started happening after the September 3rd update.
| closed | 2021-10-30T16:18:56Z | 2022-07-31T18:40:19Z | https://github.com/deezer/spleeter/issues/673 | [
"bug",
"invalid"
] | LoboMetalurgico | 5 |
darrenburns/posting | automation | 95 | Make keybindings configurable | I know this might be a big ask, but my current issue is that a few useful keybindings like C-j and C-o in my case are already in use by my wezterm configuration.
While I could replace C-o with something else, C-j is what I use for pane navigation, and I can see how others using tmux-navigation or a zellij variation can suffer from this same problem.
To simplify the initial title, making C-j configurable would be enough to avoid commonly used terminal multiplexer navigation
Lovely tool by the way! | closed | 2024-08-24T14:52:07Z | 2024-11-18T17:27:35Z | https://github.com/darrenburns/posting/issues/95 | [] | diegodorado | 8 |
cupy/cupy | numpy | 8,697 | Creating Array with Pinned Memory fails | ### Description
When trying to create a CuPy array with pinned memory, we get a `TypeError`:
`TypeError: Cannot convert cupy.cuda.pinned_memory.PinnedMemoryPointer to cupy.cuda.memory.MemoryPointer`
### To Reproduce
```py
import cupy
with cupy.cuda.using_allocator(cupy.get_default_pinned_memory_pool().malloc):
a = cupy.arange(10)
```
### Installation
Conda-Forge (`conda install ...`)
### Environment
```
OS : Linux-5.15.0-1025-nvidia-x86_64-with-glibc2.35
Python Version : 3.12.2
CuPy Version : 13.3.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 2.1.2
SciPy Version : None
Cython Build Version : 0.29.37
Cython Runtime Version : None
CUDA Root : /opt/nvidia/hpc_sdk/Linux_x86_64/24.9/compilers
nvcc PATH : /opt/nvidia/hpc_sdk/Linux_x86_64/24.9/compilers/bin/nvcc
CUDA Build Version : 12060
CUDA Driver Version : 12040
CUDA Runtime Version : 12060 (linked to CuPy) / 12060 (locally installed)
CUDA Extra Include Dirs : ['/opt/conda/targets/x86_64-linux/include', '/opt/conda/include']
cuBLAS Version : (available)
cuFFT Version : 11300
cuRAND Version : 10307
cuSOLVER Version : (11, 7, 1)
cuSPARSE Version : (available)
NVRTC Version : (12, 6)
Thrust Version : 200500
CUB Build Version : 200600
Jitify Build Version : <unknown>
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : 22304
NCCL Runtime Version : 21805
cuTENSOR Version : 20002
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA A100-SXM4-80GB
Device 0 Compute Capability : 80
Device 0 PCI Bus ID : 0000:C2:00.0
```
### Additional Information
_No response_ | open | 2024-10-25T21:01:02Z | 2025-02-07T00:45:12Z | https://github.com/cupy/cupy/issues/8697 | [
"cat:enhancement",
"prio:low"
] | Marcus-M1999 | 2 |
ray-project/ray | deep-learning | 50,661 | [Core] Why does the statistical information of node report that the message is too large? | ### What happened + What you expected to happen

### Versions / Dependencies
main
### Reproduction script
Start 8 local Ray clusters on a single node. I don't know why it reports that the message exceeds 2 GB.
### Issue Severity
None | open | 2025-02-17T12:08:35Z | 2025-02-20T00:14:52Z | https://github.com/ray-project/ray/issues/50661 | [
"bug",
"P2",
"@external-author-action-required",
"core"
] | Jay-ju | 1 |
Johnserf-Seed/TikTokDownload | api | 602 | How to set TikTokTool as an environment variable on Windows 10 | Setting the PATH does not take effect, and adding the browser-side cookie still fails to complete the profile (homepage) download.
The QR code shown for scan-to-log-in becomes distorted, so the scan cannot be completed.
| closed | 2023-11-17T01:34:03Z | 2024-02-24T10:09:02Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/602 | [
"不修复(wontfix)"
] | Virtual-human | 1 |
piskvorky/gensim | machine-learning | 2,938 | Fix deprecations in SoftCosineSimilarity | When running our CI test suite, I see an array of deprecation warnings:
https://dev.azure.com/rare-technologies/gensim-ci/_build/results?buildId=287&view=logs&jobId=f9575ddc-dec8-54e6-9d26-abb8bdd9bed7&j=f9575ddc-dec8-54e6-9d26-abb8bdd9bed7&t=180156a9-2bf9-537d-c84a-ef9e808c0367
Some are from gensim, some from scipy:
<img width="1440" alt="Screen Shot 2020-09-09 at 09 16 00" src="https://user-images.githubusercontent.com/610412/92567026-635b8f00-f27d-11ea-9d5f-c56f3ba7c08b.png">
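Once fixed, a cheap guard against regressions is escalating these warnings to errors in the test run (stdlib sketch; the exact filter scope for our suite is to be decided):

```python
import warnings

with warnings.catch_warnings():
    # Turn DeprecationWarning into a hard failure within this scope.
    warnings.simplefilter("error", DeprecationWarning)
    try:
        warnings.warn("old API", DeprecationWarning)
        raised = False
    except DeprecationWarning:
        raised = True
```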
@Witiko could you please have a look? Is it something your existing PR already addresses?
If not, can you please fix those? Thanks. | closed | 2020-09-09T07:20:19Z | 2020-09-16T07:32:48Z | https://github.com/piskvorky/gensim/issues/2938 | [
"bug",
"reach HIGH",
"impact MEDIUM",
"housekeeping"
] | piskvorky | 6 |
holoviz/panel | matplotlib | 6,898 | Basic auth seems to be broken starting with panel 1.4.3 | #### Description of expected behavior and the observed behavior
A Panel program started with `--auth-module=xxx.py` works fine with Panel 1.4.2 but does not work with 1.4.3 or 1.4.4.
#### Complete, minimal, self-contained example code that reproduces the issue
auth-test.py
```python
import datetime
import tornado
from tornado.web import RequestHandler
# could define get_user_async instead
def get_user(request_handler):
return request_handler.get_cookie("user")
# could also define get_login_url function (but must give up LoginHandler)
login_url = "/login"
class LoginHandler(RequestHandler):
def get(self):
try:
errormessage = self.get_argument("error")
except Exception:
errormessage = ""
self.render("login.html", errormessage=errormessage)
def check_permission(self, username, password):
if username == "testuser" and password == "password":
return True
return False
def post(self):
username = self.get_argument("username", "")
password = self.get_argument("password", "")
auth = self.check_permission(username, password)
if auth:
self.set_current_user(username)
self.redirect("/")
login_status = "successful"
else:
error_msg = "?error=" + \
tornado.escape.url_escape("Login incorrect")
login_status = "failed"
self.redirect(login_url + error_msg)
def set_current_user(self, user):
if user:
self.set_cookie(
"user",
tornado.escape.json_encode(user),
expires_days=0.3
)
else:
self.clear_cookie("user")
# optional logout_url, available as curdoc().session_context.logout_url
logout_url = "/logout"
# optional logout handler for logout_url
class LogoutHandler(RequestHandler):
def get(self):
self.clear_cookie("user")
self.redirect("/")
```

login.html
```html
<!doctype html>
<html>
<head>
<title>Test Login</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
html {
height: 100%;
}
body {
font-family: "Segoe UI", sans-serif;
font-size: 1rem;
line-height: 1.6;
height: 100%;
}
.wrap {
width: 100%;
height: 100%;
display: flex;
justify-content: center;
align-items: center;
background: #fafafa;
}
.login-form {
width: 350px;
margin: 0 auto;
border: 1px solid #ddd;
padding: 2rem;
background: #ffffff;
}
.form-input {
background: #fafafa;
border: 1px solid #eeeeee;
padding: 12px;
width: 100%;
}
.form-group {
margin-bottom: 1rem;
}
.form-button {
background: #69d2e7;
border: 1px solid #ddd;
color: #ffffff;
padding: 10px;
width: 100%;
text-transform: uppercase;
}
.form-button:hover {
background: #69c8e7;
}
.form-header {
text-align: center;
margin-bottom: 2rem;
}
.form-footer {
text-align: center;
}
</style>
</head>
<body>
<div class="wrap">
<form class="login-form" action="/login" method="post">
{% module xsrf_form_html() %}
<div class="form-header">
<p>Login to access dashboard</p>
</div>
<!--Email Input-->
<div class="form-group">
<input
name="username"
type="text"
class="form-input"
autocapitalize="off"
autocorrect="off"
placeholder="username"
/>
</div>
<!--Password Input-->
<div class="form-group">
<input
name="password"
type="password"
class="form-input"
placeholder="password"
/>
</div>
<!--Login Button-->
<div class="form-group">
<button class="form-button" type="submit">Login</button>
</div>
<span class="errormessage">{{errormessage}}</span>
</form>
</div>
</body>
</html>
```
test_login.py
```python
import panel as pn
pn.extension()
pn.panel("Hello World").servable()
```
### panel serve test_login.py --auth-module=auth-test.py
Works up until panel 1.4.2
For panel 1.4.3 and 1.4.4:
```
2024-06-06 21:27:00,402 Uncaught exception GET /test_login (::1)
HTTPServerRequest(protocol='http', host='localhost:5006', method='GET', uri='/test_login', version='HTTP/1.1', remote_ip='::1')
Traceback (most recent call last):
  File "/Users/sylvaint/mambaforge/envs/panel_latest/lib/python3.10/site-packages/tornado/web.py", line 1769, in _execute
    result = await result
  File "/Users/sylvaint/mambaforge/envs/panel_latest/lib/python3.10/site-packages/panel/io/server.py", line 508, in get
    payload = self._generate_token_payload()
  File "/Users/sylvaint/mambaforge/envs/panel_latest/lib/python3.10/site-packages/panel/io/server.py", line 452, in _generate_token_payload
    payload.update(self.application_context.application.process_request(self.request))
  File "/Users/sylvaint/mambaforge/envs/panel_latest/lib/python3.10/site-packages/panel/io/application.py", line 103, in process_request
    user = decode_signed_value(config.cookie_secret, 'user', user.value).decode('utf-8')
AttributeError: 'NoneType' object has no attribute 'decode'
2024-06-06 21:27:00,404 500 GET /test_login (::1) 1.78ms
```
| closed | 2024-06-07T03:44:18Z | 2024-07-12T12:47:22Z | https://github.com/holoviz/panel/issues/6898 | [] | sylvaint | 0 |
iMerica/dj-rest-auth | rest-api | 566 | Refresh Token to be saved as a http only cookie instead of Access Token | Hello,
I am requesting to the developers of this repository to include the "**refresh token**" instead of access token as a http only cookie(HttpOnly=true) along with sessionid(HttpOnly=true) and csrftoken(HttpOnly=false) as a response to the **/dj-rest-auth/login/** (POST) endpoint.
The reason: since the "refresh_token" is neither included in the response data nor set as an HttpOnly cookie, it is difficult to get hold of the refresh token (from the Django server) when the access token expires.
Setting refresh token as a HttpOnly cookie and access token in the response data will help to access both tokens at the appropriate point in the code.
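To make the ask concrete, this is the cookie shape I mean, sketched with the standard library (the value and path are illustrative):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["refresh_token"] = "eyJhbGciOi.example.refresh"
cookie["refresh_token"]["httponly"] = True   # not readable from JavaScript
cookie["refresh_token"]["path"] = "/dj-rest-auth/token/refresh/"
header = cookie.output(header="Set-Cookie:")
```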
Please let me know your thoughts on this.
Thanks,
A. | closed | 2023-11-08T19:11:54Z | 2023-11-08T19:21:56Z | https://github.com/iMerica/dj-rest-auth/issues/566 | [] | anykate | 1 |
errbotio/errbot | automation | 1,373 | Keep plugin configuration in config.py | Excuse me if this question is very basic, but I really couldn't find the answer in documentation, existing Github issues or in the Internet.
I am trying to use the Webserver plugin, which needs activation. The only way I found to activate it is to send commands to the bot. Configuration then seems to be stored in the errbot DB. Is there a way to keep configuration in config.py? I would like to store all the config as code in the repository and also package the bot together with all the relevant configuration, instead of running imperative commands _after_ the bot is started and relying on some persistence layer to just keep the config of the bot. | closed | 2019-08-07T08:48:26Z | 2019-10-16T19:12:25Z | https://github.com/errbotio/errbot/issues/1373 | [
"type: support/question"
] | Fodoj | 2 |
tensorflow/tensor2tensor | deep-learning | 1,664 | File Not Found Error | ### Description
When running the Walkthrough of tensor2tensor (the commands are exactly the same except changing some directories), an error message occurs: "translation.en; No such file or directory".
This error happens at the command "cat translation.en".
I didn't change anything in the code. How can I solve this? | open | 2019-08-18T11:01:07Z | 2019-08-18T11:01:07Z | https://github.com/tensorflow/tensor2tensor/issues/1664 | [] | stormsunshine | 0
localstack/localstack | python | 12,104 | bug: LocalStack Lambda Java Runtime configuration broken for headless fonts. | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
In AWT headless mode we should still be able to load and work with fonts. In LocalStack, the Java runtime's fontconfig appears misconfigured at the OS level, so headless font operations break, as shown in the code below:
```java
GraphicsEnvironment environment = GraphicsEnvironment.getLocalGraphicsEnvironment();
for (Font font : environment.getAllFonts()) {
System.out.println(font.getFontName() + " " + font.getFamily());
}
```
This throws a runtime exception: `Fontconfig head is null, check your fonts or fonts configuration`.
This works correctly on the real AWS Lambda Java 21 runtime. The only adjustment I have made is setting HOME=/tmp in the environment to stop fontconfig cache warnings (moving the home directory to a writeable location on AWS Lambda).
### Expected Behavior
No output if no fonts are configured, or at least the "default" fonts that are part of the Java VM implementation. On real AWS we get the JVM's minimum logical font families (Dialog, DialogInput, Monospaced, SansSerif, Serif) plus "Noto Sans" on the 21 runtime. You can also load custom fonts into the Lambda runtime VM.
The real AWS Java 21 Lambda runtime returns:
```
Dialog.bold Dialog
Dialog.bolditalic Dialog
Dialog.italic Dialog
Dialog.plain Dialog
DialogInput.bold DialogInput
DialogInput.bolditalic DialogInput
DialogInput.italic DialogInput
DialogInput.plain DialogInput
Monospaced.bold Monospaced
Monospaced.bolditalic Monospaced
Monospaced.italic Monospaced
Monospaced.plain Monospaced
Noto Sans Italic Noto Sans
Noto Sans Regular Noto Sans
SansSerif.bold SansSerif
SansSerif.bolditalic SansSerif
SansSerif.italic SansSerif
SansSerif.plain SansSerif
Serif.bold Serif
Serif.bolditalic Serif
Serif.italic Serif
Serif.plain Serif
```
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
I have built a complete Maven project that reproduces the bug on LocalStack; the same jar can also be deployed to an AWS account, where it works.
Readme contains information on running the test with Localstack, and deploying the same jar to AWS Lambda runtime (21).
Please see https://github.com/LimeMojito/localstack-lambda-font-failure
### Environment
```markdown
- OS: Mac Sequoia 15.2
- LocalStack:
LocalStack version: 4.0.3
LocalStack Docker image sha:
LocalStack build date:
LocalStack build git hash:
```
### Anything else?
_No response_ | open | 2025-01-06T04:27:25Z | 2025-01-07T12:42:29Z | https://github.com/localstack/localstack/issues/12104 | [
"type: bug",
"status: triage needed",
"aws:lambda",
"status: backlog"
] | lachlanodonnell | 1 |
plotly/dash | plotly | 2,954 | subtitle edit config does not always work | Subtitles do not work when a dcc.Graph() config toggles from `config={"editable": True}` to `config={"editable": False}`. Standalone it works as expected, and the subtitle is the only layout feature affected. The bug only occurs going from `True` to `False`.
```python
def plot_config(is_none=False):
if not is_none:
return {
"displaylogo": False,
"modeBarButtonsToRemove": ["zoom", "pan", "zoomIn", "zoomOut", "autoScale", "lasso2d", "select2d", "toImage"],
"edits": {
"axisTitleText": True,
"legendPosition": True,
"titleText": True
}
}
return {
"displayModeBar": False,
"staticPlot": True,
"editable": False
}
```
**Video for added clarity**
Description: On launch, the plot has no data and is not editable, which is the desired behavior. Then, off-screen, I select some filters that populate the chart with data. The subtitle for this chart is set to editable, and I can edit it as expected. Then I remove the data from the chart, and it renders the same chart and settings as the chart on launch, except here the subtitle is not reset, **and** is editable.

| open | 2024-08-20T20:28:31Z | 2024-08-22T16:24:51Z | https://github.com/plotly/dash/issues/2954 | [
"bug",
"P2"
] | marcstern14 | 0 |
django-import-export/django-import-export | django | 1,585 | `after_import_row()` declares `row_number` kwarg but it is never passed | **Describe the bug**
[`after_import_row()`](https://github.com/django-import-export/django-import-export/blob/a5dda8a511f42917c3daf97ac787964ea91ab8af/import_export/resources.py#L702) declares `row_number` kwarg but it is [never passed](https://github.com/django-import-export/django-import-export/blob/a5dda8a511f42917c3daf97ac787964ea91ab8af/import_export/resources.py#L804)
The `row_number` value is actually inside `kwargs`, so the declared `row_number` parameter is never populated. Other methods have the same bug.
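For anyone hitting this before a fix lands, a minimal sketch of the workaround — read `row_number` from `**kwargs` in the override. The `BookResource` and `RowResult` classes below are illustrative stand-ins (real code would subclass `import_export.resources.ModelResource`):

```python
# Illustrative stand-ins so the sketch is self-contained.
class RowResult:
    row_number = None

class BookResource:
    def after_import_row(self, row, row_result, **kwargs):
        # Per this report, `row_number` arrives inside **kwargs rather than
        # through a declared keyword argument, so read it from there.
        row_result.row_number = kwargs.get("row_number")

resource = BookResource()
result = RowResult()
# Simulates how the framework invokes the hook (kwargs carry row_number):
resource.after_import_row({"name": "foo"}, result, row_number=7, dry_run=True)
```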
**Versions (please complete the following information):**
- Django Import Export: 3.2.0
- Python 3.10
- Django 4.1.9
**Expected behavior**
The value should be passed via the `row_number` parameter.
| closed | 2023-05-05T14:57:32Z | 2023-10-10T19:35:09Z | https://github.com/django-import-export/django-import-export/issues/1585 | [
"bug",
"v4"
] | matthewhegarty | 0 |
clovaai/donut | computer-vision | 308 | What should be the configuration of the machine to train the model? | open | 2024-07-01T09:29:07Z | 2024-07-01T09:29:07Z | https://github.com/clovaai/donut/issues/308 | [] | anant996 | 0 | |
seleniumbase/SeleniumBase | web-scraping | 2,697 | Freezes on ChromeOS when loading Driver | I saw the similar issue [323](https://github.com/seleniumbase/SeleniumBase/issues/323), but the fix did not work. I understand ChromeOS is not supported, but I would really like to get this working, since it seems (wishful thinking) like it should only need minor tweaks relative to Linux --
The sample never gets to 'Driver loaded.':
Code:
```
from seleniumbase import Driver
print('starting...')
driver = Driver(uc=True)
print('Driver loaded.')
```
Path:
```
(venv) twv123@penguin:~/my_code_projects/python/webscrape$ echo $PATH
/home/twv123/my_code_projects/python/webscrape/venv/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
```
And related:
```
(venv) twv123@penguin:~/my_code_projects/python/webscrape$ which chromium
/usr/bin/chromium
(venv) twv123@penguin:~/my_code_projects/python/webscrape$ which google-chrome
/usr/bin/google-chrome
(venv) twv123@penguin:~/my_code_projects/python/webscrape/venv/bin$ ls -l
total 15876
-rw-r--r-- 1 twv123 twv123 2018 Apr 12 17:07 activate
-rw-r--r-- 1 twv123 twv123 944 Apr 12 17:07 activate.csh
-rw-r--r-- 1 twv123 twv123 2224 Apr 12 17:07 activate.fish
-rw-r--r-- 1 twv123 twv123 9033 Apr 12 17:07 Activate.ps1
-rwxr-xr-x 1 twv123 twv123 261 Apr 12 17:08 behave
-rwxr-xr-x 1 twv123 twv123 268 Apr 12 17:08 chardetect
-rwxr-xr-x 1 twv123 twv123 16157160 Apr 15 09:32 chromedriver
-rwxr-xr-x 1 twv123 twv123 267 Apr 12 17:08 markdown-it
-rwxr-xr-x 1 twv123 twv123 280 Apr 12 17:08 normalizer
-rwxr-xr-x 1 twv123 twv123 263 Apr 12 17:08 nosetests
-rwxr-xr-x 1 twv123 twv123 268 Apr 12 17:08 pip
-rwxr-xr-x 1 twv123 twv123 268 Apr 12 17:08 pip3
-rwxr-xr-x 1 twv123 twv123 268 Apr 12 17:08 pip3.11
-rwxr-xr-x 1 twv123 twv123 262 Apr 12 17:08 pygmentize
-rwxr-xr-x 1 twv123 twv123 263 Apr 12 17:08 pynose
-rwxr-xr-x 1 twv123 twv123 268 Apr 12 17:08 py.test
-rwxr-xr-x 1 twv123 twv123 268 Apr 12 17:08 pytest
lrwxrwxrwx 1 twv123 twv123 7 Apr 12 17:07 python -> python3
lrwxrwxrwx 1 twv123 twv123 16 Apr 12 17:07 python3 -> /usr/bin/python3
lrwxrwxrwx 1 twv123 twv123 7 Apr 12 17:07 python3.11 -> python3
-rwxr-xr-x 1 twv123 twv123 278 Apr 12 17:08 sbase
-rwxr-xr-x 1 twv123 twv123 278 Apr 12 17:08 seleniumbase
-rwxr-xr-x 1 twv123 twv123 255 Apr 12 17:08 wheel
(venv) twv123@penguin:~/my_code_projects/python/webscrape/venv/bin$
```
chromedriver appears to be in the $PATH as well....?
Any thoughts appreciated - would love to use SeleniumBase to simplify things.
Thanks.
| closed | 2024-04-15T14:54:24Z | 2024-04-15T17:18:39Z | https://github.com/seleniumbase/SeleniumBase/issues/2697 | [
"duplicate",
"not enough info",
"UC Mode / CDP Mode"
] | twv123 | 2 |
D4Vinci/Scrapling | web-scraping | 50 | stdout redirection. when used from an MCP server, Scrapling's "Downloading..." messages interfere with the protocol communication | ### Have you searched if there an existing issue for this?
- [x] I have searched the existing issues
### Python version (python --version)
Python 3.12
### Scrapling version (scrapling.__version__)
0.2.96
### Dependencies version (pip3 freeze)
cffi==1.17.1
cryptography==44.0.0
gpg==1.24.1
numpy==2.2.2
openvino==2025.0.0
openvino-telemetry==2024.1.0
packaging==24.2
pillow==11.1.0
pip @ file:///opt/homebrew/Cellar/python%403.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/ensurepip/_bundled/pip-25.0-py3-none-any.whl#sha256=7014abc1d3b7485993957b49ad91a6736c87811228dd1dc4351de5d58385c1af
pybind11==2.13.6
pycparser==2.22
TBB==0.2
wheel @ file:///opt/homebrew/Cellar/python%403.13/3.13.2/libexec/wheel-0.45.1-py3-none-any.whl#sha256=b9235939e2096903717cb6bfc132267f8a7e46deb2ec3ef9c5e234ea301795d0
### What's your operating system?
Macos Sequoia
### Are you using a separate virtual environment?
Yes
### Expected behavior
When Scrapling is called by my library, I don't want it to write anything to stdout.
### Actual behavior
From time to time (presumably when those files are updated), Scrapling downloads files and prints a status message.
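Until a quiet mode exists, one possible workaround sketch — divert anything printed to stdout over to stderr around the call. This assumes the messages go through Python's `sys.stdout` rather than a lower-level file descriptor, and `fetch` is a stand-in for whichever Scrapling entry point is being wrapped:

```python
import contextlib
import sys

def fetch_quietly(fetch, *args, **kwargs):
    """Call a noisy fetch function while diverting anything it prints on
    stdout over to stderr, keeping stdout clean for the MCP protocol stream.
    `fetch` is a stand-in for the Scrapling call being wrapped."""
    with contextlib.redirect_stdout(sys.stderr):
        return fetch(*args, **kwargs)
```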
### Steps To Reproduce
This is the project that is using scrapling.
https://github.com/cyberchitta/scrapling-fetch-mcp
uv is being used to build and run. | closed | 2025-03-15T10:09:44Z | 2025-03-16T13:59:50Z | https://github.com/D4Vinci/Scrapling/issues/50 | [
"bug"
] | restlessronin | 16 |
axnsan12/drf-yasg | django | 887 | `produces` argument is not being resolved | # Bug Report
## Description
When `produces=["text/calendar"]` is set for the operation, it is not resolved, so there is no `produces` entry in the generated Swagger:


| open | 2024-07-31T09:06:08Z | 2025-03-09T10:14:07Z | https://github.com/axnsan12/drf-yasg/issues/887 | [
"bug",
"1.21.x"
] | vanya909 | 1 |
thunlp/OpenPrompt | nlp | 274 | AttributeError: 'PromptForGeneration' object has no attribute 'can_generate' | I get an error while calling PromptForGeneration.generate() method
AttributeError: 'PromptForGeneration' object has no attribute 'can_generate' | closed | 2023-05-04T07:07:43Z | 2023-05-04T11:00:29Z | https://github.com/thunlp/OpenPrompt/issues/274 | [] | ngavcc | 1 |
OpenInterpreter/open-interpreter | python | 1,202 | Use RAG for better context | ## What is RAG?
A brief explanation of RAG by GPT4-Turbo:
> "Retrieval Augmented Generation" (RAG) is a natural language processing technique that combines traditional generative models with retrieval mechanisms. RAG first retrieves relevant information from a large database of documents, and then uses this information to aid a generative model (like a Transformer) in text generation. This approach significantly enhances the relevance and accuracy of the text because the model can utilize real-time knowledge fetched from the retrieved content, rather than solely relying on the knowledge learned during training. This technique is widely used in scenarios requiring external knowledge, such as question answering, summary generation, and more.
And here's a detailed introduction of RAG: [Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models (meta.com)](https://ai.meta.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/)
## Why RAG?
Currently OI uses a simple strategy to maintain the context of conversations:
* Always have the system prompts in context, which are necessary.
* Put all history messages in the context and send them to the LLM, as long as their total length doesn't exceed the `context_window` setting.
* If the length of the history messages exceeds the `context_window`, remove the earliest messages from the context until it fits.
This strategy works well for most daily-task use cases; however, several problems arise in conversations that need long context, such as using an LLM as an assistant to summarize research papers in a field:
* **High token cost:** before exceeding the `context_window`, the length of the context sent to the LLM grows linearly as the conversation goes on. With models offering larger and larger context windows (for example, the current default model `openai/gpt4-turbo`, which has a 128K context window), this gets expensive if a user keeps asking questions in a single conversation. Worse still, once a conversation exceeds the `context_window`, the cost of each subsequent request stops growing but stays at a very high level.
* **Loss of early memory:** in most cases the most important context for the LLM to generate a good answer is the latest messages; however, early messages sometimes carry important information too, such as the user's instructions for the current conversation and background info.
* **Noise in context:** even though LLMs are capable enough to handle a lot of information and extract the useful part, irrelevant information can still hurt the accuracy of the LLM's answers.
With RAG, we can convert the history messages of the current conversation into embeddings and store them in a vector database. Every time there is a new message from the user, we can use the user's input as a query to retrieve the most relevant context from the vector database and put it into the context sent to the LLM. This way we get a flexible context length for different questions and more useful information in the context. Besides, we can do more with RAG in the future; for example, we can add an interface for users to import local documents as background information for conversations.
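A toy, pure-Python sketch of that retrieval step (the `embed` callable stands in for a real embedding model, and this is not OI's or langchain's actual API):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class MessageStore:
    """Toy in-memory vector store; `embed` stands in for a real embedding model."""
    def __init__(self, embed):
        self.embed = embed
        self.items = []  # (embedding, message) pairs

    def add(self, message):
        self.items.append((self.embed(message), message))

    def top_k(self, query, k=2):
        # Rank stored messages by similarity to the query and keep the best k.
        qv = self.embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(it[0], qv), reverse=True)
        return [msg for _, msg in ranked[:k]]
```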
All in all, for an LLM client (both as an application kernel and as a standalone CLI), context management is a low-level but important component, and spending some effort on it will be worthwhile.
## How to implement RAG in OI?
I think [langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications (github.com)](https://github.com/langchain-ai/langchain) would be a great library for bringing RAG, as well as other useful features, into OI's context management. Anyway, this will be a tough and large task involving a lot of research, development, and testing. Implementation details will be updated later. BTW, I plan to implement this as an optional feature, off by default, which means it is only for users who know well what they are playing with. | closed | 2024-04-12T19:28:33Z | 2024-04-13T05:15:56Z | https://github.com/OpenInterpreter/open-interpreter/issues/1202 | [] | Steve235lab | 5 |
youfou/wxpy | api | 91 | The bot seems to stop automatically after running for a while | After running for 2 days, my program stopped responding.
Do I need to keep WeChat on the phone online the whole time?
How can I be notified when it goes offline? | open | 2017-06-21T02:37:02Z | 2017-06-23T09:03:18Z | https://github.com/youfou/wxpy/issues/91 | [] | davieds | 2 |
biolab/orange3 | pandas | 6,299 | "CN2Classifier object has no attribute params" shows if I press "report" menu of an OWRuleViewer | Here is my environment:
Python 3.9
PyQt5
Orange3
Here is my code; you can run it directly:
```python
from PyQt5 import QtWidgets, QtGui, QtCore
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
import Orange
from Orange.widgets.visualize.owruleviewer import OWRuleViewer
from Orange.classification import CN2Learner
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(1200, 800)
self.MainWindow = MainWindow
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
self.gridLayout.setObjectName("gridLayout")
self.verticalLayout = QtWidgets.QVBoxLayout()
self.verticalLayout.setObjectName("verticalLayout")
self.horizontalLayout = QtWidgets.QHBoxLayout()
self.horizontalLayout.setObjectName("horizontalLayout")
spacerItem = QtWidgets.QSpacerItem(20, 20, QtWidgets.QSizePolicy.MinimumExpanding, QtWidgets.QSizePolicy.Minimum)
self.horizontalLayout.addItem(spacerItem)
self.pushButton_showOrange = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_showOrange.setObjectName("pushButton_showOrange")
self.horizontalLayout.addWidget(self.pushButton_showOrange)
spacerItem1 = QtWidgets.QSpacerItem(40, 20, QtWidgets.QSizePolicy.MinimumExpanding, QtWidgets.QSizePolicy.Minimum)
self.horizontalLayout.addItem(spacerItem1)
self.pushButton_closeOrange = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_closeOrange.setObjectName("pushButton_closeOrange")
self.horizontalLayout.addWidget(self.pushButton_closeOrange)
spacerItem2 = QtWidgets.QSpacerItem(20, 20, QtWidgets.QSizePolicy.MinimumExpanding, QtWidgets.QSizePolicy.Minimum)
self.horizontalLayout.addItem(spacerItem2)
self.verticalLayout.addLayout(self.horizontalLayout)
spacerItem3 = QtWidgets.QSpacerItem(20, 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum)
self.verticalLayout.addItem(spacerItem3)
self.horizontalLayout_2 = QtWidgets.QHBoxLayout()
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
spacerItem4 = QtWidgets.QSpacerItem(10, 20, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum)
self.horizontalLayout_2.addItem(spacerItem4)
self.tabWidget = QtWidgets.QTabWidget(self.centralwidget)
self.tabWidget.setObjectName("tabWidget")
self.tab_added = QtWidgets.QWidget()
self.tab_added.setObjectName("tab_added")
current_verticalLayout = QtWidgets.QVBoxLayout(self.tab_added)
current_verticalLayout.setObjectName("current_verticalLayout")
spacerItem2 = QtWidgets.QSpacerItem(20, 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum)
current_verticalLayout.addItem(spacerItem2)
###############################Python 3.9 + PyQt5 + Orange 3 ######################################
data = Orange.data.Table(r"D:\Software\Orange3\Orange\Lib\site-packages\Orange\datasets\heart_disease.tab")
learner = Orange.classification.CN2Learner()
model = learner(data)
model.instances = data
self.ow = OWRuleViewer() # 1. create an instance
self.ow.set_classifier(model)
self.ow.show()
####################################################################################################
self.ow.setParent(self.tab_added) # 2. add "ow" to the "tab" of the QTabWidget
####################################################################################################
spacerItem3 = QtWidgets.QSpacerItem(20, 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum)
current_verticalLayout.addItem(spacerItem3)
current_verticalLayout.addWidget(self.ow) # 3. add "ow" to the vertical layout
####################################################################################################
self.tabWidget.addTab(self.tab_added, "")
self.horizontalLayout_2.addWidget(self.tabWidget)
spacerItem5 = QtWidgets.QSpacerItem(10, 20, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum)
self.horizontalLayout_2.addItem(spacerItem5)
self.verticalLayout.addLayout(self.horizontalLayout_2)
self.gridLayout.addLayout(self.verticalLayout, 0, 0, 1, 1)
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 1200, 23))
self.menubar.setObjectName("menubar")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
self.tabWidget.setCurrentIndex(0)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
self.pushButton_closeOrange.clicked.connect(self.close_orange)
self.pushButton_showOrange.clicked.connect(self.show_orange)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.pushButton_showOrange.setText(_translate("MainWindow", "Open"))
self.pushButton_closeOrange.setText(_translate("MainWindow", "Close"))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_added), _translate("MainWindow", "tab_added"))
def close_orange(self):
self.ow.close()
def show_orange(self):
self.ow.show()
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
```
Then, click the "report" at the bottom left of the OWRuleViewer

then "CN2Classifier object has no attribute params" shows

However, if I comment out line 177 in Python\Lib\site-packages\Orange\widgets\visualize\owruleviewer.py, as below:

```
def send_report(self):
    if self.classifier is not None:
        self.report_domain("Data domain", self.classifier.original_domain)
        # self.report_items("Rule induction algorithm", self.classifier.params)
        self.report_table("Induced rules", self.view)
```
then the error does not occur and it works correctly:

So, is this a bug? Or did I forget a necessary step (calling some method or setting some attribute) after creating an instance of OWRuleViewer?
```
data = Orange.data.Table(r"D:\Software\Orange3\Orange\Lib\site-packages\Orange\datasets\heart_disease.tab")
learner = Orange.classification.CN2Learner()
model = learner(data)
model.instances = data
self.ow = OWRuleViewer() # 1. create an instance
self.ow.set_classifier(model)
self.ow.show()
```
| closed | 2023-01-16T09:30:46Z | 2023-01-20T18:41:05Z | https://github.com/biolab/orange3/issues/6299 | [
"bug",
"snack"
] | madainigun14 | 1 |
suitenumerique/docs | django | 290 | 🛂(frontend) Doc private not connected | ## Feature Request
When we access a private doc while not signed in, we get an error message.
Instead of this error message, we should either redirect directly to the OIDC provider or add an obvious button offering to sign in.

| closed | 2024-09-26T08:31:18Z | 2024-09-27T14:04:32Z | https://github.com/suitenumerique/docs/issues/290 | [
"enhancement",
"frontend"
] | AntoLC | 0 |
laughingman7743/PyAthena | sqlalchemy | 339 | how to get QueryExecutionId from cursor when using sqlalchemy + pyathena | I am using sqlalchemy + pyathena
Here is an example code:
```python
from urllib.parse import quote_plus
from sqlalchemy import create_engine

conn_str = "awsathena+rest://@athena.{region_name}.amazonaws.com:443/{schema_name}?s3_staging_dir={s3_staging_dir}"
engine = create_engine(conn_str.format(region_name="us-east-1", schema_name="default", s3_staging_dir=quote_plus("s3://aws-athena-query-results-bucket/")))
conn = engine.raw_connection()
cursor = conn.cursor()
cursor.execute('SELECT * FROM "database"."table"')
```
For the SQLAlchemy + PyAthena setup, I want to know whether there is a way to get the QueryExecutionId of a running query from its cursor, so that it can later be used to cancel the query using the [cancel method](https://github.com/laughingman7743/PyAthena/blob/master/pyathena/common.py#L484). | open | 2022-06-27T18:53:36Z | 2024-04-11T14:04:54Z | https://github.com/laughingman7743/PyAthena/issues/339 | [] | mdeshmu | 8 |
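Regarding the question above: as far as I can tell, PyAthena's cursor exposes the Athena QueryExecutionId as a `query_id` attribute after `execute()` — this is worth double-checking against the installed version. The stub below stands in for the real cursor so the capture-then-cancel pattern is runnable anywhere:

```python
class AthenaCursorStub:
    """Stand-in for pyathena's Cursor: after execute(), the real cursor is
    believed to expose the Athena QueryExecutionId (here as `query_id`) and
    to offer cancel(). Verify the attribute names against PyAthena itself."""
    def __init__(self):
        self.query_id = None
        self.cancelled = False

    def execute(self, sql):
        # The real client receives this id from Athena's StartQueryExecution.
        self.query_id = "hypothetical-query-execution-id"
        return self

    def cancel(self):
        self.cancelled = True

cursor = AthenaCursorStub()
cursor.execute('SELECT * FROM "database"."table"')
execution_id = cursor.query_id  # capture while the query is running
# Later -- e.g. from a watchdog thread enforcing a timeout:
cursor.cancel()
```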
allenai/allennlp | data-science | 5,023 | Referring Expressions with COCO, COCO+, and COCOg | In the referring expressions task, the model is given an image and an expression, and has to find a bounding box in the image for the thing that the expression refers to.
Here is an example of some images with expressions:
<table width="100%">
<tr>
<td><img src="http://bvisionweb1.cs.unc.edu/licheng/referit/refer_example.jpg"></td>
</tr>
</table>
To do this, we need the following components:
1. A `DatasetReader` that reads the referring expression data, matches it up with the images, and pre-processes it to produce candidate bounding boxes. The best way to get the referring expressions annotations is from https://github.com/lichengunc/refer, though the code there is out of date, so we'll have to write our own code to read in that data. Other than that, the dataset reader should follow the example of [`VQAv2Reader`](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/vision/dataset_readers/vqav2.py#L239). The resulting `Instance`s should consist of the embedded regions of interest from the `RegionDetector`, the text of one referring expression, in a `TextField`, and a label field that gives the [IoU](https://stackoverflow.com/questions/25349178/calculating-percentage-of-bounding-box-overlap-for-image-detector-evaluation) between the gold annotated region and each predicted region.
2. A `Model` that uses VilBERT as a back-end to combine the vision and text data, and gives each region a score. The model computes a loss by taking the softmax of the region scores, and computing the dot product of that with the label field. You might want to look at [VqaVilbert](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/vision/models/vilbert_vqa.py#L23) to steal some ideas.
3. A model config that trains this whole thing end-to-end. We're hoping to get somewhere near the scores in the [VilBERT 12-in-1 paper](https://www.semanticscholar.org/paper/12-in-1%3A-Multi-Task-Vision-and-Language-Learning-Lu-Goswami/b5f3fe42548216cd93816b1bf5c437cf47bc5fbf), though we won't beat the high score since this issue does not cover the extensive multi-task training work that the paper covers.
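For the IoU-based label field in step 1, the per-box computation can be sketched as follows (corner-format boxes `(x1, y1, x2, y2)`; a plain-Python sketch, not AllenNLP code):

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle corners (empty if boxes are disjoint).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The label field would then hold `box_iou(gold_box, proposal)` for each predicted region.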
As always, we recommend you use the [AllenNLP Repository Template](https://github.com/allenai/allennlp-template-config-files) as a starting point. | open | 2021-02-26T00:35:13Z | 2021-03-18T20:53:56Z | https://github.com/allenai/allennlp/issues/5023 | [
"Contributions welcome",
"Models",
"hard"
] | dirkgr | 0 |
horovod/horovod | deep-learning | 3,511 | Horovod installation for TF CPU nightly fails with error: no member "tensorflow_gpu_device_info"! | **Environment:**
1. TensorFlow
2. Framework version: 2.10 nightly
3. Horovod version: 0.24.2 all the way up to tip of master
4. MPI version: 4.0.3
5. CUDA version: N/A this is CPU install
6. NCCL version: N/A
7. Python version: 3.8.10
8. Spark / PySpark version: N/A
9. Ray version: N/A
10. OS and version: Ubuntu 20.04.4 LTS
11. GCC version: 9.4.0
12. CMake version: 3.16.3
While installing any version of Horovod from `0.24.2` up to the tip of the `master` branch, with the following settings:
```
# Install Horovod
export HOROVOD_WITHOUT_PYTORCH=1
export HOROVOD_WITHOUT_MXNET=1
export HOROVOD_WITH_TENSORFLOW=1
export HOROVOD_VERSION=v0.24.2
```
and then:
```
python3 -m pip install git+https://github.com/horovod/horovod.git@${HOROVOD_VERSION}
```
and I'm getting this error during installation:
```
/tmp/pip-req-build-xs138tj2/horovod/common/ops/gloo_operations.h:51:8: required from here
/tmp/pip-req-build-xs138tj2/third_party/gloo/gloo/math.h:20:22: warning: comparison of integer expressions of different signedness: ‘int’ and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
[ 99%] Building CXX object horovod/tensorflow/CMakeFiles/tensorflow.dir/mpi_ops.cc.o
cd /tmp/pip-req-build-xs138tj2/build/temp.linux-x86_64-cpython-38/RelWithDebInfo/horovod/tensorflow && /usr/bin/c++ -DEIGEN_MPL2_ONLY=1 -DHAVE_GLOO=1 -DHAVE_MPI=1 -DTENSORFLOW_VERSION=2010000000 -Dtensorflow_EXPORTS -I/tmp/pip-req-build-xs138tj2/third_party/HTTPRequest/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/assert/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/config/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/core/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/detail/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/iterator/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/lockfree/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/mpl/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/parameter/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/predef/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/preprocessor/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/static_assert/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/type_traits/include -I/tmp/pip-req-build-xs138tj2/third_party/boost/utility/include -I/tmp/pip-req-build-xs138tj2/third_party/lbfgs/include -I/tmp/pip-req-build-xs138tj2/third_party/gloo -I/tmp/pip-req-build-xs138tj2/third_party/flatbuffers/include -isystem /usr/lib/x86_64-linux-gnu/openmpi/include/openmpi -isystem /usr/lib/x86_64-linux-gnu/openmpi/include -isystem /usr/local/lib/python3.8/dist-packages/tensorflow/include -I/usr/local/lib/python3.8/dist-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0 -DEIGEN_MAX_ALIGN_BYTES=64 -pthread -fPIC -Wall -ftree-vectorize -mf16c -mavx -mfma -O3 -g -DNDEBUG -fPIC -std=c++14 -o CMakeFiles/tensorflow.dir/mpi_ops.cc.o -c /tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc: In function ‘int horovod::tensorflow::{anonymous}::GetDeviceID(tensorflow::OpKernelContext*)’:
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc:389:26: error: ‘class tensorflow::DeviceBase’ has no member named ‘tensorflow_gpu_device_info’; did you mean ‘tensorflow_accelerator_device_info’?
context->device()->tensorflow_gpu_device_info() != nullptr) {
^~~~~~~~~~~~~~~~~~~~~~~~~~
tensorflow_accelerator_device_info
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc:390:33: error: ‘class tensorflow::DeviceBase’ has no member named ‘tensorflow_gpu_device_info’; did you mean ‘tensorflow_accelerator_device_info’?
device = context->device()->tensorflow_gpu_device_info()->gpu_id;
^~~~~~~~~~~~~~~~~~~~~~~~~~
tensorflow_accelerator_device_info
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc: At global scope:
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc:384:18: warning: ‘tensorflow::OpKernelContext* horovod::tensorflow::{anonymous}::TFOpContext::GetKernelContext() const’ defined but not used [-Wunused-function]
OpKernelContext* TFOpContext::GetKernelContext() const { return context_; }
^~~~~~~~~~~
/tmp/pip-req-build-xs138tj2/horovod/tensorflow/mpi_ops.cc:293:30: warning: ‘const tensorflow::Tensor* horovod::tensorflow::{anonymous}::TFTensor::tensor() const’ defined but not used [-Wunused-function]
const ::tensorflow::Tensor* TFTensor::tensor() const { return &tensor_; }
^~~~~~~~
make[2]: *** [horovod/tensorflow/CMakeFiles/tensorflow.dir/build.make:453: horovod/tensorflow/CMakeFiles/tensorflow.dir/mpi_ops.cc.o] Error 1
make[2]: Leaving directory '/tmp/pip-req-build-xs138tj2/build/temp.linux-x86_64-cpython-38/RelWithDebInfo'
make[1]: *** [CMakeFiles/Makefile2:443: horovod/tensorflow/CMakeFiles/tensorflow.dir/all] Error 2
make[1]: Leaving directory '/tmp/pip-req-build-xs138tj2/build/temp.linux-x86_64-cpython-38/RelWithDebInfo'
make: *** [Makefile:130: all] Error 2
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-xs138tj2/setup.py", line 166, in <module>
setup(name='horovod',
File "/usr/local/lib/python3.8/dist-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.8/dist-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/lib/python3/dist-packages/wheel/bdist_wheel.py", line 223, in run
self.run_command('build')
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/command/build.py", line 136, in run
self.run_command(cmd_name)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/dist-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/tmp/pip-req-build-xs138tj2/setup.py", line 100, in build_extensions
subprocess.check_call([cmake_bin, '--build', '.'] + cmake_build_args,
File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'RelWithDebInfo', '--', 'VERBOSE=1']' returned non-zero exit status 2.
----------------------------------------
ERROR: Failed building wheel for horovod
Running setup.py clean for horovod
```
| closed | 2022-04-19T17:24:41Z | 2022-04-21T01:08:25Z | https://github.com/horovod/horovod/issues/3511 | [
"bug"
] | ashahba | 4 |
opengeos/leafmap | jupyter | 60 | Add a colormaps module | This will allow users to easily create colormaps and add them to the map. | closed | 2021-06-25T12:50:24Z | 2021-06-26T02:33:54Z | https://github.com/opengeos/leafmap/issues/60 | [
"Feature Request"
] | giswqs | 1 |
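A colormaps module like the one requested above could start as small as a linear interpolator over color stops. This is only a sketch — the function name and API are assumptions, not leafmap's actual interface:

```python
def linear_colormap(stops, n=8):
    """Interpolate hex color stops into n evenly spaced hex colors."""
    rgb = [tuple(int(s[i:i + 2], 16) for i in (1, 3, 5)) for s in stops]
    colors = []
    for k in range(n):
        t = k * (len(rgb) - 1) / (n - 1)          # position along the stop list
        i, frac = int(t), t - int(t)
        j = min(i + 1, len(rgb) - 1)
        mixed = tuple(round(a + (b - a) * frac) for a, b in zip(rgb[i], rgb[j]))
        colors.append("#%02x%02x%02x" % mixed)
    return colors
```

A list like this could then feed whatever layer-styling call leafmap exposes.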
falconry/falcon | api | 1,595 | Docs should demonstrate returning early in a responder | New Falcon users may not realize they can simply `return` anywhere in a responder. This is useful for complicated nested logic paths. We should make sure examples of this are sprinkled throughout the docs in key places. | open | 2019-10-29T05:44:20Z | 2021-12-20T18:12:49Z | https://github.com/falconry/falcon/issues/1595 | [
"documentation",
"good first issue",
"needs contributor"
] | kgriffs | 7 |
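For the docs request above, an example along these lines would make the point. The resource and the stub request/response classes here are hypothetical stand-ins so the snippet stays self-contained (a real Falcon responder receives `falcon.Request` / `falcon.Response` objects with the same attributes used below):

```python
class Stub:
    """Minimal stand-in for falcon.Request / falcon.Response."""
    def __init__(self, **kw):
        self.__dict__.update(kw)

class ThingResource:
    def on_get(self, req, resp):
        thing_id = req.params.get("id")
        if thing_id is None:
            resp.status = "400 Bad Request"
            return                      # bail out early -- no nested else needed
        if thing_id not in {"1", "2"}:
            resp.status = "404 Not Found"
            return
        resp.status = "200 OK"
        resp.text = "thing %s" % thing_id

resource = ThingResource()
resp = Stub(status=None, text=None)
resource.on_get(Stub(params={"id": "1"}), resp)
```

Each early `return` replaces a level of nesting, which is exactly the pattern worth demonstrating in the docs.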
waditu/tushare | pandas | 780 | fina_indicator interface data error | pro.fina_indicator(ts_code='600519.SH', start_date='20080101', end_date='20141231')
The call above returns data like the following:
ts_code ann_date end_date eps dt_eps
 ......
27 600519.SH 20080313 20071231 3.00 3.00
28 600519.SH 20080313 20061231 1.64 1.64
The 2006 quarterly report data is missing between rows 27 and 28.
But pro.fina_indicator(ts_code='600519.SH', start_date='20060101', end_date='20121231') does not have this problem.
| closed | 2018-10-19T03:04:48Z | 2018-10-19T06:42:50Z | https://github.com/waditu/tushare/issues/780 | [] | MrLpk | 2 |
unit8co/darts | data-science | 2,514 | Ability to add regressors to a Prophet model in Python | In the Facebook Prophet library there is the ability to add regressors prior to fitting the model, but that ability is missing in darts. We can only add seasonality, as far as I'm aware.
It would be great to have this capability included in the package with the add_regressors function. Thanks. | closed | 2024-08-29T15:00:26Z | 2024-09-17T13:37:08Z | https://github.com/unit8co/darts/issues/2514 | [
"question"
] | BenJCross1995 | 1 |
microsoft/nni | deep-learning | 5,300 | NetWork Error | **Describe the issue**:
When I use NNI, within at most 10 minutes I get a Network Error message, and then the port connection is dropped. I want to know what the problem is. By the way, I am using the Windows 10 system.
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc):local
- Client OS:windows 10
- Server OS (for remote mode only):
- Python version:3.7
- PyTorch/TensorFlow version:torch==1.8.1
- Is conda/virtualenv/venv used?:conda
- Is running in Docker?:no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:websockets.exceptions.InvalidMessage: did not receive a valid HTTP response
**Log message**:
- nnimanager.log:
- dispatcher.log:[2022-12-26 10:35:12] INFO (nni.tuner.tpe/MainThread) Using random seed 175818551
[2022-12-26 10:35:12] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
[2022-12-26 10:41:26] WARNING (nni.runtime.tuner_command_channel.channel/MainThread) Exception on receiving: ConnectionClosedError(None, None, None)
[2022-12-26 10:41:26] WARNING (nni.runtime.tuner_command_channel.channel/MainThread) Connection lost. Trying to reconnect...
[2022-12-26 10:41:26] INFO (nni.runtime.tuner_command_channel.channel/MainThread) Attempt #0, wait 0 seconds...
[2022-12-26 10:41:26] INFO (nni.runtime.msg_dispatcher_base/MainThread) Report error to NNI manager: Traceback (most recent call last):
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\websockets\legacy\client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\websockets\legacy\http.py", line 120, in read_response
status_line = await read_line(stream)
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\websockets\legacy\http.py", line 194, in read_line
line = await stream.readline()
File "E:\Anaconda\install\envs\pytorch\lib\asyncio\streams.py", line 496, in readline
line = await self.readuntil(sep)
File "E:\Anaconda\install\envs\pytorch\lib\asyncio\streams.py", line 588, in readuntil
await self._wait_for_data('readuntil')
File "E:\Anaconda\install\envs\pytorch\lib\asyncio\streams.py", line 473, in _wait_for_data
await self._waiter
File "E:\Anaconda\install\envs\pytorch\lib\asyncio\selector_events.py", line 814, in _read_ready__data_received
data = self._sock.recv(self.max_size)
ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\nni\__main__.py", line 61, in main
dispatcher.run()
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\nni\runtime\msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\nni\runtime\tuner_command_channel\channel.py", line 94, in _receive
command = self._retry_receive()
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\nni\runtime\tuner_command_channel\channel.py", line 104, in _retry_receive
self._channel.connect()
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\nni\runtime\tuner_command_channel\websocket.py", line 62, in connect
self._ws = _wait(_connect_async(self._url))
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\nni\runtime\tuner_command_channel\websocket.py", line 111, in _wait
return future.result()
File "E:\Anaconda\install\envs\pytorch\lib\concurrent\futures\_base.py", line 435, in result
return self.__get_result()
File "E:\Anaconda\install\envs\pytorch\lib\concurrent\futures\_base.py", line 384, in __get_result
raise self._exception
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\nni\runtime\tuner_command_channel\websocket.py", line 125, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\websockets\legacy\client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "E:\Anaconda\install\envs\pytorch\lib\asyncio\tasks.py", line 442, in wait_for
return fut.result()
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\websockets\legacy\client.py", line 671, in __await_impl__
extra_headers=protocol.extra_headers,
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\websockets\legacy\client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "E:\Anaconda\install\envs\pytorch\lib\site-packages\websockets\legacy\client.py", line 144, in read_http_response
raise InvalidMessage("did not receive a valid HTTP response") from exc
websockets.exceptions.InvalidMessage: did not receive a valid HTTP response
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | open | 2022-12-28T03:43:11Z | 2023-08-11T01:37:02Z | https://github.com/microsoft/nni/issues/5300 | [
"known issue"
] | accelerator1737 | 3 |
zappa/Zappa | django | 1,126 | Stage name not allowing '-' character in zappa_settings.json | Is there any specific reason for that? API Gateway allows the `-` character in stage names, but the zappa regex does not allow it.
Any specific reason the regex is not updated yet? | closed | 2022-04-25T05:11:27Z | 2024-04-13T20:12:37Z | https://github.com/zappa/Zappa/issues/1126 | [
"no-activity",
"auto-closed"
] | sridhar562345 | 3 |
mars-project/mars | numpy | 3,229 | [BUG]Failed to run execute(wait=False) in another thread | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
execute(wait=False) failed when running in a thread pool
```Python
import os
import time
import mars.tensor as mt
import mars.dataframe as md
import concurrent.futures
def func():
df = md.DataFrame(
mt.random.rand(4, 4, chunk_size=2),
columns=list("abcd"),
)
df.execute()
pid = os.getpid()
info = df.apply(
lambda s: s if os.getpid() == pid else (time.sleep(5) or s)
).execute(wait=False)
result = info.result()
return result
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
future = executor.submit(func)
print(future.result())
```
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version 3.9.13
2. The version of Mars you use 0.9.0
3. Versions of crucial packages, such as numpy, scipy and pandas
i. numpy: 1.23.0
ii. ray: 1.13.0
iii. scipy: 1.8.1
iv. pandas: 1.4.3
4. Full stack of the error.
```
Traceback (most recent call last):
File "/Users/vessalius/.local/share/virtualenvs/outer_mars-bUMhzz2w/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 101, in _ensure_future
self._future_local.future
AttributeError: '_thread._local' object has no attribute 'future'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/vessalius/Desktop/ray_serving_test/problem.py", line 25, in <module>
print(future.result())
File "/usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 446, in result
return self.__get_result()
File "/usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/vessalius/Desktop/ray_serving_test/problem.py", line 19, in func
result = info.result()
File "/Users/vessalius/.local/share/virtualenvs/outer_mars-bUMhzz2w/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 131, in result
self._ensure_future()
File "/Users/vessalius/.local/share/virtualenvs/outer_mars-bUMhzz2w/lib/python3.9/site-packages/mars/deploy/oscar/session.py", line 110, in _ensure_future
self._future_local.aio_future = asyncio.wrap_future(fut)
File "/usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/futures.py", line 411, in wrap_future
loop = events.get_event_loop()
File "/usr/local/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/events.py", line 642, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'ThreadPoolExecutor-1_0'.
```
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| open | 2022-08-18T08:27:15Z | 2022-08-18T08:34:14Z | https://github.com/mars-project/mars/issues/3229 | [
"type: bug"
] | VessaliusOz | 1 |
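Judging from the traceback in the issue above, the failure happens because the worker thread has no asyncio event loop for `asyncio.wrap_future` to attach to. A hedged workaround (a sketch independent of Mars itself, so a stub job stands in for `info.result()`) is to give each worker thread its own loop before doing any loop-dependent work:

```python
import asyncio
import concurrent.futures

def run_with_own_loop(fn, *args):
    """Target for worker threads: create a private event loop, then run fn."""
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)       # what loop-dependent code expects to find
    try:
        return fn(*args)
    finally:
        asyncio.set_event_loop(None)
        loop.close()

def job():
    # stand-in for `info.result()`; anything needing the thread's loop works now
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(asyncio.sleep(0, result=42))

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as ex:
    value = ex.submit(run_with_own_loop, job).result()
```

Whether Mars should create the per-thread loop itself is a separate design question; this only shows the user-side mitigation.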
huggingface/datasets | numpy | 6,595 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So I
1. Map dataset
2. Save to disk
3. Try to upload:
```
import datasets
from datasets import load_from_disk
dataset = load_from_disk("ds")
datasets.config.DEFAULT_MAX_BATCH_SIZE = 1
dataset.push_to_hub("kopyl/ds", private=True, max_shard_size="500MB")
```
And i get this error:
`pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat`
Full traceback:
```
>>> dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, max_shard_size="500MB")
Map: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1451/1451 [00:00<00:00, 6827.40 examples/s]
Uploading the dataset shards: 0%| | 0/2099 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py", line 1705, in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 5208, in _push_parquet_shards_to_hub
shard.to_parquet(buffer)
File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 4931, in to_parquet
return ParquetDatasetWriter(self, path_or_buf, batch_size=batch_size, **parquet_writer_kwargs).write()
File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 129, in write
written = self._write(file_obj=self.path_or_buf, batch_size=batch_size, **self.parquet_writer_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/io/parquet.py", line 141, in _write
writer = pq.ParquetWriter(file_obj, schema=schema, **parquet_writer_kwargs)
File "/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py", line 1016, in __init__
self.writer = _parquet.ParquetWriter(
File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unhandled type for Arrow to Parquet schema conversion: halffloat
```
Smaller datasets with the same way of saving and pushing work wonders. Big ones are not.
I'm currently trying to upload dataset like this:
`HfApi().upload_folder...`
But I'm not sure that in this case "load_dataset" would work well.
This setting num_shards does not help too:
```
dataset.push_to_hub("kopyl/3M_icons_monochrome_only_no_captioning_mapped-for-SDXL-2", private=True, num_shards={'train': 500})
```
Tried 3000, 500, 478, 100
Also do you know if it's possible to push a dataset with multiple processes? It would take an eternity pushing 1TB...
### Steps to reproduce the bug
Described above
### Expected behavior
Should be able to upload...
### Environment info
Total dataset size: 978G
Amount of `.arrow` files: 2101
Each `.arrow` file size: 477M (I know 477 megabytes * 2101 does not equal 978G, but I just checked the size of a couple `.arrow` files; I don't know if some have different sizes)
Some files:
- "ds/train/state.json": https://pastebin.com/tJ3ZLGAg
- "ds/train/dataset_info.json": https://pastebin.com/JdXMQ5ih | closed | 2024-01-16T02:03:09Z | 2024-01-27T18:26:33Z | https://github.com/huggingface/datasets/issues/6595 | [] | kopyl | 14 |
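The `halffloat` in the error above suggests a float16 column, which the pyarrow Parquet writer used here cannot handle. A hedged workaround is to upcast such columns to float32 before pushing. The sketch below shows the cast at the NumPy level; with `datasets` the rough equivalent would be `dataset.map(...)` over batches or `dataset.cast_column(column, Value("float32"))`, where `column` (and `"latents"` below) is whatever hypothetical field holds the fp16 data:

```python
import numpy as np

def upcast_fp16(batch):
    """Parquet has no halffloat writer in this pyarrow version, so fp16 -> fp32."""
    return {k: (v.astype(np.float32) if getattr(v, "dtype", None) == np.float16 else v)
            for k, v in batch.items()}

batch = {"latents": np.zeros((2, 4), dtype=np.float16), "label": [0, 1]}
safe = upcast_fp16(batch)
```

Doubling the storage of those columns is the cost; casting back to fp16 after `load_dataset` restores the original footprint.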
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,610 | Is it normal to use 2 black and white images? | While training with the colorize mode, something caught my attention: during training it uses two black-and-white images when one of them should be in color.
This should be the norm, right?

But this....?



| open | 2023-11-06T22:59:00Z | 2024-01-08T16:52:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1610 | [] | Keiser04 | 7 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,369 | [Feature Request]: Per-checkpoint default CFG scale, steps etc. | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Various checkpoints have different recommended CFG settings, steps or samplers, meaning cfg=7, steps=20 and sampler=dpm++2m shouldn't be the default setting for all of them.
Eg. for RealVisXL 4.0 we have:
> Use Turbo models with DPM++ SDE Karras sampler, 4-10 steps and CFG Scale 1-2.5
> Use Lightning models with DPM++ SDE Karras / DPM++ SDE sampler, 4-6 steps and CFG Scale 1-2
And for STOIQO NewReality:
>MAIN SAMPLER (Recommended): dpmpp_3m_sde + Exponential
>OTHER SAMPLERS: dpmpp_sde + Karras
> FOR MORE REALISM: CFG: 1-3 | STEPS: 15+
> FOR BALANCING (Recommended): CFG: 4 | STEPS: 20+
> FOR MORE CREATIVITY: CFG: 5-7 | STEPS: 30+
Either a model vendor should be able to provide such settings somewhere in safetensors file, or the user should be able to configure their settings per-model. It's hard to remember all the numbers, or run x/y/z plot to test some prompt with various checkpoints.
### Proposed workflow
1. Allow configuring per-checkpoint cfg_scale, steps and sampler parameters
2. When the checkpoint is chosen and a relevant configuration flag is enabled, these settings are updated.
### Additional information
I would use https://github.com/rifeWithKaiju/model_preset_manager for this but it doesn't even work on Linux. And such a feature is too important to rely on a poorly-maintained extension; it should be part of core functionality.
I suppose there are optional fields in safetensors file format that could be standarized by AUTOMATIC1111 so that model vendors could populate them prior to uploading to Civitai. Alternatively, a manifest file format could be introduced that could be stored together with the checkpoint and/or deployed separately. In a perfect scenario, https://github.com/zixaphir/Stable-Diffusion-Webui-Civitai-Helper could just collect this information from Civitai without refetching all the models. | open | 2024-08-11T16:07:49Z | 2024-08-11T16:08:22Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16369 | [
"enhancement"
] | paboum | 0 |
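One possible shape for the proposal above is a small per-checkpoint manifest merged over the global defaults. Everything here — the file format, the field names, and the checkpoint keys — is hypothetical, sketched only to show the merge semantics:

```python
import json

manifest = json.loads("""
{
  "RealVisXL_V4.0_Turbo": {"sampler": "DPM++ SDE Karras", "steps": 6, "cfg_scale": 2.0},
  "STOIQO_NewReality":    {"sampler": "DPM++ 3M SDE Exponential", "steps": 20, "cfg_scale": 4.0}
}
""")

# the current one-size-fits-all UI defaults
DEFAULTS = {"sampler": "DPM++ 2M", "steps": 20, "cfg_scale": 7.0}

def settings_for(checkpoint):
    """Per-checkpoint overrides win; anything unspecified falls back to DEFAULTS."""
    return {**DEFAULTS, **manifest.get(checkpoint, {})}
```

Such a manifest could live beside the checkpoint file or in optional safetensors metadata, and the UI would apply `settings_for(...)` on checkpoint switch when the feature flag is enabled.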
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 657 | got an unexpected keyword argument 'label_smoothing' | Please help to fix it, thanks.
loss_function = torch.nn.CrossEntropyLoss(label_smoothing=0.1)
TypeError: __init__() got an unexpected keyword argument 'label_smoothing'
| closed | 2022-10-14T14:39:02Z | 2022-10-17T14:08:47Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/657 | [] | lamsongianm | 1 |
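The error above occurs because `label_smoothing` was only added to `torch.nn.CrossEntropyLoss` in PyTorch 1.10, so on torch 1.8.1 the keyword does not exist. Upgrading PyTorch fixes it; otherwise the smoothed loss can be computed by hand. Below is a NumPy sketch of the formula (not the PyTorch implementation itself), mixing the usual cross-entropy term with a uniform-over-classes term:

```python
import numpy as np

def smoothed_cross_entropy(logits, targets, eps=0.1):
    """(1 - eps) * CE(target) + eps * mean over classes of (-log p)."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -logp[np.arange(len(targets)), targets]           # usual CE term
    uniform = -logp.mean(axis=1)                            # smoothing term
    return float(((1.0 - eps) * nll + eps * uniform).mean())
```

With `eps=0` this reduces to plain cross-entropy, which is an easy sanity check against `CrossEntropyLoss` on any torch version.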
akfamily/akshare | data-science | 5,426 | stock_comment_em interface problem report |
1. Operating system version: Windows 10 64
2. Python version: Python 3.10.0
3. AKShare version: 1.15.47
5. Name of the interface and the corresponding call code: stock_comment_em
```py
import akshare as ak
# Call the interface to fetch the data
stock_comment_em_df = ak.stock_comment_em()
# Write the result to a CSV file
stock_comment_em_df.to_csv("千股千评.csv", index=False,encoding=csv_encoding)
```
6. Screenshot or description of the interface error
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[3], line 5
1 import akshare as ak
2 # print(ak.__version__)
3
4 # 调用接口获取数据
----> 5 stock_comment_em_df = ak.stock_comment_em()
6 # stock_comment_em_df = ak.stock_comment_detail_zlkp_jgcyd_em(symbol="600000")
7
8 # 将结果输出到CSV文件
9 stock_comment_em_df.to_csv("千股千评.csv", index=False,encoding=csv_encoding)
File [~\AppData\Roaming\Python\Python310\site-packages\akshare\stock_feature\stock_comment_em.py:42](https://file+.vscode-resource.vscode-cdn.net/d%3A/GithubProjects/%E8%82%A1%E7%A5%A8/akShare/~/AppData/Roaming/Python/Python310/site-packages/akshare/stock_feature/stock_comment_em.py:42), in stock_comment_em()
40 big_df = pd.DataFrame()
41 tqdm = get_tqdm()
---> 42 for page in tqdm(range(1, total_page + 1), leave=False):
43 params.update({"pageNumber": page})
44 r = requests.get(url, params=params)
File [c:\Python310\lib\site-packages\tqdm\notebook.py:234](file:///C:/Python310/lib/site-packages/tqdm/notebook.py:234), in tqdm_notebook.__init__(self, *args, **kwargs)
232 unit_scale = 1 if self.unit_scale is True else self.unit_scale or 1
233 total = self.total * unit_scale if self.total else self.total
--> 234 self.container = self.status_printer(self.fp, total, self.desc, self.ncols)
235 self.container.pbar = proxy(self)
236 self.displayed = False
File [c:\Python310\lib\site-packages\tqdm\notebook.py:108](file:///C:/Python310/lib/site-packages/tqdm/notebook.py:108), in tqdm_notebook.status_printer(_, total, desc, ncols)
99 # Fallback to text bar if there's no total
100 # DEPRECATED: replaced with an 'info' style bar
101 # if not total:
(...)
105
106 # Prepare IPython progress bar
107 if IProgress is None: # #187 #451 #558 #872
--> 108 raise ImportError(WARN_NOIPYW)
109 if total:
110 pbar = IProgress(min=0, max=total)
ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
```
| closed | 2024-12-15T15:50:36Z | 2024-12-23T09:06:44Z | https://github.com/akfamily/akshare/issues/5426 | [
"bug"
] | milk36 | 2 |
fastapi/sqlmodel | sqlalchemy | 172 | How to use the method tablesample | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from sqlalchemy import func
selectable = people.tablesample(
func.bernoulli(1),
name='alias',
seed=func.random())
stmt = select(selectable.c.people_id)
```
### Description
I'm trying to use the tablesample method defined in SQLAlchemy, but I can't seem to understand how to migrate its structure from the sample code into SQLModel code.
For more context, I'm trying to get a random sample from a simple select query (with two wheres) and would prefer to do everything in the query instead of having to sample the resulting data (because the table is really large).
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.7
### Additional Context
_No response_ | open | 2021-11-30T15:20:13Z | 2021-11-30T15:20:13Z | https://github.com/fastapi/sqlmodel/issues/172 | [
"question"
] | santigandolfo | 0 |
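SQLModel statements accept plain SQLAlchemy Core constructs, so one hedged approach to the question above is to apply `tablesample` to the model's underlying table — `People.__table__` on a SQLModel class (an assumption worth verifying), or a Core `Table` as below — and select from the returned alias. Sketch, assuming SQLAlchemy 1.4+ style `select()` and a PostgreSQL backend for `bernoulli`:

```python
from sqlalchemy import Column, Integer, MetaData, Table, func, select, tablesample

metadata = MetaData()
people = Table("people", metadata, Column("people_id", Integer, primary_key=True))

# On a SQLModel class, the same object should be available as People.__table__
sampled = tablesample(people, func.bernoulli(1), name="alias", seed=func.random())
stmt = (
    select(sampled.c.people_id)
    .where(sampled.c.people_id > 0)      # extra WHEREs compose as usual
)
compiled = str(stmt)
```

The resulting statement can then be passed to `session.exec(...)` like any other select; since the sampling happens server-side, the large table is never pulled into Python.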
thtrieu/darkflow | tensorflow | 1,170 | Inferencing 2 frozen models at the same time | We are running inference on .pb models with the commands below for one particular product we trained, and it works fine: it detects the product, shows a green box around it, etc.
options = {"pbLoad": "file.pb", "metaLoad": "file.meta", "threshold": $
tfnet2 = TFNet(options)
But we need to detect more than one product, and we don't want to have to merge .pb files to do that.
Any suggestions on how?
| open | 2020-05-05T13:12:51Z | 2020-05-05T13:12:51Z | https://github.com/thtrieu/darkflow/issues/1170 | [] | Sequential-circuits | 0 |
deezer/spleeter | tensorflow | 486 | [Question] Please help me use Spleeter with FFmpeg | <!-- Please respect the title [Discussion] tag. -->
Hi everybody,
I always use ffmpeg to edit video and audio. I wonder whether I can insert the spleeter command into an ffmpeg .bat file so that I can create one complete script without running Spleeter separately.
Thank you so much. | closed | 2020-08-29T10:52:24Z | 2020-08-30T20:26:50Z | https://github.com/deezer/spleeter/issues/486 | [
"question"
] | Thanhcaro | 1 |
lanpa/tensorboardX | numpy | 352 | Is it possible to add summaries globally? | Python's logging module allows you to share a logger globally without needing to pass the logger around as function arguments. All you need to do to get access to the instantiated logger is `logger = logging.getLogger('foo')`. Similarly in tensorflow you can do `tf.summary.scalar('foo', foo)` without needing access to the SummaryWriter.
Is this possible in tensorboardX? If not, are there any plans to make it a feature? | closed | 2019-02-09T15:41:46Z | 2020-02-22T03:04:46Z | https://github.com/lanpa/tensorboardX/issues/352 | [
"enhancement"
] | KeAWang | 0 |
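As far as I know tensorboardX itself does not ship a `getLogger`-style registry for the issue above, but the pattern is small enough to keep in your own project. A sketch — the `factory` argument stands in for `tensorboardX.SummaryWriter` so the example stays dependency-free:

```python
import threading

_writers = {}
_writers_lock = threading.Lock()

def get_writer(name="default", factory=dict):
    """Return a process-wide writer for `name`, creating it on first use.

    In real code: get_writer("runs/exp1", factory=lambda: SummaryWriter("runs/exp1"))
    """
    with _writers_lock:
        if name not in _writers:
            _writers[name] = factory()
        return _writers[name]
```

Any module can then call `get_writer("exp1").add_scalar(...)` without the writer being threaded through function arguments, mirroring `logging.getLogger('foo')`.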
taverntesting/tavern | pytest | 85 | Saved integer value is converted to a string when reading it | Saving the response like:
```yaml
owner:
**owner-id: !anyint**
owner-name: "test zyx"
email-distribution-list: ["datalakeemail@two.com"]
save:
body:
**owner-id: owner.owner-id**
```
Trying to read it in the subsequent test like:
```yaml
owner:
owner-id: "{owner-id:d}"
owner-name: "test zyx"
```
I am getting a difference between the expected and actual values:
Expected: owner-id is shown as a string instead of as an int/number.
If someone could help me out, that would be really great.
| closed | 2018-04-10T13:24:20Z | 2018-05-29T09:39:04Z | https://github.com/taverntesting/tavern/issues/85 | [] | raghavakora | 2 |
matplotlib/matplotlib | data-visualization | 29,681 | [ENH]: Add parameter 'error_linestyle' to plt.errorbar() | ### Problem
Currently, **plt.errorbar()** does not provide a direct way to change the linestyle of the error bars themselves. The _linestyle_ parameter only affects the main connecting line, and modifying the error bars requires accessing the returned object and manually calling e.g. _set_linestyle('--')_ on the error bar lines.
### Proposed solution
Introduce a new parameter, e.g., _error_linestyle_, that allows users to set the linestyle of the error bars directly. Example usage:
`plt.errorbar(x, y, yerr=yerr, fmt='o', error_linestyle='--')` | open | 2025-02-26T14:57:35Z | 2025-03-18T14:22:00Z | https://github.com/matplotlib/matplotlib/issues/29681 | [
"New feature"
] | MBeusch | 3 |
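Until such a parameter exists, the workaround hinted at in the issue above looks like this: `errorbar` returns an `ErrorbarContainer` of `(data_line, caplines, barlinecols)`, and the error bars themselves are the `LineCollection`s in the third element. A sketch:

```python
import matplotlib
matplotlib.use("Agg")                      # headless backend for scripts/CI
import matplotlib.pyplot as plt

x, y, yerr = [0, 1, 2], [1.0, 3.0, 2.0], [0.2, 0.4, 0.3]
container = plt.errorbar(x, y, yerr=yerr, fmt="o")
data_line, caplines, barlinecols = container
for collection in barlinecols:             # restyle only the error bars
    collection.set_linestyle("--")
```

An `error_linestyle` keyword would fold this loop into the `errorbar` call itself.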
ageitgey/face_recognition | python | 1,076 | High number_of_times_to_upsample makes this slow and laggy | * face_recognition version: latest
* Python version: 3.7
* Operating System: Jetson Nano / Ubuntu
### Description
I want to detect faces far away from the camera, so I changed number_of_times_to_upsample to 2, but it makes the app very slow and laggy.
How can I detect the smaller faces but still keep the app smooth?
### What I Did
```
face_locations = face_recognition.face_locations(rgb_small_frame, number_of_times_to_upsample=2)
```
| open | 2020-03-03T10:35:27Z | 2020-03-05T12:34:19Z | https://github.com/ageitgey/face_recognition/issues/1076 | [] | tongvantruong | 3 |
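Higher upsampling multiplies the detector's cost, so keeping the app above smooth usually means calling the detector less often or on smaller regions. One common mitigation — detect every n-th frame and reuse the last locations in between — sketched with a stub detector (swap in `face_recognition.face_locations(...)` in real code):

```python
def detect_every_n(frames, detect, n=5):
    """Run the expensive detector every n-th frame; reuse results in between."""
    last, results = [], []
    for i, frame in enumerate(frames):
        if i % n == 0:
            last = detect(frame)   # e.g. face_locations(frame, number_of_times_to_upsample=2)
        results.append(last)
    return results

calls = []
def fake_detector(frame):
    calls.append(frame)
    return [(0, 10, 10, 0)]        # one dummy face box

locations = detect_every_n(range(10), fake_detector, n=5)
```

Between detections, a lightweight tracker (or simply the cached boxes, as here) keeps the UI responsive while the expensive upsampled detection runs only occasionally.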
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,544 | How to only save the conversion results of AtoB when testing? | Thank you for such an excellent project, I want to output only the conversion results of AtoB when testing, not all the results, how can I do this? | closed | 2023-02-17T12:36:26Z | 2023-03-27T06:32:11Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1544 | [] | yuninn | 3 |
jina-ai/serve | machine-learning | 6,142 | Fix code scanning alert - Information exposure through an exception | <!-- Warning: The suggested title contains the alert rule name. This can expose security information. -->
Tracking issue for:
- [ ] https://github.com/jina-ai/jina/security/code-scanning/6
| closed | 2024-02-22T13:24:03Z | 2024-06-06T00:18:51Z | https://github.com/jina-ai/serve/issues/6142 | [
"Stale"
] | JoanFM | 1 |
darrenburns/posting | rest-api | 213 | The Directory parameter when creating a new request is a bit misleading | When creating a new request, there is a "Directory" parameter that is actually the collection path.
<div align="center">
<img src="https://github.com/user-attachments/assets/390acc4b-d6e9-413c-89f2-8470dd0f30e2" width="400px"/>
</div>
- If we use the default value "`.`", then the request is saved in the root of the collection window.
- If we specify a path, like "`./myapis/`", then the request will be saved in a sub-collection and shown as a tree node in the collection window.
The "Directory" parameter is first used to organize the collections shown in the UI, but it also maps to the on-disk storage path of the request data, which users may not realize.
As this is a UI app, users may care more about the UI element (the collection) than about the backend element (the storage path). I think using "Collection path" might be better than using "Directory". Or at least, adding a note about the collection path after "Directory" may reduce the confusion between collection and storage path. | closed | 2025-03-08T05:14:43Z | 2025-03-08T16:53:34Z | https://github.com/darrenburns/posting/issues/213 | [
"planned"
] | garylavayou | 2 |
iperov/DeepFaceLab | deep-learning | 5,363 | The parameter --force-model-name doesn't work in merging | So I edited Merger.py a little.
Before:
model = models.import_model(model_class_name)(is_training=False,
saved_models_path=saved_models_path,
force_gpu_idxs=force_gpu_idxs,
cpu_only=cpu_only)
After:
model = models.import_model(model_class_name)(is_training=False,
saved_models_path=saved_models_path,
force_gpu_idxs=force_gpu_idxs,
force_model_name=force_model_name,
cpu_only=cpu_only)
Now it works. | closed | 2021-07-15T09:59:26Z | 2021-07-17T18:03:31Z | https://github.com/iperov/DeepFaceLab/issues/5363 | [] | dsyrock | 1 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 174 | Running openai_api_server_vllm.py | ### Required checks before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding projects.
### Issue type
Model quantization and deployment
### Base model
Alpaca-2-13B
### Operating system
Linux
### Detailed description of the problem
The vllm API deployment fails to run, producing:
```
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
 and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/opt/conda/envs/llama_env/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:147: UserWarning: /opt/conda/envs/llama_env did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
/opt/conda/envs/llama_env/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:147: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0'), PosixPath('/usr/local/cuda/lib64/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /opt/conda/envs/llama_env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
[2023-08-23 16:08:37,118] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Traceback (most recent call last):
  File "/workspace/ypg/Chinese-LLaMA-Alpaca-2/scripts/openai_server_demo/openai_api_server_vllm.py", line 63, in <module>
    Conversation(
TypeError: Conversation.__init__() got an unexpected keyword argument 'system'
```
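For what it's worth, this `TypeError` usually means the installed conversation-template library (the demo script appears to take `Conversation` from fastchat — treat that as an assumption) renamed the `system` keyword in a newer release (recent fschat versions use `system_message`/`system_template`). A quick stdlib way to check which keywords the installed class actually accepts, shown here with a stand-in class rather than the real one:

```python
import inspect
from dataclasses import dataclass, field

# Stand-in for the real Conversation class; the actual field names depend
# on the installed library version and are an assumption here.
@dataclass
class Conversation:
    name: str
    system_message: str = ""  # newer releases renamed `system` to this
    roles: tuple = ("USER", "ASSISTANT")
    messages: list = field(default_factory=list)

def accepted_kwargs(cls):
    # Parameter names that cls.__init__ actually accepts.
    return set(inspect.signature(cls).parameters)

print(sorted(accepted_kwargs(Conversation)))

# Against the real class, pick the right keyword at runtime, e.g.:
# key = "system_message" if "system_message" in accepted_kwargs(Conversation) else "system"
# conv = Conversation(name="...", **{key: system_prompt})
```

Pinning fschat to the version the demo was written against is the other obvious fix.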
### Dependencies (required for code-related issues)
```
peft 0.3.0.dev0
sentence-transformers 2.2.2
torch 2.0.1
torchvision 0.15.2
transformers 4.31.0
```
### Run Logs or Screenshots
```
python openai_api_server_vllm.py --model /workspace/ypg/Chinese-LLaMA-Alpaca-2/llama2_model/chinese-alpaca-2-13b --tokenizer-mode slow --served-model-name chinese-llama-alpaca-2
```
Running this prints the same log and traceback shown above.
| closed | 2023-08-23T08:18:33Z | 2023-09-03T23:49:39Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/174 | [
"stale"
] | zzx528 | 3 |
Urinx/WeixinBot | api | 8 | todo | 不知道接下来要写哪些地方呢?建议把要改进的列个todo list,大家可以一起贡献代码
———— 20160614 ————
by sbilly: Using moderator rights to close this thread. Going forward, please open a separate issue for each request.
| closed | 2016-02-09T16:54:40Z | 2018-02-08T01:14:49Z | https://github.com/Urinx/WeixinBot/issues/8 | [] | BillBillBillBill | 85 |
youfou/wxpy | api | 116 | Bug |
```
friends = bot.friends()
user = friends[1]
type(user)
# wxpy.api.chats.friend.Friend
user.pin()  # or user.unpin()
```
`remark_name` should be changed to `nick_name` | open | 2017-07-14T09:18:01Z | 2017-07-21T07:14:49Z | https://github.com/youfou/wxpy/issues/116 | [] | kalivim | 1 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 1,379 | Test failures due to unclosed database on Python 3.13.0 |
Various tests are failing on Python 3.13.0, with the following pattern:
```pytb
_____________________________ test_paginate[Model] _____________________________
cls = <class '_pytest.runner.CallInfo'>
func = <function call_and_report.<locals>.<lambda> at 0x7ffff425ed40>
when = 'call'
reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)
@classmethod
def from_call(
cls,
func: Callable[[], TResult],
when: Literal["collect", "setup", "call", "teardown"],
reraise: type[BaseException] | tuple[type[BaseException], ...] | None = None,
) -> CallInfo[TResult]:
"""Call func, wrapping the result in a CallInfo.
:param func:
The function to call. Called without arguments.
:type func: Callable[[], _pytest.runner.TResult]
:param when:
The phase in which the function is called.
:param reraise:
Exception or exceptions that shall propagate if raised by the
function, instead of being wrapped in the CallInfo.
"""
excinfo = None
start = timing.time()
precise_start = timing.perf_counter()
try:
> result: TResult | None = func()
/nix/store/6wq270gc19f8p07jy7892r05avgwb3xz-python3.13-pytest-8.3.3/lib/python3.13/site-packages/_pytest/runner.py:341:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/nix/store/6wq270gc19f8p07jy7892r05avgwb3xz-python3.13-pytest-8.3.3/lib/python3.13/site-packages/_pytest/runner.py:242: in <lambda>
lambda: runtest_hook(item=item, **kwds), when=when, reraise=reraise
/nix/store/w91wlq864lgwj9r938gmf8czk5cwlqjy-python3.13-pluggy-1.5.0/lib/python3.13/site-packages/pluggy/_hooks.py:513: in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
/nix/store/w91wlq864lgwj9r938gmf8czk5cwlqjy-python3.13-pluggy-1.5.0/lib/python3.13/site-packages/pluggy/_manager.py:120: in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
/nix/store/6wq270gc19f8p07jy7892r05avgwb3xz-python3.13-pytest-8.3.3/lib/python3.13/site-packages/_pytest/threadexception.py:92: in pytest_runtest_call
yield from thread_exception_runtest_hook()
/nix/store/6wq270gc19f8p07jy7892r05avgwb3xz-python3.13-pytest-8.3.3/lib/python3.13/site-packages/_pytest/threadexception.py:68: in thread_exception_runtest_hook
yield
/nix/store/6wq270gc19f8p07jy7892r05avgwb3xz-python3.13-pytest-8.3.3/lib/python3.13/site-packages/_pytest/unraisableexception.py:95: in pytest_runtest_call
yield from unraisable_exception_runtest_hook()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def unraisable_exception_runtest_hook() -> Generator[None]:
with catch_unraisable_exception() as cm:
try:
yield
finally:
if cm.unraisable:
if cm.unraisable.err_msg is not None:
err_msg = cm.unraisable.err_msg
else:
err_msg = "Exception ignored in"
msg = f"{err_msg}: {cm.unraisable.object!r}\n\n"
msg += "".join(
traceback.format_exception(
cm.unraisable.exc_type,
cm.unraisable.exc_value,
cm.unraisable.exc_traceback,
)
)
> warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
E pytest.PytestUnraisableExceptionWarning: Exception ignored in: <sqlite3.Connection object at 0x7ffff3fec7c0>
E
E Traceback (most recent call last):
E File "/nix/store/9q6cs27gcx2h27brmg7nb8xhbzj0zrnm-python3.13-sqlalchemy-2.0.36/lib/python3.13/site-packages/sqlalchemy/event/base.py", line 148, in __init__
E self._empty_listeners = self._empty_listener_reg[instance_cls]
E ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
E File "/nix/store/0b83hlniyfbpha92k2j0w93mxdalv8kb-python3-3.13.0/lib/python3.13/weakref.py", line 415, in __getitem__
E return self.data[ref(key)]
E ~~~~~~~~~^^^^^^^^^^
E KeyError: <weakref at 0x7ffff3e05ad0; to 'type' at 0x1429b20 (Session)>
E
E During handling of the above exception, another exception occurred:
E
E Traceback (most recent call last):
E File "/nix/store/0b83hlniyfbpha92k2j0w93mxdalv8kb-python3-3.13.0/lib/python3.13/weakref.py", line 428, in __setitem__
E self.data[ref(key, self._remove)] = value
E ~~~^^^^^^^^^^^^^^^^^^^
E ResourceWarning: unclosed database in <sqlite3.Connection object at 0x7ffff3fec7c0>
```
Run the testsuite with Python 3.13.0.
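For context: CPython 3.13's `sqlite3` module started emitting a `ResourceWarning` when a connection is finalized by the garbage collector without an explicit `close()`, and pytest's unraisable-exception hook turns that into the failures above. A stdlib-only sketch of the pattern, independent of Flask-SQLAlchemy (whether the fixtures should dispose the engine in teardown is the open question here):

```python
import gc
import sqlite3
import warnings

def leaky():
    # Never closed: from Python 3.13 on, finalizing this connection in the
    # garbage collector emits a ResourceWarning.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (x INTEGER)")

def tidy():
    # Explicit close (engine.dispose() is the SQLAlchemy analogue) keeps
    # the finalizer quiet on every Python version.
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("CREATE TABLE t (x INTEGER)")
    finally:
        conn.close()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ResourceWarning)
    leaky()
    gc.collect()

print([w.category.__name__ for w in caught])  # non-empty on 3.13+
```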
https://gist.github.com/mweinelt/843a1c04973a0ba82c01d09f26167d6f
Environment:
- Python version: 3.13.0
- Pytest: version 8.3.3
- Flask-SQLAlchemy version: 3.1.1
- SQLAlchemy version: 2.0.36
| open | 2024-11-15T03:01:50Z | 2024-11-15T03:01:57Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1379 | [] | mweinelt | 0 |
aminalaee/sqladmin | asyncio | 788 | Specify particular column to be searched. | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I can specify the columns to search against using `column_searchable_list`. But as far as I know, the search always runs against all of those columns, which is often unnecessarily costly.
### Describe the solution you would like.
I would like to be able to pass search terms as keywords, e.g. `field1 = 'abc'`, so that it is the only filter executed at the database level, rather than `field1 = 'abc', field2 = 'abc'`.
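A sketch of the term syntax being proposed (purely illustrative — none of this is existing sqladmin API): parse a search-box entry like `field1 = 'abc'` into a single column/value pair, falling back to the current search-all behaviour otherwise:

```python
import re

# Hypothetical parser for search-box entries like "field1 = 'abc'".
TERM = re.compile(r"^\s*(\w+)\s*=\s*'([^']*)'\s*$")

def parse_term(term):
    m = TERM.match(term)
    return (m.group(1), m.group(2)) if m else None

print(parse_term("field1 = 'abc'"))  # matches -> filter only field1
print(parse_term("abc"))             # no match -> every searchable column
```

On a match, the backend could emit a single equality (or `ilike`) clause for that one column instead of OR-ing one per searchable column.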
### Describe alternatives you considered
Even better, the filters could differentiate by column type and pick a more optimal operator than `ilike` for columns like integers or primary keys, leveraging their indices. Maybe even `between` operators for numbers and timestamps?
### Additional context
_No response_ | open | 2024-07-08T12:00:18Z | 2024-07-08T14:39:48Z | https://github.com/aminalaee/sqladmin/issues/788 | [] | JakNowy | 1 |
pyro-ppl/numpyro | numpy | 1,278 | Can't sample posterior predictive if model uses no covariates | Hi. I am trying to simulate data using the stochastic volatility model [described](https://num.pyro.ai/en/stable/examples/stochastic_volatility.html) in the docs. However, when I try to get posterior predictive samples, it just returns the observed values. For clarity, here's the model function:
```
def model(returns):
step_size = numpyro.sample("sigma", dist.Exponential(50.0))
s = numpyro.sample(
"s", dist.GaussianRandomWalk(scale=step_size, num_steps=jnp.shape(returns)[0])
)
nu = numpyro.sample("nu", dist.Exponential(0.1))
return numpyro.sample(
"r", dist.StudentT(df=nu, loc=0.0, scale=jnp.exp(s)), obs=returns
)
```
So, when I try to predict after fitting the model:
```
...
predictive = Predictive(model=model, posterior_samples=mcmc.get_samples())
samples_predictive = predictive(random.PRNGKey(42), returns)
```
I just get my observed `returns` back.
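This is expected given how observed sites work: once `obs=returns` is passed, that sample site is conditioned, and `Predictive` hands the observations straight back. A toy pure-Python sketch of the mechanism (not numpyro itself; names are illustrative):

```python
import random

def sample(name, fn, obs=None):
    # Mirrors the relevant part of numpyro.sample's contract:
    # an observed site is conditioned, so it just returns `obs`.
    return obs if obs is not None else fn()

def model(returns=None, num_steps=None):
    if returns is not None:
        num_steps = len(returns)
    scale = sample("sigma", lambda: random.expovariate(50.0))
    # With obs=returns this site can never generate new draws:
    return sample("r", lambda: [random.gauss(0, scale) for _ in range(num_steps)], obs=returns)

print(model(returns=[0.1, -0.2]))  # the observations come straight back
print(len(model(num_steps=3)))     # forward-samples 3 fresh values
```

The usual fix is the analogous change in the real model — make `returns` optional, take `num_steps` as a separate argument, and call `predictive(rng_key, returns=None, num_steps=len(returns))` — though treat that as a sketch to adapt, not the thread's confirmed resolution.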
Am I doing something wrong? Is there a way to actually forward sample here? Thank you! | closed | 2022-01-09T04:48:55Z | 2022-01-10T09:13:03Z | https://github.com/pyro-ppl/numpyro/issues/1278 | [
"question"
] | sokol11 | 2 |
darrenburns/posting | rest-api | 109 | Feature request: ability to set which `Request` tab gets focus by default | It would be a handy feature if there was a method of selecting which tab in the `Request` box gets focus by default. Currently `Headers` gets focus all the time; personally I almost never want to modify the headers (especially if loading from a collection) but almost always want to be modifying the `Body`.
The ability to override per-endpoint within the collections would be ideal. | closed | 2024-09-18T13:50:47Z | 2024-11-18T17:23:28Z | https://github.com/darrenburns/posting/issues/109 | [] | davep | 0 |
ydataai/ydata-profiling | pandas | 1,529 | pandas.Series.to_dict() got an unexpected keyword argument 'orient' | ### Current Behaviour
The `ProfileReport._render_json` method calls `to_dict()` with the `orient` keyword — which only `pd.DataFrame` supports — on a `pd.Series` as well, raising a `TypeError`:
https://github.com/ydataai/ydata-profiling/blob/cdfc17ac7c01a66a2f3bbf6641112149b1d83d90/src/ydata_profiling/profile_report.py#L453
https://pandas.pydata.org/docs/reference/api/pandas.Series.to_dict.html
```
437 return {encode_it(v) for v in o}
438 elif isinstance(o, (pd.DataFrame, pd.Series)):
--> 439 return encode_it(o.to_dict(orient="records"))
440 elif isinstance(o, np.ndarray):
441 return encode_it(o.tolist())
TypeError: to_dict() got an unexpected keyword argument 'orient'
```
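The straightforward fix — sketched here from the traceback, not taken from the project's actual patch — is to branch on the type and only pass `orient` for DataFrames, since `Series.to_dict()` takes no such keyword:

```python
import pandas as pd

def encode_it(o):
    # Only DataFrame.to_dict accepts `orient`; Series.to_dict does not.
    if isinstance(o, pd.DataFrame):
        return o.to_dict(orient="records")
    if isinstance(o, pd.Series):
        return o.to_dict()
    return o

print(encode_it(pd.Series([1, 2], index=["a", "b"])))  # dict keyed by index
print(encode_it(pd.DataFrame({"x": [1, 2]})))          # list of row dicts
```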
### Expected Behaviour
A JSON representation of the comparison report.
### Data Description
**previous_dataset**
```python
previous_dataset = pd.DataFrame(data=[(1000, 42), (900, 30), (1500, 40), (1800, 38)], columns=["rent_per_month", "total_area"])
```
**current_dataset**
```python
current_dataset = pd.DataFrame(data=[(5000, 350), (9000, 600), (5000, 400), (3500, 500), (6000, 600)], columns=["rent_per_month", "total_area"])
```
### Code that reproduces the bug
```Python
import pandas as pd
from ydata_profiling import ProfileReport
previous_dataset = pd.DataFrame(data=[(1000, 42), (900, 30), (1500, 40), (1800, 38)], columns=["rent_per_month", "total_area"])
current_dataset = pd.DataFrame(data=[(5000, 350), (9000, 600), (5000, 400), (3500, 500), (6000, 600)], columns=["rent_per_month", "total_area"])
previous_dataset_report = ProfileReport(
previous_dataset, title="Previous dataset report"
)
current_dataset_report = ProfileReport(
current_dataset, title="Current dataset report"
)
comparison_report = previous_dataset_report.compare(current_dataset_report)
comparison_report.to_json()
```
### pandas-profiling version
v4.5.1
### Dependencies
```Text
aiobotocore==1.4.2
aiohttp==3.9.1
aioitertools==0.11.0
aiosignal==1.3.1
appdirs==1.4.4
argon2-cffi==20.1.0
async-generator==1.10
async-timeout==4.0.3
attrs==20.3.0
awscli==1.32.26
backcall==0.2.0
bidict==0.21.4
bleach==3.3.0
boto3==1.17.106
botocore==1.20.106
butterfree==1.2.3
cassandra-driver==3.24.0
certifi==2020.12.5
cffi==1.14.5
chardet==4.0.0
charset-normalizer==2.0.12
click==7.1.2
cmake==3.27.2
colorama==0.4.4
cycler==0.10.0
Cython==0.29.23
dacite==1.8.1
dbus-python==1.2.16
decorator==5.0.6
defusedxml==0.7.1
distlib==0.3.4
distro==1.4.0
distro-info==0.23+ubuntu1.1
docutils==0.16
entrypoints==0.3
facets-overview==1.0.0
filelock==3.6.0
frozenlist==1.4.1
fsspec==2021.8.1
geomet==0.2.1.post1
h3==3.7.6
hierarchical-conf==1.0.2
htmlmin==0.1.12
idna==2.10
ImageHash==4.3.1
ipykernel==5.3.4
ipython==7.22.0
ipython-genutils==0.2.0
ipywidgets==7.6.3
jedi==0.17.2
Jinja2==2.11.3
jmespath==0.10.0
joblib==1.0.1
jsonschema==3.2.0
jupyter-client==6.1.12
jupyter-core==4.7.1
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
kiwisolver==1.3.1
koalas==1.8.2
MarkupSafe==2.0.1
matplotlib==3.4.2
mdutils==1.6.0
mistune==0.8.4
multidict==6.0.4
multimethod==1.10
nbclient==0.5.3
nbconvert==6.0.7
nbformat==5.1.3
nest-asyncio==1.5.1
networkx==3.1
notebook==6.3.0
numpy==1.22.4
packaging==23.2
pandas==1.3.5
pandocfilters==1.4.3
parameters-validation==1.2.0
parso==0.7.0
patsy==0.5.6
pexpect==4.8.0
phik==0.12.4
pickleshare==0.7.5
Pillow==8.2.0
pip-resolved==0.3.0
plotly==5.5.0
prometheus-client==0.10.1
prompt-toolkit==3.0.17
protobuf==3.17.2
psycopg2==2.8.5
ptyprocess==0.7.0
py4j==0.10.9
pyarrow==13.0.0
pyarrow-hotfix==0.5
pyasn1==0.5.1
pycparser==2.20
pydantic==1.9.2
pydeequ==0.1.8
Pygments==2.8.1
PyGObject==3.36.0
pyparsing==2.4.7
pyrsistent==0.17.3
pyspark==3.0.2
python-apt==2.0.1+ubuntu0.20.4.1
python-dateutil==2.8.1
python-engineio==4.3.0
python-socketio==5.4.1
pytz==2023.3
PyWavelets==1.4.1
PyYAML==5.4.1
pyzmq==20.0.0
requests==2.26.0
requests-unixsocket==0.2.0
rsa==4.7.2
s3fs==2021.8.1
s3transfer==0.4.2
scikit-learn==0.24.1
scipy==1.10.1
seaborn==0.11.1
Send2Trash==1.5.0
six==1.15.0
ssh-import-id==5.10
statsmodels==0.14.1
tangled-up-in-unicode==0.2.0
tenacity==8.0.1
terminado==0.9.4
testpath==0.4.4
threadpoolctl==2.1.0
tornado==6.1
tqdm==4.66.1
traitlets==5.0.5
typeguard==2.13.3
typer==0.3.2
typing-extensions==4.0.1
unattended-upgrades==0.1
urllib3==1.26.16
virtualenv==20.4.1
visions==0.7.5
wcwidth==0.2.5
webencodings==0.5.1
widgetsnbextension==3.5.1
wordcloud==1.9.2
wrapt==1.16.0
yamale==4.0.2
yarl==1.9.4
ydata-profiling==4.5.1
```
### OS
Ubuntu 20.04.4 LTS
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2024-01-24T19:03:43Z | 2024-02-09T11:41:34Z | https://github.com/ydataai/ydata-profiling/issues/1529 | [
"bug 🐛"
] | michellyrds | 2 |
plotly/dash | data-science | 2,274 | Wrap text in a dash editable table | After revisitting the site for dash editable datatable:
https://dash.plotly.com/datatable/editable
it does not look possible to have multiline cells (wrap text in a cell when the text does not fit into the cell width).
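For what it's worth, wrapping appears achievable today through the table's CSS-style props — these are the documented `DataTable` style keys, but verify them against your Dash version:

```python
# Style dict for cell wrapping (documented DataTable style keys; the
# surrounding component call below is only sketched, not executed here).
wrap_style = {
    "whiteSpace": "normal",  # let the browser wrap long cell text
    "height": "auto",        # let rows grow to fit the wrapped lines
}

# It would be passed to the component roughly as:
# dash_table.DataTable(..., style_data=wrap_style)
print(wrap_style)
```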
I would argue that allowing multiline text in cells would make the Plotly Dash editable table a nice substitute for Excel. Excel is often used for working with text, which only works if multiline is allowed. | closed | 2022-10-16T22:46:06Z | 2024-07-24T15:06:06Z | https://github.com/plotly/dash/issues/2274 | [] | joseberlines | 1 |
litestar-org/litestar | api | 3,999 | Bug: failure to reflect constraints in autogenerated OpenAPI spec | ### Description
Adding a `gt` constraint to an `int` with either `msgspec.Meta` or `litestar.params.Body()` fails to have that greater-than constraint reflected in the autogenerated OpenAPI spec.
Here is the MRE:
```python
from typing import Annotated
import msgspec
from litestar import get, Request, Litestar
from litestar.params import Body
class Resp(msgspec.Struct):
foo: Annotated[int, msgspec.Meta(gt=0), Body(gt=0)]
Resp = Annotated[Resp, Body(description="A response object.")]
@get("/")
async def home() -> Resp:
return Resp(1)
@get(path=["/openapi.json"], include_in_schema=False, sync_to_thread=True)
def get_openapi(request: Request) -> dict:
schema = request.app.openapi_schema
return schema.to_schema()
app = Litestar(
[home, get_openapi],
)
if __name__ == "__main__":
import uvicorn
try:
uvicorn.run(
app = "_mre:app",
host = "localhost",
port = 2020,
reload = True,
)
except Exception:
uvicorn.run(
app = app,
host = "localhost",
port = 2020,
)
```
Here is the autogenerated OpenAPI spec:
```json
{
"info": {
"title": "Litestar API",
"version": "1.0.0"
},
"openapi": "3.1.0",
"servers": [
{
"url": "/"
}
],
"paths": {
"/": {
"get": {
"summary": "Home",
"operationId": "Home",
"responses": {
"200": {
"description": "Request fulfilled, document follows",
"headers": {
},
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Resp"
}
}
}
}
},
"deprecated": false
}
}
},
"components": {
"schemas": {
"Resp": {
"properties": {
"foo": {
"type": "integer"
}
},
"type": "object",
"required": [
"foo"
],
"title": "Resp",
"description": "A response object."
}
}
}
}
```
It only works if you specify float instead of int.
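For comparison, the constrained integer field would be expected to come out with JSON Schema's `exclusiveMinimum` keyword — a sketch of the expected fragment, not actual output:

```json
"foo": {
    "type": "integer",
    "exclusiveMinimum": 0
}
```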
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.13.0 final
### Platform
- [x] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2025-02-14T08:04:10Z | 2025-02-14T09:40:11Z | https://github.com/litestar-org/litestar/issues/3999 | [
"Bug :bug:"
] | umarbutler | 5 |