| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ets-labs/python-dependency-injector | flask | 285 | Usage of DelegatedFactory seems incorrect in an example | I'm reading the docs, and it seems to me that one of the examples is incorrect:
https://github.com/ets-labs/python-dependency-injector/blob/2b30e172d13337db8b88ffb9c167eedf283a2840/examples/providers/factory_delegation.py#L29
Indeed, it seems to me that this line should simply read, as in the other examples in the same file:
```python
users_factory = providers.Factory(
# ^ no need to use a Delegated Factory here
```
Is my understanding correct?
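For readers unfamiliar with the distinction in question, here is a minimal stdlib-only sketch (the `Factory`/`Delegated` classes below are toy stand-ins, not the `dependency_injector` API): a plain factory dependency is called to inject a built instance, while a delegated factory injects the factory object itself — which is only needed when the consumer wants to build instances on demand.

```python
class Delegated:
    """Marker wrapper: inject the wrapped factory itself, not an instance."""
    def __init__(self, factory):
        self.factory = factory


class Factory:
    """Toy provider: builds cls, resolving factory-typed dependencies first."""
    def __init__(self, cls, **deps):
        self.cls = cls
        self.deps = deps

    def __call__(self):
        resolved = {}
        for name, dep in self.deps.items():
            if isinstance(dep, Delegated):
                resolved[name] = dep.factory   # pass the factory through
            elif isinstance(dep, Factory):
                resolved[name] = dep()         # build an instance
            else:
                resolved[name] = dep
        return self.cls(**resolved)


class Photo:
    pass


class User:
    def __init__(self, photos_factory):
        self.photos_factory = photos_factory


photos = Factory(Photo)
# Delegation matters only because User wants the factory, not a Photo:
users = Factory(User, photos_factory=Delegated(photos))
user = users()
print(user.photos_factory is photos)             # True
print(isinstance(user.photos_factory(), Photo))  # True
```

If the dependent object only needed ready-made instances, a plain nested `Factory` would do — which is the point the reporter is making about the linked example.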
| closed | 2020-08-26T15:23:24Z | 2020-08-27T08:29:52Z | https://github.com/ets-labs/python-dependency-injector/issues/285 | [
"bug"
] | ojob | 3 |
python-gino/gino | asyncio | 564 | How to chain e.g. on_conflict_do_nothing() to Model.create()? | * GINO version: 0.8.3
* Python version: 3.7.3
* asyncpg version: 0.18.3
* aiocontextvars version: 0.2.2
* PostgreSQL version: 11.5
### Description
First of all, Gino seems awesome. Thanks for your hard work :)
I'm trying to understand how I can use things like `on_conflict_do_nothing()` (https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#insert-on-conflict-upsert)
when using the Gino ORM. Full example below, but basically this is what I'm trying to achieve:
```python
await User.create(id='1', name='jack', fullname='Jack Jones') # succeeds
await User.create(id='1', name='jack', fullname='Jack Jones').on_conflict_do_nothing() # raises exception
```
How should things like these be approached when using Gino?
Thank you for your time!
### What I Did
```python
from gino import Gino

db = Gino()


class User(db.Model):
    __tablename__ = 'users'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String)
    fullname = db.Column(db.String)


async def main():
    async with db.with_bind('postgresql://localhost/gino'):
        await db.gino.create_all()
        await User.create(id='1', name='jack', fullname='Jack Jones')
        # how to do something similar to this:
        await User.create(id='1', name='jack', fullname='Jack Jones').on_conflict_do_nothing()
```
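One way to see why the chained call raises: `Model.create(...)` is an `async` shortcut, so the expression returns a coroutine object — there is no insert statement left to hang `on_conflict_do_nothing()` on. A stdlib-only illustration (the `Model` class here is a toy, not GINO's):

```python
import asyncio


class Model:
    @classmethod
    async def create(cls, **values):
        # GINO-style shortcut: builds *and* executes the INSERT internally.
        return values


coro = Model.create(id='1', name='jack')
# A coroutine has no query-builder methods, hence the AttributeError:
print(hasattr(coro, "on_conflict_do_nothing"))  # False
print(asyncio.run(coro))  # {'id': '1', 'name': 'jack'}
```

Conflict clauses belong on the insert statement itself, before execution — in SQLAlchemy's PostgreSQL dialect that is `insert(...).on_conflict_do_nothing()` — so the ORM-level shortcut has to be bypassed for upserts; the exact GINO spelling is what this issue is asking about.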
| closed | 2019-10-08T11:03:14Z | 2019-10-10T08:06:40Z | https://github.com/python-gino/gino/issues/564 | [
"question"
] | mikmatko | 2 |
lepture/authlib | django | 612 | Drop `starlette.config.Config` from the Starlette integration | Hi, maintainer of Starlette here. 👋
The `OAuth` class receives a `starlette.config.Config`, and forces users to instantiate it only for `authlib`. Note that the `config` module is a very small part of Starlette.
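For context, Starlette's `Config` is a small callable with the convention `config(key, cast=..., default=...)`, so any duck-typed object with that call signature could in principle serve instead. The class below is a hypothetical stand-in for illustration, not part of authlib or Starlette:

```python
class DictConfig:
    """Duck-typed stand-in for starlette.config.Config, backed by a dict."""
    def __init__(self, values=None):
        self._values = dict(values or {})

    def __call__(self, key, cast=None, default=None):
        if key not in self._values:
            return default
        value = self._values[key]
        return cast(value) if cast is not None else value


config = DictConfig({"GOOGLE_CLIENT_ID": "abc", "TIMEOUT": "30"})
print(config("GOOGLE_CLIENT_ID"))        # abc
print(config("TIMEOUT", cast=int))       # 30
print(config("MISSING", default="n/a"))  # n/a
```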
Would it be possible for `authlib` to not use this `starlette` structure? | closed | 2023-12-24T09:29:03Z | 2024-12-18T13:57:19Z | https://github.com/lepture/authlib/issues/612 | [] | Kludex | 2 |
microsoft/unilm | nlp | 1,092 | Pretraining details for the BEiT-3 model? | Hi, thank you for sharing this great work.
I have recently become interested in BEiT-3 and have several questions.
1. As far as I know, you shared a different model from what you used in your paper.
Do you have any plans to share the model you used in the paper?
2. In the paper, the pretraining task, mask-then-predict, regards the image as a foreign language.
This is quite ambiguous to me, so can you explain this pretraining task in detail?
(Is the same object masked or random tokens masked? Are both image tokens and language tokens predicted, or just language tokens?)
And did you use the same pretraining task for the new BEiT-3? (I would be grateful if you plan to share the pretraining code of the new model.)
3. Again in your paper, the image data is tokenized by the tokenizer of BEiT-v2.
From the title of the BEiT-v2 paper, I understood that BEiT-v2 itself was defined as a tokenizer.
But when looking closely at BEiT-v2, I found that a separate tokenizer encoder exists.
In the image tokenization of BEiT-3, is the tokenizer encoder of BEiT-v2 used, or did you use the full BEiT-v2 model?
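For what it's worth, the usual reading of "image as a foreign language" is that image patches are tokenized and then corrupted with the same masked-data-modeling recipe as text, with masked positions of both modalities predicted. The sketch below is a toy of that corruption step only (the ratio, mask id, and joint treatment are assumptions for illustration, not BEiT-3's published recipe):

```python
import random


def mask_tokens(tokens, mask_ratio=0.4, mask_id="[MASK]", seed=0):
    """Randomly replace positions with a mask id; return inputs and targets."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_ratio:
            inputs.append(mask_id)
            targets.append(tok)   # the model must reconstruct this position
        else:
            inputs.append(tok)
            targets.append(None)  # position not scored
    return inputs, targets


# Image patch tokens and text tokens in one sequence, masked uniformly:
sequence = ["img_7", "img_42", "img_3", "a", "dog", "runs"]
inputs, targets = mask_tokens(sequence)
print(inputs)
print(targets)
```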
You might think these are basic questions, but I'd be very grateful if you could answer them.
Thank you again for the great work! | open | 2023-05-10T01:39:35Z | 2023-05-24T03:22:26Z | https://github.com/microsoft/unilm/issues/1092 | [] | YeonjeeJung | 0 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,606 | ERROR Calling AnimateDiff | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I do not see the AnimateDiff UI in my WebUI.
### Steps to reproduce the problem
1. Download the AnimateDiff extension.
2. Open the WebUI.
3. The AnimateDiff UI does not show.
### What should have happened?
The WebUI should show the AnimateDiff extension.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
[sysinfo-2024-04-23-00-08.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15069607/sysinfo-2024-04-23-00-08.json)
### Console logs
```Shell
Already up to date.
venv "Y:\Stable Diffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
loading WD14-tagger reqs from Y:\Stable Diffusion\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\requirements.txt
Checking WD14-tagger requirements.
Launching Web UI with arguments: --xformers --medvram
2024-04-23 07:55:09.568810: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-23 07:55:10.134346: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
[-] ADetailer initialized. version: 24.4.2, num models: 10
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
dirname: Y:\Stable Diffusion\stable-diffusion-webui\localizations
localizations: {'zh-Hans (Stable) [vladmandic]': 'Y:\\Stable Diffusion\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-zh_Hans\\localizations\\zh-Hans (Stable) [vladmandic].json', 'zh-Hans (Stable)': 'Y:\\Stable Diffusion\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-zh_Hans\\localizations\\zh-Hans (Stable).json', 'zh-Hans (Testing) [vladmandic]': 'Y:\\Stable Diffusion\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-zh_Hans\\localizations\\zh-Hans (Testing) [vladmandic].json', 'zh-Hans (Testing)': 'Y:\\Stable Diffusion\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-zh_Hans\\localizations\\zh-Hans (Testing).json'}
ControlNet preprocessor location: Y:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-04-23 07:55:17,533 - ControlNet - INFO - ControlNet v1.1.445
2024-04-23 07:55:17,650 - ControlNet - INFO - ControlNet v1.1.445
sd-webui-prompt-all-in-one background API service started successfully.
== WD14 tagger /gpu:0, uname_result(system='Windows', node='DESKTOP-K0IHA7P', release='10', version='10.0.22631', machine='AMD64') ==
Loading weights [a074b8864e] from Y:\Stable Diffusion\stable-diffusion-webui\models\Stable-diffusion\counterfeitV30_25sd1.5.safetensors
[LyCORIS]-WARNING: LyCORIS legacy extension is now loaded, if you don't expext to see this message, please disable this extension.
Creating model from config: Y:\Stable Diffusion\stable-diffusion-webui\configs\v1-inference.yaml
*** Error calling: Y:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py/ui
Traceback (most recent call last):
File "Y:\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 528, in wrap_call
return func(*args, **kwargs)
File "Y:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 43, in ui
from scripts.animatediff_mm import mm_animatediff as motion_module
ModuleNotFoundError: No module named 'scripts.animatediff_mm'
---
*** Error calling: Y:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py/ui
Traceback (most recent call last):
File "Y:\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 528, in wrap_call
return func(*args, **kwargs)
File "Y:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 43, in ui
from scripts.animatediff_mm import mm_animatediff as motion_module
ModuleNotFoundError: No module named 'scripts.animatediff_mm'
```
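The `ModuleNotFoundError: No module named 'scripts.animatediff_mm'` in the log usually means the extension's files are incomplete — e.g. a partial or interrupted clone. A small stdlib helper (hypothetical, not part of the WebUI) to verify the expected files exist before restarting:

```python
import tempfile
from pathlib import Path


def missing_extension_files(ext_dir,
                            required=("scripts/animatediff.py",
                                      "scripts/animatediff_mm.py")):
    """Return the required extension files that are absent under ext_dir."""
    root = Path(ext_dir)
    return [rel for rel in required if not (root / rel).is_file()]


# Demo against a throwaway folder that mimics a partial clone:
ext = Path(tempfile.mkdtemp())
(ext / "scripts").mkdir()
(ext / "scripts" / "animatediff.py").write_text("# present\n")
print(missing_extension_files(ext))  # ['scripts/animatediff_mm.py']
```

Running the check against `extensions/sd-webui-animatediff` and re-cloning the extension if anything is reported missing would be the first thing to try.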
### Additional information
_No response_ | open | 2024-04-23T00:09:27Z | 2024-04-23T22:02:32Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15606 | [
"bug-report"
] | yinquest11 | 10 |
modelscope/modelscope | nlp | 785 | KeyError: 'asr-inference is not in the pipelines registry group auto-speech-recognition. Please make sure the correct version of ModelScope library is used.' |
```python
inference_16k_pipline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='iic/speech_UniASR_asr_2pass-cantonese-CHS-16k-common-vocab1468-tensorflow1-offline'
)
# waveform, sample_rate = soundfile.read("/data/linry/FunASR/asr_example_粤语.wav")
rec_result = inference_16k_pipline(audio_in="/data/linry/FunASR/asr_example_粤语.wav")
print(rec_result)
```
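The registry `KeyError` reported in this issue typically means the pipeline implementation was never registered — for this ASR pipeline, that usually comes down to the `funasr` package being missing or version-mismatched with ModelScope. A quick stdlib pre-flight check (the helper name and module list are illustrative assumptions):

```python
import importlib.util


def missing_modules(names=("modelscope", "funasr")):
    """Return the modules from `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]


missing = missing_modules()
if missing:
    print(f"install or fix these before building the pipeline: {missing}")
```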
```
2024-02-26 18:49:21,770 - modelscope - WARNING - ('PIPELINES', 'auto-speech-recognition', 'asr-inference') not found in ast index file
Traceback (most recent call last):
  File "/data/linry/FunASR/1-my/UniASR.py", line 24, in <module>
    offline()
  File "/data/linry/FunASR/1-my/UniASR.py", line 6, in offline
    inference_16k_pipline = pipeline(
  File "/data/linry/anaconda3/envs/funasr/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 170, in pipeline
    return build_pipeline(cfg, task_name=task)
  File "/data/linry/anaconda3/envs/funasr/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 65, in build_pipeline
    return build_from_cfg(
  File "/data/linry/anaconda3/envs/funasr/lib/python3.10/site-packages/modelscope/utils/registry.py", line 198, in build_from_cfg
    raise KeyError(
KeyError: 'asr-inference is not in the pipelines registry group auto-speech-recognition. Please make sure the correct version of ModelScope library is used.'
```
| closed | 2024-02-26T10:50:18Z | 2024-05-21T01:49:11Z | https://github.com/modelscope/modelscope/issues/785 | [
"Stale"
] | LRY1994 | 3 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 843 | Matrix object detection | I would like to use Faster R-CNN to recognize target regions within a matrix. Can this be achieved by modifying Faster R-CNN's input and channels?
| open | 2024-11-24T14:31:21Z | 2024-11-24T14:31:21Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/843 | [] | dahaigui | 0 |
fbdesignpro/sweetviz | data-visualization | 112 | Many warnings appeared while generating report! | 
| closed | 2022-03-09T10:15:21Z | 2022-06-14T21:53:28Z | https://github.com/fbdesignpro/sweetviz/issues/112 | [] | mingjun1120 | 3 |
sigmavirus24/github3.py | rest-api | 796 | initializing a repository object fails on github enterprise | got this error with my github enterprise on the latest release of github3.py:
```
00:08:06.853 Traceback (most recent call last):
00:08:06.853 File "/home/jenkins/workspace/***/temp/lib64/python3.6/site-packages/github3/models.py", line 48, in __init__
00:08:06.853 self._update_attributes(json)
00:08:06.853 File "/home/jenkins/workspace/***/temp/lib64/python3.6/site-packages/github3/repos/repo.py", line 2450, in _update_attributes
00:08:06.853 self.original_license = repo['license']
00:08:06.853 KeyError: 'license'
```
It looks like github3.py assumes all repos have license data, but this is not the case for GitHub Enterprise.
Thanks for this great library :) | closed | 2018-03-16T20:59:31Z | 2018-03-16T23:58:26Z | https://github.com/sigmavirus24/github3.py/issues/796 | [] | guykisel | 2 |
benbusby/whoogle-search | flask | 376 | [FEATURE] Anti-support for JavaScript |
It would be nice to see anti-support for JavaScript. As we all know, JavaScript is the devil and should always be avoided at all costs.
Thank you! | closed | 2021-07-13T04:48:20Z | 2021-07-13T16:29:28Z | https://github.com/benbusby/whoogle-search/issues/376 | [
"enhancement"
] | Hund | 2 |
JaidedAI/EasyOCR | deep-learning | 942 | EasyOCR is annoyingly finicky... | The digits on the first image are incorrectly detected as 13, while the second image is correctly detected as 113. :( Help.

| open | 2023-02-02T14:21:02Z | 2023-02-08T07:32:34Z | https://github.com/JaidedAI/EasyOCR/issues/942 | [] | joemensor | 1 |
ray-project/ray | data-science | 51,211 | [Ray Core] For the same python test, the results of pytest and bazel are inconsistent | ### What happened + What you expected to happen
The results of using `pytest` and `bazel` to test the same python code are different. Pytest always succeeds, while bazel test always throws the following exception. What may be the cause?
### Versions / Dependencies
Ray v2.38.0
### Reproduction script
The two test statements are:
`python -m pytest -v -s python/ray/tests/test_ray_debugger.py`
`bazel test --build_tests_only $(./ci/run/bazel_export_options) --config=ci --test_env=CI="1" --test_output=streamed -- //python/ray/tests:test_ray_debugger`
The error message of bazel test is:
```
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //python/ray/tests:test_ray_debugger
-----------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.3.0 -- /opt/conda/envs/original-env/bin/python3
cachedir: .pytest_cache
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /root/.cache/bazel/_bazel_root/7b4611e5f7d910d529cf99d9ecdcc56a/execroot/com_github_ray_project_ray
configfile: pytest.ini
plugins: asyncio-0.17.0, forked-1.4.0, shutil-1.7.0, sugar-0.9.5, rerunfailures-11.1.2, timeout-2.1.0, httpserver-1.0.6, sphinx-0.5.1.dev0, docker-tools-3.1.3, anyio-3.7.1, virtualenv-1.7.0, lazy-fixture-0.6.3, benchmark-4.0.0
timeout: 180.0s
timeout method: signal
timeout func_only: False
collecting ... collected 10 items
python/ray/tests/test_ray_debugger.py::test_ray_debugger_breakpoint 2025-03-07 02:42:55,881 INFO worker.py:1807 -- Started a local Ray instance. View the dashboard at [1m[32m127.0.0.1:8265 [39m[22m
[36m(f pid=26195)[0m RemotePdb session open at localhost:44791, use 'ray debug' to connect...
[36m(f pid=26195)[0m RemotePdb accepted connection from ('127.0.0.1', 48272).
[36m(f pid=26195)[0m *** SIGSEGV received at time=1741315376 on cpu 3 ***
[36m(f pid=26195)[0m PC: @ 0x7f4ab74057fd (unknown) (unknown)
[36m(f pid=26195)[0m @ 0x7f4ab72aa520 (unknown) (unknown)
[36m(f pid=26195)[0m @ 0x7f4ab04d3061 16544 (unknown)
[36m(f pid=26195)[0m @ 0x7f4ab04c9d20 (unknown) _rl_set_mark_at_pos
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: *** SIGSEGV received at time=1741315376 on cpu 3 ***
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: PC: @ 0x7f4ab74057fd (unknown) (unknown)
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: @ 0x7f4ab72aa520 (unknown) (unknown)
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: @ 0x7f4ab04d3061 16544 (unknown)
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: @ 0x7f4ab04c9d20 (unknown) _rl_set_mark_at_pos
[36m(f pid=26195)[0m Fatal Python error: Segmentation fault
[36m(f pid=26195)[0m
[36m(f pid=26195)[0m Stack (most recent call first):
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 241 in _call_with_frames_removed
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap_external>", line 1176 in create_module
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 571 in module_from_spec
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 674 in _load_unlocked
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 1006 in _find_and_load_unlocked
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 1027 in _find_and_load
[36m(f pid=26195)[0m File "/opt/conda/envs/original-env/lib/python3.10/pdb.py", line 148 in __init__
[36m(f pid=26195)[0m File "/data/ray/python/ray/util/rpdb.py", line 122 in listen
[36m(f pid=26195)[0m File "/data/ray/python/ray/util/rpdb.py", line 269 in _connect_ray_pdb
[36m(f pid=26195)[0m File "/data/ray/python/ray/util/rpdb.py", line 290 in set_trace
[36m(f pid=26195)[0m File "/root/.cache/bazel/_bazel_root/7b4611e5f7d910d529cf99d9ecdcc56a/execroot/com_github_ray_project_ray/bazel-out/k8-opt/bin/python/ray/tests/test_ray_debugger.runfiles/com_github_ray_project_ray/python/ray/tests/test_ray_debugger.py", line 23 in f
[36m(f pid=26195)[0m File "/data/ray/python/ray/_private/worker.py", line 917 in main_loop
[36m(f pid=26195)[0m File "/data/ray/python/ray/_private/workers/default_worker.py", line 289 in <module>
[36m(f pid=26195)[0m
[36m(f pid=26195)[0m Extension modules: psutil._psutil_linux, psutil._psutil_posix, msgpack._cmsgpack, google.protobuf.pyext._message, setproctitle, yaml._yaml, charset_normalizer.md, ray._raylet, pvectorc (total: 9)
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~ Stack of ray_print_logs (139687217845824) ~~~~~~~~~~~~~~~~~~~
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/data/ray/python/ray/_private/worker.py", line 939, in print_logs
data = subscriber.poll()
~~~~~~~~~~~~~ Stack of ray_listen_error_messages (139687226238528) ~~~~~~~~~~~~~
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/data/ray/python/ray/_private/worker.py", line 2198, in listen_error_messages
_, error_data = worker.gcs_error_subscriber.poll()
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
Traceback (most recent call last):
File "/opt/conda/envs/original-env/lib/python3.10/site-packages/pytest_timeout.py", line 241, in handler
timeout_sigalrm(item, settings.timeout)
File "/opt/conda/envs/original-env/lib/python3.10/site-packages/pytest_timeout.py", line 409, in timeout_sigalrm
pytest.fail("Timeout >%ss" % timeout)
File "/opt/conda/envs/original-env/lib/python3.10/site-packages/_pytest/outcomes.py", line 198, in fail
raise Failed(msg=reason, pytrace=pytrace)
Failed: Timeout >180.0s
```
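One systematic difference between the two commands: `bazel test` runs the test inside a sandbox with its own environment and no usable terminal, which matters here because `ray debug`'s remote pdb pulls in `readline` — exactly where the SIGSEGV occurs. A stdlib probe for telling the two environments apart (`TEST_TMPDIR`/`TEST_SRCDIR` are environment variables Bazel sets for tests; the helper itself is illustrative):

```python
import os
import sys

BAZEL_TEST_MARKERS = ("TEST_TMPDIR", "TEST_SRCDIR")


def running_under_bazel_test(environ=os.environ):
    """True when Bazel's test-runner environment variables are present."""
    return any(marker in environ for marker in BAZEL_TEST_MARKERS)


print("under bazel test:", running_under_bazel_test())
print("stdin is a tty:", sys.stdin is not None and sys.stdin.isatty())
```

Printing both values from inside the failing test would confirm whether the interactive debugger is being opened against a terminal that does not exist under Bazel.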
### Issue Severity
None | open | 2025-03-10T08:43:52Z | 2025-03-10T22:18:22Z | https://github.com/ray-project/ray/issues/51211 | [
"bug",
"P2",
"core"
] | Moonquakes | 0 |
quasarstream/python-ffmpeg-video-streaming | dash | 5 | How to pass a main argument like "stream_loop" for hls |
| closed | 2019-11-18T10:19:33Z | 2019-12-23T15:22:04Z | https://github.com/quasarstream/python-ffmpeg-video-streaming/issues/5 | [] | Nijinsha | 1 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,493 | Bump kube-scheduler binary to v1.30 from v1.28 in the user-scheduler | We are using a binary of the official kube-scheduler that is now getting too outdated; we should bump it to the latest v1.30.*, up from v1.28.*.
We could bump to v1.31, but I think we should aim to keep kube-scheduler at the second-newest k8s version, because officially kube-scheduler should only be used within ±1 minor version of the k8s api-server.
Doing so requires some checks as mentioned by comments next to where kube-scheduler's version is declared.
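The ±1 minor-version skew policy mentioned above is easy to make explicit; a small illustrative checker (not part of the chart):

```python
def minor_version(v):
    """'v1.30.3' -> (1, 30); tolerates a leading 'v' and a patch suffix."""
    major, minor = v.lstrip("v").split(".")[:2]
    return int(major), int(minor)


def skew_ok(scheduler, apiserver, max_minor_skew=1):
    smaj, smin = minor_version(scheduler)
    amaj, amin = minor_version(apiserver)
    return smaj == amaj and abs(smin - amin) <= max_minor_skew


print(skew_ok("v1.30.3", "v1.31.0"))  # True: within +-1 minor
print(skew_ok("v1.28.5", "v1.31.0"))  # False: three minors behind
```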
https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/29f0adb03741ece8fa2b784bb60e85fa3bfa6b0b/jupyterhub/values.yaml#L512-L548 | closed | 2024-09-05T07:49:37Z | 2024-09-23T07:10:46Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3493 | [
"maintenance"
] | consideRatio | 1 |
Guovin/iptv-api | api | 387 | Question | My network is Shaanxi Unicom, and updates are much slower for me than for others, mainly at the TONKING multicast stage: others finish everything in a few minutes, while mine takes more than 30 minutes. I'm on a desktop PC, searching locally and organizing the results myself. | closed | 2024-10-13T09:57:11Z | 2024-12-14T09:37:41Z | https://github.com/Guovin/iptv-api/issues/387 | [
"enhancement",
"question"
] | moonkeyhoo | 13 |
deezer/spleeter | tensorflow | 118 | [Discussion] How many examples do I need for training new models | I only have a basic understanding of machine learning, but I am interested to know the minimum number of examples I need to train a new model.
Also, suppose I have already used Spleeter to produce 1000 acapellas, and I already have the corresponding studio acapellas. Can I then go ahead and train it on these, so that I could try to remove the imperfections on any other random DIY acapellas? | closed | 2019-11-21T02:37:46Z | 2020-01-13T17:45:45Z | https://github.com/deezer/spleeter/issues/118 | [
"question",
"training"
] | Scylla2020 | 16 |
miguelgrinberg/Flask-SocketIO | flask | 1575 | I need help with websocket implementation | I'm trying to implement a WebSocket, but connections are always falling back to long polling.
I installed eventlet, but I get an error.
Thank you in advance.
`__init__.py`
```
from flask import Flask
from flask_socketio import SocketIO
from flask_cors import CORS

socketio = SocketIO(logger=True, engineio_logger=True)


def create_app():
    app = Flask(__name__)
    app.debug = True
    app.config['SECRET_KEY'] = 'gjr39dkjn344_!67#'

    CORS(app)

    from app.user import bp as user_bp
    app.register_blueprint(user_bp)

    socketio.init_app(app, cors_allowed_origins="*", async_mode='eventlet')
    return app
```
app.py
```
from app import create_app, socketio

app = create_app()

if __name__ == '__main__':
    socketio.run(app)
```
js client
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.3.0/socket.io.js"></script>
<script>
window.onload = function () {
    const socket = io();

    function AddToChat(msg) {
        const span = document.createElement("span");
        const chat = document.querySelector(".chat");
        span.innerHTML = `<strong>${msg.name}:</strong> ${msg.message}`;
        chat.append(span);
    }

    socket.on('connect', () => {
        socket.emit('Usuário conectado ao socket!');
    });

    document.querySelector("form").addEventListener("submit", function (event) {
        event.preventDefault();
        socket.emit('sendMessage', {name: event.target[0].value, message: event.target[1].value});
        event.target[0].value = "";
        event.target[1].value = "";
    });

    socket.on('getMessage', (msg) => {
        AddToChat(msg);
    });

    socket.on('message', (msgs) => {
        for (const msg of msgs) {
            console.log(msg);
            AddToChat(msg);
        }
    });
};
</script>
```
error:

| closed | 2021-06-21T22:30:33Z | 2021-06-27T19:36:05Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1575 | [] | GustavoSwDaniel | 5 |
CorentinJ/Real-Time-Voice-Cloning | python | 497 | Training encoder with LibriSpeech, VoxCeleb1, and VoxCeleb2 is failing | After preprocessing all three datasets, I'm getting the following error while training the encoder. I carefully checked the preprocessed directories. They all have `_sources.txt`.
Can anyone help me with this?
```
Traceback (most recent call last):
  File "encoder_train.py", line 46, in <module>
    train(**vars(args))
  File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/train.py", line 67, in train
    for step, speaker_batch in enumerate(loader, init_step):
  File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 971, in _next_data
    return self._process_data(data)
  File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
    data.reraise()
  File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
Exception: Caught Exception in DataLoader worker process 4.
Original Traceback (most recent call last):
  File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker_verification_dataset.py", line 55, in collate
    return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
  File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker_batch.py", line 8, in __init__
    self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
  File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker_batch.py", line 8, in <dictcomp>
    self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
  File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker.py", line 34, in random_partial
    self._load_utterances()
  File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker.py", line 18, in _load_utterances
    self.utterance_cycler = RandomCycler(self.utterances)
  File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/random_cycler.py", line 14, in __init__
    raise Exception("Can't create RandomCycler from an empty collection")
Exception: Can't create RandomCycler from an empty collection
``` | closed | 2020-08-19T06:51:15Z | 2020-08-22T17:06:59Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/497 | [] | amintavakol | 6 |
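The `Can't create RandomCycler from an empty collection` exception in the report above typically means at least one speaker directory ended up with zero usable utterances after preprocessing — for example, a `_sources.txt` that exists but is empty. A stdlib scan for such folders (directory layout assumed: one folder per speaker under the preprocessed root, each containing `_sources.txt` with one line per utterance):

```python
import tempfile
from pathlib import Path


def speakers_without_utterances(processed_root):
    """Speaker dirs whose _sources.txt is missing or lists no utterances."""
    empty = []
    for speaker_dir in sorted(Path(processed_root).iterdir()):
        if not speaker_dir.is_dir():
            continue
        sources = speaker_dir / "_sources.txt"
        if not sources.is_file() or not sources.read_text().strip():
            empty.append(speaker_dir.name)
    return empty


# Tiny demo tree standing in for the preprocessed encoder output:
root = Path(tempfile.mkdtemp())
(root / "spk_good").mkdir()
(root / "spk_good" / "_sources.txt").write_text("utt1.npy,utt1.flac\n")
(root / "spk_empty").mkdir()
(root / "spk_empty" / "_sources.txt").write_text("")
print(speakers_without_utterances(root))  # ['spk_empty']
```

Deleting (or re-preprocessing) the reported folders usually lets training proceed.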
unit8co/darts | data-science | 2,736 | [BUG] Darts not working in Google Colab | **Describe the bug**
Can't use darts in google colab.
**To Reproduce**
!pip install "u8darts[all]"
from darts.models import BlockRNNModel
from darts import TimeSeries
**Expected behavior**
The BlockRNNModel and other darts models should import correctly. Metrics and TimeSeries are also not working.
**System (please complete the following information):**
- Python version: Python 3.11.11
- darts version 0.34.0
**Additional context**
**Found error:
ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject**
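The `numpy.dtype size changed` ValueError generally means a compiled extension in the dependency chain was built against a different NumPy ABI than the one installed (a classic symptom of NumPy-1.x wheels meeting NumPy 2). A first diagnostic step is printing the installed versions; the helper below is stdlib-only and illustrative:

```python
import importlib.metadata as metadata


def installed_version(dist_name):
    """Installed distribution version, or None if it is not installed."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None


for name in ("numpy", "u8darts", "pandas"):
    print(name, "->", installed_version(name))
```

Pinning `numpy<2` (or upgrading to a darts build compiled against NumPy 2) is the usual resolution once the mismatch is confirmed.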
| closed | 2025-03-18T11:13:06Z | 2025-03-20T10:07:27Z | https://github.com/unit8co/darts/issues/2736 | [
"bug"
] | Cajeux1999 | 2 |
microsoft/qlib | deep-learning | 1,455 | Future Warning in qlib.utils.paral | ## 🐛 Bug Description
## To Reproduce
When I use the RobustZScoreNorm processor, the following FutureWarning shows up:
/home/******/anaconda3/envs/qlib3.8/lib/python3.8/site-packages/qlib/utils/paral.py:54: FutureWarning: Not prepending group keys to the result index of transform-like apply. In the future, the group keys will be included in the index, regardless of whether the applied function returns a like-indexed object.
To preserve the previous behavior, use
>>> .groupby(..., group_keys=False)
To adopt the future behavior and silence this warning, use
>>> .groupby(..., group_keys=True)
return df.groupby(axis=axis, level=level).apply(apply_func)
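Until this is patched in qlib itself, the warning can be silenced narrowly on the user side without hiding other FutureWarnings (sketch; the message pattern is taken from the warning text quoted above):

```python
import warnings


def silence_group_keys_warning():
    """Ignore only pandas' group_keys deprecation message."""
    warnings.filterwarnings(
        "ignore",
        message=r".*group keys will be included in the index.*",
        category=FutureWarning,
    )
```

The in-library fix the reporter is asking for would be passing `group_keys=False` at the quoted `groupby(...)` call site, as the warning itself suggests.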
Can you guys fix this? Thanks! | closed | 2023-03-08T03:28:25Z | 2023-10-24T02:55:12Z | https://github.com/microsoft/qlib/issues/1455 | [
"bug"
] | Wendroff | 1 |
apache/airflow | python | 47,621 | Log rendering issue with groups and stacktraces | ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Log grouping renders all logs inside the group on a single line, when they should ideally be separate lines under the group. Leading spaces also appear to be removed, making it hard to view stack traces, which are aligned on disk but not in the UI. It looks like the grouping issue was introduced in 97d1645204f260f771919250b0567c2987c459b9, where span tags are used.

### What you think should happen instead?
_No response_
### How to reproduce
1. Run `example_python_operator` example dag.
2. View `print_the_context` task instance log and expand "All kwargs" section.
3. View the `external_python` task which has leading spaces missing stack trace.
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-11T14:44:09Z | 2025-03-12T14:44:18Z | https://github.com/apache/airflow/issues/47621 | [
"kind:bug",
"area:logging",
"area:core",
"area:UI",
"needs-triage"
] | tirkarthi | 1 |
ned2/slapdash | dash | 2 | SubPage Implementation | Hey! This is an amazing example and I have been finding it super useful. But I'm looking for each page to have its own subpages. How would I go about that using your infrastructure?
Thanks! | closed | 2018-04-30T13:39:58Z | 2018-12-30T02:10:29Z | https://github.com/ned2/slapdash/issues/2 | [] | vserluco | 3 |
gee-community/geemap | streamlit | 1,899 | Draw_control.features is inaccurate after editing a drawn object | ### Environment Information
Python 3.11.6, geemap : 0.30.4, ee : 0.1.384, ipyleaflet : 0.17.3, running under Solara 1.25.1
### Description
The draw_control.features list doesn't accurately reflect drawn features on the map after using the edit control. In turn, I believe the same problem also affects draw_control.collections, map.user_roi, etc.
**Expected behaviour:**
After editing an object using the buttons on the map, the draw_control.features list should update to reflect the new edited geometry.
**Observed behaviour:**
After editing and clicking save, an extra feature is added to draw_control.features. After each edit and save, the number of extra objects in the draw_control.features list goes up by one. This makes it impossible to, e.g., accurately calculate the area of the geometry drawn on the map.
### How to reproduce
Given the following code:
```python
import geemap
import solara
@solara.component
def Page():
def handle_draw(traget, action, geo_json):
print(f"A draw event occurred of type: {action}, and the draw control features length is: {len(Map.draw_control.features)}")
Map = geemap.Map(center=(40, -100), zoom=4)
Map.draw_control.on_draw(handle_draw)
display(Map)
```
Draw a square polygon on the map. Console reads:
`A draw event occurred of type: created, and the draw control features length is: 1`
Edit the square using the edit button and click 'Save'. Console reads:
`A draw event occurred of type: edited, and the draw control features length is: 2`
Edit the square again. Console reads:
`A draw event occurred of type: edited, and the draw control features length is: 3`
(Also, is the 'traget' parameter of Map.draw_control.on_draw() a typo? Should it be 'target'?) | closed | 2024-02-08T11:58:00Z | 2024-07-15T02:54:08Z | https://github.com/gee-community/geemap/issues/1899 | [
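Until this is resolved, a possible workaround (hypothetical code, not geemap API; it assumes a single drawn feature at a time) is to keep the up-to-date geometry yourself from the draw events instead of reading `draw_control.features`:

```python
# Hypothetical workaround: track the latest geometry from the draw events,
# assuming one drawn feature at a time.
state = {"roi": None}

def handle_draw(target, action, geo_json):
    if action in ("created", "edited"):
        state["roi"] = geo_json      # always the latest version of the geometry
    elif action == "deleted":
        state["roi"] = None

# simulate the event sequence from the report
handle_draw(None, "created", {"type": "Polygon", "id": 1})
handle_draw(None, "edited", {"type": "Polygon", "id": 2})
print(state["roi"]["id"])  # 2: one feature, reflecting the last edit
```

Area calculations can then read `state["roi"]` rather than the growing `features` list.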
"bug"
] | nyehughes | 3 |
Guovin/iptv-api | api | 473 | Could the demo's configuration template be optimized? Is this easy to fix? A beginner's question. | Two issues come up.
First: channels with the same name are not collected if the name carries extra special characters; they have to be added to the template separately.
For example, CCTV1综合 vs. that CCTV1_电信,
or 凤凰中文, where the script differs (Traditional vs. Simplified characters).
Or SiTV欢笑剧场 and SiTV欢笑剧场_:
a single extra underscore means the channel is never collected.
You have to add such channels to the demo separately before they can end up in the result file.
Second: channels for which no source is found
are left empty in their original position.
Could channels with no search results be cleaned up automatically,
or grouped at the end of the result file, to make them easier to handle?
Could the channel list support an arrangement like this:
CCTV1综合:CCTV1#CCTV1综合#CCTv1#CCtv1
凤凰卫视:凤凰卫视#鳳凰中文#凤凰中文台
Here the name before the colon is what the player software displays,
and the entries after `#` are the name variants that show up; they would all be collected into one channel.
That way, collected data would not be ignored because of tiny differences.
These are just my impressions from using it; I hope the author can refine the specifics even further.





| closed | 2024-10-28T13:55:50Z | 2024-11-05T08:36:09Z | https://github.com/Guovin/iptv-api/issues/473 | [
"enhancement",
"question"
] | Heiwk | 2 |
autogluon/autogluon | computer-vision | 4,864 | [BUG] NeuralNetFastAI_BAG_L1 Exception occured in `Recorder` when calling event `after_batch` | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Running autogluon 1.1.1 on the tabular Kaggle challenge https://www.kaggle.com/competitions/playground-series-s3e16 I got the following error
```python
Fitting model: NeuralNetFastAI_BAG_L1 ... Training model for up to 57302.84s of the 85977.56s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=1, gpus=0, memory=0.31%)
Warning: Exception caused NeuralNetFastAI_BAG_L1 to fail during training... Skipping this model.
[36mray::_ray_fit()[39m (pid=5618, ip=xxxxx)
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/models/ensemble/fold_fitting_strategy.py", line 402, in _ray_fit
fold_model.fit(X=X_fold, y=y_fold, X_val=X_val_fold, y_val=y_val_fold, time_limit=time_limit_fold, **resources, **kwargs_fold)
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/models/abstract/abstract_model.py", line 856, in fit
out = self._fit(**kwargs)
^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/tabular/models/fastainn/tabular_nn_fastai.py", line 359, in _fit
self.model.fit_one_cycle(epochs, params["lr"], cbs=callbacks)
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/callback/schedule.py", line 121, in fit_one_cycle
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd, start_epoch=start_epoch)
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 266, in fit
self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 201, in _with_events
try: self(f'before_{event_type}'); f()
^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 255, in _do_fit
self._with_events(self._do_epoch, 'epoch', CancelEpochException)
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 201, in _with_events
try: self(f'before_{event_type}'); f()
^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 250, in _do_epoch
self._do_epoch_validate()
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 246, in _do_epoch_validate
with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 201, in _with_events
try: self(f'before_{event_type}'); f()
^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 207, in all_batches
for o in enumerate(self.dl): self.one_batch(*o)
^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 237, in one_batch
self._with_events(self._do_one_batch, 'batch', CancelBatchException)
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 203, in _with_events
self(f'after_{event_type}'); final()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 174, in __call__
def __call__(self, event_name): L(event_name).map(self._call_one)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastcore/foundation.py", line 163, in map
def map(self, f, *args, **kwargs): return self._new(map_ex(self, f, *args, gen=False, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastcore/basics.py", line 927, in map_ex
return list(res)
^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastcore/basics.py", line 912, in __call__
return self.func(*fargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 178, in _call_one
for cb in self.cbs.sorted('order'): cb(event_name)
^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/callback/core.py", line 64, in __call__
except Exception as e: raise modify_exception(e, f'Exception occured in `{self.__class__.__name__}` when calling event `{event_name}`:\n\t{e.args[0]}', replace=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/callback/core.py", line 62, in __call__
try: res = getcallable(self, event_name)()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 562, in after_batch
for met in mets: met.accumulate(self.learn)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/learner.py", line 484, in accumulate
self.total += learn.to_detach(self.func(learn.pred, *learn.yb))*bs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/metrics.py", line 290, in mae
inp,targ = flatten_check(inp,targ)
^^^^^^^^^^^^^^^^^^^^^^^
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastai/torch_core.py", line 789, in flatten_check
test_eq(len(inp), len(targ))
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastcore/test.py", line 39, in test_eq
test(a,b,equals, cname='==')
File "~/miniconda3/envs/autogluon/lib/python3.11/site-packages/fastcore/test.py", line 29, in test
assert cmp(a,b),f"{cname}:\n{a}\n{b}"
^^^^^^^^
AssertionError: Exception occured in `Recorder` when calling event `after_batch`:
==:
7168
256
```
**Expected behavior**
Not sure if this happens and should be ignored or if this means something important is wrong. The error is skipped so this does not result in a complete crash of the run.
This seems to happen for all `NeuralNetFastAI` models of the run.
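One observation (a hypothesis, not a confirmed diagnosis): the two lengths in the assertion differ by an exact integer factor, which would be consistent with a `(256, 28)` prediction tensor being flattened against a length-256 target:

```python
pred_len, target_len = 7168, 256   # the two sizes from the AssertionError above
factor, remainder = divmod(pred_len, target_len)
print(factor, remainder)  # 28 0
```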
**To Reproduce**
```python
import pandas as pd
from autogluon.tabular import TabularPredictor
train_data = pd.read_csv("train.csv")
train_data.drop('id', axis=1, inplace=True)
predictor = TabularPredictor(
label="Age", eval_metric="median_absolute_error", verbosity=2,
)
predictor.fit(
train_data,
presets="best_quality",
time_limit=600,
num_cpus=8,
)
```
**Screenshots / Logs**
<!-- If applicable, add screenshots or logs to help explain your problem. -->
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
INSTALLED VERSIONS
------------------
date : 2025-02-04
time : 18:31:22.615938
python : 3.11.10.final.0
OS : Linux
OS-release : 4.15.0-45-generic
Version : #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019
machine : x86_64
processor : x86_64
num_cores : 8
cpu_ram_mb : 16039.359375
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 60044
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.35.66
catboost : 1.2.7
defusedxml : 0.7.1
evaluate : 0.4.3
fastai : 2.7.18
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.3
joblib : 1.4.2
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.9.2
mlforecast : 0.10.0
networkx : 3.2.1
nlpaug : 1.1.11
nltk : 3.9.1
nptyping : 2.4.1
numpy : 1.26.3
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.17.1
optimum-intel : None
orjson : 3.10.11
pandas : 2.2.3
pdf2image : 1.17.0
Pillow : 10.2.0
psutil : 5.9.8
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.12.0
seqeval : 1.2.2
setuptools : 75.5.0
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.18.0
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1+cpu
torchmetrics : 1.2.1
torchvision : 0.18.1+cpu
tqdm : 4.67.0
transformers : 4.40.2
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
```
| closed | 2025-02-04T10:33:15Z | 2025-02-06T15:25:49Z | https://github.com/autogluon/autogluon/issues/4864 | [
"module: tabular",
"bug: unconfirmed",
"Needs Triage"
] | albertcthomas | 5 |
gee-community/geemap | streamlit | 2,015 | Error with import geemap on Jupyter Notebook | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
Please run the following code on your computer and share the output with us so that we can better debug your issue:
```python
import geemap
geemap.Report()
```
### Description
I cannot even import geemap into a Jupyter Notebook.
I get this error:

```
Please restart Jupyter kernel after installation if you encounter any errors when importing geemap.
  File "C:\ProgramData\Anaconda3\lib\site-packages\geemap\geemap.py", line 935
    if layer_manager := self._layer_manager:
                     ^
SyntaxError: invalid syntax
```
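For what it's worth, `if layer_manager := self._layer_manager:` uses the walrus operator, which only exists in Python 3.8+ (PEP 572), so this SyntaxError suggests the notebook kernel is running an older interpreter (an illustrative check, not geemap code):

```python
import sys

# The `:=` assignment expression was added in Python 3.8 (PEP 572);
# older interpreters raise exactly this SyntaxError when parsing it.
print(sys.version_info >= (3, 8))
```

Running `import sys; print(sys.version)` inside the same notebook kernel would confirm which Python the kernel actually uses.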
### What I Did
```
I tried updating all the packages in my conda environment and that did not help.
```
| closed | 2024-05-16T17:49:58Z | 2024-05-16T19:03:22Z | https://github.com/gee-community/geemap/issues/2015 | [
"bug"
] | mirizarry-ortiz | 3 |
Miserlou/Zappa | flask | 2,222 | Zappa not compatible with Python3.8 | Summary - Zappa version 0.52.0 not working with Python 3.8
## Context
I created the virtualenv using python 3.8 and tried to deploy the environment. Its showing some error. Same thing with python3.7 is working seamlessly.
## Expected Behavior
zappa deploy <env> should start deploying the project.
## Actual Behavior
`zappa deploy <env>` with the Python 3.8 runtime shows the error below:
Traceback (most recent call last):
File "/home/user/Development/serverless_project/virt/demo_zappa/bin/zappa", line 5, in <module>
from zappa.cli import handle
File "/home/user/Development/serverless_project/virt/demo_zappa/lib/python3.8/site-packages/zappa/cli.py", line 44, in <module>
from .core import Zappa, logger, API_GATEWAY_REGIONS
File "/home/user/Development/serverless_project/virt/demo_zappa/lib/python3.8/site-packages/zappa/core.py", line 33, in <module>
import troposphere
File "/home/user/Development/serverless_project/virt/demo_zappa/lib/python3.8/site-packages/troposphere/__init__.py", line 586, in <module>
class Template(object):
File "/home/user/Development/serverless_project/virt/demo_zappa/lib/python3.8/site-packages/troposphere/__init__.py", line 588, in Template
'AWSTemplateFormatVersion': (basestring, False),
NameError: name 'basestring' is not defined
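For context, `basestring` is a Python 2 builtin that was removed in Python 3, which is what the NameError reports (an illustrative check, not Zappa code):

```python
def python2_style_basestring_available():
    # `basestring` existed only in Python 2; referencing it on Python 3 raises NameError.
    try:
        basestring  # noqa: F821
        return True
    except NameError:
        return False

print(python2_style_basestring_available())  # False on any Python 3 interpreter
```

So the failure points at the installed troposphere release rather than at Zappa's own code; upgrading troposphere in the Python 3.8 virtualenv may help (an assumption, not verified here).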
## Steps to Reproduce(Similar steps with python3.7 are working fine)
1. Created Django Project
2. Created virualenv for python3.8 (virtualenv -p /usr/bin/python3.8 virt/demo_zappa/)
3. Setup zappa_settings.json with runtime "python3.8"
4. Executing zappa init or zappa deploy <env> is producing errors.
## Your Environment
* Zappa version used: 0.52.0
* Operating System and Python version: Ubuntu 18.04 LTS and Python 3.8.10
* The output of `pip freeze`:
argcomplete==1.12.3
asgiref==3.3.4
boto3==1.17.89
botocore==1.20.89
certifi==2021.5.30
cfn-flip==1.2.3
chardet==4.0.0
click==8.0.1
Django==3.2.3
django-extensions==3.1.3
djangorestframework==3.12.4
durationpy==0.5
future==0.18.2
hjson==3.0.2
idna==2.9
importlib-metadata==4.5.0
jmespath==0.10.0
kappa==0.6.0
pep517==0.10.0
pip-tools==6.1.0
placebo==0.9.0
PyMySQL==1.0.2
python-dateutil==2.8.1
python-slugify==5.0.2
pytz==2021.1
PyYAML==5.4.1
requests==2.25.1
s3transfer==0.4.2
six==1.16.0
sqlparse==0.4.1
text-unidecode==1.3
toml==0.10.2
tqdm==4.61.0
troposphere==2.7.1
typing-extensions==3.10.0.0
urllib3==1.26.5
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.52.0
zipp==3.4.1
* Your `zappa_settings.json`:
{
"dev": {
"aws_region": "ap-south-1",
"django_settings": "python_zappa.settings",
"profile_name": "default",
"project_name": "python-zappa",
"runtime": "python3.8",
"s3_bucket": "<zappa-s3-bucket>",
"manage_roles": false,
"role_arn": "arn:aws:iam::<id>:role/<role-name>",
"slim_handler": false,
"delete_local_zip": true
}
} | open | 2021-06-17T13:16:54Z | 2021-08-09T23:44:09Z | https://github.com/Miserlou/Zappa/issues/2222 | [] | divyaarya | 1 |
ContextLab/hypertools | data-visualization | 264 | Problem with creating multiple hypertools figures in a for loop | Hi,
Thank you for creating this awesome toolbox!
I'm using the hyp.plot function within a for loop to create a new figure with new input data in each iteration of the loop. However, the PCA plot does not update with every iteration; instead, every plot looks the same as the first plot (i.e., the one generated in the first iteration of the loop). I have checked via debugging that the data is indeed new in every loop, and it works fine when I use matplotlib to plot the same data in the loop. Do you know what might be causing this behaviour, and how to fix it? I assumed there must be some state dependency within the hypertools library, and I found you're using a memoize decorator, which might be causing this problem. I tried to re-import the library in every iteration of the loop, but that unfortunately did not fix the problem. | open | 2024-07-21T10:44:08Z | 2024-07-21T10:44:08Z | https://github.com/ContextLab/hypertools/issues/264 | [] | NilsNyberg | 0 |
pyppeteer/pyppeteer | automation | 103 | Updating issue | When I update Pyppeteer to version 0.2.2, `pip list` shows this has happened, but when I run the following in the command-line REPL:
import pyppeteer
print(pyppeteer.__version__)
it shows the old version 0.0.25. Has anyone encountered this issue before? I'm running it on Raspbian on a Pi 4 and using Python 3.7.
Many thanks in advance for any help. | closed | 2020-05-09T17:34:43Z | 2020-05-10T07:53:18Z | https://github.com/pyppeteer/pyppeteer/issues/103 | [
"bug"
] | oomocks | 2 |
pydata/xarray | pandas | 9,335 | datatree: assigning Dataset objects to Datatree nodes | ### What is your issue?
_Originally posted by @keewis in https://github.com/xarray-contrib/datatree/issues/200_
Intuitively, I'd assume
```python
tree = datatree.DataTree()
tree["path"] = xr.Dataset()
```
to create a new node with the assigned `Dataset` as the data.
However, currently this raises
```pytb
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [7], line 2
1 tree = datatree.DataTree()
----> 2 tree["path"] = xr.Dataset()
File .../datatree/datatree.py:786, in DataTree.__setitem__(self, key, value)
782 elif isinstance(key, str):
783 # TODO should possibly deal with hashables in general?
784 # path-like: a name of a node/variable, or path to a node/variable
785 path = NodePath(key)
--> 786 return self._set_item(path, value, new_nodes_along_path=True)
787 else:
788 raise ValueError("Invalid format for key")
File .../datatree/treenode.py:479, in TreeNode._set_item(self, path, item, new_nodes_along_path, allow_overwrite)
477 raise KeyError(f"Already a node object at path {path}")
478 else:
--> 479 current_node._set(name, item)
File .../datatree/datatree.py:762, in DataTree._set(self, key, val)
759 else:
760 if not isinstance(val, (DataArray, Variable)):
761 # accommodate other types that can be coerced into Variables
--> 762 val = DataArray(val)
764 self.update({key: val})
File .../lib/python3.10/site-packages/xarray/core/dataarray.py:412, in DataArray.__init__(self, data, coords, dims, name, attrs, indexes, fastpath)
409 attrs = getattr(data, "attrs", None)
411 data = _check_data_shape(data, coords, dims)
--> 412 data = as_compatible_data(data)
413 coords, dims = _infer_coords_and_dims(data.shape, coords, dims)
414 variable = Variable(dims, data, attrs, fastpath=True)
File .../lib/python3.10/site-packages/xarray/core/variable.py:243, in as_compatible_data(data, fastpath)
240 return data
242 # validate whether the data is valid data types.
--> 243 data = np.asarray(data)
245 if isinstance(data, np.ndarray) and data.dtype.kind in "OMm":
246 data = _possibly_convert_objects(data)
File .../lib/python3.10/site-packages/xarray/core/dataset.py:1374, in Dataset.__array__(self, dtype)
1373 def __array__(self, dtype=None):
-> 1374 raise TypeError(
1375 "cannot directly convert an xarray.Dataset into a "
1376 "numpy array. Instead, create an xarray.DataArray "
1377 "first, either with indexing on the Dataset or by "
1378 "invoking the `to_array()` method."
1379 )
TypeError: cannot directly convert an xarray.Dataset into a numpy array. Instead, create an xarray.DataArray first, either with indexing on the Dataset or by invoking the `to_array()` method.
```
which does not really tell us why it raises.
Would it be possible to enable that short-cut for assigning data to new nodes? I'm not sure what to do if the node already exists, though (overwrite the data? drop the existing node?)
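For illustration only (a toy class, not datatree code), the requested short-cut amounts to dispatching on the value type in `__setitem__`:

```python
# Toy sketch of the dispatch the issue asks for: Dataset-like values create a
# child node; everything else is coerced onto the current node.
class ToyNode:
    def __init__(self):
        self.children = {}
        self.variables = {}

    def __setitem__(self, key, val):
        if isinstance(val, dict):          # stand-in for xr.Dataset
            child = ToyNode()
            child.variables.update(val)
            self.children[key] = child
        else:                              # stand-in for DataArray coercion
            self.variables[key] = val

tree = ToyNode()
tree["path"] = {"temp": [1, 2, 3]}   # "Dataset" -> new child node
tree["scalar"] = 42                  # variable on the current node
print(sorted(tree.children), sorted(tree.variables))  # ['path'] ['scalar']
```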
| closed | 2024-08-13T15:55:47Z | 2024-10-21T16:01:52Z | https://github.com/pydata/xarray/issues/9335 | [
"topic-DataTree"
] | flamingbear | 1 |
Kludex/mangum | fastapi | 207 | Corrupted Multipart when receiving pdf file on lambda | Hi i am trying to push an api using fastapi and mangum on lambda. This api is supposed to receive a multipart body as the code shows.
```python
@app.post("/")
def func(params: Properties = Body(...), file: UploadFile = File(...)):
results = some_func(file, params)
return {"results": results}
```
At first I thought everything was working fine but everytime I receive an empty result, it was looking like the pdf file uploaded was empty. In my process I transform the pdf file into multiple Image array.
`np.array(pdf2image.convert_from_bytes(file.file.read()))`
This array is all 255 values everytime but it was working fine on local. Do you think it may be related to some Lambda process on the request reception ? | closed | 2021-11-25T17:28:04Z | 2021-11-29T16:38:11Z | https://github.com/Kludex/mangum/issues/207 | [] | NicoLivesey | 2 |
deepinsight/insightface | pytorch | 1,994 | How to increase face recognition quality on my data? | Hi! Thanks for the awesome repo!
I'm trying to use it on my data and something goes wrong. Please help me with that.
1) I make some face embeddings with "R100 | Glint360K onnx" recognition model
For embedding I use `cv2.dnn.blobFromImage` with `scalefactor=1.0 / 128.0, mean=(127.5, 127.5, 127.5), size=(112, 112), swapRB=True`
(face and flip face with a ~20px face paddings **1 QUESTION**: _should I do paddings here?_)
<p><img src="https://user-images.githubusercontent.com/58026014/167069048-74b07f21-99a6-437e-a9e4-5712c2b7d0fe.jpg" height="150">
<img src="https://user-images.githubusercontent.com/58026014/167069059-debe99d4-5e12-440e-94c8-5bccf32c7341.jpg" height="150">
<img src="https://user-images.githubusercontent.com/58026014/167069763-3da46fd6-a895-4d41-9538-02c7e84adab8.jpg" height="150">
<img src="https://user-images.githubusercontent.com/58026014/167069774-77e6d77b-54c7-40c5-94dd-c8f9ca24c5e9.jpg" height="150"></p>
NOT ONLY THAT PHOTO (SORRY BRUCE)!
2) I detect "new face" on the photo (RetinaFace)
3) I do 2 new face embeddings (face and flip face)
4) I try to calculate emb distance with sqeuclidean metric with all persons:
- "new face" & each person
- "new face" & flip each person
- flip "new face" & each person
- flip "new face" & flip each person
for example:
1. new face & Bruce1 face
2. new face & flip Bruce1 face
3. flip new face & Bruce1 face
4. flip new face & flip Bruce1 face
5. new face & Bruce2 face
6. new face & flip Bruce2 face
7. flip new face & Bruce2 face
8. flip new face & flip Bruce2 face
(And the same for each photo and person)
5) I take mean(all person distances) to get a final distance number (for each person)
6) I label the "new face" with the "nearest person" by the distance number from 5)
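Steps 4 and 5 above can be sketched in plain Python (toy 3-dimensional "embeddings", purely to show the bookkeeping; the real model outputs are much longer vectors):

```python
def sqeuclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def person_distance(new, new_flip, person, person_flip):
    # mean over the four face/flip pairings described in steps 4 and 5
    pairs = [(new, person), (new, person_flip),
             (new_flip, person), (new_flip, person_flip)]
    return sum(sqeuclidean(a, b) for a, b in pairs) / len(pairs)

# toy 3-dimensional "embeddings" just to show the bookkeeping
new, new_flip = [1.0, 0.0, 0.0], [0.9, 0.1, 0.0]
bruce, bruce_flip = [1.0, 0.1, 0.0], [0.95, 0.0, 0.0]
print(person_distance(new, new_flip, bruce, bruce_flip))
```

The lowest such mean distance across persons then picks the label, as in step 6. (As a side note, averaging the face and flipped-face embeddings into a single vector before comparing is another commonly used variant.)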
What could be the problem with my data? (Oh, questions... Finally!)
**1) Should I do paddings for face embedding?
2) bad jpg quality original image?
3) Should I do all these embedding manipulations, or did I misunderstand something?
4) Should I align face?
5) How head turns affect quality?
6) how lighting and background affect quality?
7) Should I train the model on my data to improve the quality?**
P.S. Thanks in advance! Sorry for my bad English :) | open | 2022-05-06T06:04:12Z | 2022-08-24T06:34:17Z | https://github.com/deepinsight/insightface/issues/1994 | [] | IamSVP94 | 4 |
cvat-ai/cvat | pytorch | 8,702 | Big chunks can lead to failure in job data access | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
I created a task with 500 frames per chunk. In total, there are 750 frames in the task.
I open the job, and after some time CVAT gives error 500.
As a result, users cannot use CVAT anymore.
I am attaching the video.
https://github.com/user-attachments/assets/338038d9-e2f3-4be5-a322-492368af2214
### Expected Behavior
Previously, this scenario worked just fine.
### Possible Solution
Maybe this is connected with the new way chunks are stored. https://github.com/cvat-ai/cvat/pull/8272
### Context
_No response_
### Environment
```Markdown
app.cvat.ai
```
| open | 2024-11-14T13:00:55Z | 2025-02-26T08:39:56Z | https://github.com/cvat-ai/cvat/issues/8702 | [
"bug"
] | PMazarovich | 11 |
microsoft/nni | data-science | 5,102 | Does nni support GPT2 compression? | **Describe the issue**:
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | closed | 2022-08-31T12:06:54Z | 2022-09-19T08:29:26Z | https://github.com/microsoft/nni/issues/5102 | [
"model compression"
] | xk503775229 | 1 |
nteract/testbook | pytest | 37 | Docstrings and type hinting | closed | 2020-06-18T19:28:31Z | 2020-08-29T12:39:08Z | https://github.com/nteract/testbook/issues/37 | [
"documentation",
"GSoC-2020"
] | rohitsanj | 0 | |
explosion/spaCy | data-science | 13,610 | Russian morphology information for plural adjectives not returning gender (formatted differently to singular) | Hi. I'm trying to count the frequency of Russian morphological endings for the different cases for nouns and adjectives across all genders.
However, plural tokens with POS 'ADJ' do not return gender with token.morph.
For example:
Я предпочитаю ездить на красной машине (I prefer to drive the red car)
Case=Loc|Degree=Pos|Gender=Fem|Number=Sing
But...
Я предпочитаю водить красные машины (I prefer to drive red cars)
Animacy=Inan|Case=Acc|Degree=Pos|Number=Plur
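The difference is easy to tabulate mechanically by parsing the two morph strings above (a plain-Python helper, not spaCy API):

```python
def parse_morph(morph_str):
    # "Case=Loc|Degree=Pos|Gender=Fem|Number=Sing" -> {"Case": "Loc", ...}
    return dict(pair.split("=", 1) for pair in morph_str.split("|") if pair)

sing = parse_morph("Case=Loc|Degree=Pos|Gender=Fem|Number=Sing")
plur = parse_morph("Animacy=Inan|Case=Acc|Degree=Pos|Number=Plur")
print("Gender" in sing, "Gender" in plur)  # True False
```

Note that Russian adjectives do not distinguish gender morphologically in the plural, and the Universal Dependencies annotation used for training follows that, so the missing `Gender` feature may be annotation policy rather than a model bug.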
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Linux Mint 20.1
* Python Version Used: 3.8.10
* spaCy Version Used: 3.7.5
* Environment Information: Cinnamon 4.8.6
| open | 2024-08-31T07:35:29Z | 2024-08-31T07:37:02Z | https://github.com/explosion/spaCy/issues/13610 | [] | ColinCazuarius | 0 |
coqui-ai/TTS | deep-learning | 2,501 | [Bug] Missing Logic for Trained Model Prediction | ### Describe the bug
I discovered this bug, which occurs when you try to use a custom trained model: since `self.model_name` is None in that case, an error is raised when you call `tts_model.tts(...)`.
Some code to recreate the bug:
```
from TTS.api import TTS
custom_model = TTS(
model_path="/home/ubuntu/.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC/model_file.pth",
config_path="/home/ubuntu/.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC/config.json"
)
custom_model.tts("This is a test!")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/TTS/TTS/api.py", line 507, in tts
self._check_arguments(speaker=speaker, language=language, speaker_wav=speaker_wav, emotion=emotion, speed=speed)
File "/home/ubuntu/TTS/TTS/api.py", line 414, in _check_arguments
if not self.is_coqui_studio:
File "/home/ubuntu/TTS/TTS/api.py", line 296, in is_coqui_studio
return "coqui_studio" in self.model_name
TypeError: argument of type 'NoneType' is not iterable
```
Since the initialization logic would fail if I passed an intermediate model name in the `TTS` constructor, an intermediate solution is to add:
```
custom_model.model_name = "custom-model"
```
but I recommend adding some logic to the `is_coqui_studio` function to:
```
@property
def is_coqui_studio(self):
if self.model_name:
return "coqui_studio" in self.model_name
return False
```
### To Reproduce
```
from TTS.api import TTS
custom_model = TTS(
model_path="/home/ubuntu/.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC/model_file.pth",
config_path="/home/ubuntu/.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC/config.json"
)
custom_model.tts("This is a test!")
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
- Python3.9
- Ubuntu 20.04
-
```
### Additional context
_No response_ | closed | 2023-04-11T18:40:02Z | 2023-05-12T13:52:38Z | https://github.com/coqui-ai/TTS/issues/2501 | [
"bug",
"wontfix"
] | VArdulov | 3 |
aiogram/aiogram | asyncio | 1,380 | Add aiohttp=3.9.x support for aiogram 2.25.x | ### aiogram version
2.x
### Problem
#9 61.98 ERROR: Cannot install -r requirements.txt (line 1) and aiohttp==3.9.0 because these package versions have conflicting dependencies.
#9 61.98
#9 61.98 The conflict is caused by:
#9 61.98 The user requested aiohttp==3.9.0
#9 61.98 aiogram 2.25.1 depends on aiohttp<3.9.0 and >=3.8.0
### Possible solution
As far as I can see, there is no code that needs to change for [aiohttp v3.9.x](https://docs.aiohttp.org/en/v3.9.1/changes.html).
Maybe requirements.txt just needs to be updated.
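A minimal sketch of what the dependency bound change could look like (the exact file and upper bound here are assumptions, not the project's actual pin):

```
aiohttp>=3.8.0,<3.10
```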
### Alternatives
_No response_
### Code example
_No response_
### Additional information

no big changes, just fixes
================================================================================


Some deprecated and a lot of bug fixes, improvals
================================================================================ | closed | 2023-12-02T05:58:01Z | 2023-12-02T17:36:02Z | https://github.com/aiogram/aiogram/issues/1380 | [
"enhancement"
] | DAKExDUCK | 2 |
explosion/spaCy | data-science | 12,747 | Spacy Matcher does not match if certain keywords are next to matched tokens. | I've found that the spaCy Matcher doesn't match when certain keywords are next to a matched token. [This video](https://recordit.co/uPUEueT8Yp) can explain it best, but here's a text-based description. If words such as "none" or "any" are directly after a matched token, the en_core_web_lg and _sm models seem not to match. If that keyword is separated from the matched tokens by at least one other token, then things do match. I would expect that the presence of these words would not affect the matching process.
## How to reproduce the behaviour
See the video above.
## Your Environment
- **spaCy version:** 3.5.3
- **Platform:** macOS-10.16-x86_64-i386-64bit
- **Python version:** 3.10.4
- **Pipelines:** en_core_web_lg (3.5.0)
| closed | 2023-06-22T22:04:42Z | 2023-07-29T00:02:23Z | https://github.com/explosion/spaCy/issues/12747 | [
"feat / matcher"
] | newlandj | 7 |
plotly/dash | jupyter | 2,654 | test dvpc001 failure on CI | The test [`dvpc001`](https://github.com/plotly/dash/blob/97f0fdc8c18188cc7019a5823157f3a9fc71b299/tests/integration/devtools/test_props_check.py#L176C22-L176C22) keeps failing randomly; after numerous attempts to fix it, it was decided to skip it.
The stacktrace:
```
collecting ... Fatal Python error: Aborted
Current thread 0x00007fcf14fe1700 (most recent call first):
File "/home/circleci/dash/venv/lib/python3.9/site-packages/plotly/io/_json.py", line 161 in to_json_plotly
File "/home/circleci/dash/venv/lib/python3.9/site-packages/dash/_utils.py", line 26 in to_json
File "/home/circleci/dash/venv/lib/python3.9/site-packages/dash/dash.py", line 733 in serve_layout
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 1799 in dispatch_request
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 1823 in full_dispatch_request
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 2529 in wsgi_app
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 2552 in __call__
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/debug/__init__.py", line 329 in debug_application
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/serving.py", line 322 in execute
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/serving.py", line 333 in run_wsgi
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/http/server.py", line 415 in handle_one_request
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/http/server.py", line 427 in handle
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/serving.py", line 361 in handle
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 747 in __init__
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 360 in finish_request
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 683 in process_request_thread
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 910 in run
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 973 in _bootstrap_inner
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 930 in _bootstrap
Thread 0x00007fcf8a531700 (most recent call first):
File "<frozen importlib._bootstrap_external>", line 186 in _write_atomic
File "<frozen importlib._bootstrap_external>", line 1110 in set_data
File "<frozen importlib._bootstrap_external>", line 1085 in _cache_bytecode
File "<frozen importlib._bootstrap_external>", line 995 in get_code
File "<frozen importlib._bootstrap_external>", line 846 in exec_module
File "<frozen importlib._bootstrap>", line 680 in _load_unlocked
File "<frozen importlib._bootstrap>", line 986 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1007 in _find_and_load
File "<frozen importlib._bootstrap>", line 228 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1058 in _handle_fromlist
File "/home/circleci/dash/venv/lib/python3.9/site-packages/numpy/lib/__init__.py", line 35 in <module>
File "<frozen importlib._bootstrap>", line 228 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 850 in exec_module
File "<frozen importlib._bootstrap>", line 680 in _load_unlocked
File "<frozen importlib._bootstrap>", line 986 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1007 in _find_and_load
File "<frozen importlib._bootstrap>", line 228 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1058 in _handle_fromlist
File "/home/circleci/dash/venv/lib/python3.9/site-packages/numpy/__init__.py", line 144 in <module>
File "<frozen importlib._bootstrap>", line 228 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 850 in exec_module
File "<frozen importlib._bootstrap>", line 680 in _load_unlocked
File "<frozen importlib._bootstrap>", line 986 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1007 in _find_and_load
File "/home/circleci/dash/venv/lib/python3.9/site-packages/plotly/io/_json.py", line 161 in to_json_plotly
File "/home/circleci/dash/venv/lib/python3.9/site-packages/dash/_utils.py", line 26 in to_json
File "/home/circleci/dash/venv/lib/python3.9/site-packages/dash/dash.py", line 733 in serve_layout
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 1799 in dispatch_request
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 1823 in full_dispatch_request
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 2529 in wsgi_app
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 2552 in __call__
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/debug/__init__.py", line 329 in debug_application
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/serving.py", line 322 in execute
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/serving.py", line 333 in run_wsgi
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/http/server.py", line 415 in handle_one_request
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/http/server.py", line 427 in handle
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/serving.py", line 361 in handle
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 747 in __init__
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 360 in finish_request
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 683 in process_request_thread
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 910 in run
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 973 in _bootstrap_inner
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 930 in _bootstrap
Thread 0x00007fcf8b82e700 (most recent call first):
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 312 in wait
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 574 in wait
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 897 in start
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 697 in process_request
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 316 in _handle_request_noblock
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socketserver.py", line 237 in serve_forever
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/serving.py", line 766 in serve_forever
File "/home/circleci/dash/venv/lib/python3.9/site-packages/werkzeug/serving.py", line 1069 in run_simple
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flask/app.py", line 1191 in run
File "/home/circleci/dash/venv/lib/python3.9/site-packages/dash/dash.py", line 2076 in run
File "/home/circleci/dash/venv/lib/python3.9/site-packages/dash/testing/application_runners.py", line 171 in run
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 910 in run
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 973 in _bootstrap_inner
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/threading.py", line 930 in _bootstrap
Thread 0x00007fcf91667740 (most recent call first):
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/socket.py", line 704 in readinto
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/http/client.py", line 281 in _read_status
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/http/client.py", line 320 in begin
File "/home/circleci/.pyenv/versions/3.9.9/lib/python3.9/http/client.py", line 1377 in getresponse
File "/home/circleci/dash/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 461 in _make_request
File "/home/circleci/dash/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 714 in urlopen
File "/home/circleci/dash/venv/lib/python3.9/site-packages/urllib3/poolmanager.py", line 376 in urlopen
File "/home/circleci/dash/venv/lib/python3.9/site-packages/urllib3/request.py", line 173 in request_encode_body
File "/home/circleci/dash/venv/lib/python3.9/site-packages/urllib3/request.py", line 81 in request
File "/home/circleci/dash/venv/lib/python3.9/site-packages/selenium/webdriver/remote/remote_connection.py", line 369 in _request
File "/home/circleci/dash/venv/lib/python3.9/site-packages/selenium/webdriver/remote/remote_connection.py", line 347 in execute
File "/home/circleci/dash/venv/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 428 in execute
File "/home/circleci/dash/venv/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 442 in get
File "/home/circleci/dash/venv/lib/python3.9/site-packages/dash/testing/browser.py", line 398 in wait_for_page
File "/home/circleci/dash/tests/integration/devtools/test_props_check.py", line 200 in test_dvpc001_prop_check_errors_with_path
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/python.py", line 194 in pytest_pyfunc_call
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/python.py", line 1792 in runtest
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/runner.py", line 169 in pytest_runtest_call
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/runner.py", line 262 in <lambda>
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/runner.py", line 341 in from_call
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/runner.py", line 261 in call_runtest_hook
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flaky/flaky_pytest_plugin.py", line 138 in call_and_report
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/runner.py", line 133 in runtestprotocol
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/runner.py", line 114 in pytest_runtest_protocol
File "/home/circleci/dash/venv/lib/python3.9/site-packages/flaky/flaky_pytest_plugin.py", line 94 in pytest_runtest_protocol
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/main.py", line 350 in pytest_runtestloop
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/main.py", line 325 in _main
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/main.py", line 271 in wrap_session
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/main.py", line 318 in pytest_cmdline_main
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/home/circleci/dash/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/config/__init__.py", line 169 in main
File "/home/circleci/dash/venv/lib/python3.9/site-packages/_pytest/config/__init__.py", line 192 in console_main
File "/home/circleci/dash/venv/bin/pytest", line 8 in <module>
Aborted (core dumped)
```
| open | 2023-10-06T15:20:39Z | 2024-08-13T19:38:43Z | https://github.com/plotly/dash/issues/2654 | [
"bug",
"testing",
"P2"
] | T4rk1n | 0 |
ultralytics/ultralytics | computer-vision | 19,713 | dataset conflict in yolov versions | I have a dataset of 1M table images and labels which I have already downloaded in YOLOv8 format. I now want to train it with the latest YOLO11, so I am confused about whether I need to change the dataset label format for YOLO11 or not. @glenn-jocher
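Not an official answer, but to my knowledge the plain-text YOLO label format is unchanged between YOLOv8 and YOLO11 (one `class_id cx cy w h` line per object, with box coordinates normalized to [0, 1]), so a YOLOv8-format dataset should train as-is. The sketch below is only an illustrative format check, not part of the ultralytics API:

```python
# Illustrative check of the shared YOLO txt label format:
# "class_id cx cy w h", all box values normalized to [0, 1].
def valid_yolo_label_line(line: str) -> bool:
    parts = line.split()
    if len(parts) != 5:
        return False
    try:
        cls_id = int(parts[0])
        box = [float(p) for p in parts[1:]]
    except ValueError:
        return False
    return cls_id >= 0 and all(0.0 <= v <= 1.0 for v in box)

assert valid_yolo_label_line("0 0.5 0.5 0.25 0.1")
assert not valid_yolo_label_line("0 1.5 0.5 0.25 0.1")  # coordinate out of range
```

If lines from your existing YOLOv8 label files pass a check like this, they should be directly usable with YOLO11 training.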
 | closed | 2025-03-15T12:00:15Z | 2025-03-15T16:08:12Z | https://github.com/ultralytics/ultralytics/issues/19713 | [
"question",
"detect"
] | tzktok | 4 |
recommenders-team/recommenders | deep-learning | 2,105 | [ASK] MIND test dataset doesn't work for run_eval | ### Description
The following [code](https://github.com/recommenders-team/recommenders/issues/1673#issuecomment-1070252082):
```python
label = [0 for i in impr.split()]
```
It is essentially making each news ID in the impression list non-clicked.
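The effect on the metrics is easy to see with a tiny pairwise formulation of AUC (plain Python, no sklearn needed): AUC is the fraction of (positive, negative) pairs ranked correctly, so with all-zero labels there are zero such pairs and the score is undefined.

```python
def pairwise_auc(labels, preds):
    pos = [p for l, p in zip(labels, preds) if l == 1]
    neg = [p for l, p in zip(labels, preds) if l == 0]
    pairs = len(pos) * len(neg)
    if pairs == 0:
        # mirrors sklearn's "Only one class present in y_true" ValueError
        raise ValueError("Only one class present; AUC is undefined.")
    wins = sum(1 for pp in pos for pn in neg if pp > pn)
    ties = sum(1 for pp in pos for pn in neg if pp == pn)
    return (wins + 0.5 * ties) / pairs

assert pairwise_auc([0, 1, 0], [0.2, 0.9, 0.4]) == 1.0
# pairwise_auc([0, 0, 0], [0.2, 0.9, 0.4])  raises ValueError, like roc_auc_score
```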
Instead of modifying the code, I modified the test behaviors file and added `-0` to each news ID in the impression list (e.g., `N712-0 N231-0`).
Now I get the following error after running `run_eval`:
```python
model.run_eval(test_news_file, test_behaviors_file)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File <timed exec>:1
File ~/.conda/envs/recommenders/lib/python3.9/site-packages/recommenders/models/newsrec/models/base_model.py:335, in BaseModel.run_eval(self, news_filename, behaviors_file)
331 else:
332 _, group_labels, group_preds = self.run_slow_eval(
333 news_filename, behaviors_file
334 )
--> 335 res = cal_metric(group_labels, group_preds, self.hparams.metrics)
336 return res
File ~/.conda/envs/recommenders/lib/python3.9/site-packages/recommenders/models/deeprec/deeprec_utils.py:594, in cal_metric(labels, preds, metrics)
591 res["hit@{0}".format(k)] = round(hit_temp, 4)
592 elif metric == "group_auc":
593 group_auc = np.mean(
--> 594 [
595 roc_auc_score(each_labels, each_preds)
596 for each_labels, each_preds in zip(labels, preds)
597 ]
598 )
599 res["group_auc"] = round(group_auc, 4)
600 else:
File ~/.conda/envs/recommenders/lib/python3.9/site-packages/recommenders/models/deeprec/deeprec_utils.py:595, in <listcomp>(.0)
591 res["hit@{0}".format(k)] = round(hit_temp, 4)
592 elif metric == "group_auc":
593 group_auc = np.mean(
594 [
--> 595 roc_auc_score(each_labels, each_preds)
596 for each_labels, each_preds in zip(labels, preds)
597 ]
598 )
599 res["group_auc"] = round(group_auc, 4)
600 else:
File ~/.conda/envs/recommenders/lib/python3.9/site-packages/sklearn/metrics/_ranking.py:567, in roc_auc_score(y_true, y_score, average, sample_weight, max_fpr, multi_class, labels)
565 labels = np.unique(y_true)
566 y_true = label_binarize(y_true, classes=labels)[:, 0]
--> 567 return _average_binary_score(
568 partial(_binary_roc_auc_score, max_fpr=max_fpr),
569 y_true,
570 y_score,
571 average,
572 sample_weight=sample_weight,
573 )
574 else: # multilabel-indicator
575 return _average_binary_score(
576 partial(_binary_roc_auc_score, max_fpr=max_fpr),
577 y_true,
(...)
580 sample_weight=sample_weight,
581 )
File ~/.conda/envs/recommenders/lib/python3.9/site-packages/sklearn/metrics/_base.py:75, in _average_binary_score(binary_metric, y_true, y_score, average, sample_weight)
72 raise ValueError("{0} format is not supported".format(y_type))
74 if y_type == "binary":
---> 75 return binary_metric(y_true, y_score, sample_weight=sample_weight)
77 check_consistent_length(y_true, y_score, sample_weight)
78 y_true = check_array(y_true)
File ~/.conda/envs/recommenders/lib/python3.9/site-packages/sklearn/metrics/_ranking.py:337, in _binary_roc_auc_score(y_true, y_score, sample_weight, max_fpr)
335 """Binary roc auc score."""
336 if len(np.unique(y_true)) != 2:
--> 337 raise ValueError(
338 "Only one class present in y_true. ROC AUC score "
339 "is not defined in that case."
340 )
342 fpr, tpr, _ = roc_curve(y_true, y_score, sample_weight=sample_weight)
343 if max_fpr is None or max_fpr == 1:
ValueError: Only one class present in y_true. ROC AUC score is not defined in that case.
```
Any fix or workaround to this? How do I get the scores?
### Other Comments
_Originally posted by @ubergonmx in https://github.com/recommenders-team/recommenders/issues/1673#issuecomment-2016744219_
I am trying to train the NAML model with the valid + test set.
| closed | 2024-05-28T13:43:39Z | 2024-06-17T14:57:06Z | https://github.com/recommenders-team/recommenders/issues/2105 | [
"help wanted"
] | ubergonmx | 3 |
wkentaro/labelme | deep-learning | 704 | How to handle number of instances > 255, which cannot be saved as PNG | If the number of instances is bigger than 255, we cannot save the pixel-wise class label as PNG and are told to use the npy file instead; how do I deal with it? | closed | 2020-06-28T12:01:54Z | 2020-07-01T17:02:27Z | https://github.com/wkentaro/labelme/issues/704 | [] | liburning | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 843 | Is there TensorFlow implementation of vocoder(waveRNN)? | Hi, I want to know whether there is a TensorFlow implementation of the vocoder (WaveRNN).
I ask because when I ported the PyTorch WaveRNN model to TensorFlow 2.x, I succeeded and can train it on the dataset, but I found that inference is about 10 times slower than the PyTorch WaveRNN....
### It occurs in the `for` loop of the `vocoder.generate()` function:
```python
def generate(self, mels, batched, target, overlap, mu_law, progress_callback=None):
    ................
    for i in range(seq_len):
        m_t = mels[:, i, :]
        a1_t, a2_t, a3_t, a4_t = (a[:, i, :] for a in aux_split)

        x = torch.cat([x, m_t, a1_t], dim=1)
        x = self.I(x)
        h1 = rnn1(x, h1)
        x = x + h1

        inp = torch.cat([x, a2_t], dim=1)
        h2 = rnn2(inp, h2)
        x = x + h2

        x = torch.cat([x, a3_t], dim=1)
        x = F.relu(self.fc1(x))
        x = torch.cat([x, a4_t], dim=1)
        x = F.relu(self.fc2(x))
        logits = self.fc3(x)

        if self.mode == 'MOL':
            sample = sample_from_discretized_mix_logistic(logits.unsqueeze(0).transpose(1, 2))
            output.append(sample.view(-1))
            if torch.cuda.is_available():
                # x = torch.FloatTensor([[sample]]).cuda()
                x = sample.transpose(0, 1).cuda()
            else:
                x = sample.transpose(0, 1)
        elif self.mode == 'RAW':
            posterior = F.softmax(logits, dim=1)
            distrib = torch.distributions.Categorical(posterior)
            sample = 2 * distrib.sample().float() / (self.n_classes - 1.) - 1.
            output.append(sample)
            x = sample.unsqueeze(-1)
        else:
            raise RuntimeError("Unknown model mode value - ", self.mode)

        if i % 100 == 0:
            gen_rate = (i + 1) / (time.time() - start) * b_size / 1000
            progress_callback(i, seq_len, b_size, gen_rate)
    .............
```
### TensorFlow 2.3 version:
```python
for i in range(seq_len):
    m_t = mels[:, i, :]
    a1_t, a2_t, a3_t, a4_t = (a[:, i, :] for a in aux_split)

    x = tf.concat([x, m_t, a1_t], axis=1)
    x = self.I(x)
    _, h1 = rnn1(tf.expand_dims(x, axis=1))
    x = x + h1

    inp = tf.concat([x, a2_t], axis=1)
    _, h2 = rnn2(tf.expand_dims(inp, axis=1))
    x = x + h2

    x = tf.concat([x, a3_t], axis=1)
    x = tf.nn.relu(self.fc1(x))
    x = tf.concat([x, a4_t], axis=1)
    x = tf.nn.relu(self.fc2(x))
    logits = self.fc3(x)

    if self.mode == 'RAW':
        posterior = tf.nn.softmax(logits, axis=1)
        distrib = tfp.distributions.Categorical(posterior, dtype=tf.float32)
        sample = 2 * distrib.sample() / (self.n_classes - 1.) - 1.
        output.append(sample)
        x = tf.expand_dims(sample, axis=-1)
    else:
        raise RuntimeError("Unknown model mode value - ", self.mode)

    if i % 100 == 0:
        gen_rate = (i + 1) / (time.time() - start) * b_size_np / 1000
        progress_callback(i, seq_len, b_size, gen_rate)
...............
```
#### This part generates 1 sample in 5 s with PyTorch, but takes 50 s in TensorFlow 2.3! And I don't know why.
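One thing worth double-checking (plain-Python sketch, no frameworks; `cell` below is a stand-in recurrence, not a real GRU): in the PyTorch loop the hidden state is threaded back in each step via `h1 = rnn1(x, h1)`, while the TF snippet calls `rnn1(...)` fresh every iteration without passing the previous `h1` back as an initial state. That both changes the result and pays the full Keras layer-call overhead once per sample.

```python
# Stand-in recurrence to show why threading state through the loop matters.
def cell(x, h):
    return 0.5 * h + x

def with_state(xs):          # PyTorch-style: h carried across steps
    h, out = 0.0, []
    for x in xs:
        h = cell(x, h)
        out.append(h)
    return out

def without_state(xs):       # like calling the layer fresh each step
    return [cell(x, 0.0) for x in xs]

assert with_state([1, 1]) == [1.0, 1.5]
assert without_state([1, 1]) == [1.0, 1.0]   # recurrence is lost
```

If carrying state is the intent, a per-step cell (e.g. a GRU cell rather than a full RNN layer) with explicit state passing, possibly wrapped in a compiled function, is the usual approach; treat that as a suggestion to try, not a confirmed fix.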
Could you please help me with the problem? | closed | 2021-09-10T07:34:10Z | 2021-09-14T20:40:23Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/843 | [] | ymzlygw | 2 |
pydantic/pydantic | pydantic | 10,681 | revalidate_instances conflict with SerializeAsAny | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
When using SerializeAsAny + revalidate_instances + layered inheritances, SerializeAsAny is no longer respected.
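As background, the semantics at stake can be mimicked in plain Python (a dataclass analogue of declared-type versus runtime-type serialization; this is only an illustration, not pydantic internals):

```python
from dataclasses import dataclass, fields, asdict

@dataclass
class Base:
    test_val: int = 1

@dataclass
class Derived(Base):
    test_val2: int = 2

def dump_as_declared(obj, declared):
    # like serializing a field annotated as dict[str, basechild]
    return {f.name: getattr(obj, f.name) for f in fields(declared)}

d = Derived()
assert dump_as_declared(d, Base) == {"test_val": 1}   # subclass fields dropped
assert asdict(d) == {"test_val": 1, "test_val2": 2}   # SerializeAsAny-like
```

The bug report below is that, with `revalidate_instances` enabled, the annotated fields behave like `dump_as_declared` even where `SerializeAsAny` should give the `asdict`-like behavior.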
### Example Code
```Python
from pydantic import BaseModel, SerializeAsAny
class commonbase(
    BaseModel,
    revalidate_instances="subclass-instances",  # <--- toggle to generate error or not
): ...


class basechild(commonbase):
    test_val: int = 1


class derivedchild(basechild):
    test_val2: int = 2


class container(commonbase):
    ct_child_1: dict[str, basechild] = {}
    ct_child_2: SerializeAsAny[dict[str, basechild]] = {}
    ct_child_3: dict[str, SerializeAsAny[basechild]] = {}


if __name__ == "__main__":
    test_val = container(
        ct_child_1={"test1": derivedchild()},
        ct_child_2={"test2": derivedchild()},
        ct_child_3={"test3": derivedchild()},
    )

    # print(test_val.model_dump_json(indent=1))
    # print(test_val.model_dump())

    assert "test_val2" not in test_val.model_dump()["ct_child_1"]["test1"]
    assert "test_val2" in test_val.model_dump()["ct_child_2"]["test2"]
    assert "test_val2" in test_val.model_dump()["ct_child_3"]["test3"]
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: C:\Users\chacha\AppData\Roaming\Python\Python311\site-packages\pydantic
python version: 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
platform: Windows-10-10.0.22631-SP0
related packages: mypy-1.9.0 typing_extensions-4.10.0
commit: unknown
```
| closed | 2024-10-21T22:09:50Z | 2025-01-09T12:35:55Z | https://github.com/pydantic/pydantic/issues/10681 | [
"question"
] | cclecle | 2 |
ray-project/ray | data-science | 51,124 | [Core] RayCheck failed: placement_group_resource_manager_->ReturnBundle(bundle_spec) Status not OK | ### What happened + What you expected to happen
During the process of removing the PG, the workers using the related bundle are first terminated, which then triggers local task scheduling, and only after that is the bundle deleted. This sequence causes the bundle intended for deletion to be reused, leading to the RAY_CHECK failure.
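To make the described ordering concrete, here is a toy model of the race (stdlib only; the names are illustrative, not Ray internals): releasing the bundle's worker triggers scheduling of the queued actor before `ReturnBundle` runs, so the bundle is busy again when the check fires.

```python
class Bundle:
    def __init__(self):
        self.in_use = 0

def kill_bundle_workers(bundle, queued_actors, schedule):
    bundle.in_use -= 1           # leased worker is terminated...
    if queued_actors:
        schedule(bundle)         # ...which immediately triggers local scheduling

def schedule_queued_actor(bundle):
    bundle.in_use += 1           # the queued actor reuses the dying bundle

def return_bundle(bundle):
    # mirrors RAY_CHECK_OK(placement_group_resource_manager_->ReturnBundle(...))
    if bundle.in_use != 0:
        raise RuntimeError("ReturnBundle failed: bundle still in use")

b = Bundle()
b.in_use = 1                     # the first actor holds the bundle
kill_bundle_workers(b, queued_actors=True, schedule=schedule_queued_actor)
try:
    return_bundle(b)
    check_failed = False
except RuntimeError:
    check_failed = True
assert check_failed              # same ordering that crashes the raylet
```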
### Versions / Dependencies
- ray master commit: 0e97d8cfe4cf2575d10ff8d060ecccff720ab0ea
- python: 3.11.9
### Reproduction script
- pytest ut case
``` python
def test_remove_placement_group_when_a_actor_queued(ray_start_cluster):
    cluster = ray_start_cluster
    cluster.add_node(num_cpus=1)
    cluster.wait_for_nodes()
    ray.init(address=cluster.address)

    pg = ray.util.placement_group([{"CPU": 1}])

    @ray.remote
    class Actor:
        def get_raylet_pid(self):
            return os.getppid()

    actor = Actor.options(
        lifetime="detached",
        num_cpus=1.0,
        scheduling_strategy=PlacementGroupSchedulingStrategy(
            placement_group=pg, placement_group_bundle_index=0
        ),
    ).remote()

    raylet = psutil.Process(ray.get(actor.get_raylet_pid.remote()))

    _ = Actor.options(
        # Ensure that it can still be scheduled when the job ends.
        lifetime="detached",
        num_cpus=1.0,
        scheduling_strategy=PlacementGroupSchedulingStrategy(
            placement_group=pg, placement_group_bundle_index=0
        ),
    ).remote()

    assert raylet.is_running()

    # trigger GCS remove pg
    ray.shutdown()

    # Ensure GCS finish remove pg.
    with pytest.raises(psutil.TimeoutExpired):
        raylet.wait(10)

    # check raylet pass raycheck:
    # `RAY_CHECK_OK(placement_group_resource_manager_->ReturnBundle(bundle_spec))`
    assert raylet.is_running()
```
- It will fail because the raylet dies.
### Issue Severity
None | open | 2025-03-06T11:21:48Z | 2025-03-21T23:37:11Z | https://github.com/ray-project/ray/issues/51124 | [
"bug",
"P0",
"core",
"core-scheduler"
] | Catch-Bull | 8 |
seleniumbase/SeleniumBase | web-scraping | 3,246 | Long-term support concerns with network interception due to Selenium Wire deprecation | We have several use cases that require request and response interception for UI testing, and we’re considering using SeleniumBase for this purpose. However, I noticed that SeleniumBase's wire mode relies on Selenium Wire, which has been deprecated since January 2024.
Although network interception is still functional through Selenium Wire, I’m concerned about the impact on long-term support. Could you provide guidance on the following:
Are there plans to update or replace Selenium Wire within SeleniumBase to ensure continued support for network interception?
Is it advisable to continue using the current setup, or would you recommend alternative approaches for network interception within SeleniumBase?
Any insights on future plans for handling network intercepts would be greatly appreciated. | closed | 2024-11-06T09:14:29Z | 2024-12-23T21:45:13Z | https://github.com/seleniumbase/SeleniumBase/issues/3246 | [
"question",
"workaround exists",
"feature or fix already exists"
] | pradeepvegesna | 3 |
slackapi/python-slack-sdk | asyncio | 1,569 | Is it possible to rename Canvas with Python Slack SDK? | Hey, we are using `slack_sdk==3.33.1`
And we would like to periodically rename our canvases (this is a sort-of-workaround for sticky headers in Slack chats :D )
I've found that you provide the `canvases.edit` API (https://api.slack.com/methods/canvases.edit). But it doesn't allow us to change the name of the canvas, as it has just `document_content` inside its `changes` parameter. Should we use `https://api.slack.com/methods/canvases.create` instead and recreate old canvases with new names, or is there a cleaner solution?
"question",
"web-client",
"Version: 3x"
] | simonasmulevicius-humbility | 1 |
iterative/dvc | data-science | 10,185 | push: make sure source files exist before applying changes | In the cloud versioning case, when we delete our cache and workspace data and try to `dvc push`, it thinks it needs to modify/relink a file; to do that it deletes the file, but then fails to create a new one because it turns out there is no source for the data.
We need to add a step that will filter out missing data (probably somewhere even before `index.checkout.compare()`) from a desired index, so that we don't even think that we can create those missing files.
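A minimal sketch of that filtering step (hypothetical names, not the actual dvc-data API): drop entries whose source is missing before computing the changes to apply.

```python
# Hypothetical shape of the pre-checkout filtering step: keep only entries
# in the desired index whose source file actually exists.
def filter_missing_sources(desired_index, source_exists):
    return {path: meta for path, meta in desired_index.items() if source_exists(path)}

desired = {"data/a.csv": "md5:aaa", "data/b.csv": "md5:bbb"}
present = {"data/a.csv"}
filtered = filter_missing_sources(desired, lambda p: p in present)
assert filtered == {"data/a.csv": "md5:aaa"}  # b.csv is never "relinked" into nothing
```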
One of our users already ran into this.
This also applies to `dvc checkout` and possibly other operations (don't think so, but need to check). | closed | 2023-12-20T04:26:21Z | 2023-12-21T05:25:24Z | https://github.com/iterative/dvc/issues/10185 | [
"bug"
] | efiop | 1 |
vaexio/vaex | data-science | 1,585 | [BUG-REPORT] Is conda package up to date? | Hi,
when installing vaex using anaconda
`conda install -c conda-forge vaex`
vaex-core 2.0.3 gets installed.
I am fighting with conda remove/install to remove it (OK) and install the latest version that seems to be available on conda-forge (4.5.1), but so far my installation is not 'converging':
```bash
conda install -c conda-forge vaex-core=4.5.1
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: -
```
and the last row runs for hours...
I saw a post in which @maartenbreddels advises using mamba. I will give it a try. | closed | 2021-09-23T19:32:23Z | 2021-09-24T19:25:24Z | https://github.com/vaexio/vaex/issues/1585 | [] | yohplala | 7 |
scikit-learn/scikit-learn | data-science | 31,051 | `PandasAdapter` causes crash or misattributed features | ### Describe the bug
If all the following hold
- Using ColumnTransformer with the output container set to pandas
- At least one transformer transforms 1D inputs to 2D outputs (like [DictVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.DictVectorizer.html))
- At least one transformer transforms 2D inputs to 2D outputs (like [FunctionTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.FunctionTransformer.html))
- The input is a pandas DataFrame with non-default index
then fit/transform with the ColumnTransformer crashes because of index misalignment, or (in pathological situations) **permutes the outputs of some feature transforms, so that the first data point ends up with some features from the first data point and some from the second data point**.
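The mechanics can be shown with pandas alone (a sketch assuming that the wrapper gives an ndarray-producing transformer's output a fresh `RangeIndex`, while a passthrough-style transformer keeps the input DataFrame's index):

```python
import pandas as pd

# Output of an ndarray-producing transformer (e.g. DictVectorizer), wrapped
# with a default RangeIndex:
vec_out = pd.DataFrame({"foo": [1.0, 3.0]})                  # index [0, 1]
# Output of a transformer that preserves the input's index:
passthru = pd.DataFrame({"dummy_col": [1, 2]}, index=[1, 0])

out = pd.concat([vec_out, passthru], axis=1)                 # aligns on labels
assert list(out["dummy_col"]) == [2, 1]                      # silently permuted

# With index [1, 2], the index union {0, 1, 2} yields 3 rows, which is the
# "inconsistent number of samples" ValueError path in _hstack.
passthru2 = pd.DataFrame({"dummy_col": [1, 2]}, index=[1, 2])
assert pd.concat([vec_out, passthru2], axis=1).shape[0] == 3
```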
### Steps/Code to Reproduce
```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import FunctionTransformer
df = pd.DataFrame({
    'dict_col': [{'foo': 1, 'bar': 2}, {'foo': 3, 'baz': 1}],
    'dummy_col': [1, 2]
}, index=[1, 2])  # replace with [1, 0] for pathological example

t = make_column_transformer(
    (DictVectorizer(sparse=False), 'dict_col'),
    (FunctionTransformer(), ['dummy_col']),
)
t.set_output(transform='pandas')
t.fit_transform(df)
```
### Expected Results
The following features dataframe:
||dictvectorizer__bar|dictvectorizer__baz|dictvectorizer__foo|functiontransformer__dummy_col|
|---|---|---|---|---|
|0|2|0|1|1|
|1|0|1|3|2|
### Actual Results
A crash:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[3], line 17
11 t = make_column_transformer(
12 (DictVectorizer(sparse=False), 'dict_col'),
13 (FunctionTransformer(), ['dummy_col']),
14 )
15 t.set_output(transform='pandas')
---> 17 t.fit_transform(df)
File [~\Documents\Code\Python\Scratchwork\pandas_adapter\Lib\site-packages\sklearn\utils\_set_output.py:319](http://localhost:8888/lab/tree/~/Documents/Code/Python/Scratchwork/pandas_adapter/Lib/site-packages/sklearn/utils/_set_output.py#line=318), in _wrap_method_output.<locals>.wrapped(self, X, *args, **kwargs)
317 @wraps(f)
318 def wrapped(self, X, *args, **kwargs):
--> 319 data_to_wrap = f(self, X, *args, **kwargs)
320 if isinstance(data_to_wrap, tuple):
321 # only wrap the first output for cross decomposition
322 return_tuple = (
323 _wrap_data_with_container(method, data_to_wrap[0], X, self),
324 *data_to_wrap[1:],
325 )
File [~\Documents\Code\Python\Scratchwork\pandas_adapter\Lib\site-packages\sklearn\base.py:1389](http://localhost:8888/lab/tree/~/Documents/Code/Python/Scratchwork/pandas_adapter/Lib/site-packages/sklearn/base.py#line=1388), in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs)
1382 estimator._validate_params()
1384 with config_context(
1385 skip_parameter_validation=(
1386 prefer_skip_nested_validation or global_skip_validation
1387 )
1388 ):
-> 1389 return fit_method(estimator, *args, **kwargs)
File [~\Documents\Code\Python\Scratchwork\pandas_adapter\Lib\site-packages\sklearn\compose\_column_transformer.py:1031](http://localhost:8888/lab/tree/~/Documents/Code/Python/Scratchwork/pandas_adapter/Lib/site-packages/sklearn/compose/_column_transformer.py#line=1030), in ColumnTransformer.fit_transform(self, X, y, **params)
1028 self._validate_output(Xs)
1029 self._record_output_indices(Xs)
-> 1031 return self._hstack(list(Xs), n_samples=n_samples)
File [~\Documents\Code\Python\Scratchwork\pandas_adapter\Lib\site-packages\sklearn\compose\_column_transformer.py:1215](http://localhost:8888/lab/tree/~/Documents/Code/Python/Scratchwork/pandas_adapter/Lib/site-packages/sklearn/compose/_column_transformer.py#line=1214), in ColumnTransformer._hstack(self, Xs, n_samples)
1213 output_samples = output.shape[0]
1214 if output_samples != n_samples:
-> 1215 raise ValueError(
1216 "Concatenating DataFrames from the transformer's output lead to"
1217 " an inconsistent number of samples. The output may have Pandas"
1218 " Indexes that do not match, or that transformers are returning"
1219 " number of samples which are not the same as the number input"
1220 " samples."
1221 )
1223 return output
1225 return np.hstack(Xs)
ValueError: Concatenating DataFrames from the transformer's output lead to an inconsistent number of samples. The output may have Pandas Indexes that do not match, or that transformers are returning number of samples which are not the same as the number input samples.
```
Or the following for the pathological example (note the two entries in functiontransformer__dummy_col are in the wrong order):
||dictvectorizer__bar|dictvectorizer__baz|dictvectorizer__foo|functiontransformer__dummy_col|
|---|---|---|---|---|
|0|2|0|1|2|
|1|0|1|3|1|
### Versions
```shell
System:
python: 3.12.6 (tags/v3.12.6:a4a2d2b, Sep 6 2024, 20:11:23) [MSC v.1940 64 bit (AMD64)]
executable: C:\Users\user\Documents\Code\Python\Scratchwork\pandas_adapter\Scripts\python.exe
machine: Windows-11-10.0.26100-SP0
Python dependencies:
sklearn: 1.6.1
pip: 24.2
setuptools: 77.0.3
numpy: 2.2.4
scipy: 1.15.2
Cython: None
pandas: 2.2.3
matplotlib: None
joblib: 1.4.2
threadpoolctl: 3.6.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 12
prefix: libscipy_openblas
filepath: C:\Users\user\Documents\Code\Python\Scratchwork\pandas_adapter\Lib\site-packages\numpy.libs\libscipy_openblas64_-43e11ff0749b8cbe0a615c9cf6737e0e.dll
version: 0.3.28
threading_layer: pthreads
architecture: Haswell
user_api: openmp
internal_api: openmp
num_threads: 12
prefix: vcomp
filepath: C:\Users\user\Documents\Code\Python\Scratchwork\pandas_adapter\Lib\site-packages\sklearn\.libs\vcomp140.dll
version: None
user_api: blas
internal_api: openblas
num_threads: 12
prefix: libscipy_openblas
filepath: C:\Users\user\Documents\Code\Python\Scratchwork\pandas_adapter\Lib\site-packages\scipy.libs\libscipy_openblas-f07f5a5d207a3a47104dca54d6d0c86a.dll
version: 0.3.28
threading_layer: pthreads
architecture: Haswell
``` | open | 2025-03-21T22:43:24Z | 2025-03-22T19:32:41Z | https://github.com/scikit-learn/scikit-learn/issues/31051 | [
"Bug",
"Needs Triage"
] | nicolas-bolle | 2 |
SciTools/cartopy | matplotlib | 2,011 | requesting a re-draw "on limits change" causes infinite loop | ### Description
I've been working on [EOmaps](https://github.com/raphaelquast/EOmaps) (an interactive layer on top of cartopy I'm developing) and I've noticed that requesting a re-draw of the Agg buffer if the axis-limits change causes an infinite loop for cartopy axes!
Somehow the re-draw triggers a strange cascade of limits changes...
(e.g. for PlateCarree the limits switch between: (-198.0, 198.0) ⬌ (-180.0, 180.0) )
This does NOT happen with ordinary matplotlib axes!
#### Code to reproduce
```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
f = plt.figure()
ax = f.add_subplot(projection=ccrs.PlateCarree())
def xlims_change(ax, *args, **kwargs):
print("xlims changed", ax.get_xlim())
f.canvas.draw_idle()
ax.callbacks.connect("xlim_changed", xlims_change)
```
```python
>>> ...
>>> ylims changed (-198.0, 198.0)
>>> ylims changed (-180.0, 180.0)
>>> ylims changed (-198.0, 198.0)
>>> ylims changed (-180.0, 180.0)
>>> ylims changed (-198.0, 198.0)
>>> ...
```
Note that this works just fine with ordinary axes!
```python
f = plt.figure()
ax = f.add_subplot()
def xlims_change(*args, **kwargs):
print("xlims changed", ax.get_xlim())
f.canvas.draw_idle()
ax.callbacks.connect("xlim_changed", xlims_change)
```
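Until this is fixed, one possible client-side mitigation (a sketch of my own, not a cartopy API) is to drop callback invocations that fire while the callback itself is still running, which breaks the `xlim_changed` -> `draw_idle` -> `xlim_changed` cycle:

```python
def reentrancy_guard(fn):
    # Ignore calls that arrive while fn is already executing.
    state = {"busy": False}

    def wrapper(*args, **kwargs):
        if state["busy"]:
            return None  # drop the re-entrant invocation
        state["busy"] = True
        try:
            return fn(*args, **kwargs)
        finally:
            state["busy"] = False

    return wrapper
```

Wrapping `xlims_change` with this guard should stop the endless redraw ping-pong, though the underlying limit oscillation in cartopy remains.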
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Windows
### Cartopy version
0.20.2
</details>
| open | 2022-03-09T12:04:47Z | 2022-03-15T14:31:40Z | https://github.com/SciTools/cartopy/issues/2011 | [] | raphaelquast | 4 |
google-research/bert | nlp | 848 | How to run bert classifier pb file | Hi,
I am trying to run the BERT classifier. I have fine-tuned the model on my data, and I have also converted the ckpt files to pb files. Now I want to run predictions using these pb files. Could anyone guide me on how to do it? What changes do I have to make in the code to load the pb files instead of the ckpt files? Any help would be great. | closed | 2019-09-08T21:34:56Z | 2020-02-17T03:06:18Z | https://github.com/google-research/bert/issues/848 | [] | anmol4210 | 1 |
aimhubio/aim | data-visualization | 3,117 | PendingRollbackError in Rapid Interaction Logging with aim.run | ## ❓Question
I'm integrating aim.run for logging data into my database for a chatbot project, aiming to capture every user interaction. However, I encounter an issue where the speed of interactions outpaces the logging process's ability to store data and close properly. If I initiate a new logging run before the previous one has completed, I run into a `PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush.` How can I resolve this issue to ensure smooth logging of rapid interactions? | open | 2024-03-13T21:00:06Z | 2024-03-13T21:00:06Z | https://github.com/aimhubio/aim/issues/3117 | [
"type / question"
] | CSharma02 | 0 |
kizniche/Mycodo | automation | 407 | Mux initialization | Suggestion.
Change Mux initialization to the method discussed in this thread:
https://www.raspberrypi.org/forums/viewtopic.php?f=44&t=141517
Way easier.
Take it ez.
The tin man
| closed | 2018-02-14T10:35:58Z | 2018-02-16T19:55:47Z | https://github.com/kizniche/Mycodo/issues/407 | [] | The-tin-man | 3 |
litestar-org/litestar | asyncio | 3,887 | Bug: 405: Method Not Allowed when using Websockets with Litestar and Nginx Unit | ### Description
I believe there may be an issue with how Litestar handles WebSocket connections coming from a client app hosted with Nginx Unit.
This problem does not happen with Uvicorn, only Nginx Unit.
From my TypeScript React app I initiate the websocket connection:
```typescript
const ws = new WebSocket(
`${import.meta.env.VITE_WEBSOCKET_URL}/test?token=myauthtoken`,
);
```
and receive this error from litestar:
```
/site-packages/litestar/_asgi/routing_trie/traversal.py", line 174, in parse_path_to_route
raise MethodNotAllowedException() from e
litestar.exceptions.http_exceptions.MethodNotAllowedException: 405: Method Not Allowed
```
I traced the issue back to this function in that same file:
```python
def parse_node_handlers(
node: RouteTrieNode,
method: Method | None,
) -> ASGIHandlerTuple:
"""Retrieve the handler tuple from the node.
Args:
node: The trie node to parse.
method: The scope's method.
Raises:
KeyError: If no matching method is found.
Returns:
An ASGI Handler tuple.
"""
if node.is_asgi:
return node.asgi_handlers["asgi"]
if method:
return node.asgi_handlers[method]
return node.asgi_handlers["websocket"]
```
When I watch the `method` parameter on the incoming websocket connection while using Uvicorn, the method is `None` and everything works as expected.
When using Nginx Unit, the method is `"GET"`, so Litestar tries to handle it like an HTTP connection rather than a websocket one, and you get the above error.
If I then modify
```python
if method:
```
to
```python
if method and not node.asgi_handlers.get("websocket"):
```
I get past the "method not allowed" error but then I get
```
/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 232, in handle_websocket_exception
await send(event)
RuntimeError: WebSocket connect not received
```
I then took a look at the function from the first error:
```python
def parse_path_to_route(
method: Method | None,
mount_paths_regex: Pattern | None,
mount_routes: dict[str, RouteTrieNode],
path: str,
plain_routes: set[str],
root_node: RouteTrieNode,
) -> tuple[ASGIApp, RouteHandlerType, str, dict[str, Any], str]:
"""Given a scope object, retrieve the asgi_handlers and is_mount boolean values from correct trie node.
Args:
method: The scope's method, if any.
root_node: The root trie node.
path: The path to resolve scope instance.
plain_routes: The set of plain routes.
mount_routes: Mapping of mount routes to trie nodes.
mount_paths_regex: A compiled regex to match the mount routes.
Raises:
MethodNotAllowedException: if no matching method is found.
NotFoundException: If no correlating node is found or if path params can not be parsed into values according to the node definition.
Returns:
A tuple containing the stack of middlewares and the route handler that is wrapped by it.
"""
try:
if path in plain_routes:
asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)
return asgi_app, handler, path, {}, path
if mount_paths_regex and (match := mount_paths_regex.match(path)):
mount_path = path[: match.end()]
mount_node = mount_routes[mount_path]
remaining_path = path[match.end() :]
# since we allow regular handlers under static paths, we must validate that the request does not match
# any such handler.
children = (
normalize_path(sub_route)
for sub_route in mount_node.children or []
if sub_route != mount_path and isinstance(sub_route, str)
)
if not any(remaining_path.startswith(f"{sub_route}/") for sub_route in children):
asgi_app, handler = parse_node_handlers(node=mount_node, method=method)
remaining_path = remaining_path or "/"
if not mount_node.is_static:
remaining_path = remaining_path if remaining_path.endswith("/") else f"{remaining_path}/"
return asgi_app, handler, remaining_path, {}, root_node.path_template
node, path_parameters, path = traverse_route_map(
root_node=root_node,
path=path,
)
asgi_app, handler = parse_node_handlers(node=node, method=method)
key = method or ("asgi" if node.is_asgi else "websocket")
parsed_path_parameters = parse_path_params(node.path_parameters[key], tuple(path_parameters))
return (
asgi_app,
handler,
path,
parsed_path_parameters,
node.path_template,
)
except KeyError as e:
raise MethodNotAllowedException() from e
except ValueError as e:
raise NotFoundException() from e
```
I then modified
```python
key = method or ("asgi" if node.is_asgi else "websocket")
```
to
```python
key = method if method and not node.asgi_handlers.get("websocket") else ("asgi" if node.is_asgi else "websocket")
```
and now everything works as expected.
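Distilled into a standalone function, the lookup behavior after my two modifications amounts to something like this (a simplification for illustration, not the actual trie code):

```python
def resolve_handler_key(method, handlers, is_asgi=False):
    # Prefer a registered websocket handler even when the upgrade request
    # carries GET, since RFC 6455 requires the upgrade method to be GET.
    if is_asgi:
        return "asgi"
    if method is not None and "websocket" not in handlers:
        return method
    return "websocket"
```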
The reason I believe this may be a bug is the way it determines whether it's a websocket connection in the `parse_node_handlers` function.
When I check the [websocket spec](https://datatracker.ietf.org/doc/html/rfc6455), page 17, point 2 it says
>"The method of the request MUST be **GET**, and the HTTP version MUST be at least 1.1."
So I **think** the method coming through as "GET" on the websocket connection from Nginx Unit is normal behavior and the method being "None" from Uvicorn is abnormal.
Unfortunately, it seems like Litestar currently relies on the method being `None` to handle the websocket connection, which breaks websockets for servers following that spec.
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
```bash
/site-packages/litestar/_asgi/routing_trie/traversal.py", line 174, in parse_path_to_route
raise MethodNotAllowedException() from e
litestar.exceptions.http_exceptions.MethodNotAllowedException: 405: Method Not Allowed
/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 232, in handle_websocket_exception
await send(event)
RuntimeError: WebSocket connect not received
```
### Litestar Version
2.13.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-12-05T18:56:13Z | 2025-03-20T15:55:02Z | https://github.com/litestar-org/litestar/issues/3887 | [
"Upstream"
] | FixFlare | 12 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 339 | Problem with demo_cli.py | C:\Users\Test\Desktop\Real-Time-Voice-Cloning-master>python demo_cli.py --low_mem
```
Traceback (most recent call last):
  File "demo_cli.py", line 2, in <module>
    from utils.argutils import print_args
  File "C:\Users\Test\Desktop\Real-Time-Voice-Cloning-master\utils\argutils.py", line 2, in <module>
    import numpy as np
ModuleNotFoundError: No module named 'numpy'
```
What am I doing wrong? | closed | 2020-05-09T18:24:25Z | 2020-08-24T04:06:28Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/339 | [] | NumpyG | 3 |
pytest-dev/pytest-html | pytest | 432 | Migrate JavaScript test framework to a more popular alternative | We currently use [QUnit](https://qunitjs.com/) for unit testing our JavaScript.
I propose that we move to a more modern testing framework (the top five listed [here](https://codeburst.io/top-5-javascript-testing-frameworks-to-choose-out-of-in-2020-6903ae94461e) seem to pop up a bit).
I would like to add that any library we migrate to should have good support for testing URL parameters. `QUnit` forces the definition of one test script per URL, which I think isn't ideal since it's pretty cumbersome: https://stackoverflow.com/questions/27388596/pass-url-parameter-to-qunit-test-using-grunt
As a follow up, tests should be written for **all** URL parameters, since we currently have 0 tests for them.
@BeyondEvil maybe this is something we could tackle as part of the next gen migration? | open | 2020-12-20T06:03:00Z | 2020-12-20T23:51:44Z | https://github.com/pytest-dev/pytest-html/issues/432 | [
"test",
"code quality",
"next-gen"
] | gnikonorov | 1 |
pyro-ppl/numpyro | numpy | 1,540 | Memory leak JAXNS sampling | Hello,
My swap memory goes up to 30 GB when running the following code.
I'll summarize it here for the sake of length.
I have N models (causal Bayesian networks) with K variables each. Each model is of the form:
(I have code which generates an arbitrary number of models; this one is written by hand for the sake of the example)
```python
def patient_1():
    disease = numpyro.sample("disease", dist.Bernoulli(0.3))
    treatment = numpyro.sample("treatment", dist.Bernoulli(0.6))
    list_variables = [disease, treatment]
    list_variables_no_treatment = [disease]
    bloodpressure = numpyro.sample("bloodpressure", dist.Normal(non_linear_fn(jnp.array(random.sample(list_variables_no_treatment, np.random.randint(0, 1))), np.random.randint(0, 1) * treatment), 1))
    list_variables.append(bloodpressure)
    list_variables_no_treatment.append(bloodpressure)
    weight = numpyro.sample("weight", dist.Normal(non_linear_fn(jnp.array(random.sample(list_variables_no_treatment, np.random.randint(0, 2))), np.random.randint(0, 1) * treatment), 1))
    list_variables.append(weight)
    list_variables_no_treatment.append(weight)
    heartattack = numpyro.sample("heartattack", dist.Normal(non_linear_fn(jnp.array(random.sample(list_variables_no_treatment, np.random.randint(1, 3))), np.random.randint(0, 1) * treatment), 1))
    list_variables.append(heartattack)
    list_variables_no_treatment.append(heartattack)
    variables = []
    list_variables = [disease, treatment, bloodpressure, weight, heartattack]
    for i in range(5, k):
        vi = numpyro.sample('variable' + str(i), dist.Normal(non_linear_fn(jnp.array(random.sample(list_variables, np.random.randint(0, i))), np.random.randint(0, 1) * treatment), 1))
        variables.append(vi)
        list_variables.append(vi)
    vi = numpyro.sample('variable' + str(k), dist.Normal(non_linear_fn(jnp.array(random.sample(list_variables, np.random.randint(0, k))), treatment), 1))
    variables.append(vi)
    list_variables.append(vi)
    return list_variables
```
And I sample from it using:
```python
kernel_1 = NestedSampler(patient_1)
kernel_1.run(jrn.PRNGKey(1149343))
```
And everything works well, but it consumes my 64 GB of installed RAM plus 30 GB of swap memory.
Any ideas why?
Thanks. | closed | 2023-02-24T14:52:34Z | 2023-03-05T21:25:14Z | https://github.com/pyro-ppl/numpyro/issues/1540 | [
"question"
] | MauricioGS99 | 1 |
indico/indico | flask | 6,520 | Preview to emails from contributions list does not respect roles | **Describe the bug**
Previewing an email shows wrong addresses.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Contribution list, select contributions
2. Click "email"
3. select role "Speaker"
4. Click "Preview"
5. The "Dear {first_name}" may be showing a co-author, which is wrong
6. Click Submit -> the previewed email is not actually sent
**Expected behavior**
Preview should show an example email that will actually be sent (in the example above, to a speaker). The role selection should be respected.
| open | 2024-09-02T13:28:34Z | 2024-09-23T09:39:52Z | https://github.com/indico/indico/issues/6520 | [
"bug"
] | JohannesBuchner | 4 |
aiortc/aiortc | asyncio | 270 | Transport closes in setRemoteDescription (renegotiation) if media and datachannel are bundled | Simply that.
Create a bundled session with media and a datachannel, then renegotiate: the transport is closed and never reconnected. | closed | 2020-02-18T12:37:25Z | 2020-02-18T19:23:30Z | https://github.com/aiortc/aiortc/issues/270 | [] | jmillan | 1 |
mwaskom/seaborn | data-visualization | 3,663 | objects.Norm: Normalize among a group? | # The problem
I am trying to achieve an effect where the values within a group are normalized. Consider:
```python
# Make some sample data
import pandas as pd
import seaborn.objects as so
data = [
# 2020
{'year': "2020", 'category': 'happy'},
{'year': "2020", 'category': 'happy'},
{'year': "2020", 'category': 'sad'},
# 2021
{'year': "2021", 'category': 'happy'},
{'year': "2021", 'category': 'happy'},
{'year': "2021", 'category': 'happy'},
{'year': "2021", 'category': 'sad'},
# 2022
{'year': "2022", 'category': 'happy'},
{'year': "2022", 'category': 'happy'},
{'year': "2022", 'category': 'mad'},
]
df = pd.DataFrame(data)
```
With the current interface we can create something like
```
(
so.Plot(df, x='year', color='category')
.add(so.Line(), so.Count(), so.Norm())
).show()
```

However, I actually want something like
```
fracs = (
df.groupby('year')
.apply(lambda x: x['category'].value_counts(normalize=True))
.unstack()
.fillna(0)
)
fracs.plot()
plt.xlabel('Year')
plt.ylabel('Fraction of Category within Year')
plt.show()
```

# Question
As far as I can tell there isn't a way to create this style of plot. Am I missing something in `so.Norm` that enables this? (I found the documentation of the `so.Norm` arguments somewhat confusing.) There may also be some other kind of functionality/plot already in Seaborn that enables this kind of analysis without using `so.Norm`?
Otherwise, is there interest in adding something like a `within_groups` arg (or another name; not sure what makes the most sense) to `so.Norm` to enable this?
| open | 2024-03-25T21:14:28Z | 2024-03-26T00:22:06Z | https://github.com/mwaskom/seaborn/issues/3663 | [] | DNGros | 4 |
replicate/cog | tensorflow | 1,564 | Disable mirroring input in output | Hi, I noticed that when using the http API, the contents of "input" in the request get echoed back in the response. I'm working on a model that sends a few MB of string data, and I would prefer not to double my network traffic. Is it possible to disable this? (Maybe in cog.yaml? Or as an option in the request?)
From a design perspective, I feel that echoing the input is a little redundant when users are making synchronous requests (they already have the data they just sent). For asynchronous requests, that may not be the case (they may already have trashed their inputs).
# Example
Here's a minimal example to show what I mean.
## Setup
`predict.py`
```python
from cog import BasePredictor, Input
class Predictor(BasePredictor):
def predict(self,value: str = Input(description="The input")) -> int:
return 42
```
`cog.yaml`
```yaml
build:
gpu: false
python_version: "3.11"
predict: "predict.py:Predictor"
```
building and running
```bash
cog build -t stringizer
docker run -d -p 5000:5000 stringizer
```
## Result
I ran the following command:
```
curl -X 'POST' 'http://localhost:5000/predictions' \
-H 'accept: application/json' -H 'Content-Type: application/json' \
-d '{ "input": { "value": "possibly a very long string" } }'
```
### Expected
From my initial reading of the [http docs](https://github.com/replicate/cog/blob/main/docs/http.md#post-predictions-synchronous), I expected to get this response:
```
{"output":42,"error":null,"status":"succeeded"}
```
### Actual
The actual response was:
```
{"input":{"value":"possibly a very long string"},"output":42,"id":null,"version":null,"created_at":null,"started_at":"2024-03-06T19:54:00.541667+00:00","completed_at":"2024-03-06T19:54:00.542524+00:00","logs":"","error":null,"status":"succeeded","metrics":{"predict_time":0.000857},"webhook":null,"webhook_events_filter":["start","output","logs","completed"],"output_file_prefix":null}
```
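A possible client-side workaround is to drop the echoed fields before storing the response (a sketch; note this does not reduce network traffic, which is why a server-side option would still be needed):

```python
import json

def strip_echo(response_text, keep=("output", "error", "status")):
    # Keep only the fields the caller actually needs.
    payload = json.loads(response_text)
    return {k: payload.get(k) for k in keep}
```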
| open | 2024-03-06T20:42:37Z | 2024-03-06T20:42:37Z | https://github.com/replicate/cog/issues/1564 | [] | djkeyes | 0 |
gee-community/geemap | streamlit | 1,863 | Inconsistent use of gdal | Here https://github.com/gee-community/geemap/blob/9c990baa21d62337ead2c6a6fbd58e0a74038a6f/geemap/common.py#L10794
```python
from osgeo import gdal
import rasterio
# ... and suppress errors
gdal.PushErrorHandler("CPLQuietErrorHandler")
```
Whereas here https://github.com/gee-community/geemap/blob/9c990baa21d62337ead2c6a6fbd58e0a74038a6f/geemap/common.py#L15128
```python
try:
from osgeo import gdal, osr
except ImportError:
raise ImportError("GDAL is not installed. Install it with pip install GDAL")
# [SNIP]
gdal.UseExceptions()
```
1. One of the imports is wrapped in a try, but the others are not. I suggest dropping the try/except wrapping.
1. The `UseExceptions` call is great, as that will become the default behavior of GDAL in the future. I suggest always doing it right after the imports whenever there is a `from osgeo import`.
1. The `gdal.PushErrorHandler` calls are missing the matching `gdal.PopErrorHandler()`. You are welcome to use this context manager if it makes things easier. https://github.com/schwehr/gdal-autotest2/blob/577974592837fdfc18fd90b338cb657f1f2332bd/python/gcore/gcore_util.py#L50
```python
@contextlib.contextmanager
def ErrorHandler(error_name):
handler = gdal.PushErrorHandler(error_name)
try:
yield handler
finally:
gdal.PopErrorHandler()
```
| closed | 2023-12-31T17:42:29Z | 2024-06-15T04:59:34Z | https://github.com/gee-community/geemap/issues/1863 | [
"bug"
] | schwehr | 1 |
activeloopai/deeplake | data-science | 2,369 | [FEATURE] Allow users to disable tiling when encoding images | ## 🚨🚨 Feature Request
- [ ] Related to an existing [Issue](../issues)
- [x] A new implementation (Improvement, Extension)
### Is your feature request related to a problem?
Nope, not for deeplake users!
### If your feature will improve `deeplake`
Being able to "disable" the tiling of images when very large arrays are used (e.g. satellite imagery) would be a nice feature to have. I am currently building my own compression/decompression pipeline which crops and downscales/upscales images as they are loaded, and the fact that tiled images cannot be converted to `bytes` directly is quite limiting. The library currently throws:
```
NotImplementedError: `tobytes=True` is not supported by tiled samples as it can cause recompression.
```
...whenever a "bytes" array is requested out of a tensor dataset for large images.
### Description of the possible solution
I would assume that adding an `allow_tiling: bool = True` option to the `deeplake.core.dataset.Dataset.create_tensor` function would be the best way to expose this setting to end users. Defaulting it to `True` makes sense (it keeps the current behavior intact by default). Setting it to `False` should simply skip tiling and help me on my way to creating amazingly fast/flexible loaders for big image arrays.
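To illustrate the intended semantics (names are suggestions only, not existing deeplake API), the decision would roughly be:

```python
def should_tile(sample_nbytes, max_chunk_size, allow_tiling=True):
    # Tile only when the sample exceeds the chunk budget AND the tensor
    # permits tiling; allow_tiling=False forces single-blob storage.
    return allow_tiling and sample_nbytes > max_chunk_size
```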
**Teachability, Documentation, Adoption, Migration Strategy**
The docs already do not mention anything about tiling, so I was quite surprised to see that it's used under the hood. I assume it's to speed up access to smaller image regions in such datasets, but this is limiting when "bytes" objects are expected instead of the numpy arrays. Simply changing the signature of `deeplake.core.dataset.Dataset.create_tensor` by adding a new parameter would probably not confuse users, and it would probably be the main place to look when someone's thinking about disabling tiling.
| closed | 2023-05-22T19:08:42Z | 2023-05-23T14:44:08Z | https://github.com/activeloopai/deeplake/issues/2369 | [
"enhancement"
] | plstcharles | 4 |
scanapi/scanapi | rest-api | 380 | Remove Spectrum Badge from README | ## Description
Remove Spectrum Badge from README. We will not use Spectrum anymore. See #376. | closed | 2021-05-27T19:45:22Z | 2021-05-27T19:49:46Z | https://github.com/scanapi/scanapi/issues/380 | [
"Documentation"
] | camilamaia | 0 |
mars-project/mars | pandas | 3,054 | [RFC] Make mapper output blocks consistent with num_reducers | # Background
When building chunk graph, Mars chunks won't be added to graph if some chunks not used by downstream, because mars build graph back to first. This will make shuffle optimization tricky because some reducer may be lost. For example, we may want to build map-reducer dag dynamiclly based on reducer ordinal. After we get mapper shuffle block refs, we can pass it to coresponding reducer based on reducer ordinal. If some reducers are not contained, we won't be able to know which blocks should be passed which reducer.
Mars `describe` is an exmaple. It use `PSRS` algorithm, which has a shuffle stage named `PSRSAlign`. Sometimes some reducer outputs for this shuffle will not be used, and the resulting chunk graph will has reducer chunks less than mapper blocks.
In the following exmaple, we have three `PSRSAlign` mappers each produce three shuffle blocks, but we only got two reducers of three in the final chunk graph. We need a way to make mapper blocks consistent with num_reducers
```
def test_describe_execution(setup):
s_raw = pd.Series(np.random.rand(10))
# test multi chunks
series = from_pandas_series(s_raw, chunk_size=3)
r = series.describe(percentiles=[])
r.execute().fetch()
```


# Solution 1: Add a whitelist to mapper operands
Since we don't know which reducers are lost, we can't add a blacklist to mappers to filter out their output.
We can only pass a reducer-index whitelist to mapper operands, which will be used to filter shuffle blocks.
This solution makes every mapper record a lot of metadata, which will make the supervisor the bottleneck for large-scale tasks.
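A toy illustration of Solution 1 (pure illustration, not Mars code): the mapper always partitions into `n_reducers` blocks and then keeps only the whitelisted reducer indices:

```python
def map_shuffle(items, n_reducers, reducer_whitelist=None):
    # Partition integer items into n_reducers blocks by modulo.
    blocks = {i: [] for i in range(n_reducers)}
    for item in items:
        blocks[item % n_reducers].append(item)
    if reducer_whitelist is not None:
        blocks = {i: b for i, b in blocks.items() if i in reducer_whitelist}
    return blocks
```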
# Solution 2: Add `n_reducers` and `reducer_ordinal` to `MapReducerOperand`
If we add `n_reducers` and `reducer_ordinal` when creating a `MapReducerOperand`, it will no longer be necessary to get them from the chunk graph.
The problem with this is that we would need to do it for every shuffle operand.
# Solution 3
Add the ignored reducers to the chunk graph. For most compute tasks, no reducers will be ignored, so the additional scheduling for these unnecessary subtasks is also acceptable.
noirbizarre/flask-restplus | flask | 491 | Uploading multiple files with swagger UI not working | I have an issue uploading several files through the Swagger UI. I have defined the following argument parser for uploading several files:
```python
data_parser = api.parser()
data_parser.add_argument('data',
type=werkzeug.FileStorage,
location="files",
dest='files',
required=False,
action="append")
```
And this works fine with the following curl request:
```bash
curl -X POST "http://0.0.0.0:5000/test" \
-H "accept: application/json" \
-H "Content-Type: multipart/form-data" \
-F "data=@test1" -F "data=@test2"
```
As I am getting the following in my arguments (once I have parsed them):
```
{"files": [<FileStorage: u'test1' ('application/octet-stream')>, <FileStorage: u'test2' ('application/octet-stream')>]}
```
However, if I use the swagger UI, it seems that the curl request it is doing is:
```bash
curl -X POST "http://127.0.0.1:5000/test" \
-H "accept: application/json" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "data=%5Bobject%20File%5D&data=%5Bobject%20File%5D"
```
Therefore my arguments are empty:
```
{"files": None}
```
I am using flask-restplus version 0.11.0. | open | 2018-07-11T10:39:08Z | 2021-02-19T17:10:31Z | https://github.com/noirbizarre/flask-restplus/issues/491 | [] | alvarolopez | 4 |
ultralytics/ultralytics | machine-learning | 19,226 | Why is there no difference between YOLOv8 pt inference time and engine (fp32) inference time? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Inference with the engine model takes the same amount of time as inference with the pt model; how can I fix this? Here is the inference code:
```python
model = YOLO(r"D:\workspace\model_zero\ultralytics-main\runs\detect\nameplate2_small\weights\best.engine", task="detect")  # tensorrt
result = model.predict(np_img)
```
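One generic thing worth checking (a general suggestion, not ultralytics-specific): measure latency with warm-up iterations excluded, since the first calls often dominate and can hide a pt-vs-engine difference:

```python
import time

def avg_latency(fn, warmup=5, iters=20):
    # Run warm-up calls first, then average only the timed iterations.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters
```

For example, `avg_latency(lambda: model.predict(np_img))` for each of the two models.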
### Additional
_No response_ | closed | 2025-02-13T09:12:16Z | 2025-02-14T05:27:59Z | https://github.com/ultralytics/ultralytics/issues/19226 | [
"question",
"detect",
"exports"
] | wuchaotao | 6 |
yuka-friends/Windrecorder | streamlit | 254 | issue with new update? | hello ant!
My Windrecorder stopped recording around 3 AM Eastern (NYC) time yesterday, Monday 12/2/2024.
Just checked now. I rebooted my laptop but no change.
Please let me know if this is on your end as well.
Thank you! | closed | 2024-12-03T05:33:03Z | 2024-12-05T04:31:59Z | https://github.com/yuka-friends/Windrecorder/issues/254 | [] | morningstar41131411811717116112213 | 2 |
deepset-ai/haystack | pytorch | 8,947 | Add MCP Pipeline Wrapper | https://modelcontextprotocol.io/quickstart/server
- Pipeline wrapper similar to what we have in Hayhooks so that the wrapped pipeline follows the MCP standard and can be called as an MCP server | closed | 2025-03-03T08:52:43Z | 2025-03-24T09:15:34Z | https://github.com/deepset-ai/haystack/issues/8947 | [
"P1"
] | julian-risch | 1 |
wagtail/wagtail | django | 12,020 | Content metrics in page editor - word count and reading time | ### Is your proposal related to a problem?
In the [Content quality checkers](https://github.com/wagtail/wagtail/discussions/11063) discussion, a new combined panel was introduced that showcases content metrics along with the accessibility and sustainability check results.
The content metrics that need to be added first are the word count and reading time.
### Describe the solution you'd like
Screenshot of the proposed design:

### Describe alternatives you've considered
See the [Content quality checkers](https://github.com/wagtail/wagtail/discussions/11063) discussion
### Additional context
- use language-aware [Intl.Segmenter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/Segmenter) API for word count
- consider using [Axe plugins](https://github.com/dequelabs/axe-core/blob/develop/doc/plugins.md) for smooth cross-frame communication (support in headless)
- use debounce for the above metrics depending on the computation effort
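A simplified sketch of the two metrics (the actual implementation is planned in the browser with the language-aware `Intl.Segmenter` API; this regex analogue is only illustrative, and the 200 wpm figure is an assumption):

```python
import math
import re

def content_metrics(text, words_per_minute=200):
    # Naive word segmentation; Intl.Segmenter would be language-aware.
    words = re.findall(r"\w+", text, flags=re.UNICODE)
    minutes = max(1, math.ceil(len(words) / words_per_minute)) if words else 0
    return {"word_count": len(words), "reading_time_minutes": minutes}
```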
### Working on this
Assigning this to @albinazs. Feedback is also welcome from everyone.
| closed | 2024-06-06T17:01:39Z | 2024-07-11T13:58:20Z | https://github.com/wagtail/wagtail/issues/12020 | [
"type:Enhancement"
] | albinazs | 1 |
httpie/http-prompt | api | 133 | Enable to use external editor to make request body | Thanks for this awesome software.
I can now make HTTP requests very easily.
But sometimes I have to use an editor and other software to make requests with complicated bodies.
I use a simple script like this;
```sh
#!/usr/bin/zsh
# command name: httpj
TMPFILE=$(mktemp --suffix=.json)
vim $TMPFILE
http "$@" < $TMPFILE
rm $TMPFILE
```

It would be nice if http-prompt launched an external editor with a keyboard shortcut and stored the saved file as the next request's body. | open | 2017-09-20T22:51:35Z | 2017-09-21T02:39:22Z | https://github.com/httpie/http-prompt/issues/133 | [
"enhancement"
] | hamasho | 0 |
modelscope/modelscope | nlp | 1,239 | GRPO training error | # Problem description
When training on multiple GPUs without vLLM, the `infer_rank` computation raises an error; the default value of `num_infer_workers` is 1.
# Error traceback
```
[rank7]: File "/home/tiger/.local/lib/python3.11/site-packages/swift/trainers/rlhf_trainer/grpo_trainer.py", line 48, in on_train_begin
[rank7]: self.trainer._prefetch(state.train_dataloader)
[rank7]: File "/home/tiger/.local/lib/python3.11/site-packages/swift/trainers/rlhf_trainer/grpo_trainer.py", line 364, in _prefetch
[rank7]: if self.infer_rank >= 0:
[rank7]: ^^^^^^^^^^^^^^^
[rank7]: File "/home/tiger/.local/lib/python3.11/site-packages/swift/trainers/rlhf_trainer/grpo_trainer.py", line 253, in infer_rank
[rank7]: assert local_world_size + self.args.num_infer_workers == get_device_count()
[rank7]:
``` | closed | 2025-02-24T08:26:50Z | 2025-02-24T08:38:18Z | https://github.com/modelscope/modelscope/issues/1239 | [] | MrToy | 2 |
babysor/MockingBird | pytorch | 770 | ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. | **A.Summary**
```
pip install -r requirements.txt
```
This problem appears after running the command above.
**B.Env & To Reproduce**
Win 11
**C.Screenshots & Code**
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
spyder 5.2.2 requires pyqtwebengine<5.13, which is not installed.
daal4py 2021.6.0 requires daal==2021.4.0, which is not installed.
spyder 5.2.2 requires pyqt5<5.13, but you have pyqt5 5.15.7 which is incompatible.
conda-repo-cli 1.0.20 requires clyent==1.2.1, but you have clyent 1.2.2 which is incompatible.
conda-repo-cli 1.0.20 requires nbformat==5.4.0, but you have nbformat 5.5.0 which is incompatible.
conda-repo-cli 1.0.20 requires PyYAML==6.0, but you have pyyaml 5.4.1 which is incompatible.
```
<img width="741" alt="bug" src="https://user-images.githubusercontent.com/35703636/198003704-2489b406-83a8-41d5-8d75-b084db04aa10.png">
After the original problem was solved, it turned into the situation shown above.
<img width="627" alt="bug0" src="https://user-images.githubusercontent.com/35703636/198003740-49c58232-0ca1-483e-b2fa-377595a87a8b.png">
| closed | 2022-10-26T10:31:57Z | 2023-07-07T03:19:12Z | https://github.com/babysor/MockingBird/issues/770 | [] | alchemypunk | 0 |
pytest-dev/pytest-selenium | pytest | 330 | INTERNALERROR pytest_selenium.py", line 358, in format_log | INTERNALERROR> File "C:\agents\_work\10\s\venv\Lib\site-packages\pytest_selenium\pytest_selenium.py", line 358, in format_log
INTERNALERROR> datetime.utcfromtimestamp(entry["timestamp"] / 1000.0).strftime(
INTERNALERROR> ~~~~~^^^^^^^^^^^^^
INTERNALERROR> TypeError: string indices must be integers, not 'str' | closed | 2024-12-26T05:55:12Z | 2025-01-31T10:04:49Z | https://github.com/pytest-dev/pytest-selenium/issues/330 | [
"bug",
"help wanted",
"waiting on input"
] | anjanikv | 4 |
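The failing line indexes `entry["timestamp"]`, so the TypeError suggests some browser log entries arrive as plain strings rather than dicts. A defensive sketch of that idea (illustrative only; `format_entry` is my name, not pytest-selenium's actual function):

```python
from datetime import datetime, timezone

def format_entry(entry):
    """Format one browser log entry, tolerating entries that are plain
    strings instead of {"timestamp": ..., "message": ...} dicts."""
    if isinstance(entry, dict) and "timestamp" in entry:
        ts = datetime.fromtimestamp(entry["timestamp"] / 1000.0, tz=timezone.utc)
        return f"{ts:%H:%M:%S.%f} {entry.get('message', '')}"
    return str(entry)  # fall back instead of raising TypeError

assert format_entry("a bare string entry") == "a bare string entry"
assert format_entry({"timestamp": 0, "message": "hello"}).endswith("hello")
```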
jazzband/django-oauth-toolkit | django | 1,305 | JSON endpoint for auth URL | I would like to render the authorization page on the front end rather than via a Django template. The reason is that I want to pull other information into that page that should logically come from other endpoints rather than being added to the template scope.
Anyone doing anything like this currently? I'd prefer not to rewrite the AuthorizationView completely.
I'm willing to work on a PR for this if it would be accepted. How about an optional parameter to that view that makes it handle JSON for GET and POST instead of html? Or should it be a completely new view? | open | 2023-08-11T12:26:10Z | 2025-01-09T19:07:13Z | https://github.com/jazzband/django-oauth-toolkit/issues/1305 | [
"question"
] | jhnbyrn | 13 |
aiogram/aiogram | asyncio | 730 | Task 2 | closed | 2021-10-12T21:24:43Z | 2021-10-12T21:32:03Z | https://github.com/aiogram/aiogram/issues/730 | [
"invalid",
"3.x"
] | JrooTJunior | 0 | |
redis/redis-om-python | pydantic | 470 | `all_pks` returns only part of complex primary keys | I have the following setup (minimized for simplicity):
```python
class Place(JsonModel, ABC):
id: str = Field(primary_key=True)
class Meta:
global_key_prefix = "places"
class Country(Place):
some_field: int = 1
class Meta:
model_key_prefix = "country"
class Province(Place):
another_field: bool = True
class Meta:
model_key_prefix = "province"
class City(Place):
yet_another_field: str = "bar"
class Meta:
model_key_prefix = "city"
```
And I want the `id` fields (primary keys) to represent some hierarchy, in order to be easily identifiable at first glance.
For that I'd like to use the same `:` separator that is used for namespacing:
```python
country = Country(id="ca")
province = Province(id="ca:qc")
city = City(id="ca:qc:montreal")
```
So that the actual Redis keys look like that:
```
places:country:ca
places:province:ca:qc
places:city:ca:qc:montreal
```
This setup works for most actions.
For example, I can easily get the city like that:
```python
>>> City.get("ca:qc:montreal")
{"id": "ca:qc:montreal", "yet_another_field": "bar"}
```
But it does not work for `.all_pks()`.
Expected behaviour:
```python
>>> list(Country.all_pks())
["ca"]
>>> list(Province.all_pks())
["ca:qc"]
>>> list(City.all_pks())
["ca:qc:montreal"]
```
Actual behaviour:
```python
>>> list(Country.all_pks())
["ca"]
>>> list(Province.all_pks())
["qc"]
>>> list(City.all_pks())
["montreal"]
```
This behaviour is caused by this patch of code:
https://github.com/redis/redis-om-python/blob/721f734aff770f56c54701629f578fe175d9ddda/aredis_om/model/model.py#L1530-L1535
Upon finding the key, it splits the key by `:` and takes the last part of it. So `places:city:ca:qc:montreal` becomes `montreal`.
While it should just remove the `key_prefix` part. So that `places:city:ca:qc:montreal` becomes `ca:qc:montreal`. | closed | 2023-02-02T14:01:58Z | 2023-02-12T11:57:56Z | https://github.com/redis/redis-om-python/issues/470 | [] | YaraslauZhylko | 2 |
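The two behaviours can be shown with plain strings (a standalone sketch, not the redis-om internals):

```python
def pk_from_key_rsplit(key: str) -> str:
    """Current behaviour described above: keep only the final ':' segment."""
    return key.rsplit(":", 1)[-1]

def pk_from_key_prefix(key: str, key_prefix: str) -> str:
    """Proposed behaviour: strip the model's key prefix and keep the rest."""
    return key[len(key_prefix):] if key.startswith(key_prefix) else key

key = "places:city:ca:qc:montreal"
prefix = "places:city:"
assert pk_from_key_rsplit(key) == "montreal"                # hierarchy lost
assert pk_from_key_prefix(key, prefix) == "ca:qc:montreal"  # hierarchy kept
```

For single-segment primary keys like `places:country:ca` both variants agree, which is why the bug only shows up for hierarchical ids.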
arogozhnikov/einops | numpy | 49 | [FR] Optional channels | Sometimes, I find myself working with lists of tensors in which one tensor has a shape `(b, c)` (for `c` classes) and another tensor has shape `(b,)` (for a single class). My current approach is to pad the tensors that have only one class with an additional channel dimension, use `rearrange` on the list, and then squeeze the dimensions that need to be squeezed.
A great alternative to this would be supporting optional channels. Perhaps you could notate them with a question mark: `rearrange(x, "b c? -> (b c?)")`. | closed | 2020-06-08T04:01:51Z | 2020-09-11T06:07:27Z | https://github.com/arogozhnikov/einops/issues/49 | [
"feature suggestion"
] | ehknight | 5 |
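The pad-then-squeeze workaround described above can be sketched with NumPy (my own stand-in code; the proposed `c?` syntax does not exist in einops):

```python
import numpy as np

def stack_mixed(tensors):
    """Give each (b,)-shaped tensor a singleton channel axis so it can
    be concatenated next to (b, c) tensors, which is the manual
    workaround the optional-channel syntax would remove."""
    padded = [t[:, None] if t.ndim == 1 else t for t in tensors]
    return np.concatenate(padded, axis=1)

multi = np.zeros((4, 3))   # b=4, c=3 classes
single = np.ones(4)        # b=4, a single class
out = stack_mixed([multi, single])
assert out.shape == (4, 4)
```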
sktime/sktime | data-science | 7,259 | [BUG] ConformalIntervals `predict` produces different horizon than `predict_interval` | **Describe the bug**
`predict` and `predict_interval` of ConformalIntervals produce forecasts with different indexes. Thus, the plotting functionality fails when trying to plot them together.
**To Reproduce**
```python
from sktime.forecasting.conformal import ConformalIntervals
from sktime.forecasting.naive import NaiveForecaster
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
ci = ConformalIntervals(NaiveForecaster())
y = load_airline()
ci.fit(y, fh=range(1, 96))
pred_int = ci.predict_interval()
pred = ci.predict()
plot_series(y, pred, labels=["TS", "TTM"], pred_interval=pred_int)
```
**Expected behavior**
`predict` and `predict_interval` return the same index.
| closed | 2024-10-12T10:28:02Z | 2024-10-12T10:51:15Z | https://github.com/sktime/sktime/issues/7259 | [
"bug",
"module:forecasting"
] | benHeid | 1 |
K3D-tools/K3D-jupyter | jupyter | 67 | camera distance | If one makes a plot with a small bbox and then sets
```python
plot.grid_auto_fit = False
plot.camera_auto_fit = False
```
then:
1. a direct voxel update (e.g. `p.voxels = b`) brings back wrong dynamic calculations related to scene size.
2. detaching and re-attaching the display produces a -1,1 bbox again.

Example:
```python
import k3d
import numpy as np
import time

v = np.random.randint(0, 10, size=(5, 5, 5))
plot = k3d.plot()
bbox = np.tile([0, 1e-3], 3)
plt_vox = k3d.voxels(v, bounds=bbox)
plot += plt_vox
plot.display()
time.sleep(1)
plot.grid_auto_fit = False
plot.camera_auto_fit = False
time.sleep(1)
plt_vox.voxels = np.random.randint(0, 10, size=(5, 5, 5))
``` | closed | 2017-10-07T11:17:25Z | 2019-06-12T09:08:22Z | https://github.com/K3D-tools/K3D-jupyter/issues/67 | [] | marcinofulus | 1 |
jina-ai/serve | deep-learning | 6,025 | chore: draft release note v3.20.1 | # Release Note
This release contains 2 bug fixes and 2 documentation improvements.
## 🐞 Bug Fixes
### Make Gateway load balancer stream results (#6024)
[Streaming endpoints](https://docs.jina.ai/concepts/serving/executor/add-endpoints/#streaming-endpoints) in Executors can be deployed behind a Gateway (when using `include_gateway=True` in `Deployment`).
In this case, the Gateway acts as a load balancer. However, prior to this release, when the `HTTP` protocol is used, the Gateway would wait until all chunks of the responses had been streamed from the Executor.
Only when all the chunks were received would it send them back to the client. This resulted in delays and suppressed the desired behavior of a streaming endpoint (namely, displaying tokens streamed from an LLM with a typewriter effect).
This release fixes this issue by making the Gateway stream chunks of responses from the forwarded request as soon as they are received from the Executor.
No matter whether you are setting `include_gateway` to `True` or `False` in `Deployment`, streaming endpoints should give the same desired behavior.
### Fix deeply nested schema support in Executors and Flows (#6021)
When a deeply nested schema (DocArray schema with 2+ levels of nesting) was specified as an input or output of an Executor endpoint, and the Executor was deployed in a Flow, the Gateway would fail to fetch information about the endpoints and their input/output schemas:
```python
from typing import List, Optional
from docarray import BaseDoc, DocList
from jina import Executor, Flow, requests
class Nested2Doc(BaseDoc):
value: str
class Nested1Doc(BaseDoc):
nested: Nested2Doc
class RootDoc(BaseDoc):
nested: Optional[Nested1Doc]
class NestedSchemaExecutor(Executor):
@requests(on='/endpoint')
async def endpoint(self, docs: DocList[RootDoc], **kwargs) -> DocList[RootDoc]:
rets = DocList[RootDoc]()
rets.append(
RootDoc(
text='hello world', nested=Nested1Doc(nested=Nested2Doc(value='test'))
)
)
return rets
flow = Flow().add(uses=NestedSchemaExecutor)
with flow:
res = flow.post(
on='/endpoint', inputs=RootDoc(text='hello'), return_type=DocList[RootDoc]
)
```
```text
...
2023-08-07 02:49:32,529 topology_graph.py[608] WARNING Getting endpoints failed: 'definitions'. Waiting for another trial
```
This was due to an internal utility function failing to convert such deeply nested JSON schemas to DocArray models.
This release fixes the issue by propagating global schema definitions when generating models for nested schemas.
## 📗 Documentation Improvements
- Remove extra backtick in `create-app.md` (#6023)
- Fix streaming endpoint reference in README (#6017)
## 🤟 Contributors
We would like to thank all contributors to this release:
- Saba Sturua (@jupyterjazz)
- AlaeddineAbdessalem (@alaeddine-13)
- Joan Fontanals (@JoanFM)
- Naymul Islam (@ai-naymul) | closed | 2023-08-10T13:51:36Z | 2023-08-11T07:58:47Z | https://github.com/jina-ai/serve/issues/6025 | [] | alaeddine-13 | 1 |
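The effect of the load-balancer fix described above can be modelled with a toy async generator (a stand-in for the Gateway, not Jina's implementation): the fixed path re-yields each chunk immediately, while the old path buffered everything first.

```python
import asyncio

async def upstream():
    # stands in for an Executor's streaming endpoint
    for chunk in ("Hel", "lo ", "wor", "ld"):
        await asyncio.sleep(0)
        yield chunk

async def proxy_streaming(source):
    # fixed behaviour: forward each chunk as soon as it arrives
    async for chunk in source:
        yield chunk

async def proxy_buffered(source):
    # pre-fix behaviour: wait for all chunks, then emit once
    chunks = [c async for c in source]
    yield "".join(chunks)

async def main():
    streamed = [c async for c in proxy_streaming(upstream())]
    buffered = [c async for c in proxy_buffered(upstream())]
    return streamed, buffered

streamed, buffered = asyncio.run(main())
assert streamed == ["Hel", "lo ", "wor", "ld"]  # typewriter effect preserved
assert buffered == ["Hello world"]              # one late, monolithic response
```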
jina-ai/clip-as-service | pytorch | 599 | Can I use this service to load other pre-trained models like GPT? | # Description
> Can I use this service to load other pre-trained models like GPT? If I just use it to get a sentence vector with a fixed length (768), not vocab size, am I right?
I'm using this command to start the server:
```bash
bert-serving-start -model_dir GPT_model_checkpoint -num_worker=4 -max_seq_len=25 -gpu_memory_fraction=0.95
```
and calling the server via:
```python
bc = BertClient()
bc.encode(['我喜欢你'])
``` | open | 2020-11-02T08:37:32Z | 2020-11-02T08:37:32Z | https://github.com/jina-ai/clip-as-service/issues/599 | [] | Ritali | 0 |
babysor/MockingBird | pytorch | 452 | Hi everyone: after opening the web UI and clicking Record, an "Uncaught Error" appears saying "recStart" is not defined... What is the problem, and how can it be solved? | **Summary [one-sentence description of the problem]**
A clear and concise description of what the issue is.
**Env & To Reproduce [environment & steps to reproduce]**
Describe the environment, code version, and model you used.
**Screenshots [if any]**
If applicable, add screenshots to help
| open | 2022-03-13T00:57:05Z | 2022-03-13T00:57:05Z | https://github.com/babysor/MockingBird/issues/452 | [] | JJJIANGJIANG | 0 |
openapi-generators/openapi-python-client | fastapi | 658 | Ignore Content-Type parameters | **Is your feature request related to a problem? Please describe.**
RFC2616 allows parameters to content-types, for example `application/json; version=2.3.5` is a valid content type that should be treated like `application/json`. Currently this is not supported by this project.
**Describe the solution you'd like**
Ignore any parameters (things including and after `;`) in the content type.
**Describe alternatives you've considered**
Using #657.
**Additional context**
https://github.com/openapi-generators/openapi-python-client/discussions/655
| closed | 2022-08-21T17:38:30Z | 2023-08-13T01:32:30Z | https://github.com/openapi-generators/openapi-python-client/issues/658 | [
"✨ enhancement"
] | dbanty | 1 |
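The requested behaviour amounts to one small normalisation step. A sketch (standalone code, not the generator's actual implementation):

```python
def base_content_type(content_type: str) -> str:
    """Drop any media-type parameters (everything from the first ';')
    and normalise case and whitespace, so a parameterised content type
    matches its plain form."""
    return content_type.split(";", 1)[0].strip().lower()

assert base_content_type("application/json; version=2.3.5") == "application/json"
assert base_content_type("application/json") == "application/json"
assert base_content_type("Text/HTML; charset=utf-8") == "text/html"
```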
coqui-ai/TTS | deep-learning | 3,554 | [Bug] speedy_speech erroring on short inputs | ### Describe the bug
Something in the way that it passes input into the speedy_speech model (tts_models/en/ljspeech/speedy-speech) is bugged and errors out for short inputs. It wants them to be a specific length. I've only tested this for single-sentence input, I don't know what it does for other types of input.
### To Reproduce
```
tts --text "Test" --model_name tts_models/en/ljspeech/speedy-speech --out_path speedy-speech-test.wav
```
This returns a truly giant stacktrace, terminating in ``/.local/lib/python3.11/site-packages/torch/nn/modules/conv.py``, which gives:
```RuntimeError: Calculated padded input size per channel: (4). Kernel size: (7). Kernel size can't be greater than actual input size```
Changing the input string to "Testino" (7 characters) or "Testy westy" (11 characters) gives the same stacktrace:
```RuntimeError: Calculated padded input size per channel: (11). Kernel size: (13). Kernel size can't be greater than actual input size```
Adding two characters fixes this, but they can't be whitespace
* ``Testy westy `` (two extra spaces at the end): no.
* ``Testy westy`` (two extra spaces in the middle): no.
* ``Testy a-westy``: yes.
* ``Testy westy..`` (two periods): no, because it's parsed as "12 characters".
* ``Testy westy...`` (three periods, making it fourteen characters instead of thirteen) does.
### Expected behavior
I am not sure what the *optimal* method of fixing this is. What I'd do, without any knowledge of what's going on under the hood, is just figure out what length of string it wants (it seems like 13 is the minimum) and just pad out all short speedy_speech inputs with ellipses to get to 13. This is probably a bad idea, and there's probably a better way of doing it.
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.2+cu121",
"TTS": "0.22.0",
"numpy": "1.24.4"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "",
"python": "3.11.6",
"version": "#1 SMP PREEMPT_DYNAMIC Wed Oct 11 04:07:58 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2024-02-01T12:48:42Z | 2024-11-10T12:59:44Z | https://github.com/coqui-ai/TTS/issues/3554 | [
"bug",
"wontfix"
] | jp-x-g | 2 |
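The stopgap proposed in the report (pad short inputs with periods, since whitespace is not counted) is easy to sketch. The target length is a guess based on the observations above: 13 looks like the minimum, and because trailing periods appear to be under-counted, this sketch pads one character past it. It is the reporter's workaround idea, not an official fix.

```python
TARGET_LEN = 14  # assumption: one past the observed 13-character minimum

def pad_for_speedy_speech(text: str) -> str:
    """Pad short inputs with trailing periods so the model's convolution
    kernel size never exceeds the padded input length."""
    deficit = TARGET_LEN - len(text)
    return text + "." * deficit if deficit > 0 else text

assert len(pad_for_speedy_speech("Test")) == TARGET_LEN
assert pad_for_speedy_speech("A sentence long enough already.") == "A sentence long enough already."
```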
pallets-eco/flask-sqlalchemy | sqlalchemy | 1,247 | IndexError: tuple index out of range | Environment:
- Python version: 3.11.2
- Flask-SQLAlchemy version: 3.0.5
- SQLAlchemy version: 2.0.19
```
class Bans(db.Model):
__tablename__ = 'bans'
uuid = db.Column(db.String(36), primary_key=True)
reason = db.Column(db.String(255), nullable=False)
ip = db.Column(db.String(255), nullable=True)
user = db.Column(db.String(36), db.ForeignKey('users.uuid'))
expires = db.Column(db.Integer(), nullable=False)
```
`Bans.query.filter_by(ip=ip).filter(Bans.expires > current_timestamp).first()`
```
File "/******lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2747, in first
return self.limit(1)._iter().first() # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/******lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 1803, in first
return self._only_one_row(
^^^^^^^^^^^^^^^^^^^
File "/******lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 757, in _only_one_row
row: Optional[_InterimRowType[Any]] = onerow(hard_close=True)
^^^^^^^^^^^^^^^^^^^^^^^
File "/******lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 1690, in _fetchone_impl
return self._real_result._fetchone_impl(hard_close=hard_close)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/******lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 2282, in _fetchone_impl
row = next(self.iterator, _NO_ROW)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/******lib/python3.11/site-packages/sqlalchemy/orm/loading.py", line 195, in chunks
rows = [proc(row) for row in fetch]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/******lib/python3.11/site-packages/sqlalchemy/orm/loading.py", line 195, in <listcomp>
rows = [proc(row) for row in fetch]
^^^^^^^^^
File "/******lib/python3.11/site-packages/sqlalchemy/orm/loading.py", line 1101, in _instance
_populate_full(
File "/******lib/python3.11/site-packages/sqlalchemy/orm/loading.py", line 1288, in _populate_full
dict_[key] = getter(row)
^^^^^^^^^^^
File "lib/sqlalchemy/cyextension/resultproxy.pyx", line 48, in sqlalchemy.cyextension.resultproxy.BaseRow.__getitem__
IndexError: tuple index out of range
```
It happens randomly. | closed | 2023-08-19T21:48:52Z | 2023-09-03T00:57:12Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1247 | [] | 0x1618 | 1 |
dask/dask | scikit-learn | 11,411 | min of timestamp in parquet failing: AttributeError: 'Timestamp' object has no attribute 'dtype' |
**Describe the issue**:
Reading in a parquet. Taking the min of a (time stamp with time zone) column and getting `AttributeError: 'Timestamp' object has no attribute 'dtype'` (see traceback below)
```
In [5]: ddf["valid_time"].min()
Out[5]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_core.py:470, in Expr.__getattr__(self, key)
469 try:
--> 470 return object.__getattribute__(self, key)
471 except AttributeError as err:
AttributeError: 'Min' object has no attribute 'dtype'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:624, in FrameBase.__getattr__(self, key)
621 try:
622 # Fall back to `expr` API
623 # (Making sure to convert to/from Expr)
--> 624 val = getattr(self.expr, key)
625 if callable(val):
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_core.py:491, in Expr.__getattr__(self, key)
490 link = "https://github.com/dask-contrib/dask-expr/blob/main/README.md#api-coverage"
--> 491 raise AttributeError(
492 f"{err}\n\n"
493 "This often means that you are attempting to use an unsupported "
494 f"API function. Current API coverage is documented here: {link}."
495 )
AttributeError: 'Min' object has no attribute 'dtype'
This often means that you are attempting to use an unsupported API function. Current API coverage is documented here: https://github.com/dask-contrib/dask-expr/blob/main/README.md#api-coverage.
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
File ~/miniforge3/envs/main/lib/python3.11/site-packages/IPython/core/formatters.py:711, in PlainTextFormatter.__call__(self, obj)
704 stream = StringIO()
705 printer = pretty.RepresentationPrinter(stream, self.verbose,
706 self.max_width, self.newline,
707 max_seq_length=self.max_seq_length,
708 singleton_pprinters=self.singleton_printers,
709 type_pprinters=self.type_printers,
710 deferred_pprinters=self.deferred_printers)
--> 711 printer.pretty(obj)
712 printer.flush()
713 return stream.getvalue()
File ~/miniforge3/envs/main/lib/python3.11/site-packages/IPython/lib/pretty.py:419, in RepresentationPrinter.pretty(self, obj)
408 return meth(obj, self, cycle)
409 if (
410 cls is not object
411 # check if cls defines __repr__
(...)
417 and callable(_safe_getattr(cls, "__repr__", None))
418 ):
--> 419 return _repr_pprint(obj, self, cycle)
421 return _default_pprint(obj, self, cycle)
422 finally:
File ~/miniforge3/envs/main/lib/python3.11/site-packages/IPython/lib/pretty.py:787, in _repr_pprint(obj, p, cycle)
785 """A pprint that just redirects to the normal repr function."""
786 # Find newlines and replace them with p.break_()
--> 787 output = repr(obj)
788 lines = output.splitlines()
789 with p.group():
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:4774, in Scalar.__repr__(self)
4773 def __repr__(self):
-> 4774 return f"<dask_expr.expr.Scalar: expr={self.expr}, dtype={self.dtype}>"
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:630, in FrameBase.__getattr__(self, key)
627 return val
628 except AttributeError:
629 # Raise original error
--> 630 raise err
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:619, in FrameBase.__getattr__(self, key)
616 def __getattr__(self, key):
617 try:
618 # Prioritize `FrameBase` attributes
--> 619 return object.__getattribute__(self, key)
620 except AttributeError as err:
621 try:
622 # Fall back to `expr` API
623 # (Making sure to convert to/from Expr)
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:4798, in Scalar.dtype(self)
4796 @property
4797 def dtype(self):
-> 4798 return self._meta.dtype
AttributeError: 'Timestamp' object has no attribute 'dtype'
```
**Minimal Complete Verifiable Example**:
```python
import pandas as pd
from datetime import datetime
# Sample data: List of timestamps as strings or datetime objects
timestamps = ['2024-10-02 12:00:00', '2024-10-02 14:00:00', '2024-10-02 16:00:00']
# Convert to pandas datetime objects with UTC timezone
timestamps_utc = pd.to_datetime(timestamps).tz_localize('UTC')
# Create DataFrame with the 'valid_time' column
df = pd.DataFrame({'valid_time': timestamps_utc})
df.to_parquet("test.parquet")
import dask.dataframe as dd
dd.read_parquet("test.parquet")["valid_time"].min()
```

```text
Out[4]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_core.py:470, in Expr.__getattr__(self, key)
469 try:
--> 470 return object.__getattribute__(self, key)
471 except AttributeError as err:
AttributeError: 'Min' object has no attribute 'dtype'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:624, in FrameBase.__getattr__(self, key)
621 try:
622 # Fall back to `expr` API
623 # (Making sure to convert to/from Expr)
--> 624 val = getattr(self.expr, key)
625 if callable(val):
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_core.py:491, in Expr.__getattr__(self, key)
490 link = "https://github.com/dask-contrib/dask-expr/blob/main/README.md#api-coverage"
--> 491 raise AttributeError(
492 f"{err}\n\n"
493 "This often means that you are attempting to use an unsupported "
494 f"API function. Current API coverage is documented here: {link}."
495 )
AttributeError: 'Min' object has no attribute 'dtype'
This often means that you are attempting to use an unsupported API function. Current API coverage is documented here: https://github.com/dask-contrib/dask-expr/blob/main/README.md#api-coverage.
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
File ~/miniforge3/envs/main/lib/python3.11/site-packages/IPython/core/formatters.py:711, in PlainTextFormatter.__call__(self, obj)
704 stream = StringIO()
705 printer = pretty.RepresentationPrinter(stream, self.verbose,
706 self.max_width, self.newline,
707 max_seq_length=self.max_seq_length,
708 singleton_pprinters=self.singleton_printers,
709 type_pprinters=self.type_printers,
710 deferred_pprinters=self.deferred_printers)
--> 711 printer.pretty(obj)
712 printer.flush()
713 return stream.getvalue()
File ~/miniforge3/envs/main/lib/python3.11/site-packages/IPython/lib/pretty.py:419, in RepresentationPrinter.pretty(self, obj)
408 return meth(obj, self, cycle)
409 if (
410 cls is not object
411 # check if cls defines __repr__
(...)
417 and callable(_safe_getattr(cls, "__repr__", None))
418 ):
--> 419 return _repr_pprint(obj, self, cycle)
421 return _default_pprint(obj, self, cycle)
422 finally:
File ~/miniforge3/envs/main/lib/python3.11/site-packages/IPython/lib/pretty.py:787, in _repr_pprint(obj, p, cycle)
785 """A pprint that just redirects to the normal repr function."""
786 # Find newlines and replace them with p.break_()
--> 787 output = repr(obj)
788 lines = output.splitlines()
789 with p.group():
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:4774, in Scalar.__repr__(self)
4773 def __repr__(self):
-> 4774 return f"<dask_expr.expr.Scalar: expr={self.expr}, dtype={self.dtype}>"
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:630, in FrameBase.__getattr__(self, key)
627 return val
628 except AttributeError:
629 # Raise original error
--> 630 raise err
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:619, in FrameBase.__getattr__(self, key)
616 def __getattr__(self, key):
617 try:
618 # Prioritize `FrameBase` attributes
--> 619 return object.__getattribute__(self, key)
620 except AttributeError as err:
621 try:
622 # Fall back to `expr` API
623 # (Making sure to convert to/from Expr)
File ~/miniforge3/envs/main/lib/python3.11/site-packages/dask_expr/_collection.py:4798, in Scalar.dtype(self)
4796 @property
4797 def dtype(self):
-> 4798 return self._meta.dtype
AttributeError: 'Timestamp' object has no attribute 'dtype'
```
**Anything else we need to know?**:
Works ok in pandas
```pd.read_parquet("test.parquet")["valid_time"].min()```
**Environment**:
```
❯ conda list
# packages in environment at /Users/ray/miniforge3/envs/main:
#
# Name Version Build
dask 2024.9.1 pyhd8ed1ab_0
dask-core 2024.9.1 pyhd8ed1ab_0
dask-expr 1.1.15 pyhd8ed1ab_0
pyarrow 17.0.0 py311he764780_1
``` | closed | 2024-10-02T19:22:08Z | 2024-10-08T10:40:36Z | https://github.com/dask/dask/issues/11411 | [
"dask-expr"
] | raybellwaves | 0 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 637 | Toolbox launches but doesn't work | I execute the demo_toolbox.py and the toolbox launches. I click Browse and pull in a wav file. The toolbox shows the wav file loaded and a mel spectrogram displays. The program then says "Loading the encoder \encoder\saved_models\pretrained.pt
but that is where the program just goes to sleep until I exit out. I verify the pretrained.pt file is there so I don't understand why the program stops. Any help would be greatly appreciated.


Edlaster58@gmail.com | closed | 2021-01-22T21:56:38Z | 2021-01-23T00:36:12Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/637 | [] | bigdog3604 | 4 |
deepspeedai/DeepSpeed | pytorch | 6,724 | `ZeRO-3 + MP8` Universal Checkpoint | Is it possible to convert a model trained using `ZeRO-3` and `MP=8` to a universal checkpoint?
Tracing through the universal checkpointing conversion tool (`ds_to_universal`), the model states remain unmerged, with 8 model-parallel shards for each data-parallel rank. E.g., with `world_size = 2048`, there are 2048 model state files, `zero_pp_rank_{0-255}_{0-7}`, before and after the conversion.
When converting a model with `ZeRO <= 2, MP > 1`, the model state files are merged into a single file through `merge_tp_slices`.
If this is not possible, how would one extract and merge only the Z3 / MP checkpointed model states (along both z3 and model parallel partitions) to a single file?
The `zero_to_fp32` script does not work since it only handles ZeRO-{2,3} without model parallelism. | open | 2024-11-07T17:20:52Z | 2025-02-18T01:20:13Z | https://github.com/deepspeedai/DeepSpeed/issues/6724 | [] | jeromeku | 1 |
holoviz/panel | matplotlib | 7,467 | Polars: ValueError("Could not hash object of type function"). | I'm on panel==1.5.3 trying to support multiple types of data sources for panel-graphic-walker including polars. See https://github.com/panel-extensions/panel-graphic-walker/pull/22.
When I try to combine polars and `pn.cache` I get a `ValueError`.
```python
import polars as pl
import panel as pn
@pn.cache(max_items=20, ttl=60 * 5, policy="LRU")
def my_func(value):
return pl.DataFrame({"a": [3, 2, 1]})
value = pl.DataFrame({"a": [1, 2, 3]})
my_func(value)
```
```bash
File "/home/jovyan/repos/private/panel-graphic-walker/.venv/lib/python3.11/site-packages/panel/io/cache.py", line 235, in _generate_hash
hash_value = _generate_hash_inner(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel-graphic-walker/.venv/lib/python3.11/site-packages/panel/io/cache.py", line 212, in _generate_hash_inner
raise ValueError(
ValueError: User hash function <function _container_hash at 0x7f7ea007e520> failed for input (shape: (3, 1)
```
Please support caching polars similarly to pandas.
## Additional Context
It seems that Polars has some hashing support these days. See https://docs.pola.rs/api/python/stable/search.html?q=hash.
ChatGPT suggests you can hash a DataFrame like this:
```python
import polars as pl
import hashlib
# Sample Polars DataFrame
df = pl.DataFrame({
"column1": [1, 2, 3],
"column2": ["a", "b", "c"]
})
# Convert DataFrame to JSON format and encode it to bytes
df_bytes = df.write_json().encode('utf-8')
# Generate a hash key using SHA-256
hash_key = hashlib.sha256(df_bytes).hexdigest()
print(hash_key)
``` | closed | 2024-11-07T03:58:23Z | 2024-11-08T10:58:16Z | https://github.com/holoviz/panel/issues/7467 | [] | MarcSkovMadsen | 1 |
microsoft/nlp-recipes | nlp | 38 | Verify licensing of pytorch-pretrained-BERT with CELA | Apache vs. MIT
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/LICENSE | closed | 2019-05-03T17:25:31Z | 2019-07-05T15:17:10Z | https://github.com/microsoft/nlp-recipes/issues/38 | [
"engineering"
] | saidbleik | 1 |
Lightning-AI/pytorch-lightning | pytorch | 19,671 | Introduce sharded checkpointing for NO_SHARD FSDP. | ### Description & Motivation
My recent benchmarks of considerably large (yet still fitting on a single H100) model training at scale show that it can take 30-40s per model save. This overhead becomes increasingly problematic with growing scale and checkpoint-save frequency: e.g. for 256 devices, a single checkpoint save would introduce ~2-2.5 hours of idle GPU time (even if the checkpoint is saved on only one worker, the others need to wait). This problem can be significantly reduced by introducing a general sharded checkpointing mechanism for the NO_SHARD FSDP strategy: each worker would save only a portion of the model, yielding reduced storage latency at the cost of a single "allgather" operation on checkpoint load/restore.
### Pitch
The problem could be solved in two ways:
1) Extend FSDPStrategy to allow sharded checkpointing even for NO_SHARD strategy - the training process would not change, but sharding would be done only for checkpoint saving.
2) Extend ModelCheckpoint callback to support sharded checkpoint save / load.
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @awaelchli @carmocca | open | 2024-03-19T10:21:05Z | 2024-03-20T11:20:03Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19671 | [
"feature",
"strategy: fsdp"
] | ps-stability | 6 |
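The core idea behind both options can be illustrated with a toy in-memory model (my simplification; real FSDP sharded checkpointing, e.g. via torch.distributed.checkpoint, is far more involved): each rank persists only its slice of the state, and restore merges the slices, standing in for the single allgather mentioned above.

```python
def save_sharded(state: dict, world_size: int) -> list[dict]:
    """Split a flat state dict round-robin across ranks, so each rank
    writes roughly 1/world_size of the bytes."""
    keys = sorted(state)
    return [{k: state[k] for k in keys[r::world_size]} for r in range(world_size)]

def load_sharded(shards: list[dict]) -> dict:
    """Merge all shards back into the full state (the stand-in for the
    allgather on restore)."""
    merged = {}
    for shard in shards:
        merged.update(shard)
    return merged

full = {f"layer{i}.weight": i for i in range(8)}
shards = save_sharded(full, world_size=4)
assert all(len(s) == 2 for s in shards)  # write load is balanced
assert load_sharded(shards) == full      # lossless round trip
```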
docarray/docarray | fastapi | 1,787 | [pydantic v2] Validation with default value is not working | # Context
```python
class MyDocument(BaseDoc):
caption: TextDoc
MyDocument(caption='A tiger in the jungle')
```
this will fail because I have not yet found a way to override pydantic validation for BaseDoc
many tests are skipped because of this example:
https://github.com/docarray/docarray/blob/4ef493943d1ee73613b51800980239a30fe5ae73/tests/units/util/test_filter.py#L247
Many of the tests in `tests/predefined_document` fail for this reason (search for `skipif` inside the folder to see which ones).
Hint: to solve this one we need to hijack pydantic validation to convert the single value into a dict. We already do this with pydantic v1, but since validation in pydantic v2 has partially moved into the `pydantic_core` package, it is less obvious how to do it.
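To illustrate the hint in plain Python (no pydantic here; the coercion logic and the method name are mine), the wrapping step would look roughly like:

```python
# Plain-Python illustration of the coercion idea: a validator's job is to
# wrap a bare value into the nested document type before normal field
# validation runs. Class names mirror the example above; logic is mine.

class TextDoc:
    def __init__(self, text=""):
        self.text = text

    @classmethod
    def coerce(cls, value):
        # already the right type -> pass through; a dict -> expand it;
        # a bare value -> treat it as the "main" field of the nested doc
        if isinstance(value, cls):
            return value
        if isinstance(value, dict):
            return cls(**value)
        return cls(text=value)

class MyDocument:
    def __init__(self, caption):
        self.caption = TextDoc.coerce(caption)

doc = MyDocument(caption="A tiger in the jungle")
assert doc.caption.text == "A tiger in the jungle"
```

In pydantic v2 terms, this wrapping would presumably live in a `mode="before"` validator, so that it runs ahead of the normal field validation; the tricky part is doing that in a way that stays inside the guardrails now that validation happens in `pydantic_core`.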
Obviously, in Python we can always make it work; the question is how to make it work in a way that stays aligned with pydantic's guardrails regarding validation
| closed | 2023-09-15T13:01:49Z | 2023-09-27T15:59:23Z | https://github.com/docarray/docarray/issues/1787 | [
"pydantic-v2"
] | samsja | 0 |
modin-project/modin | data-science | 6,677 | suggestions on handling out of memory matrix operation on large dataset | One of my tasks is to compute a matrix transformation on large feature vectors. For example, I have a 20K x 60K DataFrame `df` and a 60K x 60K square-matrix DataFrame `mat_df`, and I call `df.dot(mat_df)`. Because such a large matrix operation cannot run on a single machine, I was hoping that a distributed dataframe could help and even provide better scalability if I want to work on an even larger dataset.
I am running on a Ray cluster, and my system is configured as follows:
1. Head: 16 cores, 64GB memory, >100GB disk space
    - `--object-store-memory 15000000000` # set object store to 15GB
    - `--system-config='{"object_spilling_config":"{\"type\":\"filesystem\",\"params\":{\"directory_path\":\"/mnt/nas/tmp/spill\"}}"}'` # I have mounted an 8TB NAS and use it for spilled objects
    - `--num-cpus 2` # I tried to limit the cores/workers to avoid the OOM issue
2. raylet: 16 cores, 64GB memory, >100GB disk space
    - `--object-store-memory 10000000000` # set object store to 10GB
    - `--num-cpus=4` # limited cores/workers to avoid OOM
3. Modin envs:
    - `MODIN_EXPERIMENTAL_GROUPBY=True` # I need to enable this to make my `apply(func, axis=1)` work; otherwise my `func` cannot access all of the columns
    - `MODIN_ENGINE=ray` # ray engine
    - `MODIN_RAY_CLUSTER=True` # indicates that the client code is running on the Ray cluster
    - `MODIN_NPARTITIONS=2` # another attempt to avoid OOM
4. Due to environment constraints, I am currently running on Python 3.8.10:
```py
INSTALLED VERSIONS
------------------
commit : b5545c686751f4eed6913f1785f9c68f41f4e51d
python : 3.8.10.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-86-generic
Version : #96~20.04.1-Ubuntu SMP Thu Sep 21 13:23:37 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.23.1
ray : 2.7.1
dask : 2023.5.0
distributed : 2023.5.0
hdk : None
pandas dependencies
-------------------
pandas : 2.0.3
numpy : 1.24.4
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 56.0.0
pip : 23.3
Cython : None
pytest : 7.4.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : None
pandas_datareader: None
bs4 : 4.12.2
bottleneck : None
brotli : None
fastparquet : None
fsspec : 2023.9.2
gcsfs : None
matplotlib : 3.7.3
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 13.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
```
My running example is as follows:
```py
import modin.pandas as pd
import numpy as np
n_features = 60000
n_samples = 20000
features = [f'f{i+1}' for i in range(n_features)]
samples = [f's{i+1}' for i in range(n_samples)]
df = pd.DataFrame(np.random.rand(n_samples, n_features), columns=features, index=samples)
mat_df = pd.DataFrame(np.random.rand(n_features, n_features), columns=features, index=features)
res_df = df.dot(mat_df)
```
The Python program will likely be killed during `dot()`:
```py
>>> df = df.dot(mat_df)
(raylet) Spilled 66383 MiB, 28 objects, write throughput 64 MiB/s.
Killed
```
Some warning messages that I've seen:
object spilling:
```
(raylet) Spilled 9156 MiB, 5 objects, write throughput 75 MiB/s. Set RAY_verbose_spill_logs=0 to disable this message.
```
killing worker(s):
```
(raylet) [2023-10-24 09:56:57,431 E 390581 390581] (raylet) node_manager.cc:3007: 1 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: a343585c2e9b086071e513b714c99c79de4badb6303e52e3f44985d2, IP: <raylet_addr>) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip <raylet_addr>`
```
object spilling request failure:
```
(raylet) [2023-10-24 09:57:00,615 E 390581 390581] (raylet) local_object_manager.cc:360: Failed to send object spilling request: GrpcUnavailable: RPC Error message: Socket closed; RPC Error details:
```
worker not registering on time:
```
(raylet) [2023-10-24 10:04:30,463 E 390581 390581] (raylet) worker_pool.cc:548: Some workers of the worker process(393356) have not registered within the timeout. The process is still alive, probably it's hanging during start.
```
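One fallback I am considering (plain Python below, not Modin/Ray code; sizes shrunk and the helper name is mine) is to compute the product in column blocks, so that only one slice of the 20K x 60K result is resident at a time and each finished block could be flushed to disk:

```python
# Toy sketch of an out-of-core (blocked) product: compute df.dot(mat_df)
# one block of output columns at a time. Only n_samples x block_size of
# the result is ever "active"; in practice each block could be written to
# storage (e.g. parquet) and freed before the next one is computed.

def blocked_matmul(a, b, block_size):
    """a: n x k, b: k x m (lists of rows). Returns a @ b, built block-wise."""
    n, k, m = len(a), len(b), len(b[0])
    result = [[0.0] * m for _ in range(n)]
    for col_start in range(0, m, block_size):
        col_end = min(col_start + block_size, m)
        # only this column slice of the output is being filled in
        for i in range(n):
            row = a[i]
            out_row = result[i]
            for j in range(col_start, col_end):
                out_row[j] = sum(row[t] * b[t][j] for t in range(k))
    return result

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
assert blocked_matmul(a, b, block_size=1) == [[19.0, 22.0], [43.0, 50.0]]
```

With 8-byte floats, the full 20000 x 60000 result is ~9.6 GB, so even a handful of column blocks would keep the working set far below a single worker's memory.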
So far I have tried to 1) limit workers, 2) limit partitions, and 3) reduce object store memory to lean on object spilling, but I still could not resolve the OOM issue. At this point I would like to hear your opinions and suggestions on how I could handle the OOM problem for this matrix operation, whether by tweaking my Ray/Modin configuration or by improving my code. Is it feasible at all on my setup in the first place, and if so, how do I determine the limits on feature-vector size and the resources needed for the 60K case? | open | 2023-10-24T14:27:26Z | 2024-01-08T16:40:08Z | https://github.com/modin-project/modin/issues/6677 | [
"question ❓",
"External"
] | SiRumCz | 11 |
tqdm/tqdm | pandas | 1,118 | TqdmWarning: clamping frac to range [0, 1]. How to prevent this ? | ``tqdm`` sometimes shows this warning, and I wonder how to prevent it:
```
/Library/Python/3.7/site-packages/tqdm/std.py:535: TqdmWarning: clamping frac to range [0, 1]██████████████████████████▊| 99.84615384615518/100 [1:51:16<00:02, 15.16s/it]
```
This is my demo:
```python
import time
from tqdm import trange

# (run_task and the variables a..g are application-specific and elided here)
pbar2 = trange(100)
pbar2.set_description("blabla")
pbar2.reset()
for t in range(0, a):
for k in b.keys():
run_task(e, f, g)
time.sleep(c)
pbar2.update(100.0 / d)
pbar2.reset()
``` | closed | 2021-01-22T05:09:29Z | 2021-01-28T04:21:52Z | https://github.com/tqdm/tqdm/issues/1118 | [
"duplicate 🗐",
"question/docs ‽"
] | JamesZBL | 2 |
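A likely cause of the clamping warning in the tqdm issue above is floating-point drift: summing `100.0 / d` a total of `d` times can land slightly above 100, which is exactly what trips the clamp. A hedged sketch of a guard (the helper name is invented, not tqdm API) that caps each increment at the remaining amount:

```python
# Cap each fractional update at the remaining distance to `total`, so the
# accumulated count can never drift past it and trigger the clamp warning.

def safe_increment(n, total, step):
    """Return the amount to pass to pbar.update() so n never exceeds total."""
    return min(step, max(total - n, 0.0))

total = 100.0
d = 13
n = 0.0
for _ in range(d):
    n += safe_increment(n, total, total / d)

assert n <= total  # the overshoot that causes the warning cannot occur
```

In the demo above, this would mean replacing `pbar2.update(100.0 / d)` with `pbar2.update(safe_increment(pbar2.n, 100, 100.0 / d))` (or simply updating by integer amounts, which cannot drift).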
ultralytics/ultralytics | python | 18,828 | Computing mAP@custom IoU threshold | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello, for my use case the ground-truth bounding box is somewhat subjective, and I would like to accommodate this when computing the performance metrics on my validation data. I am using YOLOv8 with `task=detect`, and I would like the validation metrics to compute the average precision at a custom threshold, e.g. mAP@0.3 instead of mAP@0.5. Can someone please point me to where in the source code I can make this change? Thanks!
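For context, what I mean by mAP@0.3 is just relaxing the IoU cutoff used to call a prediction a true positive; a toy sketch of that check (pure Python, names mine, not Ultralytics internals):

```python
# Minimal IoU matching at a chosen cutoff: the only thing that changes
# between mAP@0.5 and mAP@0.3 is the threshold passed in here.

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, gt, thr):
    return iou(pred, gt) >= thr

pred, gt = (0, 0, 2, 2), (0, 1, 2, 3)        # IoU = 2/6, about 0.33
assert is_true_positive(pred, gt, 0.3)       # a hit at the relaxed threshold
assert not is_true_positive(pred, gt, 0.5)   # a miss at the default 0.5
```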
### Additional
_No response_ | closed | 2025-01-22T20:16:17Z | 2025-01-23T00:27:48Z | https://github.com/ultralytics/ultralytics/issues/18828 | [
"question",
"detect"
] | arjunchandra2 | 3 |
statsmodels/statsmodels | data-science | 9,008 | ENH: outlier, influence add looo, jackknife residuals | motivated by https://github.com/statsmodels/statsmodels/issues/9005#issuecomment-1729932721
We don't have explicit looo, jackknife residuals (resid_response).
I'm not sure whether we already have all the computation for it, for other statistics.
looo residuals look like a useful tool.
In the GLM and discrete cases we could also add other residuals based on looo, e.g. resid_pearson.
Other predictive statistics, like looo fitted values, looo implied variance, and similar, might also be useful.
(model.predict has "which" option and takes params as argument, but likely not a 2dim params vectorized by rows)
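A toy illustration (mean-only model, pure Python, names mine) of what a looo residual computes: refit without observation i, then take y_i minus the leave-one-out prediction:

```python
# Leave-one-out (jackknife) residuals for the simplest possible "model",
# the sample mean: each residual compares y[i] to the mean of the others.

def loo_resid(y):
    out = []
    n = len(y)
    total = sum(y)
    for i in range(n):
        fit_without_i = (total - y[i]) / (n - 1)  # the model refit on the rest
        out.append(y[i] - fit_without_i)
    return out

assert loo_resid([1.0, 2.0, 3.0]) == [-1.5, 0.0, 1.5]
```

For linear models the per-observation refit can usually be avoided via hat-matrix identities (e.g. PRESS residuals e_i / (1 - h_ii)), which is presumably what an efficient implementation would build on.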
| open | 2023-09-22T15:24:16Z | 2023-09-23T19:44:19Z | https://github.com/statsmodels/statsmodels/issues/9008 | [
"type-enh",
"topic-diagnostic"
] | josef-pkt | 2 |
streamlit/streamlit | machine-learning | 10,637 | Enum formatting in TextColumn does not work with a styled data frame | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
The status column is expected to show the enum's value, but after pandas styling it renders the enum object instead.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10637)
```Python
import enum
import pandas as pd
import streamlit as st
class Status(str, enum.Enum):
success = "Success status"
running = "Running status"
error = "Error status"
df = pd.DataFrame(
{"pipeline": ["Success", "Error", "Running"], "status": [Status.success, Status.error, Status.running]}
)
def status_highlight(value: Status):
color = ""
match value:
case Status.error:
color = "red"
case Status.running:
color = "blue"
case Status.success:
color = "green"
case _:
color = "gray"
return "color: %s" % color
df = df.style.map(status_highlight, subset=["status"])
st.dataframe(df, column_config={"status": st.column_config.TextColumn("Status column")}, hide_index=True)
```
### Steps To Reproduce
Just run the application.
### Expected Behavior
The styled cells should display the enum values (e.g. `Success status`) while keeping the color styling.
### Current Behavior

### Debug info
- Streamlit version: 1.42.2
- Python version: 3.10.11
- Operating System: Windows
- Browser: Chrome
### Additional Information
_No response_ | open | 2025-03-04T12:50:49Z | 2025-03-09T08:30:57Z | https://github.com/streamlit/streamlit/issues/10637 | [
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P4"
] | FabienLariviere | 6 |
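A hedged workaround sketch for the report above (plain Python, no Streamlit or pandas): hand the frame plain strings instead of Enum members, so there is nothing for the styled render path to mis-format. The core distinction:

```python
# `Status.success` vs `Status.success.value`: with a str mixin the members
# really are strings and compare equal to their values, but the object that
# reaches the frontend is still the Enum member unless you unwrap it.

import enum

class Status(str, enum.Enum):
    success = "Success status"
    running = "Running status"
    error = "Error status"

cells = [Status.success, Status.error, Status.running]
display = [c.value for c in cells]  # what TextColumn is expected to show

assert display == ["Success status", "Error status", "Running status"]
assert isinstance(Status.success, str)   # str mixin: members are strings
assert Status.error == "Error status"    # and compare equal to their values
```

In the repro this would mean something like `df["status"] = df["status"].map(lambda s: s.value)` before `df.style.map(...)`; because of the equality shown above, the `match` arms in `status_highlight` would still work on the unwrapped strings.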