repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
|---|---|---|---|---|---|---|---|---|---|---|---|
horovod/horovod | machine-learning | 2,961 | PyTorch sparse allreduce fails with torch nightly | Repro:
```
pip install --no-cache-dir --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
# install horovod
horovodrun -np 2 pytest -vs "test/parallel/test_torch.py::TorchTests::test_async_sparse_allreduce"
```
The test hangs, which has been causing test failures.
cc @chongxiaoc @romerojosh
@chongxiaoc can you or @irasit take a look? | closed | 2021-06-08T19:31:19Z | 2021-06-10T03:11:05Z | https://github.com/horovod/horovod/issues/2961 | [
"bug"
] | tgaddair | 2 |
huggingface/datasets | numpy | 6,819 | Give more details in `DataFilesNotFoundError` when getting the config names | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (supported) data files found in cis-lmu/Glot500",
"cause_traceback": [
"Traceback (most recent call last):\n",
" File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 73, in compute_config_names_response\n config_names = get_dataset_config_names(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 347, in get_dataset_config_names\n dataset_module = dataset_module_factory(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1873, in dataset_module_factory\n raise e1 from None\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1854, in dataset_module_factory\n return HubDatasetModuleFactoryWithoutScript(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1245, in get_module\n module_name, default_builder_kwargs = infer_module_for_data_files(\n",
" File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 595, in infer_module_for_data_files\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\"))\n",
"datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in cis-lmu/Glot500\n"
]
}
```
because the deleted files were still listed in the README, see https://huggingface.co/datasets/cis-lmu/Glot500/discussions/4
Ideally, the error message would include the name of the first configuration with missing files, to help the user understand how to fix it. Here, it would say that the configuration `aze_Ethi` has no supported data files, instead of saying that the `cis-lmu/Glot500` *dataset* has no supported data files (which is not true).
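A rough sketch of the kind of per-config reporting being requested (the function and config names here are illustrative, not the actual `datasets` internals):

```python
# Hypothetical sketch: attribute the "no data files" failure to a specific config.
class DataFilesNotFoundError(Exception):
    pass


def check_configs(data_files_per_config, dataset_name):
    """Raise an error naming the first config whose data files are missing."""
    for config_name, files in data_files_per_config.items():
        if not files:
            raise DataFilesNotFoundError(
                f"No (supported) data files found for config '{config_name}' "
                f"of dataset {dataset_name}"
            )


try:
    check_configs({"aze_Ethi": [], "eng_Latn": ["train.parquet"]}, "cis-lmu/Glot500")
except DataFilesNotFoundError as e:
    print(e)  # names 'aze_Ethi' instead of blaming the whole dataset
```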
### Motivation
Giving more detail in the error would help the Datasets Hub users to debug why the dataset viewer does not work.
### Your contribution
Not sure how best to fix this, as there are a lot of loops over the dataset configs in the traceback methods. Maybe it would be easier to handle if the code completely isolated each config. | open | 2024-04-17T11:19:47Z | 2024-04-17T11:19:47Z | https://github.com/huggingface/datasets/issues/6819 | [
"enhancement"
] | severo | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 706 | Can't start demo_cli or demo_toolbox.py | C:\Users\n-har\Desktop\deepaudio>python demo_cli.py
> Traceback (most recent call last):
File "demo_cli.py", line 4, in <module>
from synthesizer.inference import Synthesizer
File "C:\Users\n-har\Desktop\deepaudio\synthesizer\inference.py", line 1, in <module>
import torch
File "C:\Users\n-har\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\torch\__init__.py", line 81, in <module>
ctypes.CDLL(dll)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.2544.0_x64__qbz5n2kfra8p0\lib\ctypes\__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] Das angegebene Modul wurde nicht gefunden
C:\Users\n-har\Desktop\deepaudio>python demo_toolbox.py
> Traceback (most recent call last):
File "demo_toolbox.py", line 2, in <module>
from toolbox import Toolbox
File "C:\Users\n-har\Desktop\deepaudio\toolbox\__init__.py", line 1, in <module>
from toolbox.ui import UI
File "C:\Users\n-har\Desktop\deepaudio\toolbox\ui.py", line 2, in <module>
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
File "C:\Users\n-har\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\matplotlib\backends\backend_qt5agg.py", line 11, in <module>
from .backend_qt5 import (
File "C:\Users\n-har\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\matplotlib\backends\backend_qt5.py", line 16, in <module>
import matplotlib.backends.qt_editor.figureoptions as figureoptions
File "C:\Users\n-har\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\matplotlib\backends\qt_editor\figureoptions.py", line 11, in <module>
from matplotlib.backends.qt_compat import QtGui
File "C:\Users\n-har\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\matplotlib\backends\qt_compat.py", line 177, in <module>
raise ImportError("Failed to import any qt binding")
ImportError: Failed to import any qt binding
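When matplotlib raises "Failed to import any qt binding", a quick way to see which binding (if any) is actually importable is a probe like this (a diagnostic sketch, not part of the repo):

```python
import importlib


def probe_qt_bindings():
    """Try each Qt binding matplotlib can use and report which ones import."""
    results = {}
    for name in ("PyQt5", "PySide2", "PyQt4", "PySide"):
        try:
            importlib.import_module(name)
            results[name] = "ok"
        except Exception as exc:  # ImportError, DLL load failures, etc.
            results[name] = f"failed: {exc.__class__.__name__}"
    return results


print(probe_qt_bindings())
```

Since PyQt5 is installed per the pip freeze below, a failure here would suggest the same kind of DLL-loading problem as the torch error (WinError 126) rather than a missing package.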
PIP Freeze:
> appdirs==1.4.4
audioread==2.1.9
certifi==2020.12.5
cffi==1.14.5
chardet==4.0.0
cycler==0.10.0
decorator==4.4.2
dill==0.3.3
ffmpeg==1.4
future==0.18.2
idna==2.10
inflect==5.3.0
joblib==1.0.1
jsonpatch==1.32
jsonpointer==2.1
kiwisolver==1.3.1
librosa==0.8.0
llvmlite==0.36.0
matplotlib==3.3.4
multiprocess==0.70.11.1
numba==0.53.0
numpy==1.19.3
packaging==20.9
Pillow==8.1.2
pooch==1.3.0
pycparser==2.20
pynndescent==0.5.2
pyparsing==2.4.7
PyQt5==5.15.4
PyQt5-Qt5==5.15.2
PyQt5-sip==12.8.1
python-dateutil==2.8.1
pyzmq==22.0.3
requests==2.25.1
resampy==0.2.2
scikit-learn==0.24.1
scipy==1.6.1
six==1.15.0
sounddevice==0.4.1
SoundFile==0.10.3.post1
threadpoolctl==2.1.0
torch==1.5.1+cpu
torchfile==0.1.0
torchvision==0.6.1+cpu
tornado==6.1
tqdm==4.59.0
typing-extensions==3.7.4.3
umap-learn==0.5.1
Unidecode==1.2.0
urllib3==1.26.4
visdom==0.1.8.9
websocket-client==0.58.0
Using Windows 10 with Python 3.7.9 - Did I miss something? | closed | 2021-03-16T19:17:31Z | 2021-03-16T20:55:35Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/706 | [] | t0g3pii | 1 |
fastapi/fastapi | fastapi | 12,402 | [BUG] In version 0.115.0 of FastAPI, the pydantic model that has declared an alias cannot correctly receive query parameters | Thank you for all the work you have done. I have initiated the [discussion](https://github.com/fastapi/fastapi/discussions/12401) as requested, but I think this issue is quite important. Initiating this issue is just to prevent the discussion from being drowned out, and I apologize for any offense.
### Example Code
```python
import uvicorn
from typing import Literal
from fastapi import FastAPI, Query
from pydantic import BaseModel, ConfigDict, Field
from pydantic.alias_generators import to_camel

app = FastAPI()


class FilterParams(BaseModel):
    model_config = ConfigDict(alias_generator=to_camel)

    limit: int = Field(100, gt=0, le=100)
    offset: int = Field(0, ge=0)
    order_by: Literal['created_at', 'updated_at'] = 'created_at'
    tags: list[str] = []


@app.get('/items/')
async def read_items(filter_query: FilterParams = Query()):
    return filter_query


if __name__ == '__main__':
    uvicorn.run(app='app:app')
```
### Description
Running the code in the example above, I encountered an incorrect result when accessing http://127.0.0.1:8000/items/?offset=1&orderBy=updated_at in the browser: `orderBy` was not received successfully.
```
{
  "limit": 100,
  "offset": 1,
  "orderBy": "created_at",
  "tags": []
}
```
The correct result should be as follows
```
{
  "limit": 100,
  "offset": 1,
  "orderBy": "updated_at",
  "tags": []
}
```
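For context, this is the alias mapping the model expects. A pure-Python sketch of `to_camel` (mirroring, as far as I can tell, pydantic's generator) shows why `orderBy` is the query key that should map back to `order_by`:

```python
def to_camel(snake: str) -> str:
    """camelCase alias for a snake_case field name (sketch of pydantic's to_camel)."""
    first, *rest = snake.split("_")
    return first + "".join(word.capitalize() for word in rest)


# Expected behavior: the camelCase query keys resolve to snake_case fields.
query = {"offset": "1", "orderBy": "updated_at"}
aliases = {to_camel(f): f for f in ("limit", "offset", "order_by", "tags")}
received = {aliases[k]: v for k, v in query.items() if k in aliases}
print(received)  # {'offset': '1', 'order_by': 'updated_at'}
```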
### Operating System
Windows
### Operating System Details
_No response_
### FastAPI Version
0.115.0
### Pydantic Version
2.9.2
### Python Version
3.9.19
### Additional Context
_No response_ | open | 2024-10-08T06:42:41Z | 2025-01-21T06:56:01Z | https://github.com/fastapi/fastapi/issues/12402 | [] | insistence | 10 |
miguelgrinberg/Flask-SocketIO | flask | 754 | Buffer/Queue fills up when client is not consuming socketio emit | I have a Flask-SocketIO server which emits log messages when the client is connected. The intention is real-time log viewing. I noticed the application was (occasionally) leaking memory, so after some investigation I found that on occasion the disconnect message from the client was getting lost, and therefore the server was unaware that the client was no longer listening, which was causing memory to be eaten up.
After some more research I found a reference that said that the client MUST be consuming, or the buffer/queue would fill up rather than the messages being silently dropped (like they are if no client is connected). So the memory is leaked in the gap between the client disconnecting and the ping timeout closing the connection on the server side.
While I understand I need to fix the lost disconnect message, it is clear that the client can silently disconnect, and so I wonder if there is a way to reset/clear this buffer/queue and get that memory back?
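One mitigation (an application-level sketch, not a Flask-SocketIO API) is to emit through a bounded per-client buffer that drops messages once the client stops consuming, so memory stays capped until the ping timeout cleans the connection up:

```python
import queue


class DroppingBuffer:
    """Bounded per-client buffer: new messages are dropped when the client stalls."""

    def __init__(self, maxsize=1000):
        self._q = queue.Queue(maxsize=maxsize)
        self.dropped = 0

    def push(self, msg):
        try:
            self._q.put_nowait(msg)
        except queue.Full:
            self.dropped += 1  # cap memory instead of growing without bound

    def clear(self):
        """Reclaim memory for a client we believe is gone."""
        self._q = queue.Queue(maxsize=self._q.maxsize)


buf = DroppingBuffer(maxsize=2)
for line in ("log 1", "log 2", "log 3"):
    buf.push(line)
print(buf.dropped)  # 1
```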
Sorry if the answer is out there but I did try my best.
Thanks
Max | closed | 2018-08-01T19:58:49Z | 2019-06-08T23:16:06Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/754 | [
"investigate"
] | maximillion90 | 3 |
JaidedAI/EasyOCR | deep-learning | 492 | When do you plan to release v1.4 to pypi | Hi! You have added dict output in v1.4. I look forward to trying this feature, but the latest version on PyPI is still 1.3.2. When do you plan to update it? | closed | 2021-07-19T15:16:32Z | 2021-07-20T08:57:28Z | https://github.com/JaidedAI/EasyOCR/issues/492 | [] | AndreyGurevich | 1 |
python-restx/flask-restx | flask | 420 | flask restx user case to adopt flask-oidc authentication | Hello Team,
Recently I have been working on OIDC auth for my flask-restx app.
Most of the examples I see online about flask-oidc are based on a bare-bones Flask app.
That usually works.
But through googling, I do not see any use case where flask-restx adopts flask-oidc for authentication, so we can enjoy the benefit of testing the API through the Swagger UI.
Any thoughts or a quick example you have in mind?
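For what it's worth, flask-restx `Resource` classes accept a `method_decorators` list, so one plausible wiring (the flask-oidc specifics are assumptions on my part) would be to drop the OIDC decorator in there, e.g. `method_decorators = [oidc.accept_token(require_token=True)]`. The underlying pattern is just decorator application per handler method, sketched here without Flask:

```python
import functools


def require_auth(handler):
    """Stand-in for an OIDC decorator such as flask-oidc's accept_token."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        token_ok = kwargs.pop("token_ok", False)  # a real check would inspect headers
        if not token_ok:
            return {"error": "unauthorized"}, 401
        return handler(*args, **kwargs)
    return wrapper


class Resource:
    method_decorators = []  # mirrors the hook flask-restx exposes

    def dispatch(self, method, **kwargs):
        handler = getattr(self, method)
        for deco in reversed(self.method_decorators):
            handler = deco(handler)
        return handler(**kwargs)


class Items(Resource):
    method_decorators = [require_auth]

    def get(self):
        return {"items": []}, 200


print(Items().dispatch("get", token_ok=True))  # ({'items': []}, 200)
print(Items().dispatch("get"))                 # ({'error': 'unauthorized'}, 401)
```

The Swagger UI part should then only need the matching `authorizations` dict on the `Api` object, but I have not verified that end to end.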
| open | 2022-03-17T19:13:57Z | 2022-03-17T19:13:57Z | https://github.com/python-restx/flask-restx/issues/420 | [
"question"
] | zhoupoko2000 | 0 |
SYSTRAN/faster-whisper | deep-learning | 618 | faster_whisper batch encode time-consume issue | I have made a test of batching in faster-whisper.
But faster_whisper's batch encode consumes time that scales with the number of samples, so it seems that batched encoding does not work as expected in CTranslate2?
By the way, decode/generate has the same issue for in-batch operations.


| open | 2023-12-14T06:45:25Z | 2023-12-14T09:30:22Z | https://github.com/SYSTRAN/faster-whisper/issues/618 | [] | dyyzhmm | 2 |
ContextLab/hypertools | data-visualization | 251 | tests for backend management | the matplotlib backend management code only works in ipython/jupyter notebook-based environments. we could use some of the tricks @paxtonfitzpatrick is using in the [davos](https://github.com/ContextLab/davos) package to run tests for that code. | open | 2021-08-07T15:16:47Z | 2021-08-07T15:16:47Z | https://github.com/ContextLab/hypertools/issues/251 | [
"enhancement",
"help wanted"
] | jeremymanning | 0 |
mljar/mercury | jupyter | 474 | Notebook hangs in "WorkerState.Busy" indefinitely | Hi, I haven't been able to deploy any notebooks in Mercury (other than the demo). I've followed the documentation and set up the requirements.txt file, but the Worker never finishes. I've waited up to an hour. Any suggestions? | open | 2024-12-04T19:10:26Z | 2024-12-05T21:50:02Z | https://github.com/mljar/mercury/issues/474 | [] | Pancake205 | 3 |
iperov/DeepFaceLab | machine-learning | 5,576 | Merge error | After training through Quick96, I ran the merge. As a result, it writes "no faces found for 00001.png, copying without faces" and a window with hotkeys appears. Inside the data_dst folder, a merged folder appeared containing one frame that really does not contain the desired face, as well as a merged_masked folder containing a file with a black square. That's all. My dst video contains more than just the face I'm about to replace. How then do I replace a face in a video in which there are other faces? (P.S. In the frames with the face I am going to replace, no other faces appear. P.P.S. In the aligned folder, I deleted the frames without the face that I want to replace.) Tell me what's wrong?


| open | 2022-10-30T16:54:12Z | 2023-09-21T04:21:43Z | https://github.com/iperov/DeepFaceLab/issues/5576 | [] | Margaret93 | 2 |
tfranzel/drf-spectacular | rest-api | 1,094 | How to only include `ApiKeyAuth` authentication/authorization strategy? | ### Describe the bug
We are only exposing endpoints that use `djangorestframework-api-key` in our generated schemas. We're following the blueprint instructions [here](https://github.com/tfranzel/drf-spectacular/blob/0.25.1/docs/blueprints.rst#djangorestframework-api-key) to include `ApiKeyAuth`.
However, we're also seeing `basicAuth` and `cookieAuth` in the generated docs; how do we suppress those?
Including an empty list for `AUTHENTICATION_WHITELIST` does not seem to affect it.
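If it helps others: my understanding (possibly wrong) is that `basicAuth` and `cookieAuth` are derived from DRF's `DEFAULT_AUTHENTICATION_CLASSES` (`BasicAuthentication` and `SessionAuthentication` respectively), not from spectacular's own settings, so the fix may live in `REST_FRAMEWORK` rather than in `AUTHENTICATION_WHITELIST`:

```python
# settings.py — a sketch under the assumption above; the permission class path
# is the stock one from djangorestframework-api-key.
REST_FRAMEWORK = {
    # Dropping BasicAuthentication / SessionAuthentication here should remove
    # the basicAuth / cookieAuth security schemes from the generated schema.
    "DEFAULT_AUTHENTICATION_CLASSES": [],
    "DEFAULT_PERMISSION_CLASSES": [
        "rest_framework_api_key.permissions.HasAPIKey",
    ],
}
```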
### To Reproduce
In `settings.py`:
```py
SPECTACULAR_SETTINGS = {
    # ...
    "AUTHENTICATION_WHITELIST": [],
    "APPEND_COMPONENTS": {
        "securitySchemes": {
            "ApiKeyAuth": {
                "type": "apiKey",
                "in": "header",
                "name": "Authorization",
            }
        }
    },
    "SECURITY": [{"ApiKeyAuth": []}],
}
```
### Expected behavior
We would like to only see the `ApiKeyAuth` strategy in generated docs
### Observed behavior
We also see `basicAuth` and `cookieAuth`
In Swagger "Authorize":

In Redoc "Authorizations":

In `.yaml` file "securitySchemes":
```yml
components:
  securitySchemes:
    ApiKeyAuth:
      type: apiKey
      in: header
      name: Authorization
    basicAuth:
      type: http
      scheme: basic
    cookieAuth:
      type: apiKey
      in: cookie
      name: sessionid
```
| closed | 2023-10-31T15:58:33Z | 2023-11-02T17:57:37Z | https://github.com/tfranzel/drf-spectacular/issues/1094 | [] | alexburner | 6 |
dynaconf/dynaconf | fastapi | 449 | [bug] LazyFormatted TypeError exception in Django TEMPLATES DIRS | **Describe the bug**
A TypeError exception (expected str, bytes or os.PathLike object, not Lazy) is raised if a templates dir path is added with `@format` or `@jinja` tokens.
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following config files:
<!-- Please adjust if you are using different files and formats! -->
<details>
<summary> Config files </summary>
**.env**
```bash
DJANGO_ENV=development
DJANGO_SETTINGS_MODULE=project.settings
```
and
**settings.yaml**
```yaml
[default]
TEMPLATES:
  - APP_DIRS: true
    BACKEND: django.template.backends.django.DjangoTemplates
    DIRS:
      - "@format {this.BASE_DIR}/templates"
    OPTIONS:
      context_processors:
        - django.template.context_processors.debug
        - django.template.context_processors.request
        - django.contrib.auth.context_processors.auth
        - django.contrib.messages.context_processors.messages
```
</details>
2. Having the following app code:
<details>
<summary> Code </summary>
**settings.py**
```python
import os
import dynaconf
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
settings = dynaconf.DjangoDynaconf(__name__, BASE_DIR=BASE_DIR)
```
</details>
**Environment (please complete the following information):**
- OS: Windows
- Dynaconf version: 3.1.2
- Frameworks: Django 2.2.16 | closed | 2020-10-14T16:46:43Z | 2021-03-01T17:50:04Z | https://github.com/dynaconf/dynaconf/issues/449 | [
"bug"
] | dgavrilov | 0 |
NVlabs/neuralangelo | computer-vision | 214 | Error during "requirements.txt" installation | Hello,
I did everything as in this tutorial video "https://www.youtube.com/watch?v=NEF5bGyTqmk", but when I run
`pip install -r requirements.txt`
I got this error:
`Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch (from -r requirements.txt (line 3))
Cloning https://github.com/NVlabs/tiny-cuda-nn/ to /tmp/pip-req-build-1bu7dhfy
Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ /tmp/pip-req-build-1bu7dhfy
Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit c91138bcd4c6877c8d5e60e483c0581aafc70cce
Running command git submodule update --init --recursive -q
Preparing metadata (setup.py) ... done
Collecting addict (from -r requirements.txt (line 1))
Using cached addict-2.4.0-py3-none-any.whl.metadata (1.0 kB)
Requirement already satisfied: gdown in /home/stein/miniconda3/envs/neuralangelo/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (5.2.0)
Requirement already satisfied: gpustat in /home/stein/miniconda3/envs/neuralangelo/lib/python3.8/site-packages (from -r requirements.txt (line 4)) (1.1.1)
Collecting icecream (from -r requirements.txt (line 5))
Using cached icecream-2.1.3-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting imageio-ffmpeg (from -r requirements.txt (line 6))
Using cached imageio_ffmpeg-0.5.1-py3-none-manylinux2010_x86_64.whl.metadata (1.6 kB)
Collecting imutils (from -r requirements.txt (line 7))
Using cached imutils-0.5.4.tar.gz (17 kB)
Preparing metadata (setup.py) ... done
Collecting ipdb (from -r requirements.txt (line 8))
Using cached ipdb-0.13.13-py3-none-any.whl.metadata (14 kB)
Collecting k3d (from -r requirements.txt (line 9))
Using cached k3d-2.16.1-py3-none-any.whl.metadata (6.8 kB)
Collecting kornia (from -r requirements.txt (line 10))
Using cached kornia-0.7.3-py2.py3-none-any.whl.metadata (7.7 kB)
Collecting lpips (from -r requirements.txt (line 11))
Using cached lpips-0.1.4-py3-none-any.whl.metadata (10 kB)
Collecting matplotlib (from -r requirements.txt (line 12))
Using cached matplotlib-3.7.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.metadata (5.7 kB)
Collecting mediapy (from -r requirements.txt (line 13))
Using cached mediapy-1.2.2-py3-none-any.whl.metadata (4.8 kB)
Collecting nvidia-ml-py3 (from -r requirements.txt (line 14))
Using cached nvidia-ml-py3-7.352.0.tar.gz (19 kB)
Preparing metadata (setup.py) ... done
Collecting open3d (from -r requirements.txt (line 15))
Using cached open3d-0.18.0-cp38-cp38-manylinux_2_27_x86_64.whl.metadata (4.2 kB)
Collecting opencv-python-headless (from -r requirements.txt (line 16))
Using cached opencv_python_headless-4.10.0.84-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (20 kB)
Collecting OpenEXR (from -r requirements.txt (line 17))
Using cached openexr-3.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB)
Collecting pathlib (from -r requirements.txt (line 18))
Using cached pathlib-1.0.1-py3-none-any.whl.metadata (5.1 kB)
Requirement already satisfied: pillow in /home/stein/miniconda3/envs/neuralangelo/lib/python3.8/site-packages (from -r requirements.txt (line 19)) (10.4.0)
Collecting plotly (from -r requirements.txt (line 20))
Using cached plotly-5.24.1-py3-none-any.whl.metadata (7.3 kB)
Collecting pyequilib (from -r requirements.txt (line 21))
Using cached pyequilib-0.5.8-py3-none-any.whl.metadata (8.4 kB)
Collecting pyexr (from -r requirements.txt (line 22))
Using cached pyexr-0.4.0-py3-none-any.whl.metadata (4.5 kB)
Collecting PyMCubes (from -r requirements.txt (line 23))
**Using cached pymcubes-0.1.6.tar.gz (109 kB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [9 lines of output]
Collecting setuptools
Using cached setuptools-75.2.0-py3-none-any.whl.metadata (6.9 kB)
Collecting wheel
Using cached wheel-0.44.0-py3-none-any.whl.metadata (2.3 kB)
Collecting Cython
Using cached Cython-3.0.11-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.2 kB)
ERROR: Ignored the following versions that require a different python version: 1.25.0 Requires-Python >=3.9; 1.25.1 Requires-Python >=3.9; 1.25.2 Requires-Python >=3.9; 1.26.0 Requires-Python <3.13,>=3.9; 1.26.1 Requires-Python <3.13,>=3.9; 1.26.2 Requires-Python >=3.9; 1.26.3 Requires-Python >=3.9; 1.26.4 Requires-Python >=3.9; 2.0.0 Requires-Python >=3.9; 2.0.1 Requires-Python >=3.9; 2.0.2 Requires-Python >=3.9; 2.1.0 Requires-Python >=3.10; 2.1.0rc1 Requires-Python >=3.10; 2.1.1 Requires-Python >=3.10; 2.1.2 Requires-Python >=3.10
ERROR: Could not find a version that satisfies the requirement numpy~=2.0 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.21.2, 1.21.3, 1.21.4, 1.21.5, 1.21.6, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1.23.4, 1.23.5, 1.24.0, 1.24.1, 1.24.2, 1.24.3, 1.24.4)
ERROR: No matching distribution found for numpy~=2.0
[end of output]**
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
`
I tried to install numpy 2.0, but this time I got this error:
` pip install numpy==2.0.0
ERROR: Ignored the following versions that require a different python version: 1.25.0 Requires-Python >=3.9; 1.25.1 Requires-Python >=3.9; 1.25.2 Requires-Python >=3.9; 1.26.0 Requires-Python <3.13,>=3.9; 1.26.1 Requires-Python <3.13,>=3.9; 1.26.2 Requires-Python >=3.9; 1.26.3 Requires-Python >=3.9; 1.26.4 Requires-Python >=3.9; 2.0.0 Requires-Python >=3.9; 2.0.1 Requires-Python >=3.9; 2.0.2 Requires-Python >=3.9; 2.1.0 Requires-Python >=3.10; 2.1.0rc1 Requires-Python >=3.10; 2.1.1 Requires-Python >=3.10; 2.1.2 Requires-Python >=3.10
ERROR: Could not find a version that satisfies the requirement numpy==2.0.0 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.21.2, 1.21.3, 1.21.4, 1.21.5, 1.21.6, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1.23.4, 1.23.5, 1.24.0, 1.24.1, 1.24.2, 1.24.3, 1.24.4)
ERROR: No matching distribution found for numpy==2.0.0
`
I also tried to update Python but failed. Any solutions or ideas?
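For what it's worth, the log itself names the constraint: every numpy 2.x release (which the PyMCubes build pulls in via `numpy~=2.0`) declares `Requires-Python >=3.9`, while this environment is Python 3.8 (note the `cp38` wheels above). A tiny sketch of the gate the resolver is applying:

```python
import sys


def numpy2_installable(py=(sys.version_info.major, sys.version_info.minor)):
    """numpy 2.0.x declares Requires-Python >=3.9 (per the pip log above)."""
    return py >= (3, 9)


print(numpy2_installable((3, 8)))   # False -> explains "No matching distribution"
print(numpy2_installable((3, 12)))  # True
```

So the likely fixes are recreating the conda env with Python >= 3.9, or pinning an older PyMCubes whose build does not require numpy 2 — both are guesses on my part, not tested here.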
| open | 2024-10-25T21:20:28Z | 2024-10-25T21:36:59Z | https://github.com/NVlabs/neuralangelo/issues/214 | [] | canonar | 1 |
microsoft/nni | data-science | 5,110 | Using 2 GPUs training to train DARTS in parallel, but get 2 different search architecture? | **Describe the issue**:
I used 2 GPUs to train DARTS in parallel, but from the output I find that I get 2 different search results.
And I used `export_onnx`, but I didn't get the output model under the specified directory.
```python
if __name__ == "__main__":
    parser = ArgumentParser("darts")
    parser.add_argument("--layers", default=8, type=int)
    parser.add_argument("--batch-size", default=64, type=int)
    parser.add_argument("--log-frequency", default=10, type=int)
    parser.add_argument("--epochs", default=1, type=int)
    parser.add_argument("--channels", default=16, type=int)
    parser.add_argument("--unrolled", default=False, action="store_true")
    parser.add_argument("--visualization", default=False, action="store_true")
    parser.add_argument("--v1", default=False, action="store_true")
    parser.add_argument("--local_rank", default=0, type=int, help="node rank for distributed training")
    args = parser.parse_args()

    dataset_train, dataset_valid = datasets.get_dataset("cifar10")
    length = len(dataset_train)
    train_size, validate_size = int(0.5 * length), int(0.5 * length)
    train_set, validate_set = torch.utils.data.random_split(dataset_train, [train_size, validate_size])

    model = CNN(32, 3, args.channels, 10, args.layers)
    evaluator = pl.Classification(
        train_dataloaders=pl.DataLoader(train_set, batch_size=45, pin_memory=True, num_workers=4),
        val_dataloaders=pl.DataLoader(validate_set, batch_size=45, pin_memory=True, num_workers=4),
        max_epochs=1,
        accelerator="gpu",
        devices=2,
        strategy='ddp',
        log_every_n_steps=10,
        export_onnx=Path("searched_models/"),
    )
    exploration_strategy = strategy.DARTS()

    exp = RetiariiExperiment(model, evaluator=evaluator, strategy=exploration_strategy)
    exp_config = RetiariiExeConfig('local')
    exp_config.experiment_name = 'cifa10'
    exp_config.max_trial_number = 1
    exp_config.trial_concurrency = 1
    exp_config.trial_gpu_number = 2
    exp_config.training_service.use_active_gpu = True
    exp_config.execution_engine = 'oneshot'
    exp.run(exp_config, 8081)

    exported_arch = exp.export_top_models()[0]
    print(exported_arch)
```
output model:
It seems the two processes on the 2 GPUs print different results. Should I just use the model trained by rank 0, or is there anything wrong with my usage?
```shell
{'normal_n2_p0': 'maxpool', 'normal_n2_p1': 'maxpool', 'normal_n2_switch': [0, 1], 'normal_n3_p0': 'maxpool', 'normal_n3_p1': 'maxpool', 'normal_n3_p2': 'maxpool', 'normal_n3_switch': [0, 1], 'normal_n4_p0': 'maxpool', 'normal_n4_p1': 'maxpool', 'normal_n4_p2': 'maxpool', 'normal_n4_p3': 'maxpool', 'normal_n4_switch': [0, 2], 'normal_n5_p0': 'maxpool', 'normal_n5_p1': 'maxpool', 'normal_n5_p2': 'maxpool', 'normal_n5_p3': 'maxpool', 'normal_n5_p4': 'dilconv5x5', 'normal_n5_switch': [0, 3], 'reduce_n2_p0': 'maxpool', 'reduce_n2_p1': 'maxpool', 'reduce_n2_switch': [0, 1], 'reduce_n3_p0': 'maxpool', 'reduce_n3_p1': 'maxpool', 'reduce_n3_p2': 'maxpool', 'reduce_n3_switch': [0, 2], 'reduce_n4_p0': 'maxpool', 'reduce_n4_p1': 'maxpool', 'reduce_n4_p2': 'maxpool', 'reduce_n4_p3': 'maxpool', 'reduce_n4_switch': [0, 2], 'reduce_n5_p0': 'maxpool', 'reduce_n5_p1': 'sepconv5x5', 'reduce_n5_p2': 'maxpool', 'reduce_n5_p3': 'dilconv5x5', 'reduce_n5_p4': 'dilconv3x3', 'reduce_n5_switch': [0, 3]}
{'normal_n2_p0': 'maxpool', 'normal_n2_p1': 'maxpool', 'normal_n2_switch': [0, 1], 'normal_n3_p0': 'maxpool', 'normal_n3_p1': 'maxpool', 'normal_n3_p2': 'maxpool', 'normal_n3_switch': [0, 1], 'normal_n4_p0': 'maxpool', 'normal_n4_p1': 'maxpool', 'normal_n4_p2': 'sepconv5x5', 'normal_n4_p3': 'dilconv5x5', 'normal_n4_switch': [0, 3], 'normal_n5_p0': 'maxpool', 'normal_n5_p1': 'maxpool', 'normal_n5_p2': 'maxpool', 'normal_n5_p3': 'maxpool', 'normal_n5_p4': 'maxpool', 'normal_n5_switch': [0, 3], 'reduce_n2_p0': 'maxpool', 'reduce_n2_p1': 'maxpool', 'reduce_n2_switch': [0, 1], 'reduce_n3_p0': 'maxpool', 'reduce_n3_p1': 'maxpool', 'reduce_n3_p2': 'maxpool', 'reduce_n3_switch': [0, 2], 'reduce_n4_p0': 'maxpool', 'reduce_n4_p1': 'maxpool', 'reduce_n4_p2': 'maxpool', 'reduce_n4_p3': 'maxpool', 'reduce_n4_switch': [0, 2], 'reduce_n5_p0': 'maxpool', 'reduce_n5_p1': 'dilconv5x5', 'reduce_n5_p2': 'dilconv5x5', 'reduce_n5_p3': 'dilconv5x5', 'reduce_n5_p4': 'sepconv5x5', 'reduce_n5_switch': [0, 3]}
```
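A plausible explanation (not verified against NNI internals) is that each DDP process holds its own copy of the architecture-selection state, so without a shared seed or a rank-0-only export the two workers can report different architectures. A torch-free toy sketch of why a shared seed would make the reports agree:

```python
import random

OPS = ["maxpool", "sepconv5x5", "dilconv5x5", "dilconv3x3"]


def sample_architecture(seed, n_choices=4):
    """Toy stand-in for the architecture choice one DDP worker reports."""
    rng = random.Random(seed)
    return [rng.choice(OPS) for _ in range(n_choices)]


print(sample_architecture(0) == sample_architecture(1))    # likely False: workers disagree
print(sample_architecture(42) == sample_architecture(42))  # True: shared seed -> same report
```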

| open | 2022-09-05T06:59:23Z | 2023-10-12T02:12:07Z | https://github.com/microsoft/nni/issues/5110 | [
"NAS 2.0"
] | toufunao | 2 |
huggingface/peft | pytorch | 1,363 | Error while fetching adapter layer from huggingface library | ### System Info
```
pa_extractor = LlamaForCausalLM.from_pretrained(
    LLAMA_MODEL_NAME,
    token=HF_ACCESS_TOKEN,
    max_length=LLAMA2_MAX_LENGTH,
    pad_token_id=cls.tokenizer.eos_token_id,
    device_map="auto",
    quantization_config=bnb_config,
)
pa_extractor.load_adapter(PEFT_MODEL_NAME, token=HF_ACCESS_TOKEN, device_map="auto")
```
# Getting the below error while executing:
401 client error, Repository Not Found for url: https://huggingface.co/muskan/llama2/resolve/main/adapter_model.safetensors.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password
This error occurs while calling pa_extractor.load_adapter
### Who can help?
@pacman100 @younesbelkada @sayakpaul
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
The model is present in my private repo, should be replicable if you will try to use load_adapter to fetch any adapter layer from hf directly.
### Expected behavior
Should be able to download the peft adapter layer successfully | closed | 2024-01-16T11:39:08Z | 2024-03-10T15:03:37Z | https://github.com/huggingface/peft/issues/1363 | [] | Muskanb | 2 |
comfyanonymous/ComfyUI | pytorch | 7,000 | Pinning | ### Feature Idea
To pin means to fix something in place.

If I move a group with pinned nodes, I expect the nodes to stay pinned to the group, but they don't. The nodes are torn off the group instead. Looks strange.

I would expect that a pinned node in a group can't be moved within the group, but moves with the group. It should not be possible to change the group's border so that the pinned node ends up lying outside of the group.

A group should also have the option to be pinned.
Maybe also a possibility to collapse / expand, to hide its content.
Because group nodes can be set to disabled or bypassed, it makes no sense if a pinned node is left outside after the group is moved and needs to be reset separately outside of the group.



### Existing Solutions
None
### Other
None | open | 2025-02-27T16:36:36Z | 2025-02-27T23:43:13Z | https://github.com/comfyanonymous/ComfyUI/issues/7000 | [
"Feature",
"Frontend"
] | schoenid | 0 |
huggingface/datasets | machine-learning | 6,695 | Support JSON file with an array of strings | Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 | closed | 2024-02-26T12:35:11Z | 2024-03-08T14:16:25Z | https://github.com/huggingface/datasets/issues/6695 | [
"enhancement"
] | albertvillanova | 1 |
huggingface/datasets | deep-learning | 7,194 | datasets.exceptions.DatasetNotFoundError for private dataset | ### Describe the bug
The following Python code tries to download a private dataset and fails with the error `datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.`. Downloading a public dataset doesn't work.
``` py
from datasets import load_dataset
_ = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly")
```
This seems to be just an issue with my machine config as the code above works with a colleague's machine. So far I have tried:
- logging back out and in from the Huggingface CLI using `huggingface-cli logout`
- manually removing the token cache at `/Users/kalyan/.cache/huggingface/token` (found using `huggingface-cli env`)
- manually passing a token in `load_dataset`
My output of `huggingface-cli whoami`:
```
kdutia
orgs: ClimatePolicyRadar
```
### Steps to reproduce the bug
```
python
Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> _ = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 2074, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1795, in load_dataset_builder
dataset_module = dataset_module_factory(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1659, in dataset_module_factory
raise e1 from None
File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1597, in dataset_module_factory
raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e
datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.
>>>
```
### Expected behavior
The dataset downloads successfully.
### Environment info
From `huggingface-cli env`:
```
- huggingface_hub version: 0.25.1
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.12.2
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: /Users/kalyan/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: kdutia
- Configured git credential helpers: osxkeychain
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: 3.1.4
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: N/A
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 2.1.1
- pydantic: N/A
- aiohttp: 3.10.8
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /Users/kalyan/.cache/huggingface/hub
- HF_ASSETS_CACHE: /Users/kalyan/.cache/huggingface/assets
- HF_TOKEN_PATH: /Users/kalyan/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
from `datasets-cli env`:
```
- `datasets` version: 3.0.1
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.12.2
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
``` | closed | 2024-10-03T07:49:36Z | 2024-10-03T10:09:28Z | https://github.com/huggingface/datasets/issues/7194 | [] | kdutia | 2 |
jacobgil/pytorch-grad-cam | computer-vision | 110 | About target_category | Would you mind explaining the function of this parameter in detail? | closed | 2021-07-07T02:14:08Z | 2021-07-10T14:08:18Z | https://github.com/jacobgil/pytorch-grad-cam/issues/110 | [] | m250317460 | 4 |
aio-libs-abandoned/aioredis-py | asyncio | 567 | xread with non-integer timeout argument hangs indefinitely | The `timeout` argument [is passed verbatim to redis](https://github.com/aio-libs/aioredis/blob/master/aioredis/commands/streams.py#L252). When that argument is not an integer, the XREAD will hang.
Example:
`await r.xread(["system_event_stream"], timeout=0.1, latest_ids=[0])` shows up in `redis monitor` as `1554191321.856104 [3 172.18.0.16:58130] "XREAD" "BLOCK" "0.1" "STREAMS" "system_event_stream" "0"` and hangs forever.
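For illustration, a guard that surfaces this misuse early instead of hanging (a hypothetical helper, not actual aioredis code):

```python
def coerce_block_timeout(timeout):
    """Validate XREAD's BLOCK argument, which Redis expects in integer ms.

    A float like 0.1 is rejected loudly, since Redis misparses "0.1"
    and the command blocks forever.
    """
    if isinstance(timeout, bool) or not isinstance(timeout, int):
        raise TypeError(
            "timeout must be an int (milliseconds), got %r" % (timeout,))
    return timeout
```

With a check like this in the command layer, `xread(..., timeout=0.1)` would fail fast with a `TypeError` rather than hang.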
It's probably best to assert the argument is an integer. Other Python functions often accept seconds for timeouts, so a warning/error would be good. | closed | 2019-04-02T07:55:47Z | 2019-07-09T14:17:28Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/567 | [] | tino | 1 |
holoviz/panel | matplotlib | 7,190 | test | Thanks for contacting us! Please read and follow these instructions carefully, then delete this introductory text to keep your issue easy to read. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately.
#### ALL software version info
(this library, plus any other relevant software, e.g. bokeh, python, notebook, OS, browser, etc)
#### Description of expected behavior and the observed behavior
#### Complete, minimal, self-contained example code that reproduces the issue
```
# code goes here between backticks
```
#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action
- [ ] I may be interested in making a pull request to address this
| closed | 2024-08-27T08:48:25Z | 2024-08-27T08:49:23Z | https://github.com/holoviz/panel/issues/7190 | [
"TRIAGE"
] | hoxbro | 0 |
microsoft/unilm | nlp | 1,583 | Kosmos-2.5 Chinese performance very bad | Why not consider adding Chinese support? | open | 2024-06-23T03:30:22Z | 2024-07-14T14:56:19Z | https://github.com/microsoft/unilm/issues/1583 | [] | luohao123 | 2 |
Miserlou/Zappa | django | 1,509 | Can't parse ".serverless/requirements/xlrd/biffh.py" unless encoding is latin | <!--- Provide a general summary of the issue in the Title above -->
## Context
When `detect_flask` is called during `zappa init`, it fails on an encoding issue because of the commented-out block at the head of `xlrd/biffh.py`.
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
## Expected Behavior
<!--- Tell us what should happen -->
It should not error out.
## Actual Behavior
<!--- Tell us what happens instead -->
It fails with an encoding exception at `f.readlines()`.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Just add `encoding='latin'` to the `open` call.
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Have `xlrd` as a dependency.
2. Call `zappa init`.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used:
* Operating System and Python version: OSX, python 3
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
| open | 2018-05-14T18:00:14Z | 2018-05-14T18:00:14Z | https://github.com/Miserlou/Zappa/issues/1509 | [] | joshmalina | 0 |
httpie/http-prompt | api | 193 | Preview request? | While the `env` command is somewhat useful, it would be nice to preview a request before sending it… perhaps a `show`, `dump`, or `req` command would be helpful? | open | 2021-04-24T02:03:26Z | 2021-05-07T06:40:29Z | https://github.com/httpie/http-prompt/issues/193 | [] | jenstroeger | 2 |
newpanjing/simpleui | django | 401 | simpleui's layer dialog is broken | **Bug description**
Clicking the OK button in the layer dialog sends an AJAX request to the wrong URL, which breaks the dialog. The related code problem is as follows.
In the `get_model_ajax_url` method of the `simpletags.py` file,
the key value is written incorrectly: `{}:{}_{}_changelist` should be changed to `{}:{}_{}_ajax`. | closed | 2021-10-15T02:12:01Z | 2021-10-15T06:27:45Z | https://github.com/newpanjing/simpleui/issues/401 | [
"bug"
] | cqjinxiaotao | 0 |
JohnSnowLabs/nlu | streamlit | 246 | Model Loading | I am loading the model like this:
```
import sparknlp
import nlu
spark = sparknlp.start()
df = spark.read.csv("nlp_data.csv")
res = nlu.load("pos").predict(df[["text"]].rdd.flatMap(lambda x: x).collect())
print(res)
spark.stop()
```
Each time I get the following messages in my console:
```
com.johnsnowlabs.nlp#spark-nlp_2.12 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-3f17e4b8-0bdf-40c5-9879-d62f9c2dc974;1.0
confs: [default]
found com.johnsnowlabs.nlp#spark-nlp_2.12;5.2.3 in central
found com.typesafe#config;1.4.2 in central
found org.rocksdb#rocksdbjni;6.29.5 in central
found com.amazonaws#aws-java-sdk-s3;1.12.500 in central
found com.amazonaws#aws-java-sdk-kms;1.12.500 in central
found com.amazonaws#aws-java-sdk-core;1.12.500 in central
found commons-logging#commons-logging;1.1.3 in central
found commons-codec#commons-codec;1.15 in central
found org.apache.httpcomponents#httpclient;4.5.13 in central
found org.apache.httpcomponents#httpcore;4.4.13 in central
found software.amazon.ion#ion-java;1.0.2 in central
found com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.12.6 in central
found joda-time#joda-time;2.8.1 in central
found com.amazonaws#jmespath-java;1.12.500 in central
found com.github.universal-automata#liblevenshtein;3.0.0 in central
found com.google.protobuf#protobuf-java-util;3.0.0-beta-3 in central
found com.google.protobuf#protobuf-java;3.0.0-beta-3 in central
found com.google.code.gson#gson;2.3 in central
found it.unimi.dsi#fastutil;7.0.12 in central
found org.projectlombok#lombok;1.16.8 in central
found com.google.cloud#google-cloud-storage;2.20.1 in central
found com.google.guava#guava;31.1-jre in central
found com.google.guava#failureaccess;1.0.1 in central
found com.google.guava#listenablefuture;9999.0-empty-to-avoid-conflict-with-guava in central
found com.google.errorprone#error_prone_annotations;2.18.0 in central
found com.google.j2objc#j2objc-annotations;1.3 in central
found com.google.http-client#google-http-client;1.43.0 in central
found io.opencensus#opencensus-contrib-http-util;0.31.1 in central
found com.google.http-client#google-http-client-jackson2;1.43.0 in central
found com.google.http-client#google-http-client-gson;1.43.0 in central
found com.google.api-client#google-api-client;2.2.0 in central
found com.google.oauth-client#google-oauth-client;1.34.1 in central
found com.google.http-client#google-http-client-apache-v2;1.43.0 in central
found com.google.apis#google-api-services-storage;v1-rev20220705-2.0.0 in central
found com.google.code.gson#gson;2.10.1 in central
found com.google.cloud#google-cloud-core;2.12.0 in central
found io.grpc#grpc-context;1.53.0 in central
found com.google.auto.value#auto-value-annotations;1.10.1 in central
found com.google.auto.value#auto-value;1.10.1 in central
found javax.annotation#javax.annotation-api;1.3.2 in central
found com.google.cloud#google-cloud-core-http;2.12.0 in central
found com.google.http-client#google-http-client-appengine;1.43.0 in central
found com.google.api#gax-httpjson;0.108.2 in central
found com.google.cloud#google-cloud-core-grpc;2.12.0 in central
found io.grpc#grpc-alts;1.53.0 in central
found io.grpc#grpc-grpclb;1.53.0 in central
found org.conscrypt#conscrypt-openjdk-uber;2.5.2 in central
found io.grpc#grpc-auth;1.53.0 in central
found io.grpc#grpc-protobuf;1.53.0 in central
found io.grpc#grpc-protobuf-lite;1.53.0 in central
found io.grpc#grpc-core;1.53.0 in central
found com.google.api#gax;2.23.2 in central
found com.google.api#gax-grpc;2.23.2 in central
found com.google.auth#google-auth-library-credentials;1.16.0 in central
found com.google.auth#google-auth-library-oauth2-http;1.16.0 in central
found com.google.api#api-common;2.6.2 in central
found io.opencensus#opencensus-api;0.31.1 in central
found com.google.api.grpc#proto-google-iam-v1;1.9.2 in central
found com.google.protobuf#protobuf-java;3.21.12 in central
found com.google.protobuf#protobuf-java-util;3.21.12 in central
found com.google.api.grpc#proto-google-common-protos;2.14.2 in central
found org.threeten#threetenbp;1.6.5 in central
found com.google.api.grpc#proto-google-cloud-storage-v2;2.20.1-alpha in central
found com.google.api.grpc#grpc-google-cloud-storage-v2;2.20.1-alpha in central
found com.google.api.grpc#gapic-google-cloud-storage-v2;2.20.1-alpha in central
found com.fasterxml.jackson.core#jackson-core;2.14.2 in central
found com.google.code.findbugs#jsr305;3.0.2 in central
found io.grpc#grpc-api;1.53.0 in central
found io.grpc#grpc-stub;1.53.0 in central
found org.checkerframework#checker-qual;3.31.0 in central
found io.perfmark#perfmark-api;0.26.0 in central
found com.google.android#annotations;4.1.1.4 in central
found org.codehaus.mojo#animal-sniffer-annotations;1.22 in central
found io.opencensus#opencensus-proto;0.2.0 in central
found io.grpc#grpc-services;1.53.0 in central
found com.google.re2j#re2j;1.6 in central
found io.grpc#grpc-netty-shaded;1.53.0 in central
found io.grpc#grpc-googleapis;1.53.0 in central
found io.grpc#grpc-xds;1.53.0 in central
found com.navigamez#greex;1.0 in central
found dk.brics.automaton#automaton;1.11-8 in central
found com.johnsnowlabs.nlp#tensorflow-cpu_2.12;0.4.4 in central
found com.microsoft.onnxruntime#onnxruntime;1.16.3 in central
:: resolution report :: resolve 1966ms :: artifacts dl 54ms
:: modules in use:
com.amazonaws#aws-java-sdk-core;1.12.500 from central in [default]
com.amazonaws#aws-java-sdk-kms;1.12.500 from central in [default]
com.amazonaws#aws-java-sdk-s3;1.12.500 from central in [default]
com.amazonaws#jmespath-java;1.12.500 from central in [default]
com.fasterxml.jackson.core#jackson-core;2.14.2 from central in [default]
com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.12.6 from central in [default]
com.github.universal-automata#liblevenshtein;3.0.0 from central in [default]
com.google.android#annotations;4.1.1.4 from central in [default]
com.google.api#api-common;2.6.2 from central in [default]
com.google.api#gax;2.23.2 from central in [default]
com.google.api#gax-grpc;2.23.2 from central in [default]
com.google.api#gax-httpjson;0.108.2 from central in [default]
com.google.api-client#google-api-client;2.2.0 from central in [default]
com.google.api.grpc#gapic-google-cloud-storage-v2;2.20.1-alpha from central in [default]
com.google.api.grpc#grpc-google-cloud-storage-v2;2.20.1-alpha from central in [default]
com.google.api.grpc#proto-google-cloud-storage-v2;2.20.1-alpha from central in [default]
com.google.api.grpc#proto-google-common-protos;2.14.2 from central in [default]
com.google.api.grpc#proto-google-iam-v1;1.9.2 from central in [default]
com.google.apis#google-api-services-storage;v1-rev20220705-2.0.0 from central in [default]
com.google.auth#google-auth-library-credentials;1.16.0 from central in [default]
com.google.auth#google-auth-library-oauth2-http;1.16.0 from central in [default]
com.google.auto.value#auto-value;1.10.1 from central in [default]
com.google.auto.value#auto-value-annotations;1.10.1 from central in [default]
com.google.cloud#google-cloud-core;2.12.0 from central in [default]
com.google.cloud#google-cloud-core-grpc;2.12.0 from central in [default]
com.google.cloud#google-cloud-core-http;2.12.0 from central in [default]
com.google.cloud#google-cloud-storage;2.20.1 from central in [default]
com.google.code.findbugs#jsr305;3.0.2 from central in [default]
com.google.code.gson#gson;2.10.1 from central in [default]
com.google.errorprone#error_prone_annotations;2.18.0 from central in [default]
com.google.guava#failureaccess;1.0.1 from central in [default]
com.google.guava#guava;31.1-jre from central in [default]
com.google.guava#listenablefuture;9999.0-empty-to-avoid-conflict-with-guava from central in [default]
com.google.http-client#google-http-client;1.43.0 from central in [default]
com.google.http-client#google-http-client-apache-v2;1.43.0 from central in [default]
com.google.http-client#google-http-client-appengine;1.43.0 from central in [default]
com.google.http-client#google-http-client-gson;1.43.0 from central in [default]
com.google.http-client#google-http-client-jackson2;1.43.0 from central in [default]
com.google.j2objc#j2objc-annotations;1.3 from central in [default]
com.google.oauth-client#google-oauth-client;1.34.1 from central in [default]
com.google.protobuf#protobuf-java;3.21.12 from central in [default]
com.google.protobuf#protobuf-java-util;3.21.12 from central in [default]
com.google.re2j#re2j;1.6 from central in [default]
com.johnsnowlabs.nlp#spark-nlp_2.12;5.2.3 from central in [default]
com.johnsnowlabs.nlp#tensorflow-cpu_2.12;0.4.4 from central in [default]
com.microsoft.onnxruntime#onnxruntime;1.16.3 from central in [default]
com.navigamez#greex;1.0 from central in [default]
com.typesafe#config;1.4.2 from central in [default]
commons-codec#commons-codec;1.15 from central in [default]
commons-logging#commons-logging;1.1.3 from central in [default]
dk.brics.automaton#automaton;1.11-8 from central in [default]
io.grpc#grpc-alts;1.53.0 from central in [default]
io.grpc#grpc-api;1.53.0 from central in [default]
io.grpc#grpc-auth;1.53.0 from central in [default]
io.grpc#grpc-context;1.53.0 from central in [default]
io.grpc#grpc-core;1.53.0 from central in [default]
io.grpc#grpc-googleapis;1.53.0 from central in [default]
io.grpc#grpc-grpclb;1.53.0 from central in [default]
io.grpc#grpc-netty-shaded;1.53.0 from central in [default]
io.grpc#grpc-protobuf;1.53.0 from central in [default]
io.grpc#grpc-protobuf-lite;1.53.0 from central in [default]
io.grpc#grpc-services;1.53.0 from central in [default]
io.grpc#grpc-stub;1.53.0 from central in [default]
io.grpc#grpc-xds;1.53.0 from central in [default]
io.opencensus#opencensus-api;0.31.1 from central in [default]
io.opencensus#opencensus-contrib-http-util;0.31.1 from central in [default]
io.opencensus#opencensus-proto;0.2.0 from central in [default]
io.perfmark#perfmark-api;0.26.0 from central in [default]
it.unimi.dsi#fastutil;7.0.12 from central in [default]
javax.annotation#javax.annotation-api;1.3.2 from central in [default]
joda-time#joda-time;2.8.1 from central in [default]
org.apache.httpcomponents#httpclient;4.5.13 from central in [default]
org.apache.httpcomponents#httpcore;4.4.13 from central in [default]
org.checkerframework#checker-qual;3.31.0 from central in [default]
org.codehaus.mojo#animal-sniffer-annotations;1.22 from central in [default]
org.conscrypt#conscrypt-openjdk-uber;2.5.2 from central in [default]
org.projectlombok#lombok;1.16.8 from central in [default]
org.rocksdb#rocksdbjni;6.29.5 from central in [default]
org.threeten#threetenbp;1.6.5 from central in [default]
software.amazon.ion#ion-java;1.0.2 from central in [default]
:: evicted modules:
commons-logging#commons-logging;1.2 by [commons-logging#commons-logging;1.1.3] in [default]
commons-codec#commons-codec;1.11 by [commons-codec#commons-codec;1.15] in [default]
com.google.protobuf#protobuf-java-util;3.0.0-beta-3 by [com.google.protobuf#protobuf-java-util;3.21.12] in [default]
com.google.protobuf#protobuf-java;3.0.0-beta-3 by [com.google.protobuf#protobuf-java;3.21.12] in [default]
com.google.code.gson#gson;2.3 by [com.google.code.gson#gson;2.10.1] in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 85 | 0 | 0 | 5 || 80 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-3f17e4b8-0bdf-40c5-9879-d62f9c2dc974
confs: [default]
0 artifacts copied, 80 already retrieved (0kB/27ms)
pos_anc download started this may take some time.
Approximate size to download 3.9 MB
[ / ]pos_anc download started this may take some time.
Approximate size to download 3.9 MB
[ — ]Download done! Loading the resource.
[OK!]
sentence_detector_dl download started this may take some time.
Approximate size to download 354.6 KB
[ | ]sentence_detector_dl download started this may take some time.
Approximate size to download 354.6 KB
[ / ]Download done! Loading the resource.
[ — ]2024-02-06 14:43:45.340048: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
[OK!]
```
Is this indicating that I am downloading the model(s) from the internet again and again, or am I loading them from the jar files?
I assume that the jar files are now on my local system, since it took some time when I first installed spark-nlp, and now it just prints the jars information almost immediately when I run the code. | open | 2024-02-06T09:22:44Z | 2024-02-06T09:22:44Z | https://github.com/JohnSnowLabs/nlu/issues/246 | [] | ArijitSinghEDA | 0 |
python-visualization/folium | data-visualization | 1,596 | Formatted popups or tooltips with variable strings? | I'm using Python 3.10.4 and the most recent version of folium. My needs are simple: to create popups or tooltips on a map showing the name of the location and some information. I am fetching the results from a pandas dataframe, over which I'm iterating with iterrows. The salient code currently is this:
tooltip = row["place"] + ", Population: " + str(row["population"])
which produces a single line, like
Crumbletown, Population: 4.5
But what I want is for the text to be formatted, so it looks for example like this:
**Crumbletown**
Population: 4.5
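For illustration, one way to build such a formatted string is to embed the row's values in an HTML snippet with an f-string (the values below are hypothetical stand-ins for `row["place"]` and `row["population"]`):

```python
# Stand-ins for the values pulled from the dataframe row.
place, population = "Crumbletown", 4.5

# HTML handles the formatting; the f-string handles the variables.
tooltip_html = (
    f"<b style='font-size:1.2em'>{place}</b><br>"
    f"Population: {population}"
)
print(tooltip_html)
```

Passing this string as `tooltip=tooltip_html` (or wrapping it in `folium.Tooltip`) should render the bold heading and the line break, since folium tooltips and popups accept HTML.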
maybe with a larger font for the heading. I know formatting can be done with html (as per the popup.ipynb example notebook). But what I don't know is how to include a string variable in the html code, and whether this issue is a Python issue, an html issue, or something else. Note that I am a folium beginner, so forgive me if this is a trivial question. Many thanks! | closed | 2022-05-23T05:29:25Z | 2022-11-17T15:29:11Z | https://github.com/python-visualization/folium/issues/1596 | [] | amca01 | 1 |
microsoft/unilm | nlp | 1,275 | [BEIT3] How to apply GradCam on the beit3 models? | Hi. I want to see the Grad-CAM image of the BEiT-3 model. I used the grad-cam library [https://github.com/jacobgil/pytorch-grad-cam], but I got this error: 'RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn'.
Please help.
Thank you! | closed | 2023-08-30T08:29:48Z | 2023-08-30T10:12:15Z | https://github.com/microsoft/unilm/issues/1275 | [] | TheNha | 6 |
scrapy/scrapy | web-scraping | 5,850 | Invalid Copyright Notice | I am not a lawyer but I believe your copyright notice at https://github.com/scrapy/scrapy/blob/master/LICENSE
is invalid in the USA (and probably elsewhere) because it lacks a date.
Copyright (c) Scrapy developers.
Should be:
Copyright (c) 2011 Scrapy developers.
Or
Copyright (c) 2011-2023 Scrapy developers.
Or a variation of the above.
From https://www.copyright.gov/circs/circ01.pdf#page=7
"""
Copyright Notice
A copyright notice is a statement placed on copies or phonorecords of a work to inform the public that a copyright owner is claiming ownership of the work. A copyright notice consists of three elements:
• The copyright symbol © or (p) for phonorecords, the word “Copyright,” or the abbreviation “Copr.”;
• The year of first publication of the work (or of creation if the work is unpublished); and
• The name of the copyright owner, an abbreviation by which the name can be recognized, or a generally known alternative designation.
"""
| closed | 2023-03-15T16:38:43Z | 2024-01-03T18:17:56Z | https://github.com/scrapy/scrapy/issues/5850 | [] | RogerHaase | 1 |
QuivrHQ/quivr | api | 2,960 | TOtot | closed | 2024-08-07T10:59:51Z | 2024-08-07T11:00:57Z | https://github.com/QuivrHQ/quivr/issues/2960 | [] | StanGirard | 1 | |
python-gitlab/python-gitlab | api | 2,575 | How to use python-gitlab library to search a string in every commit? | I noticed `commits = project.commits.list(all=True)` can list every commit, but I don't know how to perform a search against each commit; can it be done? :) | closed | 2023-05-25T02:39:41Z | 2024-05-27T01:20:03Z | https://github.com/python-gitlab/python-gitlab/issues/2575 | [] | umeharasang | 1 |
modelscope/data-juicer | data-visualization | 98 | [Bug]: alphanumeric_filter, char.isalnum() | ### Before Reporting 报告之前
- [X] I have pulled the latest code of main branch to run again and the bug still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template) 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引,并且在安装过程中没有错误发生。(否则,我们建议您使用Question模板向我们进行提问)
### Search before reporting 先搜索,再报告
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的bug报告。
### OS 系统
ubuntu
### Installation Method 安装方式
pip
### Data-Juicer Version Data-Juicer版本
v0.1.2
### Python Version Python版本
3.8
### Describe the bug 描述这个bug
https://github.com/alibaba/data-juicer/blob/main/data_juicer/ops/filter/alphanumeric_filter.py#L75
``````
alnum_count = sum(
    map(lambda char: 1
        if char.isalnum() else 0, sample[self.text_key]))
``````
In Python 3, strings use Unicode by default, so `'汉字'.isalnum()` returns True. `encode()` defaults to UTF-8, and after encoding to UTF-8, Chinese characters no longer return True.
``````
alnum_count = sum(
    map(lambda char: 1
        if char.encode().isalnum() else 0, sample[self.text_key]))
``````
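A stand-alone demonstration of the difference (not data-juicer code): `str.isalnum()` counts CJK characters as alphanumeric, while `bytes.isalnum()` after UTF-8 encoding only counts ASCII letters and digits.

```python
text = "汉字abc123"

unicode_count = sum(1 for ch in text if ch.isalnum())        # counts all 8 chars
byte_count = sum(1 for ch in text if ch.encode().isalnum())  # counts only abc123

print(unicode_count, byte_count)  # prints: 8 6
```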
### To Reproduce 如何复现
python tools/analyze_data.py --config configs/demo/analyser.yaml
### Configs 配置信息
``````
project_name: 'demo-analyser'
dataset_path: 'demos/data/demo-dataset.jsonl' # path to your dataset directory or file
np: 4 # number of subprocess to process your dataset
text_keys: 'text'
export_path: './outputs/demo-analyser/demo-analyser-result.jsonl'
# process schedule
# a list of several process operators with their arguments
process:
- alphanumeric_filter:
``````
### Logs 报错日志
_No response_
### Screenshots 截图

The fifth row is entirely Chinese text; its alnum_ratio (alphanumeric ratio) should correctly be 0.
### Additional 额外信息
_No response_ | closed | 2023-11-24T01:52:00Z | 2023-12-22T09:32:20Z | https://github.com/modelscope/data-juicer/issues/98 | [
"bug",
"stale-issue"
] | simplew2011 | 3 |
ageitgey/face_recognition | python | 1,355 | [Question]: Is there any way to recognise a masked face image? | Hello guys,
face_recognition is failing to detect faces and recognise the person's name when the face is masked.
How can we tackle this issue?
Any suggestions
Thanks | open | 2021-08-11T11:34:08Z | 2021-08-24T00:39:04Z | https://github.com/ageitgey/face_recognition/issues/1355 | [] | VinayChaudhari1996 | 1 |
StructuredLabs/preswald | data-visualization | 231 | [FEATURE] Use API endpoints as a source | **Is your feature request related to a problem? Please describe.**
Today, users can add in CSV, Postgres, and Clickhouse as sources. S3 is coming soon too. We want to support APIs as sources.
**Describe the solution you'd like**
An API source type which pulls from the API (w/ necessary keys/auth) upon running a query.
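A rough sketch of what such a source could look like (entirely hypothetical: the class name, method names, and row-normalization behavior are assumptions, not preswald's actual interfaces):

```python
import json
import urllib.request


class APISource:
    """Hypothetical API-backed source: fetch JSON, normalize it to row dicts."""

    def __init__(self, url, headers=None):
        self.url = url
        self.headers = headers or {}  # e.g. {"Authorization": "Bearer ..."}

    def fetch(self):
        # Pull the endpoint with any auth headers, then normalize the payload.
        req = urllib.request.Request(self.url, headers=self.headers)
        with urllib.request.urlopen(req) as resp:
            return self.to_rows(json.load(resp))

    @staticmethod
    def to_rows(payload):
        """A single JSON object becomes one row; a JSON array becomes many."""
        return [payload] if isinstance(payload, dict) else list(payload)
```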
**Describe alternatives you've considered**
Separately dumping an API output JSON to a file, and then importing that via pandas.
**Additional context**
Take a look at issue 153, as well as the current implementations of sources (the 3 above) in `preswald/engine/managers/data.py` and `preswald/interfaces/data.py` | open | 2025-03-13T00:27:50Z | 2025-03-15T14:46:04Z | https://github.com/StructuredLabs/preswald/issues/231 | [
"enhancement"
] | shivam-singhal | 1 |
ultralytics/yolov5 | deep-learning | 13,216 | gpu memory usage is low but out of memory | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When training YOLOv5, I found that the GPU was barely used: GPU memory usage is very low, while system memory increased very quickly, resulting in out of memory. I would like to know how to make training use the GPU.

The command is `python .\train.py --device 0 --epochs 1 --batch-size 16`.

### Additional
_No response_ | open | 2024-07-24T15:52:07Z | 2024-10-27T13:30:50Z | https://github.com/ultralytics/yolov5/issues/13216 | [
"question"
] | leooobreak | 2 |
AutoGPTQ/AutoGPTQ | nlp | 611 | [QUESTION] How to unload AutoGPTQForCausalLM.from_quantized model from GPU to CPU in order to free up GPU memory | Hi
I have loaded a model with the follow code:
```
DEVICE = "cuda:0" if torch.cuda.is_available() else "cpu"
print(DEVICE)
embeddings = HuggingFaceInstructEmbeddings(
model_name="hkunlp/instructor-xl",model_kwargs={"device":DEVICE}
)
model_name_or_path = "./models/Llama-2-13B-chat-GPTQ"
model_basename = "model"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
model_name_or_path,
revision= "gptq-4bit-128g-actorder_True", #"gptq-8bit-128g-actorder_False", #revision="gptq-4bit-128g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
inject_fused_attention=False,
device=DEVICE,
quantize_config=None,
)
```
And if I print out this model, it has the following structure:
```
LlamaGPTQForCausalLM(
(model): LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 5120, padding_idx=0)
(layers): ModuleList(
(0-39): 40 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(rotary_emb): LlamaRotaryEmbedding()
(k_proj): QuantLinear()
(o_proj): QuantLinear()
(q_proj): QuantLinear()
(v_proj): QuantLinear()
)
(mlp): LlamaMLP(
(act_fn): SiLUActivation()
(down_proj): QuantLinear()
(gate_proj): QuantLinear()
(up_proj): QuantLinear()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=5120, out_features=32000, bias=False)
)
)
```
Is there a way that I can unload this model from the GPU to free up GPU memory?
I tried `model.to(torch.device("cpu"))` and it did not work.
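For reference, the pattern that usually frees GPU memory with PyTorch-backed models is not moving the model to CPU but dropping every reference to it and then emptying the CUDA cache (a hedged sketch: `gc.collect()` and `torch.cuda.empty_cache()` are real APIs, but whether this fully releases a quantized model's memory is an assumption):

```python
import gc


def free_cuda_memory():
    """Collect garbage, then release PyTorch's cached CUDA blocks if possible."""
    gc.collect()
    try:
        import torch  # imported lazily so the helper also works without torch
    except ImportError:
        return False
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        return True
    return False


# Usage sketch: drop *all* references first, then reclaim.
# del model
# del tokenizer
# free_cuda_memory()
print(free_cuda_memory())
```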
Thanks | open | 2024-03-26T10:58:28Z | 2024-03-26T10:58:28Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/611 | [] | tommycmy | 0 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 787 | LibriTTS & older models | Hey - I've been playing with the TensorFlow version of this repo with a LibriTTS model for some time now & have had some good results from it. The model I've been using was from here & had a partial train of a LibriTTS dataset which was really good for punctuation etc.
Just upgraded my GPU to a 3080ti (rare I know!) & tried to get the repo working with this GPU.. the cuda sdk doesn't appear to work unfortunately (10.1) & every time I try to load an audio file to clone the toolkit just hangs..
So I've ended up pulling the latest repo & pretrained models, but noticed there's no punctuation - which is a real shame...
Would appreciate any thoughts on either getting the older repo working or how to get LibriTTS into the mix?
(Also, if needed I can provide a 5950X and GPU to train models if it would help restore some of the functionality around this.) | closed | 2021-07-02T20:46:51Z | 2021-09-14T17:35:29Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/787 | [] | ThePowerOfMonkeys | 1 |
inducer/pudb | pytest | 375 | source not displayed (linecache filename mismatch) | I'm debugging a Python 3 script, which I typically invoke from the directory it lives in, as:
> ./fooBar.py parm1
If I instrument my script with a `set_trace()` call and start the script as above, it hits the breakpoint and stops. However, no source code is displayed. If, on the other hand, I enter the debugger from the command line, as:
> python3 -m pudb fooBar.py parm1
and _then_ hit continue, the source is displayed when I reach the `set_trace()` call.
Things to note:
* The filename is mixed case, as in my examples above.
* I am running this on Linux.
* The file is on a network storage device (netApp), not local to my Linux server.
* The linecache key for the filename in the non-working case is `./fooBar.py` as would be expected. Apparently, the filename used when looking up the file upon encountering the breakpoint is not exactly the same.
* If, instead of using the relative path to invoke the script from the command line (e.g., `./fooBar.py`), I use the absolute path (e.g., `$PWD/fooBar.py`), it hits the breakpoint and displays the source as it should.
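A stdlib-only demonstration of the suspected mismatch (my illustration, not part of the original report): `linecache` keys its cache by the exact path string it is given, so `./fooBar.py` and the absolute spelling become two distinct entries, and a lookup made with one spelling never finds lines cached under the other:

```python
import linecache
import os
import tempfile

# Create a throwaway "script" and touch it under two path spellings.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "fooBar.py")
with open(path, "w") as f:
    f.write("print('hello')\n")

old_cwd = os.getcwd()
os.chdir(tmpdir)
try:
    rel = os.path.join(".", "fooBar.py")   # the "./fooBar.py" spelling
    linecache.getlines(rel)                # caches under the relative key
    linecache.getlines(path)               # caches under the absolute key
finally:
    os.chdir(old_cwd)

# Two distinct cache entries now exist for the same file on disk.
assert rel != path
```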
| open | 2020-01-23T16:00:46Z | 2020-01-27T22:40:09Z | https://github.com/inducer/pudb/issues/375 | [] | dccarson | 1 |
whitphx/streamlit-webrtc | streamlit | 1,119 | Camera doesn't start when offline state | I'm creating a camera app that runs locally.
I also want to run it when the PC is not connected to a network, but if I press the start button while offline, the camera does not start and nothing is written to the log.
<img width="617" alt="camera_image" src="https://user-images.githubusercontent.com/13214003/199183279-cd5779d3-2fe4-4aee-b281-23f431da2e13.png">
Of course, the camera is displayed correctly when connected to the Internet.
Currently, when using streamlit-webrtc, we only do simple processing like the code below.
```
webrtc_streamer(
key="image-filter",
mode=WebRtcMode.SENDRECV,
video_frame_callback=callback_func,
media_stream_constraints={"video": True},
)
```
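A possible direction, based on my understanding of the library's defaults (treat the specifics as assumptions): `webrtc_streamer` falls back to Google's public STUN server for ICE gathering, and that lookup stalls when the machine is offline. For a purely local setup, an RTC configuration with an empty ICE-server list avoids the external lookup; direct host candidates still work for localhost/LAN use:

```python
# Hypothetical tweak: pass this via webrtc_streamer(..., rtc_configuration=...).
# An empty iceServers list skips the default public STUN lookup, which is
# the step that blocks when there is no internet connection.
RTC_CONFIGURATION = {"iceServers": []}
```

i.e. something like `webrtc_streamer(key="image-filter", ..., rtc_configuration=RTC_CONFIGURATION)`.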
What do I need to do to make streamlit-webrtc work offline? | closed | 2022-11-01T07:47:25Z | 2022-12-06T04:26:35Z | https://github.com/whitphx/streamlit-webrtc/issues/1119 | [] | RoloAfrole | 2 |
ultralytics/ultralytics | deep-learning | 19,320 | About YOLOv12 support | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
Since YOLOv12 is released at https://github.com/sunsmarterjie/yolov12, will the models of YOLOv12 be supported? Thank you.
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-20T01:09:22Z | 2025-02-21T08:41:36Z | https://github.com/ultralytics/ultralytics/issues/19320 | [
"enhancement",
"question",
"fixed"
] | curtis18 | 5 |
HIT-SCIR/ltp | nlp | 268 | Compilation error in the srl_srl_train project |

| closed | 2017-12-09T06:51:22Z | 2017-12-10T03:54:40Z | https://github.com/HIT-SCIR/ltp/issues/268 | [] | JaneWangle | 1 |
SciTools/cartopy | matplotlib | 2,228 | Source code no longer included on PyPI | On conda-forge, we're trying to build 0.22.0:
https://github.com/conda-forge/cartopy-feedstock/pull/156
However, we're running into issues because a `.tar.gz` file wasn't included in the PyPI release. First, our bot was not able to create a PR for the release (see https://github.com/conda-forge/cartopy-feedstock/issues/155). Then, when I tried to use the source from GitHub instead, `setuptools-scm` complains:
```
LookupError: setuptools-scm was unable to detect version for /home/conda/feedstock_root/build_artifacts/cartopy_1691225778495/work.
Make sure you're either building from a fully intact git repository or PyPI tarballs. Most other sources (such as GitHub's tarballs, a git checkout without the .git folder) don't contain the necessary metadata and will not work.
For example, if you're using pip, instead of https://github.com/user/proj/archive/master.zip use git+https://github.com/user/proj.git#egg=proj
[end of output]
```
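One such workaround, for reference (this relies on setuptools-scm's documented override rather than anything cartopy-specific): when building from a GitHub tarball that has no `.git` directory and no `PKG-INFO`, the version can be supplied explicitly through an environment variable:

```shell
# setuptools-scm cannot derive a version from a GitHub tarball (no .git
# directory, no PKG-INFO), but it honors an explicit override:
export SETUPTOOLS_SCM_PRETEND_VERSION=0.22.0
# ...then build as usual from the unpacked source, e.g.:  pip install .
```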
While there are workarounds that I've used in the past (I'm a little rusty, so I'll have to dig them up), it seems like the cleanest solution would be to include the source on PyPI again. | closed | 2023-08-05T09:04:07Z | 2023-08-07T22:04:16Z | https://github.com/SciTools/cartopy/issues/2228 | [] | xylar | 4 |
modin-project/modin | pandas | 7043 | `BaseQueryCompiler.repartition()` works slow | [`BaseQueryCompiler.repartition()`](https://github.com/modin-project/modin/blob/14452a8414bdec10e3b5cfa05e98bd26c6e1bafc/modin/core/storage_formats/base/query_compiler.py#L6711) works slower than the same logic implemented at the partition level (see the performance measurements in [this PR](https://github.com/modin-project/modin/pull/7016)) | open | 2024-03-08T13:16:37Z | 2024-03-08T13:16:38Z | https://github.com/modin-project/modin/issues/7043 | [
"Performance 🚀",
"P2"
] | dchigarev | 0 |
ultralytics/yolov5 | machine-learning | 12,730 | export int8 tflite with customdataset | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi there,
I trained on a custom dataset, got a `.pt` file, and tried to convert the `.pt` to `.tflite`.
The shell script runs fine, and I tried running inference on an image for prediction.
But the class names in the detection results are not correct.
**e.g., the custom dataset's class labels are A, B, C, and the predicted class should be 'C' (class_index=2),
but the class shown for the inferred image is 'car', which is class_index=2 of coco128.yaml**
below is my export.py shell script:
**python export.py --data custom_dataset.yaml --weight /home/aaa/yolov5/runs/train/moel_res/weights/best.pt --int8 --include tflite**
I tried to check whether the `--data` parameter is correct, but I haven't found the cause yet.
Thanks.
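For reference, an illustration (not the reporter's actual file): the class names come from the `names` mapping in the dataset yaml passed via `--data`, so if the export step or the downstream label-rendering tool falls back to `coco128.yaml`, index 2 resolves to 'car'. A sketch of the expected shape, with placeholder paths:

```yaml
# Hypothetical custom_dataset.yaml; all paths are placeholders.
path: ../datasets/custom   # dataset root
train: images/train
val: images/val

# class index -> name; these (not coco128's) should end up on detections
names:
  0: A
  1: B
  2: C
```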
| closed | 2024-02-13T05:34:06Z | 2024-02-13T11:45:58Z | https://github.com/ultralytics/yolov5/issues/12730 | [
"question"
] | timingisnow | 2 |
2noise/ChatTTS | python | 744 | decoder error on all_codes.masked_fill & what's the correct vesion of vector_quantize_pytorch | An error occurred as follows during the process of changing the default decoder to DVAE (inferring with `use_decoder=False`). Could it be attributed to an incompatible version of `vector_quantize_pytorch==1.17.3`? However, I have attempted `vector-quantize-pytorch==1.16.1`, `vector-quantize-pytorch==1.15.5`, and `vector-quantize-pytorch==1.14.24`.
```code
File "/workspace/ChatTTS/ChatTTS/model/dvae.py", line 95, in _embed
feat = self.quantizer.get_output_from_indices(x)
File "/usr/local/lib/python3.10/dist-packages/vector_quantize_pytorch/residual_fsq.py", line 248, in get_output_from_indices
outputs = tuple(rvq.get_output_from_indices(chunk_indices) for rvq, chunk_indices in zip(self.rvqs, indices))
File "/usr/local/lib/python3.10/dist-packages/vector_quantize_pytorch/residual_fsq.py", line 248, in <genexpr>
outputs = tuple(rvq.get_output_from_indices(chunk_indices) for rvq, chunk_indices in zip(self.rvqs, indices))
File "/usr/local/lib/python3.10/dist-packages/vector_quantize_pytorch/residual_fsq.py", line 134, in get_output_from_indices
codes = self.get_codes_from_indices(indices)
File "/usr/local/lib/python3.10/dist-packages/vector_quantize_pytorch/residual_fsq.py", line 120, in get_codes_from_indices
all_codes = all_codes.masked_fill(rearrange(mask, 'b n q -> q b n 1'), 0.)
RuntimeError: expected self and mask to be on the same device, but got mask on cpu and self on cuda:0
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [2,0,0], thread: [36,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
```
environment:
```
av 13.0.0
chattts 0.0.0 /workspace/ChatTTS
gradio 4.42.0
gradio_client 1.3.0
nemo_text_processing 1.0.2
numba 0.60.0
numpy 1.26.4
pybase16384 0.3.7
pydub 0.25.1
pynini 2.1.5
torch 2.1.2
torchaudio 2.1.2
tqdm 4.66.5
transformers 4.44.2
transformers-stream-generator 0.0.5
vector-quantize-pytorch 1.16.1
vocos 0.1.0
WeTextProcessing 1.0.3
```
Would you be able to offer me some suggestions, please? | closed | 2024-09-05T08:09:57Z | 2024-10-23T04:01:31Z | https://github.com/2noise/ChatTTS/issues/744 | [
"question",
"stale"
] | unbelievable3513 | 2 |
netbox-community/netbox | django | 17,688 | GraphQL filters (AND, OR and NOT) don't work for custom filterset fields | ### Deployment Type
Self-hosted
### NetBox Version
v4.1.3
### Python Version
3.10
### Steps to Reproduce
Using a GraphQL filter with AND, OR, or NOT on a field that has a custom implementation in the filterset (or that only appears in the filterset), for example `asn_id` on Site, does not work.
1. Create 4 sites with ID's 1 to 4
2. Create 4 ASNs and assign each to a single Site (1-4)
3. Use the following GraphQL query
```
{
site_list(filters: {asn_id: "1", OR: {asn_id: "4"}}) {
name
asns {
id
}
}
}
```
### Expected Behavior
Will get a list of 2 sites.
### Observed Behavior
Get an empty list. | closed | 2024-10-07T17:00:19Z | 2025-03-10T19:19:02Z | https://github.com/netbox-community/netbox/issues/17688 | [
"type: bug",
"status: accepted",
"topic: GraphQL",
"severity: medium",
"netbox"
] | arthanson | 6 |
sgl-project/sglang | pytorch | 3,880 | [Feature] Run DeepSeek V3 W4-only | ### Checklist
- [x] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 2. Please use English, otherwise it will be closed.
### Motivation
I don't have a Hopper GPU; I only have A100s. I want to run the DeepSeek V3 model. Can I use W4A16 quantization to run DeepSeek V3?
| open | 2025-02-26T08:04:05Z | 2025-03-09T12:25:05Z | https://github.com/sgl-project/sglang/issues/3880 | [] | QingshuiL | 3 |
deepspeedai/DeepSpeed | machine-learning | 6,501 | [BUG] Config mesh_device None | I am using ds 0.15.1 on two A6000 GPUs, following the [huggingface Non-Trainer DeepSpeed integration](https://huggingface.co/docs/transformers/main/en/deepspeed?models=pretrained+model#non-trainer-deepspeed-integration),
and got the following assertion error:
```
guanhua@guanhua-Lambda:~/DiscQuant$ deepspeed test_hf_ds.py
[2024-09-06 15:53:29,210] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-09-06 15:53:29,660] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-09-06 15:53:30,664] [WARNING] [runner.py:212:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2024-09-06 15:53:30,664] [INFO] [runner.py:585:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None test_hf_ds.py
[2024-09-06 15:53:32,031] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-09-06 15:53:32,476] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-09-06 15:53:33,468] [INFO] [launch.py:146:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2024-09-06 15:53:33,468] [INFO] [launch.py:152:main] nnodes=1, num_local_procs=2, node_rank=0
[2024-09-06 15:53:33,468] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2024-09-06 15:53:33,468] [INFO] [launch.py:164:main] dist_world_size=2
[2024-09-06 15:53:33,468] [INFO] [launch.py:168:main] Setting CUDA_VISIBLE_DEVICES=0,1
[2024-09-06 15:53:33,469] [INFO] [launch.py:256:main] process 513898 spawned with command: ['/usr/bin/python3', '-u', 'test_hf_ds.py', '--local_rank=0']
[2024-09-06 15:53:33,469] [INFO] [launch.py:256:main] process 513899 spawned with command: ['/usr/bin/python3', '-u', 'test_hf_ds.py', '--local_rank=1']
[2024-09-06 15:53:34,951] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-09-06 15:53:34,990] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-09-06 15:53:35,366] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-09-06 15:53:35,401] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-09-06 15:54:00,929] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 1
[2024-09-06 15:54:00,930] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 1
Traceback (most recent call last):
File "/home/guanhua/DiscQuant/test_hf_ds.py", line 47, in <module>
model = AutoModel.from_pretrained("openai-community/gpt2")
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3821, in from_pretrained
init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts
File "/home/guanhua/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 933, in __init__
_ds_config = deepspeed.runtime.config.DeepSpeedConfig(config_dict_or_path,
File "/home/guanhua/.local/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 798, in __init__
Traceback (most recent call last):
File "/home/guanhua/DiscQuant/test_hf_ds.py", line 47, in <module>
self._configure_train_batch_size()
File "/home/guanhua/.local/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 981, in _configure_train_batch_size
model = AutoModel.from_pretrained("openai-community/gpt2")
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
self._batch_assertion()
File "/home/guanhua/.local/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 929, in _batch_assertion
return model_class.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3821, in from_pretrained
assert train_batch == micro_batch * grad_acc * self.world_size, (
AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 2 != 1 * 1 * 1
init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts
File "/home/guanhua/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 933, in __init__
_ds_config = deepspeed.runtime.config.DeepSpeedConfig(config_dict_or_path,
File "/home/guanhua/.local/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 798, in __init__
self._configure_train_batch_size()
File "/home/guanhua/.local/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 981, in _configure_train_batch_size
self._batch_assertion()
File "/home/guanhua/.local/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 929, in _batch_assertion
assert train_batch == micro_batch * grad_acc * self.world_size, (
AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 2 != 1 * 1 * 1
[2024-09-06 15:54:01,510] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 513898
[2024-09-06 15:54:01,532] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 513899
[2024-09-06 15:54:01,532] [ERROR] [launch.py:325:sigkill_handler] ['/usr/bin/python3', '-u', 'test_hf_ds.py', '--local_rank=1'] exits with return code = 1
```
I think the root cause is indicated by the log line `[config.py:733:__init__] Config mesh_device None world_size = 1`: somehow the init path did not receive the correct `mesh_device`/distributed information, which makes `world_size` come out as 1 (it should be 2).
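For what it's worth, the failing assertion is just this arithmetic identity over the config values; a simplified restatement (my sketch, not DeepSpeed's actual implementation) shows why a world size mis-detected as 1 trips it:

```python
def batch_sizes_consistent(train_batch, micro_batch, grad_acc, world_size):
    # Simplified restatement of the batch-size check that raises above.
    return train_batch == micro_batch * grad_acc * world_size

# With world_size mis-detected as 1 (the reported failure): 2 != 1 * 1 * 1
assert not batch_sizes_consistent(2, 1, 1, world_size=1)
# With both GPUs counted, the same config is consistent: 2 == 1 * 1 * 2
assert batch_sizes_consistent(2, 1, 1, world_size=2)
```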
To reproduce: below is the Python script I am using; the command is `deepspeed --num_gpus 2 BELOW_PYTHON.py`.
```
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed
ds_config = {
#"fp16": {
# "enabled": "auto",
# "loss_scale": 0,
# "loss_scale_window": 1000,
# "initial_scale_power": 16,
# "hysteresis": 2,
# "min_loss_scale": 1
#},
"bf16": {
"enabled": "auto"
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": True,
"contiguous_gradients": True,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"gather_16bit_weights_on_model_save": True
},
"gradient_accumulation_steps": 1,
"gradient_clipping": 1.0,
"train_batch_size": 2,
"train_micro_batch_size_per_gpu": 1,
"steps_per_print": 1e5,
"wall_clock_breakdown": False,
"data_parallel_size": 2
}
ds_cf = HfDeepSpeedConfig(ds_config)
model = AutoModel.from_pretrained("openai-community/gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config, dist_init_required=True)
``` | open | 2024-09-06T22:56:15Z | 2024-11-18T02:10:44Z | https://github.com/deepspeedai/DeepSpeed/issues/6501 | [
"bug",
"training"
] | GuanhuaWang | 2 |
axnsan12/drf-yasg | rest-api | 720 | api/swagger/?format=openapi response 500 |

| open | 2021-06-04T09:25:46Z | 2025-03-07T12:13:02Z | https://github.com/axnsan12/drf-yasg/issues/720 | [
"triage"
] | dpreal | 2 |
ludwig-ai/ludwig | data-science | 3,760 | Jupyter/Colab notebooks that utilize Ludwig should require only standard "pip install ludwig" (i.e., without latest development branch) | **Describe the bug**
Today, in order to use Ludwig in a Jupyter/Colab notebook, one needs to install the latest Ludwig from the development tree:
```
pip install -U git+https://github.com/ludwig-ai/ludwig.git@master
pip install -U git+https://github.com/huggingface/transformers
pip install -U git+https://github.com/huggingface/peft.git
```
One should not be required to use the "upgrade" (development-branch) install for Ludwig (and, as much as possible, not for the other libraries either).
**To Reproduce**
Steps to reproduce the behavior:
Please see #3758 for one typical way to reproduce the problem.
| closed | 2023-11-01T22:03:14Z | 2023-11-11T16:07:42Z | https://github.com/ludwig-ai/ludwig/issues/3760 | [] | alexsherstinsky | 3 |
LibreTranslate/LibreTranslate | api | 141 | English -> Turkish translation results in inappropriate websites ? | Hi, I am really confused right now. If you enter meaningless words like random characters, English to Turkish translation has some really weirds results.
Here they are :





I did not manipulate the website in any ways. You can try by going to https://libretranslate.com/ and selecting English -> Turkish then typing things I wrote above. I was about to use this on a project of mine but I luckily saw this. How is this even possible ? When I write something meaningless like "aaaa", it should output as "aaaa" not "xHamster". I'm both angry and hilarious right now. This is ridiculous. Please fix this and ban contributors of these translations. | closed | 2021-09-22T22:42:13Z | 2022-05-19T08:31:16Z | https://github.com/LibreTranslate/LibreTranslate/issues/141 | [
"bug"
] | CaptainCaptcha | 6 |
httpie/cli | rest-api | 1,270 | [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997) | ## Checklist
- [*] I've searched for similar issues.
- [*] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
Run `https drwg.ru`.
## Current result

## Expected result

---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
https --debug -F drwg.ru
HTTPie 2.6.0
Requests 2.26.0
Pygments 2.10.0
Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
D:\Python310\python.exe
Windows 10
<Environment {'colors': 256,
'config': {'default_options': []},
'config_dir': WindowsPath('C:/Users/kkrasnov/AppData/Roaming/httpie'),
'devnull': <property object at 0x00000232C296B150>,
'is_windows': True,
'log_error': <function Environment.log_error at 0x00000232C29B1750>,
'program_name': 'https',
'stderr': <colorama.ansitowin32.StreamWrapper object at 0x00000232C2984C70>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <colorama.ansitowin32.StreamWrapper object at 0x00000232C2984610>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': RequestJSONDataDict(),
'headers': {'User-Agent': b'HTTPie/2.6.0'},
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x00000232C2A22B20>,
'url': 'https://drwg.ru'})
https: error: SSLError: HTTPSConnectionPool(host='drwg.ru', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)'))) while doing a GET request to URL: https://drwg.ru/
Traceback (most recent call last):
File "D:\Python310\lib\site-packages\urllib3\connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "D:\Python310\lib\site-packages\urllib3\connectionpool.py", line 382, in _make_request
self._validate_conn(conn)
File "D:\Python310\lib\site-packages\urllib3\connectionpool.py", line 1010, in _validate_conn
conn.connect()
File "D:\Python310\lib\site-packages\urllib3\connection.py", line 416, in connect
self.sock = ssl_wrap_socket(
File "D:\Python310\lib\site-packages\urllib3\util\ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "D:\Python310\lib\site-packages\urllib3\util\ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "D:\Python310\lib\ssl.py", line 512, in wrap_socket
return self.sslsocket_class._create(
File "D:\Python310\lib\ssl.py", line 1070, in _create
self.do_handshake()
File "D:\Python310\lib\ssl.py", line 1341, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Python310\lib\site-packages\requests\adapters.py", line 439, in send
resp = conn.urlopen(
File "D:\Python310\lib\site-packages\urllib3\connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "D:\Python310\lib\site-packages\urllib3\util\retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='drwg.ru', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "D:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "D:\Python310\Scripts\https.exe\__main__.py", line 7, in <module>
File "D:\Python310\lib\site-packages\httpie\__main__.py", line 9, in main
exit_status = main()
File "D:\Python310\lib\site-packages\httpie\core.py", line 70, in main
exit_status = program(
File "D:\Python310\lib\site-packages\httpie\core.py", line 169, in program
for message in messages:
File "D:\Python310\lib\site-packages\httpie\client.py", line 102, in collect_messages
response = requests_session.send(
File "D:\Python310\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "D:\Python310\lib\site-packages\requests\adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='drwg.ru', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')))
```
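For anyone landing here: "unable to get local issuer certificate" generally means the trust store Python's `ssl` module is using cannot build the certificate chain (often a missing intermediate CA). Possible workarounds are HTTPie's `--verify` option, either `--verify=no` to skip verification while debugging, or `--verify=<path>` pointing at a CA bundle that contains the full chain. A stdlib sketch of the same idea at the `ssl` level; the bundle path is a placeholder:

```python
import ssl

# The default context uses the interpreter/platform trust store, which is
# what failed in the traceback above.
ctx = ssl.create_default_context()

# Hypothetical workaround: build the context from a CA bundle that includes
# the missing intermediate certificate (the path is a placeholder).
# ctx = ssl.create_default_context(cafile="C:/certs/ca-bundle.pem")
```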
| closed | 2022-01-20T14:59:42Z | 2022-01-20T16:18:10Z | https://github.com/httpie/cli/issues/1270 | [
"bug",
"new"
] | Kirill | 3 |
pallets/quart | asyncio | 411 | Application configuration and context to run pytest? |
```
import pytest, sys, asyncio
from hypercorn.config import Config
from hypercorn.asyncio import serve
from os.path import dirname, join, abspath
from src.app import create_app
#from src.main import app
from quart_cors import cors
sys.path.insert(0, abspath(join(dirname(__file__), '../src')))
from common.Authentication import Authentication
pytest_plugins = ('pytest_asyncio',)
@pytest.fixture
async def app_context():
config = Config()
config.bind = ["localhost:4433"]
config.insecure_bind = ["localhost:8080"]
config.worker_class = "asyncio"
config.alt_svc_headers = ["h3=\":443\"; ma=3600, h3-29=\":443\"; ma=3600"]
config.loglevel = "DEBUG"
config.quic_bind = ["localhost:4433"]
app = create_app()
app = cors(app, allow_credentials=True, allow_origin="https://localhost:4433")
asyncio.run(serve(app, config))
async with app.app_context():
yield
@pytest.mark.asyncio
async def test_tokengeneration_pass(app_context):
""" JWT token generation should pass with valid user input parameter """
token = Authentication.generate_token("test_user")
assert type(token) is str
assert token != ""
```
Error:
```
E RuntimeError: Not within an app context
```
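One structural point about the fixture, offered as an observation rather than an official fix: `asyncio.run(serve(app, config))` blocks until the server exits, and calling `asyncio.run()` from inside an already-running event loop (which is where an async pytest fixture executes) raises immediately, so control may never reach the `async with app.app_context()` line. A stdlib-only demonstration of the nested `asyncio.run()` problem:

```python
import asyncio

async def fixture_like():
    # An async pytest fixture already runs inside an event loop, so a
    # nested asyncio.run() is rejected before anything else can happen.
    try:
        asyncio.run(asyncio.sleep(0))
    except RuntimeError as exc:
        return str(exc)
    return None

message = asyncio.run(fixture_like())
```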
Environment:
- Python version: 3.12.7
- Quart version:
```
$ pipenv graph|grep -i quart
quart-cors==0.8.0
└── Quart
quart-flask-patch==0.3.0
└── Quart
``` | closed | 2025-02-22T07:12:34Z | 2025-02-22T07:17:55Z | https://github.com/pallets/quart/issues/411 | [] | khteh | 0 |
recommenders-team/recommenders | data-science | 1,197 | [ASK] Running RBM_movielens.ipynb | ### Description
Hi, in general I'm just trying to learn more about different algorithms, and I'm new to all this. When I try to run the RBM notebook using TensorFlow 1.12, I'm told some functions are not available in this version of TensorFlow, but when I use a newer version, it says other functions have been deprecated with no new equivalents. Thoughts? I'm very new to this, so I may just be confused. Thanks in advance, and sorry for any inconvenience if it's a simple fix on my end.
### Other Comments
| closed | 2020-09-01T04:49:30Z | 2020-09-02T02:35:27Z | https://github.com/recommenders-team/recommenders/issues/1197 | [
"help wanted"
] | festusojo123 | 0 |
flasgger/flasgger | api | 601 | How can I protect the swagger endpoints, by making it a child of another existing endpoint? | I have a Flask application that contains all of its API endpoints below `/api/`. In a `before_request` hook on the API blueprint I validate credentials, tokens, etc. I would like to make Swagger a child of that `/api/` prefix, so that whenever a request comes in for it, the credentials are also checked.
I have been reading the documentation, but if this is possible I have not been able to spot it. Can somebody give me a hand with this? | closed | 2023-11-10T17:24:34Z | 2023-12-17T12:49:25Z | https://github.com/flasgger/flasgger/issues/601 | [] | flixman | 1 |
keras-team/keras | data-science | 20,058 | "No gradients provided for any variable." when variable uses an integer data type | When using an integer data type for a trainable variable, training will always throw a "No gradients provided for any variable." `ValueError`. Here is a very simple example to reproduce the issue:
```python
import keras
import tensorflow as tf
import numpy as np
variable_dtype = tf.int32
# variable_dtype = tf.float32 # Uncommenting this fixes the issue
class BugTestLayer(keras.layers.Layer):
# Layer is just y = self.var * x
def build(self, input_shape):
self.var = self.add_variable(
(1,), initializer="zeros", dtype=variable_dtype)
def call(self, input):
return input * self.var
input_layer = keras.Input((1,), dtype=tf.int32)
test_layer = BugTestLayer()
output = test_layer(input_layer)
model = keras.Model(inputs=[input_layer], outputs=[output])
values = np.array([[i] for i in range(1000)])
model.compile(
loss=[keras.losses.MeanSquaredError()],
optimizer=keras.optimizers.RMSprop(),
metrics=[keras.metrics.MeanSquaredError()],
)
# This will always raise a `ValueError: No gradients provided for any variable.`
# when using an integer type
history = model.fit(values, values, batch_size=1, epochs=2)
```
Unfortunately this error message is vague enough that the root of the issue is unclear. The full error message is:
```console
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File /workspaces/a8d3c9dff26b642ae4afaf1584a676512f9b8e8ce73bdaa449ed5ed373627eb7/test_bug.py:14
3 values = np.array([
4 [i]
5 for i in range(1000)
6 ])
8 model.compile(
9 loss=[keras.losses.MeanSquaredError()],
10 optimizer=keras.optimizers.RMSprop(),
11 metrics=[keras.metrics.MeanSquaredError()],
12 )
---> 14 history = model.fit(values, values, batch_size=1, epochs=2)
File ~/.local/lib/python3.12/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File ~/.local/lib/python3.12/site-packages/keras/src/optimizers/base_optimizer.py:662, in BaseOptimizer._filter_empty_gradients(self, grads, vars)
659 missing_grad_vars.append(v.name)
661 if not filtered_grads:
--> 662 raise ValueError("No gradients provided for any variable.")
663 if missing_grad_vars:
664 warnings.warn(
665 "Gradients do not exist for variables "
666 f"{list(reversed(missing_grad_vars))} when minimizing the loss."
667 " If using `model.compile()`, did you forget to provide a "
668 "`loss` argument?"
669 )
ValueError: No gradients provided for any variable.
```
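For context, and as a general statement about reverse-mode autodiff rather than this exact code path: frameworks register gradients only for floating-point (and complex) dtypes, so an integer variable is skipped by the gradient tape and the optimizer then receives an empty gradient list, producing exactly this message. The usual workaround is to keep the trainable variable in float and derive the integer view in the forward pass; a framework-free sketch of the idea:

```python
# The trainable parameter stays continuous, so gradients can exist...
w = 1.7

def forward(x, w):
    # ...and the integer behavior is produced on demand in the forward
    # pass (in TF/Keras this is where a rounding op, usually with a
    # straight-through gradient, would go).
    w_int = round(w)
    return x * w_int
```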
If it helps, here is what the model looks like:

Full disclosure: I'm not certain if this is a Keras bug or a Tensorflow bug. If this is suspected to be a Tensorflow bug, let me know and I'll file an issue there instead. | closed | 2024-07-29T05:39:48Z | 2024-08-28T04:48:49Z | https://github.com/keras-team/keras/issues/20058 | [
"type:support"
] | solidDoWant | 9 |
OFA-Sys/Chinese-CLIP | computer-vision | 282 | Segmentation fault during GPU inference | Inference on the GPU raises a segmentation fault; on the CPU there is no problem. | open | 2024-04-06T03:26:38Z | 2024-04-28T16:10:25Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/282 | [] | shiqwang | 1 |
gradio-app/gradio | data-science | 9,983 | Multi-user session state conflict using gr.State (and gr.BrowserState) | ### Describe the bug
I have a Gradio app with a complex Blocks UI, which has a set of variables (including dictionaries, lists, and even other class instances) that I encapsulate in a `deepcopy`-able class and maintain an instance of that class as a `gr.State` object. I need to support simultaneous multi-user access through the browser such that the changes to those variables made by one user do not conflict with that from another user.
However, I notice state conflicts. The data from one user overrides the data of the other user. Either this is a bug or I am doing something wrong. There are others who have brought this up (e.g., #4920 #3541) but I do not see the solution.
My code is too big to explain the point, so I will write sort-of pseudocode below to illustrate the situation. I am guessing that there is only ever one copy of the `user_state` being passed along. What can I do to make sure that a new `user_state` is created for every independent user access?
Slightly related, `gr.BrowserState` (as described at https://www.gradio.app/guides/state-in-blocks) is not available in my version of Gradio (5.5.0).
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr

class StateData:
    def __init__(self, create_uninitialised: bool = False):
        if not create_uninitialised:
            self.state_var1: List[SomeType] = []
            self.state_var2 = SomeObject()
            # And so on

    def __deepcopy__(self, memo):
        newone = type(self)(create_uninitialised=True)
        newone.__dict__.update(self.__dict__)
        return newone

class GradioApp:
    def some_sub_component(self, user_state: gr.State):
        # Build this component using information from the user_state by accessing it as user_state.value.
        # This sub component can call other functions including those for user interactions, which are omitted for brevity.

    def create_ui(self) -> gr.Blocks:
        user_state = gr.State(StateData())
        gr.Markdown("Some text")
        self.some_sub_component(user_state)

if __name__ == "__main__":
    app = GradioApp()
    ui = app.create_ui()
    ui.queue().launch()
```
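For what it's worth, the sharing described above is reproducible without Gradio at all. As I understand the Gradio docs, `gr.State` deep-copies its initial value once per user session — but a `__deepcopy__` that only does `__dict__.update`, as in the pseudocode above, produces shallow copies whose mutable members are shared across all sessions. A minimal stdlib sketch (my observation about the snippet, not Gradio internals):

```python
import copy

class StateData:
    """Mimics the __deepcopy__ from the pseudocode above: __dict__.update
    copies attribute *references*, so the list stays shared."""
    def __init__(self):
        self.state_var1 = []

    def __deepcopy__(self, memo):
        newone = type(self).__new__(type(self))
        newone.__dict__.update(self.__dict__)  # shallow: state_var1 is shared!
        return newone

template = StateData()
session_a = copy.deepcopy(template)  # what a per-session deep copy would do
session_b = copy.deepcopy(template)
session_a.state_var1.append("user-a data")
print(session_b.state_var1)  # ['user-a data'] -> the other "session" sees user A's data
```

Replacing the `__dict__.update` with `copy.deepcopy(self.__dict__, memo)` (or deleting the custom `__deepcopy__` entirely) gives each session an independent copy; whether that resolves the original report is an assumption, not something verified here.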
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.3
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.9.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-11-18T09:28:09Z | 2024-11-21T14:20:06Z | https://github.com/gradio-app/gradio/issues/9983 | [
"bug",
"pending clarification"
] | anirbanbasu | 14 |
tensorflow/tensor2tensor | machine-learning | 1,582 | 2 foundational questions about self-attention/multihead | Hello everyone,
I have 2 questions.
1. We know about multi-head attention. Within each head, why do we use different matrices Wiq, Wik, Wiv to multiply the output of the last layer, instead of using a single shared matrix Wi?

I mean, with 8 heads, we have the output of the last layer T = 512 dim. We first get K = 64 dim, V = 64 dim, Q = 64 dim, with K ≠ V ≠ Q. Why do we then use 3 * 8 = 24 matrices (Wiq * 8, Wik * 8, Wiv * 8) to get 8 groups of K, V, Q, instead of 8 matrices (Wi * 8)?

2. Imagine that we only use 1 head — in other words, no multi-head mechanism. Why do we use 3 different linear transformations to get different K, V, Q, instead of using the identical T (output of the last layer, or the word embedding) as K, V, and Q?
Thank you so much if you can answer these questions. I googled then found no information.
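Not an authoritative answer, but a toy sketch (plain Python, small sizes standing in for 512 and 64) of what the separate projections buy: with distinct Wq, Wk, Wv, a token's query can differ from its key, so "how x asks" is decoupled from "how x is matched" and "what x offers"; with one shared W we would simply get Q = K = V.

```python
import random

random.seed(0)
d_model, d_k = 8, 2  # toy sizes standing in for 512 and 64

def project(x, W):
    """x: length-d_model vector, W: d_model x d_k matrix -> length-d_k vector."""
    return [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(d_k)]

def rand_matrix():
    return [[random.uniform(-1, 1) for _ in range(d_k)] for _ in range(d_model)]

x = [random.uniform(-1, 1) for _ in range(d_model)]

# Separate learned projections: q, k, v are three *different* views of x,
# so attention between tokens can be asymmetric (i attends to j strongly
# without j attending to i).
Wq, Wk, Wv = rand_matrix(), rand_matrix(), rand_matrix()
q, k, v = project(x, Wq), project(x, Wk), project(x, Wv)

# Shared projection: queries, keys, and values all collapse to one view,
# and attention logits reduce to a symmetric similarity in a single subspace.
W = rand_matrix()
q1 = k1 = v1 = project(x, W)
```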
| open | 2019-05-23T17:32:02Z | 2019-05-23T18:34:56Z | https://github.com/tensorflow/tensor2tensor/issues/1582 | [] | Mingyu-academic | 1 |
ultralytics/ultralytics | deep-learning | 19,661 | yolov11 how to improve large object performance? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
<img width="466" alt="Image" src="https://github.com/user-attachments/assets/d64bf551-e496-472d-b436-769c2792ad0a" />
I use yolo-obb to train the doclayout model, but the performance on large targets is relatively poor. How can this be improved?
### Additional
_No response_ | open | 2025-03-12T08:40:05Z | 2025-03-12T08:45:12Z | https://github.com/ultralytics/ultralytics/issues/19661 | [
"question",
"OBB"
] | sjtu-cz | 2 |
onnx/onnx | deep-learning | 6,215 | reporting a vulnerability of download_model function | A vulnerability in the `download_model` function of the onnx/onnx framework, version 1.16.1, allows for arbitrary file overwrite due to inadequate prevention of path traversal attacks in malicious tar files.
I found the same vulnerability as CVE-2024-5187 in the `download_model` function, which has not been fixed, as shown below:
```python
@classmethod
@retry_execute(3)
def download_model(
    cls,
    model_test: TestCase,
    model_dir: str,
    models_dir: str,
) -> None:
    # On Windows, NamedTemporaryFile can not be opened for a
    # second time
    del model_dir
    download_file = tempfile.NamedTemporaryFile(delete=False)
    try:
        download_file.close()
        assert model_test.url
        print(
            f"Start downloading model {model_test.model_name} from {model_test.url}"
        )
        urlretrieve(model_test.url, download_file.name)
        print("Done")
        with tarfile.open(download_file.name) as t:
            t.extractall(models_dir)
```
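For context, a generic mitigation for the unguarded `extractall` above — a sketch of the usual pattern, not onnx's actual patch — validates each member path before extracting:

```python
import os
import tarfile

def safe_extractall(tar: tarfile.TarFile, dest: str) -> None:
    """Refuse any member that would resolve outside `dest` (path traversal)."""
    dest_real = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest_real, member.name))
        if target != dest_real and not target.startswith(dest_real + os.sep):
            raise ValueError(f"blocked path traversal in tar member: {member.name!r}")
    tar.extractall(dest_real)
```

On Python 3.12+ the stdlib offers `tar.extractall(dest, filter="data")` (PEP 706), which rejects traversal and other unsafe members without hand-rolled checks.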
I want to know whether it's because the model download URL is already hardcoded that you didn't fix it. | closed | 2024-07-06T14:52:30Z | 2024-08-07T10:30:18Z | https://github.com/onnx/onnx/issues/6215 | [
"bug"
] | Arashimu | 12 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,906 | Does a Java version exist? | open | 2024-06-02T15:24:03Z | 2024-06-02T15:24:03Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1906 | [] | 488442 | 0
pytest-dev/pytest-mock | pytest | 282 | Unclear Docs - mocker.patch - no attribute assert_called_once_with | I can't seem to find anywhere in the documentation how to assert that a mock function created with mocker.patch was called
Example:
```python
def test(self, x):
    return True

mock_a = mocker.patch('SomeClass.test', test)
SomeClass.some_test_method_that_calls_mocked_method()
mock_a.assert_called_once_with("a")
```
I get an error stating function object has no attribute 'assert_called_once_with'
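For comparison (not part of the original question): when `patch` is given a replacement object as its second argument, that exact object is installed — and a plain function has no `assert_*` methods. Omitting the replacement makes `patch` install a `MagicMock`, which does record calls. The same behavior can be shown with `unittest.mock` directly, which pytest-mock's `mocker.patch` thinly wraps:

```python
from unittest.mock import patch

class SomeClass:
    def test(self, x):
        return True

    def some_test_method_that_calls_mocked_method(self):
        return self.test("a")

# No replacement object -> patch installs a MagicMock that records calls.
with patch.object(SomeClass, "test", return_value=True) as mock_a:
    SomeClass().some_test_method_that_calls_mocked_method()
    mock_a.assert_called_once_with("a")  # works: mock_a is a MagicMock
```

With pytest-mock the equivalent is `mock_a = mocker.patch('SomeClass.test', return_value=True)` — i.e. leave out the second positional argument so a MagicMock is created for you.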
The only examples in the docs that I see use this method, but there is no example setting up the mock the way I have. Am I missing something? Am I correct that, per https://docs.python.org/3/library/unittest.mock.html, these functions should be supported? Honestly, I find the docs pretty hard to follow as to what types of assertions are actually supported... | closed | 2022-02-26T20:34:10Z | 2022-06-10T00:54:51Z | https://github.com/pytest-dev/pytest-mock/issues/282 | [
"question"
] | ahurlburt | 3 |
python-gitlab/python-gitlab | api | 2,200 | RFE: Support Personal Access Token deletion using value | In GitLab 15.0 they added the ability to delete a Personal Access Token (PAT) by value. Previously you could only delete a PAT by its ID. But if you had a lot of PATs and wanted to delete one with a specific value, it was difficult unless you had maintained a document mapping PAT values to IDs.
New API:
https://docs.gitlab.com/ee/api/personal_access_tokens.html#using-a-request-header
Release notes: https://about.gitlab.com/releases/2022/05/22/gitlab-15-0-released/
In previous versions of GitLab, personal access tokens could be deleted only by the ID. Because none of the endpoints return an ID from a given value, you couldn’t delete a personal access token if you only had the token value. **(This isn't technically true)**
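For reference, the request-header flow from the linked docs boils down to a single `DELETE` against `personal_access_tokens/self`, authenticated with the very token being revoked. A sketch with hypothetical host and token values (the request is only constructed here, not sent):

```python
from urllib.request import Request

BASE = "https://gitlab.example.com/api/v4"   # hypothetical instance
token = "glpat-xxxxxxxxxxxxxxxxxxxx"         # the PAT to revoke

req = Request(
    f"{BASE}/personal_access_tokens/self",
    method="DELETE",
    headers={"PRIVATE-TOKEN": token},        # the token authenticates itself
)
print(req.get_method(), req.full_url)
# DELETE https://gitlab.example.com/api/v4/personal_access_tokens/self
```

Sending it (e.g. via `urllib.request.urlopen(req)`) should return 204 on success according to the GitLab documentation linked above — that status code is taken from the docs, not verified here.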
You can also now use the personal_access_tokens/self endpoint to revoke a PAT with a single request. The endpoint revokes the PAT used to make the request, making it easy to quickly revoke PATs in case of a leak. | closed | 2022-07-29T04:15:43Z | 2023-07-31T01:24:28Z | https://github.com/python-gitlab/python-gitlab/issues/2200 | [] | JohnVillalovos | 1 |
PokemonGoF/PokemonGo-Bot | automation | 6,131 | Smart pinap function | ### Short Description
I made a "smart pinap" function to maximize the use of pinap berries.
Ex1) If catch rate is over 85% for a normal mon use a pinap, but spare 3 for VIPs
Ex2) If catch rate is over a threshold (90%) for a VIP use a pinap, if under that threshold the original razz berry logic kicks in.
The current logic for VIP is fixed to razz or pinap, and for normal mon the level or CP determines the use of pinap. I think the actual catch rate is more important.
How do I add this to the bot? I am new to github, and I want to contribute to this bot.
### Possible solution
I made something like this in pokemon_catch_worker.py
```python
# SMART_PINAP: Use pinap when high catch rate, but spare some for VIP with high catch rate
if self.smart_pinap_enabled and (
        (not is_vip
         and self.inventory.get(ITEM_PINAPBERRY).count > self.smart_pinap_to_keep
         and catch_rate_by_ball[current_ball] > self.pinap_threshold)
        or (is_vip
            and self.inventory.get(ITEM_PINAPBERRY).count > 0
            and catch_rate_by_ball[current_ball] >= self.vip_berry_threshold)
) and not used_berry:
    berry_id = ITEM_PINAPBERRY
    new_catch_rate_by_ball = self._use_berry(berry_id, berry_count, encounter_id, catch_rate_by_ball, current_ball)
    self.inventory.get(berry_id).remove(1)
    berry_count -= 1
    used_berry = True
```
### How it would help others
Instead of throwing away pinap berries, you can get more candy!
| closed | 2017-07-25T12:17:00Z | 2017-07-25T14:57:27Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/6131 | [] | ChiefM3 | 1 |
mars-project/mars | scikit-learn | 2,560 | Reduction over different columns of a single DataFrame can be merged | When calculating series aggregations like `execute(df.a.sum(),df.b.mean())`, aggregations over different columns can be merged as `df.agg({'a': 'sum', 'b': 'mean'})`. An optimizer can be added to reduce num of subtasks. | open | 2021-10-28T08:55:47Z | 2021-10-28T08:56:50Z | https://github.com/mars-project/mars/issues/2560 | [
"type: enhancement",
"mod: dataframe"
] | wjsi | 0 |
ExpDev07/coronavirus-tracker-api | rest-api | 49 | I'm using your API | Thanks for implementing this Web API!
[CoronavirusDashboard](https://github.com/kw0006667/CoronavirusDashboardDemo) | closed | 2020-03-15T23:19:15Z | 2020-04-19T17:53:16Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/49 | [
"user-created"
] | kw0006667 | 0 |
dfki-ric/pytransform3d | matplotlib | 45 | Quaternion SLERP fails for identical Quaternions | ```python
def _slerp_weights(angle, t):
    return (np.sin((1.0 - t) * angle) / np.sin(angle),
            np.sin(t * angle) / np.sin(angle))
```
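One way to get the preferred behavior — a sketch of the standard small-angle guard, not the library's actual fix — falls back to linear interpolation weights, which are the mathematical limit of the sin ratios as the angle goes to 0:

```python
import math

def slerp_weights_safe(angle, t):
    # sin((1-t)*a)/sin(a) -> (1-t) and sin(t*a)/sin(a) -> t as a -> 0,
    # so near-zero angles can use plain linear interpolation weights.
    if abs(math.sin(angle)) < 1e-9:
        return 1.0 - t, t
    return (math.sin((1.0 - t) * angle) / math.sin(angle),
            math.sin(t * angle) / math.sin(angle))

print(slerp_weights_safe(0.0, 0.3))  # approximately (0.7, 0.3) instead of (nan, nan)
```

With this guard, a SLERP between two identical rotations returns that same rotation rather than an invalid one.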
`_slerp_weights` divides by 0 when the angle between the rotations to interpolate between is zero, which leads to a NaN quaternion in SLERP. It would be preferable if a SLERP between two identical rotations always returned the same rotation instead of an invalid one. | closed | 2020-05-06T15:48:01Z | 2020-05-07T12:07:22Z | https://github.com/dfki-ric/pytransform3d/issues/45 | [
"bug"
] | ruehr | 4 |
roboflow/supervision | computer-vision | 1,143 | I want to filter detections so that only the classes "No Helmet", "Person", "Rider" are detected. | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
My model has 4 classes: `"No Helmet", "Person", "Rider", "Wear a helmet"`
**I want to filter detections so that only the classes "No Helmet", "Person", "Rider" are detected. How do I edit or add code?**
I set selected_classes = [0, 1, 2] to be "No Helmet", "Person", "Rider"
by following the example of Supervision:
```python
import numpy as np
import supervision as s
selected_classes = [0, 2, 3]
detections = sv.Detections(...)
detections = detections[np.isin(detections.class_id, selected_classes)]
```
and got the following error:
```
Traceback (most recent call last):
File "d:\Private\Y3Project\python_project\PyCode\preath.py", line 187, in <module>
main()
File "d:\Private\Y3Project\python_project\PyCode\preath.py", line 143, in main
mask = zone.trigger(detections=detections)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda3\envs\myenv\Lib\site-packages\supervision\detection\tools\polygon_zone.py", line 82, in trigger
clipped_detections = replace(detections, xyxy=clipped_xyxy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda3\envs\myenv\Lib\dataclasses.py", line 1501, in replace
return obj.__class__(**changes)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 9, in __init__
File "D:\anaconda3\envs\myenv\Lib\site-packages\supervision\detection\core.py", line 120, in __post_init__
validate_detections_fields(
File "D:\anaconda3\envs\myenv\Lib\site-packages\supervision\validators\__init__.py", line 126, in validate_detections_fields
validate_tracker_id(tracker_id, n)
File "D:\anaconda3\envs\myenv\Lib\site-packages\supervision\validators\__init__.py", line 77, in validate_tracker_id
raise ValueError(
ValueError: tracker_id must be a 1D np.ndarray with shape (2,), but got shape (3,)
```
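A toy sketch of the failure mode behind that traceback (plain lists, not the real supervision API; the index-to-name mapping is assumed from the class list above): every per-detection array has to be filtered with the same mask, or the per-field length check fails exactly as shown.

```python
class_id   = [0, 3, 2]   # assume 3 = "Wear a helmet", the class to drop
tracker_id = [7, 8, 9]   # parallel per-detection tracker ids
selected   = {0, 1, 2}   # "No Helmet", "Person", "Rider"

keep = [i for i, c in enumerate(class_id) if c in selected]
class_id   = [class_id[i] for i in keep]
tracker_id = [tracker_id[i] for i in keep]   # filtered in lockstep

print(class_id, tracker_id)  # [0, 2] [7, 9] -- lengths still match
```

In the posted loop, `detections` is filtered first and `detections.tracker_id` is then assigned from the *unfiltered* `result.boxes.id`, which appears to be why a length-3 tracker array meets length-2 detections. Assigning tracker ids before filtering would keep the shapes consistent — an observation from reading the snippet, not a tested fix.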
**My code below.**
```python
import cv2
import argparse
from ultralytics import YOLO
import supervision as sv
import numpy as np
import time
from pymongo import MongoClient
from datetime import datetime
from pytz import timezone
import base64
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f'Using device: {device}')
torch.cuda.set_device(0)

mongo_uri = "myMongo"
database_name = "databasename"
collection_name = "collectionname"

# Create a MongoClient
client = MongoClient(mongo_uri)
database = client[database_name]
collection = database[collection_name]

cooldown_period = 10 # 1 minute

ZONE_POLYGON = np.array([
    [0.1, 0.1],
    [0.9, 0.1],
    [0.9, 0.9],
    [0.1, 0.9],
    [0.1, 0.1]
])

model_path = "D:\\Private\\Y3Project\\python_project\\runs\\detect\\train\\weights\\best.pt"
#rstp_url = 'rtsp://user:user@192.xxx.xx.x:554/'
myvdo = "D:\Private\mydrive\myvdo\ss100met.mp4"

def parse_arguments() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="YOLOv8 live")
    parser.add_argument(
        "--webcam-resolution",
        default=[1280, 720],
        nargs=2,
        type=int
    )
    args = parser.parse_args()
    return args

def save_image_to_mongodb(image_binary, count_no_helmet, count_rider):
    # Convert time to Thailand timezone
    thai_timezone = timezone('Asia/Bangkok')
    upload_time = datetime.now(thai_timezone).strftime('%Y-%m-%d')
    # Save image and time in MongoDB
    image_data = {
        "image": base64.b64encode(image_binary).decode('utf-8'),
        "upload_time": upload_time,
        "count_no_helmet": count_no_helmet,
        "count_rider": count_rider
    }
    result = collection.insert_one(image_data)
    print(f"Image uploaded successfully. Object ID: {result.inserted_id}")

def main():
    last_save_time = time.time()
    args = parse_arguments()
    frame_width, frame_height = args.webcam_resolution
    model = YOLO(model_path)
    cap = cv2.VideoCapture(myvdo)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)
    box_annotator = sv.RoundBoxAnnotator(
        thickness=2
    )
    label_annotator = sv.LabelAnnotator(
        text_position=sv.Position.TOP_CENTER,
        text_thickness=2,
        text_scale=1
    )
    zone_polygon = (ZONE_POLYGON * np.array(args.webcam_resolution)).astype(int)
    zone = sv.PolygonZone(polygon=zone_polygon, frame_resolution_wh=tuple(args.webcam_resolution))
    zone_annotator = sv.PolygonZoneAnnotator(
        zone=zone,
        color=sv.Color.RED,
        thickness=2,
        text_thickness=4,
        text_scale=2
    )
    # Initialize FPS display text
    fps_text = "FPS: calculating..."
    fps_start_time = time.time()
    fps_counter = 0
    while True:
        ret, frame = cap.read()
        fps_counter += 1
        bg_color = (0, 0, 0) # Black background
        text_color = (255, 255, 255) # White text
        for result in model.track(source=frame, stream=True, persist=True):
            frame = result.orig_img
            selected_classes = [0, 1, 2]
            detections = sv.Detections.from_ultralytics(result)
            detections = detections[np.isin(detections.class_id,selected_classes)]
            if result.boxes.id is not None:
                detections.tracker_id = result.boxes.id.cpu().numpy().astype(int)
            labels = []
            if detections.tracker_id is not None:
                labels = [
                    f"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
                    for class_id, confidence, tracker_id
                    in zip(detections.class_id, detections.confidence, detections.tracker_id)
                ]
            if labels:
                frame = box_annotator.annotate(
                    scene=frame.copy(),
                    detections=detections
                )
                frame = label_annotator.annotate(
                    scene=frame.copy(),
                    detections=detections,
                    labels=labels
                )
            else:
                print("No Labels")
            mask = zone.trigger(detections=detections)
            frame = zone_annotator.annotate(scene=frame)
            # Check that the number of detections is not zero
            if len(detections) > 0:
                count_no_helmet = np.count_nonzero((detections.class_id == 0) & (detections.confidence > 0.5) & mask) # No Helmet
                count_rider = np.count_nonzero((detections.class_id == 2) & (detections.confidence > 0.5) & mask) # Rider
                # Print the counts
                print(f"Number of people without a helmet: {count_no_helmet}")
                print(f"Number of riders: {count_rider}")
                if count_no_helmet >= 1 and count_rider >= 1:
                    current_time = time.time()
                    if current_time - last_save_time >= cooldown_period:
                        image_binary = cv2.imencode('.jpg', frame)[1].tobytes()
                        save_image_to_mongodb(image_binary, count_no_helmet, count_rider)
                        print("Save Images Successfully")
                        last_save_time = current_time
            else:
                count_no_helmet = 0
                count_rider = 0
        # Update FPS text
        fps_counter += 1
        if time.time() - fps_start_time >= 1:
            fps = fps_counter / (time.time() - fps_start_time)
            fps_counter = 0
            fps_start_time = time.time()
            fps_text = f"FPS: {fps:.2f}"
        cv2.rectangle(frame, (10, 10), (200, 50), bg_color, -1)
        cv2.putText(frame, fps_text, (15, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, text_color, 2)
        cv2.imshow("yolov8", frame)
        if (cv2.waitKey(30) == 27):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```
### Additional
_No response_ | closed | 2024-04-26T11:21:30Z | 2024-04-26T12:03:17Z | https://github.com/roboflow/supervision/issues/1143 | [
"question"
] | REZIZ-TER | 1 |
man-group/arctic | pandas | 81 | Need to check that metadata exists before enumerating | https://github.com/manahl/arctic/commit/702ac62789642b159f03382a4a7246be0c1cd039
(see _pandas_ndarray_store.py, line 75)
Should check that recarr.dtype.metadata.get('index_tz') is not None before enumerating to avoid the error "TypeError: 'NoneType' object is not iterable"
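A minimal sketch of the suggested guard (names taken from the report; not Arctic's actual code). Note that when `dtype.metadata` itself is `None`, even `.get` would fail, so both levels need a check:

```python
def get_index_tz(metadata):
    """metadata: the dict-or-None taken from recarr.dtype.metadata."""
    if metadata is None:          # numpy dtypes default to no metadata dict
        return []
    index_tz = metadata.get("index_tz")
    if index_tz is None:          # dict present, but no tz entry
        return []
    return list(index_tz)         # now safe to enumerate

print(get_index_tz(None))                    # []
print(get_index_tz({"index_tz": ["UTC"]}))   # ['UTC']
```

Returning an empty list for both missing cases lets the caller enumerate unconditionally, avoiding the "'NoneType' object is not iterable" error from the report.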
| closed | 2016-01-04T15:19:50Z | 2016-01-04T17:28:34Z | https://github.com/man-group/arctic/issues/81 | [
"bug"
] | bmoscon | 0 |
jacobgil/pytorch-grad-cam | computer-vision | 154 | how can i visual a model which not is a Classifier model? | closed | 2021-10-22T11:52:51Z | 2021-11-05T18:43:21Z | https://github.com/jacobgil/pytorch-grad-cam/issues/154 | [] | GuangtaoLyu | 1 | |
sinaptik-ai/pandas-ai | data-science | 988 | Trailing space in Column Header of CSV file causes incorrect response | ### System Info
OS Version: Ubuntu Ubuntu 22.04.4 LTS
Python Version: 3.9
pandasai Version: 2.0.2
### 🐛 Describe the bug
pandasai shows wrong results if a column name has a trailing space in it.
The following code works well on a CSV file with two columns 'Description' & 'Amount'.
The same code will just show the individual rows if a trailing space is added to the first column, i.e. 'Description '.
```
from pandasai import SmartDataframe
from pandasai.llm import GooglePalm
llm = GooglePalm(api_key="******************************")
df = SmartDataframe("/home/001/Documents/NLP Projects/PandasAI/wedding expenses.csv", config={"llm": llm})
response = df.chat("What was the total of top 5 transactions?")
print(response)
```
```
Output:
3540000
```
However, with the trailing space added to the Description column the output is:
```
Output:
Description Amount
9 Marquee remaining payment 2944000
10 Guest house payment 180000
26 Photographer Payment 140000
4 Advance Photography 140000
21 Guest House Payment 136000
```
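A common workaround on the caller's side — independent of pandasai, shown here with the stdlib `csv` module — is to normalize header names before handing the data over:

```python
import csv
import io

raw = "Description ,Amount\nMarquee remaining payment,2944000\n"  # note the trailing space

rows = list(csv.reader(io.StringIO(raw)))
header = [name.strip() for name in rows[0]]   # 'Description ' -> 'Description'
print(header)  # ['Description', 'Amount']
```

With pandas the equivalent one-liner is `df.columns = df.columns.str.strip()`, applied right after reading the CSV.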
| closed | 2024-03-03T18:42:37Z | 2024-06-13T16:03:37Z | https://github.com/sinaptik-ai/pandas-ai/issues/988 | [
"bug"
] | ulloogeo | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,370 | Potential dependency conflicts between flask-appbuilder and marshmallow | Hi, as shown in the following full dependency graph of **_flask-appbuilder_**, **_flask-appbuilder_** requires **_marshmallow <3.0.0,>=2.18.0_**, **_flask-appbuilder_** requires **_marshmallow-sqlalchemy >=0.16.1,<1_** (**_marshmallow-sqlalchemy 0.23.0_** will be installed, i.e., the newest version satisfying the version constraint), and directed dependency **_marshmallow-sqlalchemy 0.23.0_** transitively introduces **_marshmallow >=2.15.2_**.
Obviously, there are multiple version constraints set for **_marshmallow_** in this project. However, according to pip's _“first found wins”_ installation strategy, **_marshmallow 2.21.0_** (i.e., the newest version satisfying constraint **_<3.0.0,>=2.18.0_**) is the actually installed version.
Although the first-found package version **_marshmallow 2.21.0_** just satisfies the later dependency constraint (**_marshmallow >=2.15.2_**), this installed version is very close to the upper bound of the version constraint on Marshmallow specified by **_marshmallow-sqlalchemy 0.23.0_**.

Once **_marshmallow-sqlalchemy_** upgrades, its newest version will be installed. This can easily cause a dependency conflict (build failure) if the upgraded **_marshmallow-sqlalchemy_** version introduces a higher version of **_Marshmallow_**, violating the other version constraint **_<3.0.0,>=2.18.0_**.

According to the release history of **_marshmallow-sqlalchemy_**, it habitually upgrades **_Marshmallow_** in its releases. For instance, **_marshmallow-sqlalchemy 0.6.0_** upgraded Marshmallow's constraint from **_>=2.0.0b4 to >=2.0.0_**, and **_marshmallow-sqlalchemy 2.0.0_** upgraded Marshmallow's constraint from **_>=2.0.0 to >=2.15.2_**.

As such, this is an early warning of a potential dependency conflict issue for flask-appbuilder.
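The "first found wins" risk described above can be sketched as a toy constraint check (version tuples and half-open ranges only — not real PEP 440 parsing):

```python
def satisfies(version, lo, hi):
    """lo <= version < hi, on (major, minor, patch) tuples."""
    return lo <= version < hi

installed = (2, 21, 0)                   # marshmallow picked first by pip

direct     = ((2, 18, 0), (3, 0, 0))     # flask-appbuilder: >=2.18.0,<3.0.0
transitive = ((2, 15, 2), (999, 0, 0))   # marshmallow-sqlalchemy 0.23.0: >=2.15.2

print(satisfies(installed, *direct), satisfies(installed, *transitive))  # True True

# If a future marshmallow-sqlalchemy bumps its floor past the direct ceiling,
# no version can satisfy both ranges -- the conflict the report warns about.
future = ((3, 0, 0), (999, 0, 0))        # hypothetical: >=3.0.0
overlap = max(direct[0], future[0]) < min(direct[1], future[1])
print(overlap)  # False -> unsatisfiable together
```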
### Dependency tree
```
flask-appbuilder - 2.3.4
| +- apispec(install version:1.3.3 version range:<2,>=1.1.1)
| +- click(install version:7.1.2 version range:<8,>=6.7)
| +- colorama(install version:0.4.3 version range:<1,>=0.3.9)
| +- flask(install version:1.1.2 version range:>=0.12,<2)
| | +- click(install version:7.1.2 version range:>=5.1)
| | +- itsdangerous(install version:1.1.0 version range:>=0.24)
| | +- Jinja2(install version:2.11.2 version range:>=2.10.1)
| | | +- MarkupSafe(install version:2.0.0a1 version range:>=0.23)
| | +- Werkzeug(install version:1.0.1 version range:>=0.15)
| +- flask-babel(install version:1.0.0 version range:>=1,<2)
| | +- Babel(install version:2.8.0 version range:>=2.3)
| | | +- pytz(install version:2019.3 version range:>=2015.7)
| | +- Flask(install version:1.1.2 version range:*)
| | | +- click(install version:7.1.2 version range:>=5.1)
| | | +- itsdangerous(install version:1.1.0 version range:>=0.24)
| | | +- Jinja2(install version:2.11.2 version range:>=2.10.1)
| | | | +- MarkupSafe(install version:2.0.0a1 version range:>=0.23)
| | | +- Werkzeug(install version:1.0.1 version range:>=0.15)
| | +- Jinja2(install version:2.11.2 version range:>=2.5)
| | | +- MarkupSafe(install version:2.0.0a1 version range:>=0.23)
| | +- pytz(install version:2019.3 version range:*)
| +- flask-jwt-extended(install version:3.24.1 version range:<4,>=3.18)
| | +- flask(install version:1.1.2 version range:>=1.0)
| | | +- click(install version:7.1.2 version range:>=5.1)
| | | +- itsdangerous(install version:1.1.0 version range:>=0.24)
| | | +- Jinja2(install version:2.11.2 version range:>=2.10.1)
| | | | +- MarkupSafe(install version:2.0.0a1 version range:>=0.23)
| | | +- Werkzeug(install version:1.0.1 version range:>=0.15)
| | +- pyjwt(install version:1.7.1 version range:>=1.6.4)
| | +- six(install version:1.14.0 version range:*)
| | +- werkzeug(install version:1.0.1 version range:>=0.14)
| +- flask-login(install version:0.4.1 version range:<0.5,>=0.3)
| | +- flask(install version:1.1.2 version range:*)
| | | +- click(install version:7.1.2 version range:>=5.1)
| | | +- itsdangerous(install version:1.1.0 version range:>=0.24)
| | | +- Jinja2(install version:2.11.2 version range:>=2.10.1)
| | | | +- MarkupSafe(install version:2.0.0a1 version range:>=0.23)
| | | +- Werkzeug(install version:1.0.1 version range:>=0.15)
| +- flask-openid(install version:1.2.5 version range:>=1.2.5,<2)
| +- flask-sqlalchemy(install version:2.4.1 version range:>=2.4,<3)
| | +- flask(install version:1.1.2 version range:>=0.10)
| | | +- click(install version:7.1.2 version range:>=5.1)
| | | +- itsdangerous(install version:1.1.0 version range:>=0.24)
| | | +- Jinja2(install version:2.11.2 version range:>=2.10.1)
| | | | +- MarkupSafe(install version:2.0.0a1 version range:>=0.23)
| | | +- Werkzeug(install version:1.0.1 version range:>=0.15)
| | +- sqlalchemy(install version:1.3.16 version range:>=0.8.0)
| +- flask-wtf(install version:0.14.3 version range:<1,>=0.14.2)
| | +- Flask(install version:1.1.2 version range:*)
| | | +- click(install version:7.1.2 version range:>=5.1)
| | | +- itsdangerous(install version:1.1.0 version range:>=0.24)
| | | +- Jinja2(install version:2.11.2 version range:>=2.10.1)
| | | | +- MarkupSafe(install version:2.0.0a1 version range:>=0.23)
| | | +- Werkzeug(install version:1.0.1 version range:>=0.15)
| | +- itsdangerous(install version:1.1.0 version range:*)
| | +- WTForms(install version:2.2.1 version range:*)
| +- jsonschema(install version:3.2.0 version range:>=3.0.1,<4)
| +- marshmallow(install version:2.21.0 version range:<3.0.0,>=2.18.0)
| +- marshmallow-enum(install version:1.5.1 version range:>=1.4.1,<2)
| +- marshmallow-sqlalchemy(install version:0.23.0 version range:>=0.16.1,<1)
| | +-marshmallow(install version:2.21.0 version range:>=2.15.2 )
| +- prison(install version:0.1.3 version range:>=0.1.3,<1.0.0)
| | +- six(install version:1.14.0 version range:*)
| +- pyjwt(install version:1.7.1 version range:>=1.7.1)
| +- python-dateutil(install version:2.8.1 version range:>=2.3,<3)
| +- sqlalchemy-utils(install version:0.36.5 version range:>=0.32.21,<1)
```
Thanks for your help.
Best,
Neolith
| closed | 2020-05-13T08:16:01Z | 2020-08-22T19:01:48Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1370 | [
"urgent",
"stale",
"dependency-bump"
] | NeolithEra | 3 |
D4Vinci/Scrapling | web-scraping | 42 | stealth=True parameter fails with sandbox error when running as root | ### Have you searched if there an existing issue for this?
- [x] I have searched the existing issues
### Python version (python --version)
3.10.12
### Scrapling version (scrapling.__version__)
0.2.95
### Dependencies version (pip3 freeze)
root@Ubuntu-2204-jammy-amd64-base ~ # pip3 freeze --all
aiofiles==24.1.0
aiohappyeyeballs==2.4.6
aiohttp==3.11.12
aiosignal==1.3.2
aiosqlite==0.21.0
annotated-types==0.7.0
annoy==1.17.3
anthropic==0.47.0
anyio==4.8.0
async-timeout==5.0.1
attrs==25.1.0
babel==2.17.0
backoff==2.2.1
beautifulsoup4==4.13.3
blis==1.2.0
Brotli==1.1.0
browserforge==1.2.3
cachetools==5.5.2
camoufox==0.4.11
catalogue==2.0.10
certifi==2025.1.31
cffi==1.17.1
chardet==4.0.0
charset-normalizer==3.4.1
click==8.1.8
cloudpathlib==0.20.0
colorama==0.4.6
confection==0.1.5
courlan==1.3.2
Crawl4AI==0.4.248
cryptography==44.0.1
cssselect==1.2.0
cymem==2.0.11
dateparser==1.2.1
Deprecated==1.2.18
distro==1.9.0
dnspython==2.7.0
docker==5.0.3
dockerpty==0.4.1
docopt==0.6.2
exceptiongroup==1.2.2
fake-http-header==0.3.5
fake-useragent==2.0.3
fastapi==0.115.8
feedfinder2==0.0.4
feedparser==6.0.11
filelock==3.17.0
frozenlist==1.5.0
fsspec==2025.2.0
geoip2==5.0.1
google==3.0.0
greenlet==3.1.1
h11==0.14.0
htmldate==1.9.3
httpcore==1.0.7
httpx==0.27.2
huggingface-hub==0.29.1
idna==3.10
importlib_metadata==8.6.1
jieba3k==0.35.1
Jinja2==3.1.5
jiter==0.8.2
joblib==1.4.2
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
jusText==3.0.2
langcodes==3.5.0
langdetect==1.0.9
language-tags==1.2.0
language_data==1.3.0
limits==4.0.1
litellm==1.61.15
lxml==5.3.1
lxml_html_clean==0.4.1
marisa-trie==1.2.1
markdown-it-py==3.0.0
markdownify==1.0.0
MarkupSafe==3.0.2
maxminddb==2.6.3
mdurl==0.1.2
more-itertools==8.10.0
mpmath==1.3.0
multidict==6.1.0
murmurhash==1.0.12
networkx==3.4.2
nltk==3.9.1
numpy==2.2.3
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-cusparselt-cu12==0.6.2
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
openai==1.64.0
orjson==3.10.15
packaging==24.2
pandas==2.2.3
pillow==10.4.0
pip==22.0.2
platformdirs==4.3.6
playwright==1.50.0
preshed==3.0.9
propcache==0.3.0
psutil==7.0.0
pyahocorasick==2.1.0
pybind11==2.13.6
pycparser==2.22
pydantic==2.10.6
pydantic_core==2.27.2
pyee==12.0.0
Pygments==2.19.1
pymongo==4.11.1
pyOpenSSL==25.0.0
pyrsistent==0.18.1
PySocks==1.7.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2025.1
PyYAML==6.0.2
rank-bm25==0.2.2
rebrowser_playwright==1.49.1
referencing==0.36.2
regex==2024.11.6
requests==2.32.3
requests-file==2.1.0
rich==13.9.4
rpds-py==0.23.1
safetensors==0.5.2
scikit-learn==1.6.1
scipy==1.15.2
scrapling==0.2.95
screeninfo==0.8.1
setuptools==75.8.0
sgmllib3k==1.0.0
shellingham==1.5.4
six==1.17.0
smart-open==7.1.0
sniffio==1.3.1
snowballstemmer==2.2.0
socksio==1.0.0
srsly==2.5.1
starlette==0.45.3
sympy==1.13.1
texttable==1.6.4
tf-playwright-stealth==1.1.2
thinc==8.3.4
threadpoolctl==3.5.0
tiktoken==0.9.0
tinysegmenter==0.3
tld==0.13
tldextract==5.1.3
tokenizers==0.21.0
torch==2.6.0
tqdm==4.67.1
transformers==4.49.0
triton==3.2.0
typer==0.15.1
typing_extensions==4.12.2
tzdata==2025.1
tzlocal==5.3
ua-parser==1.0.1
ua-parser-builtins==0.18.0.post1
urllib3==2.3.0
uvicorn==0.34.0
w3lib==2.3.1
wasabi==1.1.3
weasel==0.4.1
websocket-client==1.2.3
wheel==0.37.1
wrapt==1.17.2
xxhash==3.5.0
yarl==1.18.3
zipp==3.21.0
zstandard==0.23.0
### What's your operating system?
Ubuntu 22.04 (jammy)
### Are you using a separate virtual environment?
No
### Expected behavior
The PlayWrightFetcher should successfully fetch web pages with the stealth=True parameter when running as root. When running with elevated privileges, the library should either automatically disable the sandbox or honor environment variables/configuration to disable it. The fetcher should:
1. Launch the browser successfully
2. Apply stealth mode JS injections and configurations
3. Return the requested web page content
4. Not throw any sandbox-related errors
### Actual behavior
When using PlayWrightFetcher with stealth=True as root, the script consistently fails with a sandbox error:
Error: BrowserType.launch: Target page, context or browser has been closed
Browser logs:
```
Chromium sandboxing failed!
================================
To avoid the sandboxing issue, do either of the following:
- (preferred): Configure your environment to support sandboxing
- (alternative): Launch Chromium without sandbox using 'chromiumSandbox: false' option
================================
```
The error persists even when setting various environment variables like `PLAYWRIGHT_SKIP_BROWSER_VALIDATION`, `PLAYWRIGHT_CHROMIUM_NO_SANDBOX`, and `PLAYWRIGHT_CHROMIUM_ARGS="--no-sandbox"`. The root issue appears to be that the custom rebrowser_playwright library used in stealth mode doesn't properly handle the sandbox configuration when running as root.
Notably, the same code works perfectly when using stealth=False.
### Steps To Reproduce
Steps To Reproduce
Install Scrapling on an Ubuntu 22.04 server:
`pip install scrapling`
Create a minimal test script named test_stealth.py:
```
import asyncio
import os
from scrapling import PlayWrightFetcher
# Try to disable sandbox with environment variables (doesn't work)
os.environ["PLAYWRIGHT_SKIP_BROWSER_VALIDATION"] = "1"
os.environ["PLAYWRIGHT_CHROMIUM_ARGS"] = "--no-sandbox --disable-setuid-sandbox"
async def test_stealth():
try:
print("Testing PlayWrightFetcher with stealth mode...")
response = await PlayWrightFetcher().async_fetch(
"https://example.com",
headless=True,
stealth=True,
timeout=60000
)
print(f"Success: {response.status}")
return response.html_content
except Exception as e:
print(f"Error: {str(e)}")
return None
if __name__ == "__main__":
result = asyncio.run(test_stealth())
if result:
print("First 100 chars of response:", result[:100])
```
Run the script as root:
`sudo python3 test_stealth.py`
Observe the sandbox error in the output.
Change stealth=True to stealth=False and run again to verify it works without stealth mode. | closed | 2025-03-02T10:19:16Z | 2025-03-02T19:29:50Z | https://github.com/D4Vinci/Scrapling/issues/42 | [
"bug"
] | antonyderoshan | 3 |
chaos-genius/chaos_genius | data-visualization | 405 | [BUG] KPI validation for datetime column does not work | ## Describe the bug
The validation that checks the datetime column datatype does not work properly: it rejects valid datetime columns as well
## Current behavior
Valid datetime columns can not be added
## Expected behavior
Datetime columns should pass through the validation checks
## Screenshots

| closed | 2021-11-15T10:34:35Z | 2021-11-15T13:07:48Z | https://github.com/chaos-genius/chaos_genius/issues/405 | [] | Fletchersan | 1 |
arnaudmiribel/streamlit-extras | streamlit | 162 | 🐛 [BUG] - dataframe sort by date column seems broken | ### Description
Hi, I passed a dataframe into `dataframe_explorer`. When I try to sort by date columns in the UI, it raises a warning that indicates a problem in the source code, and the table is not sorted correctly for the date columns.
```
filtered_df = dataframe_explorer(df, case=False)
col_config={
"DATE_1": st.column_config.TimeColumn(format="YYYY-MM-DD"),
"DATE_2": st.column_config.TimeColumn(format="YYYY-MM-DD")
}
st.dataframe(filtered_df, use_container_width=True, column_config=col_config)
```
```
lib/python3.10/site-packages/streamlit_extras/dataframe_explorer/__init__.py:34: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
df[col] = pd.to_datetime(df[col])
```
<img width="65" alt="image" src="https://github.com/arnaudmiribel/streamlit-extras/assets/19778682/a4ca8771-7a69-4589-86e4-d825a827ba3e">
### Version of streamlit
1.25.0
### Version of streamlit-extras
0.3.0 | open | 2023-08-07T09:43:52Z | 2024-07-31T12:30:32Z | https://github.com/arnaudmiribel/streamlit-extras/issues/162 | [
"bug"
] | yulevern | 1 |
Kanaries/pygwalker | matplotlib | 564 | [BUG] pygwalker bug report: retrieving specs when app URL has path | **Describe the bug**
It looks like Pygwalker cannot **read** specs when the app's URL is composed of a host + path. **Writing** seems fine though.
For example, I have an app deployed with the following URL structure:
ORIGIN/PATHNAME
This URL structure comes from corporate constraints within the company I work for.
I have taken a look at "pygwalker-app.iife.js", which I believe is generated by "communication.tsx", and I see this:
i=`/${e}/${A}`
I believe this should be updated to i=`${window.parent.document.location.href}/${e}/${A}`
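The root cause can be illustrated with Python's stdlib URL resolution (host and paths here are made up): a path starting with `/`, like the current `/${e}/${A}`, resolves against the origin alone and silently drops the app's PATHNAME prefix, while a path resolved against the page's full href keeps it:

```python
from urllib.parse import urljoin

base = "https://corp.example/my-app/"  # an app deployed under ORIGIN/PATHNAME

# Absolute path: the app's /my-app prefix is lost, so the spec request misses.
print(urljoin(base, "/specs/chart.json"))  # https://corp.example/specs/chart.json

# Path resolved against the full page URL: the prefix is preserved.
print(urljoin(base, "specs/chart.json"))   # https://corp.example/my-app/specs/chart.json
```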
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy your Streamlit app under a URL with a path, i.e. the ORIGIN/PATHNAME structure described above
2. Use the following Python code
`StreamlitRenderer(dataset=df, spec=PATH_TO_SPEC, spec_io_mode="rw")`
3. Try to load your Pygwalker chart
4. The page gets stuck on "Loading Graphic-Walker UI..."
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Versions**
- pygwalker version: 0.4.8.4
- python version: 3.11
- browser: Google Chrome
**Additional context**
I have opened a PR to address this issue: https://github.com/Kanaries/pygwalker/pull/563
| closed | 2024-05-31T09:47:37Z | 2024-05-31T23:53:56Z | https://github.com/Kanaries/pygwalker/issues/564 | [
"bug"
] | thomasbs17 | 2 |
exaloop/codon | numpy | 122 | why linux install conda module not found? | closed | 2022-12-19T05:55:07Z | 2022-12-19T07:34:30Z | https://github.com/exaloop/codon/issues/122 | [] | DongYangYang621 | 0 | |
comfyanonymous/ComfyUI | pytorch | 6,623 | CFG++ implementation for gradient_estimation sampler | ### Feature Idea
I would love to see a CFG++ implementation for the gradient_estimation sampler (https://github.com/comfyanonymous/ComfyUI/pull/6554)!
### Existing Solutions
See: `res_multistep_cfg_pp` sampler: https://github.com/comfyanonymous/ComfyUI/blob/255edf22463f597a1e136091e0f5cbbbe5f400a4/comfy/k_diffusion/sampling.py#L1270
### Other
_No response_ | open | 2025-01-27T22:22:06Z | 2025-01-28T02:45:10Z | https://github.com/comfyanonymous/ComfyUI/issues/6623 | [
"Feature"
] | not-ski | 2 |
AntonOsika/gpt-engineer | python | 688 | Sweep: Add a continuous mode | This new mode and capability will allow for autonomous and continuous code generation, debugging, and code running/testing by gpt-engineer. When this mode is enabled, only the user/operator can manually stop it by exiting the terminal window by hand. Otherwise, gpt-engineer should keep running, even after executing the code it runs for tests, in its never-ending coding > debugging > test-running loop, allowing for infinite self-improvement of code and similar projects. Ensure you add a new `continuous` command that gpt-engineer recognizes (test this extensively). Sweep must use the gpt-4-32k model always. Run the code, fix errors, and keep improving at least 20 times before stopping or shutting down.
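A minimal sketch of the requested loop (the `generate`, `run_tests`, and `improve` callables are hypothetical placeholders for gpt-engineer's actual steps, not its real API):

```python
def continuous_mode(generate, run_tests, improve, min_rounds=20):
    """Run generate -> test -> improve until the operator interrupts (Ctrl+C).

    Per the spec above, keep iterating for at least `min_rounds` passes
    before the loop is allowed to stop on its own.
    """
    code = generate()
    rounds = 0
    try:
        while True:
            ok, feedback = run_tests(code)
            rounds += 1
            if not ok:
                code = improve(code, feedback)
            elif rounds >= min_rounds:
                break  # tests pass and the minimum round count is met
    except KeyboardInterrupt:
        pass  # only a manual interrupt stops the loop early
    return code, rounds
```

With stub callables this runs to completion after the minimum round count; in gpt-engineer itself the loop would only end on a manual interrupt.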
<details open>
<summary>Checklist</summary>
- [X] `gpt_engineer/main_script.py` ✅ Commit [`921bf9c`](https://github.com/AntonOsika/gpt-engineer/commit/921bf9ceccaaea01566c07cf474ff96d5b4a41ac)
> • Add a new command 'continuous' in the command parser to trigger the continuous mode of operation.
> • Implement a continuous loop for code generation, debugging, and testing that only stops when the user manually interrupts it.
> • Ensure that the gpt-4-32k model is always used in the continuous mode.
> • Implement a mechanism to run the code, fix errors, and keep improving at least 20 times before stopping or shutting down in the continuous mode.
- [X] `gpt_engineer/steps_module.py` ✅ Commit [`921bf9c`](https://github.com/AntonOsika/gpt-engineer/commit/921bf9ceccaaea01566c07cf474ff96d5b4a41ac)
> • Modify the code generation, debugging, and testing steps to support continuous operation.
</details>
| closed | 2023-09-11T19:49:40Z | 2023-09-12T09:29:36Z | https://github.com/AntonOsika/gpt-engineer/issues/688 | [
"sweep"
] | meyerjohn1 | 2 |
twelvedata/twelvedata-python | matplotlib | 47 | [Bug] Weekend data (inconsistently) appearing in time_series data. | **Describe the bug**
When requesting time_series data, Saturday/Sunday candles randomly appear. (I'm assuming this is a server-side bug and not a Python-side one, but I'm not sure how to raise that.)
`1day`: saturday candle (2022-03-05)
```
{'datetime': '2022-03-07', 'open': '1.32290', 'high': '1.32410', 'low': '1.31025', 'close': '1.31245'}
{'datetime': '2022-03-05', 'open': '1.32385', 'high': '1.32495', 'low': '1.32175', 'close': '1.32275'}
{'datetime': '2022-03-04', 'open': '1.33515', 'high': '1.33575', 'low': '1.32025', 'close': '1.32375'}
```
`8hour`: sunday candles appear (no saturday candle though) (2022-03-06)
```
{'datetime': '2022-03-06 22:00:00', 'open': '1.31995', 'high': '1.32290', 'low': '1.31415', 'close': '1.31540'}
{'datetime': '2022-03-06 14:00:00', 'open': '1.32290', 'high': '1.32410', 'low': '1.31860', 'close': '1.31990'}
{'datetime': '2022-03-04 14:00:00', 'open': '1.32385', 'high': '1.32495', 'low': '1.32175', 'close': '1.32275'}
{'datetime': '2022-03-04 06:00:00', 'open': '1.33120', 'high': '1.33170', 'low': '1.32025', 'close': '1.32375'}
```
**To Reproduce**
Steps to reproduce the behavior:
```
c = TDClient(apikey=XXXX)
td = c.time_series(
symbol="GBP/USD",
interval="1day",
outputsize=10,
timezone="UTC",
)
```
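Until this is fixed server side, a client-side workaround sketch (assuming each returned candle carries a `datetime` field shaped like the samples above) can drop weekend rows with the stdlib:

```python
from datetime import datetime

def drop_weekend_candles(candles, key="datetime"):
    """Filter out candles whose date falls on Saturday (5) or Sunday (6)."""
    kept = []
    for candle in candles:
        # Daily candles use "YYYY-MM-DD"; intraday ones append " HH:MM:SS",
        # so the first 10 characters are always the date part.
        day = datetime.strptime(candle[key][:10], "%Y-%m-%d")
        if day.weekday() < 5:
            kept.append(candle)
    return kept

sample = [{"datetime": "2022-03-07"}, {"datetime": "2022-03-05"}]
print(drop_weekend_candles(sample))  # keeps only the 2022-03-07 (Monday) row
```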
**Expected behavior**
Data for times when the market is closed doesn't appear, or at least appears consistently. | closed | 2022-03-09T12:03:55Z | 2022-03-10T09:48:18Z | https://github.com/twelvedata/twelvedata-python/issues/47 | [] | EdgyEdgemond | 1 |
napari/napari | numpy | 7,074 | Shape gets highlighted if hovered on losing focus | ### 🐛 Bug Report
When a Shape is hovered with the mouse and the napari application loses focus, the Shape will be highlighted.
### 💡 Steps to Reproduce
Add a `Shapes` layer and hover over a shape.
Then loose the focus from the napari application with for example tabbing out.
The highlight event for the shapes layer will be emitted, and the shape will be visually highlighted.
[demonstration video](https://www.loom.com/share/681186ebfb044d309f7a22b98610f0ce?sid=83305afc-42fb-4046-90dc-11197f892d4b)
### 💡 Expected Behavior
I would expect this not to happen.
### 🌎 Environment
napari: 0.1.dev3376+ga6882ad
Platform: Windows-10-10.0.19045-SP0
Python: 3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)]
Qt: 5.15.2
PyQt5: 5.15.10
NumPy: 1.26.4
SciPy: 1.13.0
Dask: 2024.4.2
VisPy: 0.14.2
magicgui: 0.8.2
superqt: 0.6.4
in-n-out: 0.2.1
app-model: 0.2.6
npe2: 0.7.5
OpenGL:
GL version: 4.6.0 Compatibility Profile Context 24.3.1.240216
MAX_TEXTURE_SIZE: 16384
GL_MAX_3D_TEXTURE_SIZE: 2048
Screens:
screen 1: resolution 1920x1080, scale 1.0
Optional:
numba: 0.59.1
triangle not installed
Settings path:
C:\Users\tomi\AppData\Local\napari.venv_665327481d5ffa929bfcfc10d3499e5b2e1e735a\settings.yaml
Plugins:
napari: 0.1.dev3376+ga6882ad (81 contributions)
napari-console: 0.0.9 (0 contributions)
napari-svg: 0.1.10 (2 contributions)
### 💡 Additional Context
This issue was discovered together with @melonora. | closed | 2024-07-05T22:10:12Z | 2024-11-21T05:50:51Z | https://github.com/napari/napari/issues/7074 | [
"bug",
"UI/UX"
] | OnionKiller | 3 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,139 | pruning questions | When I use the point cloud generated by Open3D as input, the number of Gaussians rises briefly, and then some of the non-eligible Gaussians are filtered out at the 3000th iteration. But why does it filter out more than half of the original Gaussians I provided as input? | open | 2025-01-09T07:05:52Z | 2025-01-10T23:29:37Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1139 | [] | thespecialyan | 1 |
unytics/bigfunctions | data-visualization | 19 | [new] `json_items(json_string)` | Takes a json_string as input which has flat (no nested) key values and returns an `array<struct<key string, value string>>` | closed | 2022-12-06T23:28:17Z | 2022-12-09T09:58:49Z | https://github.com/unytics/bigfunctions/issues/19 | [
"good first issue",
"new-bigfunction"
] | unytics | 1 |
tensorly/tensorly | numpy | 113 | Error: Too many values to unpack (expected 2) | Hi, @JeanKossaifi
When I was using your latest version (0.4.4) installed from the GitHub website, an error occurred which produced the following error messages:
```
13 sparse_factors = sparse_parafac(tensor=dataset['source_tensor'], rank=5, init='random', random_state=123456, return_errors=True, mask=dataset['mask_tensor'])
14 print(time.time() - t)
e:\code\opensourcetools\tensorly-master\tensorly\contrib\sparse\core.py in inner(*args, **kwargs)
8 def inner(*args, **kwargs):
9 with sparse_context():
---> 10 return func(*args, **kwargs)
11
12 return inner
e:\code\opensourcetools\tensorly-master\tensorly\decomposition\candecomp_parafac.py in parafac(tensor, rank, n_iter_max, init, svd, normalize_factors, tol, orthogonalise, random_state, verbose, return_errors, non_negative, mask)
177 if mask is not None:
178 #tensor = tensor*mask + tl.kruskal_to_tensor(factors, mask=1-mask)
--> 179 tensor = tensor*mask + tl.kruskal_to_tensor(factors)
180
181 mttkrp = unfolding_dot_khatri_rao(tensor, (weights, factors), mode)
e:\code\opensourcetools\tensorly-master\tensorly\kruskal_tensor.py in kruskal_to_tensor(kruskal_tensor)
175
176 """
--> 177 shape, _ = _validate_kruskal_tensor(kruskal_tensor)
178 weights, factors = kruskal_tensor
179
e:\code\opensourcetools\tensorly-master\tensorly\kruskal_tensor.py in _validate_kruskal_tensor(kruskal_tensor)
71 return kruskal_tensor.shape, kruskal_tensor.rank
72
---> 73 weights, factors = kruskal_tensor
74
75 if len(factors) < 2:
ValueError: too many values to unpack (expected 2)
```
So I checked the code in tensorly/kruskal_tensor.py and found the problem at line 73:
weights, factors = kruskal_tensor
It seems strange, could you check it and let me know how to fix it?
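For what it's worth, the unpacking error can be reproduced in isolation (placeholder values below): the `factors` argument passed at candecomp_parafac.py line 179 is a plain list of factor matrices, while the validator tries to unpack it as a `(weights, factors)` pair:

```python
factors = ["A", "B", "C"]        # stand-ins for three factor matrices

try:
    weights, fs = factors        # what _validate_kruskal_tensor attempts
except ValueError as exc:
    print(exc)                   # too many values to unpack (expected 2)

weights, fs = (None, factors)    # wrapped as (weights, factors), it unpacks fine
```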
Thanks a lot in advance.
ZhuYifan | closed | 2019-05-29T07:33:52Z | 2019-06-16T05:54:41Z | https://github.com/tensorly/tensorly/issues/113 | [] | zhuyf8899 | 4 |
miguelgrinberg/Flask-Migrate | flask | 375 | sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table sqlite_sequence may not be dropped | when i run 'db migrate' , show

Then i run 'db upgrade' ,i get expection:

how can i ignore sqlite_sequence ???
Libary version:
flask-migrate 2.5.3
flask-sqlalchemy 2.4.4
python 2.7.16
| closed | 2020-11-29T18:02:06Z | 2022-07-09T18:50:28Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/375 | [
"question"
] | Mr-NB | 3 |
huggingface/datasets | tensorflow | 6,891 | Unable to load JSON saved using `to_json` | ### Describe the bug
Datasets stored in the JSON format cannot be loaded using `json.load()`
### Steps to reproduce the bug
```
import json
from datasets import load_dataset
dataset = load_dataset("squad")
train_dataset, test_dataset = dataset["train"], dataset["validation"]
test_dataset.to_json("full_dataset.json")
# This works
loaded_test = load_dataset("json", data_files="full_dataset.json")
# This fails
loaded_test = json.load(open("full_dataset.json", "r"))
```
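If I'm reading the behavior right, `to_json` writes JSON Lines by default (one JSON object per line), which is why `load_dataset("json", ...)` succeeds while a single `json.load()` fails. The stdlib can still read the file line by line:

```python
import json

# A stand-in file in the JSON Lines shape that to_json produces by default.
with open("full_dataset.json", "w") as f:
    f.write('{"id": 1}\n{"id": 2}\n')

with open("full_dataset.json") as f:
    rows = [json.loads(line) for line in f]

print(rows)  # [{'id': 1}, {'id': 2}]
```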
### Expected behavior
The JSON should be correctly formatted when writing so that it can be loaded using `json.load()`.
### Environment info
Colab: https://colab.research.google.com/drive/1st1iStFUVgu9ZPvnzSzL4vDeYWDwYpUm?usp=sharing | closed | 2024-05-12T01:02:51Z | 2024-05-16T14:32:55Z | https://github.com/huggingface/datasets/issues/6891 | [] | DarshanDeshpande | 2 |
microsoft/qlib | deep-learning | 1,317 | on qrun:"mlflow.exceptions.MlflowException: Param value .... had length 780, which exceeded length limit of 500 " | ## 🐛 Bug Description
<!-- A clear and concise description of what the bug is. -->
When I run the example:
qrun benchmarks\GATs\workflow_config_gats_Alpha158.yaml
I got the error info:
(py38) D:\worksPool\works2021\adair2021\S92\P4\qlib-main\examples>qrun benchmarks\GATs\workflow_config_gats_Alpha158_full02.yaml
[7724:MainThread](2022-10-14 07:53:33,890) INFO - qlib.Initialization - [config.py:413] - default_conf: client.
[7724:MainThread](2022-10-14 07:53:33,890) INFO - qlib.workflow - [expm.py:31] - experiment manager uri is at file:D:\worksPool\works2021\adair2021\S92\P4\qlib-main\examples\mlruns
[7724:MainThread](2022-10-14 07:53:33,890) INFO - qlib.Initialization - [__init__.py:74] - qlib successfully initialized based on client settings.
[7724:MainThread](2022-10-14 07:53:33,890) INFO - qlib.Initialization - [__init__.py:76] - data_path={'__DEFAULT_FREQ': WindowsPath('C:/Users/adair2019/.qlib/qlib_data/cn_data')}
[7724:MainThread](2022-10-14 07:53:33,906) INFO - qlib.workflow - [expm.py:316] - <mlflow.tracking.client.MlflowClient object at 0x0000017B5D406F40>
[7724:MainThread](2022-10-14 07:53:33,906) INFO - qlib.workflow - [exp.py:260] - Experiment 3 starts running ...
[7724:MainThread](2022-10-14 07:53:34,124) INFO - qlib.workflow - [recorder.py:339] - Recorder 41d40d173e614811bad721127a3204b8 starts running under Experiment 3 ...
'git' is not recognized as an internal or external command, operable program or batch file.
[7724:MainThread](2022-10-14 07:53:34,140) INFO - qlib.workflow - [recorder.py:372] - Fail to log the uncommitted code of $CWD when run `git diff`
'git' is not recognized as an internal or external command, operable program or batch file.
[7724:MainThread](2022-10-14 07:53:34,158) INFO - qlib.workflow - [recorder.py:372] - Fail to log the uncommitted code of $CWD when run `git status`
'git' is not recognized as an internal or external command, operable program or batch file.
[7724:MainThread](2022-10-14 07:53:34,164) INFO - qlib.workflow - [recorder.py:372] - Fail to log the uncommitted code of $CWD when run `git diff --cached`
Exception in thread Thread-1:
Traceback (most recent call last):
File "d:\ProgramData\Anaconda3\envs\py38\lib\site-packages\mlflow-1.29.0-py3.8.egg\mlflow\tracking\_tracking_service\client.py", line 301, in log_param
self.store.log_param(run_id, param)
File "d:\ProgramData\Anaconda3\envs\py38\lib\site-packages\mlflow-1.29.0-py3.8.egg\mlflow\store\tracking\file_store.py", line 887, in log_param
_validate_param(param.key, param.value)
File "d:\ProgramData\Anaconda3\envs\py38\lib\site-packages\mlflow-1.29.0-py3.8.egg\mlflow\utils\validation.py", line 148, in _validate_param
_validate_length_limit("Param value", MAX_PARAM_VAL_LENGTH, value)
File "d:\ProgramData\Anaconda3\envs\py38\lib\site-packages\mlflow-1.29.0-py3.8.egg\mlflow\utils\validation.py", line 269, in _validate_length_limit
raise MlflowException(
mlflow.exceptions.MlflowException: Param value '[{'class': 'SignalRecord', 'module_path': 'qlib.workflow.record_temp', 'kwargs': {'model': '<MODEL>', 'dataset': '<DATASET>'}}, {'class': 'SigAnaRecord', 'module_path': 'qlib.workflow.record_temp', 'kwargs': {'ana_long_short': False, 'ann_scaler': 25' had length 780, which exceeded length limit of 500
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "d:\ProgramData\Anaconda3\envs\py38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "d:\ProgramData\Anaconda3\envs\py38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "d:\ProgramData\Anaconda3\envs\py38\lib\site-packages\pyqlib-0.8.6.99-py3.8-win-amd64.egg\qlib\utils\paral.py", line 91, in run
data()
File "d:\ProgramData\Anaconda3\envs\py38\lib\site-packages\pyqlib-0.8.6.99-py3.8-win-amd64.egg\qlib\workflow\recorder.py", line 441, in log_params
self.client.log_param(self.id, name, data)
File "d:\ProgramData\Anaconda3\envs\py38\lib\site-packages\mlflow-1.29.0-py3.8.egg\mlflow\tracking\client.py", line 858, in log_param
self._tracking_client.log_param(run_id, key, value)
File "d:\ProgramData\Anaconda3\envs\py38\lib\site-packages\mlflow-1.29.0-py3.8.egg\mlflow\tracking\_tracking_service\client.py", line 305, in log_param
raise MlflowException(msg, INVALID_PARAMETER_VALUE)
mlflow.exceptions.MlflowException: Param value '[{'class': 'SignalRecord', 'module_path': 'qlib.workflow.record_temp', 'kwargs': {'model': '<MODEL>', 'dataset': '<DATASET>'}}, {'class': 'SigAnaRecord', 'module_path': 'qlib.workflow.record_temp', 'kwargs': {'ana_long_short': False, 'ann_scaler': 25' had length 780, which exceeded length limit of 500
The cause of this error is typically due to repeated calls
to an individual run_id event logging.
Incorrect Example:
---------------------------------------
with mlflow.start_run():
mlflow.log_param("depth", 3)
mlflow.log_param("depth", 5)
---------------------------------------
Which will throw an MlflowException for overwriting a
logged parameter.
Correct Example:
---------------------------------------
with mlflow.start_run():
with mlflow.start_run(nested=True):
mlflow.log_param("depth", 3)
with mlflow.start_run(nested=True):
mlflow.log_param("depth", 5)
---------------------------------------
Which will create a new nested run for each individual
model and prevent parameter key collisions within the
tracking store.'
[7724:MainThread](2022-10-14 07:53:35,515) INFO - qlib.GATs - [pytorch_gats_ts.py:81] - GATs pytorch version...
[7724:MainThread](2022-10-14 07:53:35,562) INFO - qlib.GATs - [pytorch_gats_ts.py:100] - GATs parameters setting:
d_feat : 158
hidden_size : 64
num_layers : 2
dropout : 0.7
n_epochs : 200
lr : 0.0001
metric : loss
early_stop : 10
optimizer : adam
loss_type : mse
base_model : LSTM
model_path : None
visible_GPU : 0
use_GPU : True
seed : None
[7724:MainThread](2022-10-14 07:53:35,562) INFO - qlib.GATs - [pytorch_gats_ts.py:146] - model:
GATModel(
(rnn): LSTM(158, 64, num_layers=2, batch_first=True, dropout=0.7)
(transformation): Linear(in_features=64, out_features=64, bias=True)
(fc): Linear(in_features=64, out_features=64, bias=True)
(fc_out): Linear(in_features=64, out_features=1, bias=True)
(leaky_relu): LeakyReLU(negative_slope=0.01)
(softmax): Softmax(dim=1)
)
Then the program re-runs again.
I am wondering how to fix it.
Thanks a lot.
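As a stopgap on the client side (a hypothetical helper, not part of qlib or mlflow), the offending parameter values could be clipped to MLflow's 500-character limit before they are logged:

```python
MLFLOW_MAX_PARAM_VAL_LENGTH = 500  # the limit mlflow's validation enforces

def clip_param_value(value, limit=MLFLOW_MAX_PARAM_VAL_LENGTH):
    """Stringify and truncate a param value so mlflow's length check passes."""
    text = str(value)
    if len(text) <= limit:
        return text
    return text[: limit - 3] + "..."

print(len(clip_param_value("x" * 780)))  # 500
```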
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
## Screenshot
<!-- A screenshot of the error message or anything shouldn't appear-->
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
- Qlib version:
- 0.8.6.99
- Python version:
- 3.8.5
- OS (`Windows`, `Linux`, `MacOS`):
- windows 10
- Commit number (optional, please provide it if you are using the dev version):
## Additional Notes
<!-- Add any other information about the problem here. -->
| closed | 2022-10-14T00:51:57Z | 2022-11-06T07:00:01Z | https://github.com/microsoft/qlib/issues/1317 | [
"bug"
] | nkchem09 | 4 |
ShishirPatil/gorilla | api | 92 | retrain results are poor | Great work thanks for sharing!!!
I used the fastchat code combined with the apibench/huggingface_train.json data and the llamav2-7b model to retrain and get a new model, but the model's inference results are very poor. The data uses the fastchat conversation format and the vicuna template, and the content is the value of the 'code' field in huggingface_train.json. Can you share the training parameters and details of your training? thanks!!! @ShishirPatil | open | 2023-08-10T08:31:22Z | 2023-08-10T08:31:22Z | https://github.com/ShishirPatil/gorilla/issues/92 | [] | fan-niu | 0 |
keras-team/keras | pytorch | 20,487 | Add parameter axis to tversky loss | Add parameter axis to tversky loss similar to dice loss.
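A minimal pure-Python sketch of what the `axis` parameter would control (reduction per sample instead of over the whole batch; the names and defaults here are illustrative, not Keras's actual signature):

```python
def tversky_index(y_true, y_pred, alpha=0.5, beta=0.5, axis=None, eps=1e-7):
    """Tversky index for batched binary masks given as lists of value lists.

    axis=None reduces over everything and returns one global score;
    axis=-1 reduces per sample and returns one score per batch element,
    mirroring what an `axis` argument does for the Dice loss.
    """
    def score(t, p):
        tp = sum(ti * pi for ti, pi in zip(t, p))
        fp = sum((1 - ti) * pi for ti, pi in zip(t, p))
        fn = sum(ti * (1 - pi) for ti, pi in zip(t, p))
        return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

    if axis is None:  # flatten the batch and reduce globally
        return score([v for row in y_true for v in row],
                     [v for row in y_pred for v in row])
    return [score(t, p) for t, p in zip(y_true, y_pred)]  # per-sample scores
```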
I can implement that if someone gives me the green light :) | closed | 2024-11-12T20:37:45Z | 2024-12-01T08:43:06Z | https://github.com/keras-team/keras/issues/20487 | [
"type:feature"
] | jakubxy08 | 4 |
statsmodels/statsmodels | data-science | 8,630 | Autocorrelation function of AR(p) returns nan when coefficients are high (but still < 1) of $X_{t-1}$ or too many time points | #### Describe the bug
I am not sure if this is a bug, I created some time series and I want to play around with it.
When I set the first coefficient above 0.8, 0.9 (depending on how many time points) I get nan values from tsa.stattools.acf or pacf.
The more timepoints there are the lower the the threshold.
I would love to understand what is happening as I am also learning about these topics at the moment :-).
#### Code Sample, a copy-pastable example if possible
```
import numpy as np

rng = np.random.default_rng()

def create_ar(p, coefs, time_points):
"""
creates an Ar(p) process with coefficients coefs
where coefs[i] is the multiplier of X[t-i]
i.e. X[t] = c[1]*X[t-1] + c[2]*X[t-2] + ... + W[t]
"""
X = rng.normal(size = p).tolist()
time = np.arange(p, time_points)
print(p)
W = np.random.normal(0, scale = 1.0, size = (time_points,))
for t in time:
Xt = W[t]
for i, c in enumerate(coefs):
Xt += c*X[t-(i+1)]
X.append(Xt)
return np.array(X)
```
then for example call
```
X = create_ar(2, [0.99, 0.3], 3000)
pacf(X)
```
but this is ok:
```
X = create_ar(2, [0.7, 0.3], 3000)
pacf(X)
```
or this:
```
X = create_ar(2, [0.99, 0.3], 3000)
pacf(X)
```
similarly this works:
```
X = create_ar(4, [0.5, 0.3, 0.2, 0.2], 3000)
pacf(X)
```
but this does not:
```
X = create_ar(4, [0.7, 0.3, 0.2, 0.2], 3000)
pacf(X)
```
but this is ok again:
```
X = create_ar(4, [0.7, 0.3, 0.2, 0.2], 1000)
pacf(X)
```
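A likely explanation (my guess, not something from the statsmodels docs): the failing coefficient sets define explosive AR processes. The characteristic polynomial 1 - c1*z - c2*z^2 then has a root on or inside the unit circle, so the simulated series grows without bound, overflows, and acf/pacf propagate the resulting nans. A stdlib check for the AR(2) case (assumes c2 != 0):

```python
import cmath

def ar2_char_root_moduli(c1, c2):
    """Moduli of the roots of 1 - c1*z - c2*z**2 for X[t] = c1*X[t-1] + c2*X[t-2] + W[t].

    The process is stationary iff both moduli are strictly greater than 1;
    a modulus <= 1 means the simulated series eventually blows up.
    """
    disc = cmath.sqrt(c1 * c1 + 4 * c2)  # roots of c2*z**2 + c1*z - 1 = 0
    r1 = (-c1 + disc) / (2 * c2)
    r2 = (-c1 - disc) / (2 * c2)
    return sorted((abs(r1), abs(r2)))

print(ar2_char_root_moduli(0.5, 0.3))   # both > 1 -> stationary, pacf is fine
print(ar2_char_root_moduli(0.99, 0.3))  # smallest ~0.81 < 1 -> explosive, nan
```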
Many thanks for reading and if possible an explanation as to why! | closed | 2023-01-22T11:41:45Z | 2023-04-14T15:04:27Z | https://github.com/statsmodels/statsmodels/issues/8630 | [] | aegonwolf | 1 |
sktime/pytorch-forecasting | pandas | 1,138 | index out of range in self when training a trained model on additional data | - PyTorch-Forecasting version: 0.10.3
- PyTorch version: 1.12.1
- Python version: 3.9.12
- Operating System: amazon linux 2
### Expected behavior
I executed the code from the Demand forecasting with the Temporal Fusion Transformer tutorial on a similar dataset of my own.
I save the trained model to file. Next, I update the dataset - that is, I add new dates (unseen in the original dataset) to it - and I want to load the trained model and train it on the updated dataset in order to get an updated model.
The updated dataset has the same features (columns) as the original dataset. The difference between them is the fact that the updated dataset has different number of rows - meaning new data points with new dates.
The original dataset has time_idx values e.g. [0, ..., 100] and the updated dataset has the same date range + additional dates, which are reflected in a new set of time_idx values e.g. [0, ..., 120]. I also tried to add max_prediction_length dates (instead of e.g. 20) and also max_encoder_length dates - but I get the same error.
I expected that since the updated dataset starts at the same time_idx, the loaded model will be able to be trained with new time_idx values, despite the coupling between the model and the original dataset.
### Actual behavior
I create the train and validation dataloaders in the same way I did for the original dataset. However, the trainer.fit function throws an `IndexError: index out of range in self` during the sanity check. The error comes from torch/nn/functional.py:2199
Does this mean that I always need to train the model from scratch when I have new dates to add to my dataset?
Is it possible to initialize a model from dataset with the new dataset and then copy the weights from a previously saved model?
Is it advised in such a situation to initialize the model in a different way? In the docs it repeatedly suggests to initialize models only from_dataset.
### Code to reproduce the problem
I create the dataloaders using the following code:
```
max_prediction_length = 6
max_encoder_length = 24
training_cutoff = data["time_idx"].max() - max_prediction_length
training = TimeSeriesDataSet(
data=data[lambda x: x.time_idx <= training_cutoff],
time_idx="time_idx",
target="volume",
group_ids=["agency", "sku"],
min_encoder_length=max_encoder_length // 2, # keep encoder length long (as it is in the validation set)
max_encoder_length=max_encoder_length,
min_prediction_length=1,
max_prediction_length=max_prediction_length,
static_categoricals=["agency", "sku"],
static_reals=["avg_population_2017", "avg_yearly_household_income_2017"],
time_varying_known_categoricals=["special_days", "month"],
variable_groups={"special_days": special_days}, # group of categorical variables can be treated as one variable
time_varying_known_reals=["time_idx", "price_regular", "discount_in_percent"],
time_varying_unknown_categoricals=[],
time_varying_unknown_reals=[
"volume",
"log_volume",
"industry_volume",
"soda_volume",
"avg_max_temp",
"avg_volume_by_agency",
"avg_volume_by_sku",
],
target_normalizer=GroupNormalizer(
groups=["agency", "sku"], transformation="softplus"
), # use softplus and normalize by group
add_relative_time_idx=True,
add_target_scales=True,
add_encoder_length=True,
)
# create validation set (predict=True) which means to predict the last max_prediction_length points in time
# for each series
validation = TimeSeriesDataSet.from_dataset(training, data, predict=True, stop_randomization=True)
# create dataloaders for model
batch_size = 128 # set this between 32 to 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=0)
```
When I update the dataset, the training_cutoff is changed because of the newly added dates.
Then I load the model that was trained on the original dataset:
`trained_tft = TemporalFusionTransformer.load_from_checkpoint(best_model_path)`
Create the trainer object:
```
trainer = pl.Trainer(
max_epochs=50,
accelerator="gpu" if os_device.type in ["gpu", "cuda"] else "cpu",
devices=n_gpus if os_device.type in ["gpu", "cuda"] else None,
gradient_clip_val=gradient_clip_val,
limit_train_batches=1.0, # comment in for training, running validation every N batches
callbacks=[lr_logger, early_stop_callback, ckpts_callback],
logger=logger,
log_every_n_steps=logging_steps,
)
```
and run the training:
```
trainer.fit(
trained_tft,
train_dataloaders=train_dataloader,
val_dataloaders=val_dataloader,
)
```
| open | 2022-09-21T14:03:21Z | 2023-02-16T20:19:08Z | https://github.com/sktime/pytorch-forecasting/issues/1138 | [] | id5h | 1 |
litestar-org/litestar | api | 3,722 | Bug: Pydantic `json_schema_extra` fields aren't all merged | ### Description
When defining schema overrides on a Pydantic model via `json_schema_extra`, not all of them are applied to the generated schema
### MCVE
```python
import pydantic
from litestar import Litestar, get
class Model(pydantic.BaseModel):
with_title: str = pydantic.Field(title="WITH_title")
with_extra_title: str = pydantic.Field(json_schema_extra={"title": "WITH_extra"})
@get("/example")
async def example_route() -> Model:
return Model(with_title="1", with_extra_title="2")
app = Litestar([example_route])
schema = app.openapi_schema.to_schema()
props = schema["components"]["schemas"]["Model"]["properties"]
assert props["with_title"] == {"title": "WITH_title", "type": "string"}
assert props["with_extra_title"] == {"title": "WITH_extra", "type": "string"}
```
### Litestar Version
2.11
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-09-08T08:26:43Z | 2025-03-20T15:54:54Z | https://github.com/litestar-org/litestar/issues/3722 | [
"Bug :bug:"
] | provinzkraut | 0 |
neuml/txtai | nlp | 108 | Add notebook for ONNX pipeline | Add notebook that shows how to export to ONNX and shows how an ONNX model can be run in other programming languages. | closed | 2021-08-27T22:36:51Z | 2021-08-27T22:41:31Z | https://github.com/neuml/txtai/issues/108 | [] | davidmezzetti | 0 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,200 | webrtcvad won't install | When I try to install webrtcvad with `pip install webrtcvad`, it fails with the following output:
Collecting webrtcvad
Using cached webrtcvad-2.0.10.tar.gz (66 kB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: webrtcvad
Building wheel for webrtcvad (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [9 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-39
copying webrtcvad.py -> build\lib.win-amd64-cpython-39
running build_ext
building '_webrtcvad' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for webrtcvad
Running setup.py clean for webrtcvad
Failed to build webrtcvad
Installing collected packages: webrtcvad
Running setup.py install for webrtcvad ... error
error: subprocess-exited-with-error
× Running setup.py install for webrtcvad did not run successfully.
│ exit code: 1
╰─> [11 lines of output]
running install
X:\anaconda2\envs\voice-clone\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-39
copying webrtcvad.py -> build\lib.win-amd64-cpython-39
running build_ext
building '_webrtcvad' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> webrtcvad
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
| open | 2023-04-21T22:17:25Z | 2024-08-06T16:59:22Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1200 | [] | zamonster | 6 |
Lightning-AI/pytorch-lightning | pytorch | 20,046 | ModelCheckpoint could not find key in returned metrics | ### Bug description
I have a model with several `ModelCheckpoint` callbacks. When loading it from a checkpoint using `trainer.fit(model, datamodule=dm, ckpt_path=training_ckpt_path)`, I get the following error:
```
lightning_fabric.utilities.exceptions.MisconfigurationException: `ModelCheckpoint(monitor='v_nll_unsupervised')` could not find the monitored key in the returned metrics: ['v_nll_supervised_encoder', 'v_nll_supervised_decoder', 'v_nll_supervised', 'v_nll', 'v_nll_supervised_encoder_clip', 'v_nll_supervised_decoder_clip', 'v_nll_supervised_clip', 'v_nll_clip', 'v_mse_supervised_encoder', 'v_mse_supervised_decoder', 'v_mse_encoder', 'v_mse_decoder', 'v_mse', 'v_mse_supervised_encoder_clip', 'v_mse_supervised_decoder_clip', 'v_mse_encoder_clip', 'v_mse_decoder_clip', 'v_mse_clip', 'v_baseline_l_mse_supervised', 'v_baseline_l_mse', 'v_baseline_prior_mse_supervised', 'v_baseline_prior_mse', 'v_mu_supervised_encoder', 'v_mu_supervised_decoder', 'v_mu_encoder', 'v_mu_decoder', 'v_sigma_supervised_encoder', 'v_sigma_supervised_decoder', 'v_sigma_encoder', 'v_sigma_decoder', 'hp_metric', 'epoch', 'step']. HINT: Did you call `log('v_nll_unsupervised', value)` in the `LightningModule`?
```
The issue seems to be that the `v_nll_unsupervised` metric was not logged with the `log(...)` method, so the `ModelCheckpoint` callback can't find it.
However, although I don't log this metric _at every validation step_, it is logged _at least once every validation epoch_. Since I use
`on_step=False, on_epoch=True` when logging metrics, **I would expect that the whole validation epoch would end before the `ModelCheckpoint` callback tries to access this metric**, in which case it would exist and no error would be raised.
Nonetheless, **it seems this metric is being accessed just after the first validation iteration**.
I thought that maybe this was due to the sanity checking process when training starts. However, setting `num_sanity_val_steps=0` or `num_sanity_val_steps=-1` in the `Trainer` did not solve anything.
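A schematic, dependency-free illustration of the failure mode (this is not Lightning's actual implementation, just the lookup logic the error message implies): the checkpoint callback looks its monitor key up in whatever metrics dict exists at the moment it runs, so a key that is only logged by a later step is not yet visible.

```python
# Minimal sketch of a checkpoint callback that monitors one key in a
# metrics dict and raises when the key is absent at the time it fires.

class MisconfigurationException(Exception):
    pass

class CheckpointCallback:
    def __init__(self, monitor):
        self.monitor = monitor

    def on_validation_end(self, metrics):
        # Look the monitored key up in whatever has been logged so far.
        if self.monitor not in metrics:
            raise MisconfigurationException(
                f"could not find the monitored key {self.monitor!r} "
                f"in the returned metrics: {sorted(metrics)}"
            )
        return metrics[self.monitor]

callback = CheckpointCallback(monitor="v_nll_unsupervised")

# If the callback fires after a step that did not log the metric, the
# lookup fails even though a later step in the epoch would have logged it.
try:
    callback.on_validation_end({"v_nll_supervised": 0.4})
    raised = False
except MisconfigurationException:
    raised = True
print(raised)  # True

# Once the key is present before the callback runs, it succeeds.
print(callback.on_validation_end(
    {"v_nll_supervised": 0.4, "v_nll_unsupervised": 0.7}))  # 0.7
```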
### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
_No response_
### Error messages and logs
```
lightning_fabric.utilities.exceptions.MisconfigurationException: `ModelCheckpoint(monitor='v_nll_unsupervised')` could not find the monitored key in the returned metrics: ['v_nll_supervised_encoder', 'v_nll_supervised_decoder', 'v_nll_supervised', 'v_nll', 'v_nll_supervised_encoder_clip', 'v_nll_supervised_decoder_clip', 'v_nll_supervised_clip', 'v_nll_clip', 'v_mse_supervised_encoder', 'v_mse_supervised_decoder', 'v_mse_encoder', 'v_mse_decoder', 'v_mse', 'v_mse_supervised_encoder_clip', 'v_mse_supervised_decoder_clip', 'v_mse_encoder_clip', 'v_mse_decoder_clip', 'v_mse_clip', 'v_baseline_l_mse_supervised', 'v_baseline_l_mse', 'v_baseline_prior_mse_supervised', 'v_baseline_prior_mse', 'v_mu_supervised_encoder', 'v_mu_supervised_decoder', 'v_mu_encoder', 'v_mu_decoder', 'v_sigma_supervised_encoder', 'v_sigma_supervised_decoder', 'v_sigma_encoder', 'v_sigma_decoder', 'hp_metric', 'epoch', 'step']. HINT: Did you call `log('v_nll_unsupervised', value)` in the `LightningModule`?
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- Tesla V100-PCIE-16GB
- Tesla V100-PCIE-16GB
- available: True
- version: 11.7
* Lightning:
- lightning-cloud: 0.5.37
- lightning-utilities: 0.8.0
- pytorch-lightning: 2.1.0
- pytorch-ranger: 0.1.1
- torch: 2.0.1
- torch-optimizer: 0.3.0
- torch-scatter: 2.1.1
- torchmetrics: 0.11.4
* Packages:
- absl-py: 1.4.0
- aiohttp: 3.8.4
- aiosignal: 1.3.1
- ansicolors: 1.1.8
- antlr4-python3-runtime: 4.7.2
- anyio: 3.7.1
- arrow: 1.2.3
- async-timeout: 4.0.2
- attrs: 23.1.0
- backoff: 2.2.1
- beautifulsoup4: 4.12.2
- blessed: 1.20.0
- boto: 2.49.0
- cachetools: 5.3.1
- certifi: 2023.5.7
- charset-normalizer: 3.1.0
- click: 8.1.3
- cmake: 3.26.4
- contourpy: 1.1.0
- croniter: 1.4.1
- cycler: 0.11.0
- dateutils: 0.6.12
- deepdiff: 6.3.1
- exceptiongroup: 1.1.2
- fastapi: 0.100.0
- filelock: 3.12.2
- fonttools: 4.40.0
- frozenlist: 1.3.3
- fsspec: 2023.6.0
- google-auth: 2.20.0
- google-auth-oauthlib: 1.0.0
- gprof2dot: 2022.7.29
- graphviz: 0.20.1
- grpcio: 1.51.3
- h11: 0.14.0
- idna: 3.4
- importlib-metadata: 6.7.0
- importlib-resources: 5.12.0
- inquirer: 3.1.3
- itsdangerous: 2.1.2
- jinja2: 3.1.2
- joblib: 1.2.0
- jsonschema: 4.17.3
- kiwisolver: 1.4.4
- lifted-pddl: 1.2.2
- lightning-cloud: 0.5.37
- lightning-utilities: 0.8.0
- lit: 16.0.6
- markdown: 3.4.3
- markdown-it-py: 3.0.0
- markupsafe: 2.1.3
- matplotlib: 3.7.1
- mdurl: 0.1.2
- mpmath: 1.3.0
- msgpack: 1.0.5
- multidict: 6.0.4
- multipledispatch: 0.6.0
- mypy: 1.3.0
- mypy-extensions: 1.0.0
- networkx: 3.1
- numpy: 1.25.0
- nvidia-cublas-cu11: 11.10.3.66
- nvidia-cuda-cupti-cu11: 11.7.101
- nvidia-cuda-nvrtc-cu11: 11.7.99
- nvidia-cuda-runtime-cu11: 11.7.99
- nvidia-cudnn-cu11: 8.5.0.96
- nvidia-cufft-cu11: 10.9.0.58
- nvidia-curand-cu11: 10.2.10.91
- nvidia-cusolver-cu11: 11.4.0.1
- nvidia-cusparse-cu11: 11.7.4.91
- nvidia-nccl-cu11: 2.14.3
- nvidia-nvtx-cu11: 11.7.91
- oauthlib: 3.2.2
- ordered-set: 4.1.0
- packaging: 23.1
- pandas: 2.0.2
- pddl-generators: 1.0
- pillow: 9.5.0
- pip: 23.1.2
- protobuf: 4.23.3
- psutil: 5.9.5
- pyarrow: 12.0.1
- pyasn1: 0.5.0
- pyasn1-modules: 0.3.0
- pydantic: 1.10.11
- pygments: 2.15.1
- pyjwt: 2.7.0
- pynvml: 11.5.0
- pyparsing: 3.1.0
- pyperplan: 2.1
- pyrsistent: 0.19.3
- python-dateutil: 2.8.2
- python-editor: 1.0.4
- python-multipart: 0.0.6
- pytorch-lightning: 2.1.0
- pytorch-ranger: 0.1.1
- pytz: 2023.3
- pyyaml: 6.0
- ray: 2.5.0
- readchar: 4.0.5
- requests: 2.31.0
- requests-oauthlib: 1.3.1
- rich: 13.4.2
- rsa: 4.9
- scikit-learn: 1.2.2
- scipy: 1.10.1
- seaborn: 0.12.2
- setuptools: 67.7.2
- six: 1.16.0
- snakeviz: 2.2.0
- sniffio: 1.3.0
- soupsieve: 2.4.1
- stable-trunc-gaussian: 1.3.9
- starlette: 0.27.0
- starsessions: 1.3.0
- strips-hgn: 1.0
- sympy: 1.12
- tarski: 0.8.2
- tensorboard: 2.16.2
- tensorboard-data-server: 0.7.1
- tensorboardx: 2.6.1
- threadpoolctl: 3.1.0
- tomli: 2.0.1
- torch: 2.0.1
- torch-optimizer: 0.3.0
- torch-scatter: 2.1.1
- torchmetrics: 0.11.4
- tornado: 6.3.3
- tqdm: 4.65.0
- traitlets: 5.9.0
- triton: 2.0.0
- typing-extensions: 4.6.3
- tzdata: 2023.3
- urllib3: 1.26.16
- uvicorn: 0.23.0
- wcwidth: 0.2.6
- websocket-client: 1.6.1
- websockets: 11.0.3
- werkzeug: 2.3.6
- wheel: 0.40.0
- yarl: 1.9.2
- z3: 0.2.0
- zipp: 3.15.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.9.16
- release: 5.4.0-174-generic
- version: #193-Ubuntu SMP Thu Mar 7 14:29:28 UTC 2024
</details>
### More info
_No response_
cc @carmocca @awaelchli | open | 2024-07-04T14:19:45Z | 2024-07-25T14:06:42Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20046 | [
"bug",
"help wanted",
"callback: model checkpoint",
"ver: 2.1.x"
] | TheAeryan | 1 |
SYSTRAN/faster-whisper | deep-learning | 159 | Can faster-whisper return the left/right mono channel? | Since faster-whisper seems to always separate segments when they are not on the same channel, and can also decode stereo into mono channels, I'm wondering if it would be easy for faster-whisper to return which channel each segment belongs to? | closed | 2023-04-16T21:16:08Z | 2023-05-09T00:50:32Z | https://github.com/SYSTRAN/faster-whisper/issues/159 | [] | junchen6072 | 2
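One workaround for the question above — splitting the stereo stream yourself and transcribing each channel separately, tagging results with the channel — can be sketched without any audio libraries. With real audio you would load the channels with something like soundfile or ffmpeg and call `WhisperModel.transcribe` once per channel; the interleaved-sample split and the hypothetical `transcribe` callable below are just illustrations.

```python
# Sketch: recover per-channel segments by transcribing each channel
# separately instead of letting the library downmix to mono.

def split_interleaved_stereo(samples):
    """Split an interleaved [L0, R0, L1, R1, ...] list into two channels."""
    left = samples[0::2]
    right = samples[1::2]
    return left, right

def transcribe_per_channel(samples, transcribe):
    """Run a transcription callable on each channel, tagging its segments."""
    tagged = []
    for name, channel in zip(("left", "right"),
                             split_interleaved_stereo(samples)):
        for segment in transcribe(channel):
            tagged.append({"channel": name, "segment": segment})
    return tagged

# Stand-in for a real model call, so the sketch runs without audio files.
def fake_transcribe(channel):
    return [f"{len(channel)} samples"]

interleaved = [1, 10, 2, 20, 3, 30]
print(split_interleaved_stereo(interleaved))  # ([1, 2, 3], [10, 20, 30])
print(transcribe_per_channel(interleaved, fake_transcribe))
```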
napari/napari | numpy | 6,990 | Add tests for scalebar visibility | ## 🧰 Task
I tracked the bugs in #6959 and #6961 to #5432. It seems this part of the code is untested even though coverage reported it as covered. So it would be good to add tests for it so our scale bars don't disappear again. 😅
"task"
] | jni | 6 |
pyppeteer/pyppeteer | automation | 238 | Browser.pages() being inconsistent when new pages created | When a new tab is created and `browser.pages()` is called very shortly afterwards, the page object returned by `Browser.newPage()` is not present in the list of pages. The `newPage` function returns a different `Page` object than the one in `Browser.pages()`, but the list does contain a different `Page` object pointing to the same page.
## Code to Reproduce
```python
import asyncio

import pyppeteer


async def pages_getter(browser):
    await asyncio.sleep(0.1)
    return await browser.pages()


async def test(browser):
    return await asyncio.gather(browser.newPage(), pages_getter(browser))


async def main_test(browser):
    new_page, all_pages = await test(browser)
    print(new_page in all_pages)
    print(new_page, all_pages)
    # On the second run, these are different objects sharing the same _id
    print(new_page.mainFrame._id == all_pages[1].mainFrame._id)
    await new_page.close()


async def main():
    browser = await pyppeteer.launch(headless=False)
    # Happens as expected on first usage
    await main_test(browser)
    # Fails the second time
    await main_test(browser)


asyncio.run(main())
```
## Expected Behavior
The second run of the test should show the same output as the first run, i.e.:

True # the list should contain new_page
... [..., ...]
True

The new page object returned by `newPage()` should be in `browser.pages()`.
## Current Behavior
The new page object is not in `browser.pages()`. The list of pages contains a different object that references the same page and has the same id. The expression `new_page in all_pages` should evaluate to True but instead evaluates to False.

*Also*, if the variable `browser` is global, then even the first run fails.
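Until the registry race is fixed, one hedged workaround is to match pages by their underlying target id rather than by object identity, polling `browser.pages()` until the freshly created page shows up. In real pyppeteer the comparable identifier would be something like `page.target._targetId` (a private attribute, so an assumption); the `FakeBrowser` stub below stands in for a real browser so the sketch runs without Chromium.

```python
import asyncio

class FakePage:
    def __init__(self, target_id):
        self.target_id = target_id

class FakeBrowser:
    def __init__(self):
        self._pages = []

    async def newPage(self):
        page = FakePage("T1")
        # Simulate the race: the browser's page registry receives a
        # *different* wrapper object for the same target a bit later.
        asyncio.get_running_loop().call_later(
            0.05, self._pages.append, FakePage("T1"))
        return page

    async def pages(self):
        return list(self._pages)

async def find_page(browser, page, timeout=1.0, interval=0.01):
    """Poll browser.pages() until an entry with the same target id appears."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while loop.time() < deadline:
        for candidate in await browser.pages():
            if candidate.target_id == page.target_id:
                return candidate
        await asyncio.sleep(interval)
    raise TimeoutError("page never appeared in browser.pages()")

async def main():
    browser = FakeBrowser()
    new_page = await browser.newPage()
    match = await find_page(browser, new_page)
    print(match is new_page)                      # False: different wrapper
    print(match.target_id == new_page.target_id)  # True: same target

asyncio.run(main())
```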
## Environment Details
- *OS:* Windows 64 bit
- *Python version:* 3.9.1
- *Pyppeteer version:* 0.2.5 | open | 2021-03-17T16:10:07Z | 2023-04-09T11:23:02Z | https://github.com/pyppeteer/pyppeteer/issues/238 | [] | bytefoot | 1 |