repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
Evil0ctal/Douyin_TikTok_Download_API | api | 48 | TikTok share link fails to parse | https://www.tiktok.com/@official_kotaro2004/video/7110458501767367938?is_from_webapp=1&sender_device=pc
Most links can be parsed; I have found a small number that cannot.

| closed | 2022-06-30T14:31:43Z | 2022-07-01T08:34:53Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/48 | [] | OhGui | 1 |
tox-dev/tox | automation | 2,753 | Document which environment variables are passed through by default | Documentation for tox3 had https://tox.wiki/en/3.27.0/config.html#conf-passenv
Documentation for tox4 does not show the list any more
Also see https://github.com/tox-dev/tox/blob/6b1cc141aeb9501aa23774056fbc7179b719e200/src/tox/tox_env/api.py#L179-L204 | closed | 2022-12-19T15:24:47Z | 2024-07-14T07:16:55Z | https://github.com/tox-dev/tox/issues/2753 | [
"area:documentation",
"level:easy",
"help:wanted"
] | jugmac00 | 3 |
tflearn/tflearn | tensorflow | 942 | tflearn stuck while using tensorflow.map_fn() | Here's my code:
<pre>
...
network = fully_connected(network, 512, activation='relu')
#network = tf.map_fn(lambda x:tf.abs(x), network)
network =....
</pre>
Uncommenting the second line causes the training to get stuck forever without any error thrown. The `tf.abs` is just an example. I've tried a lot of functions, including one that does basically nothing but return the input, but none of them work.
Please help me! | open | 2017-10-26T12:11:18Z | 2017-11-20T10:12:08Z | https://github.com/tflearn/tflearn/issues/942 | [] | D0048 | 4 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 801 | [Still present in latest version] AttributeError: 'FetchNode' object has no attribute 'update_state' | **Describe the bug**
I can't even run this example: https://github.com/ScrapeGraphAI/Scrapegraph-ai/blob/main/examples/openai/scrape_plain_text_openai.py
**To Reproduce**
Steps to reproduce the behavior:
1. Clone the repo and try running the example or any text. | closed | 2024-11-15T10:20:06Z | 2025-01-08T03:33:20Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/801 | [] | aleenprd | 9 |
KaiyangZhou/deep-person-reid | computer-vision | 225 | Help! I want to load weights from the model zoo | I am trying to load weights from the model zoo, specifically for OSNet x0.25, but the model is not available there. When I try to load the weights onto the model using the tool's function, it gives me this output:
Successfully loaded pretrained weights from "./osnet_x0_25_msmt17_combineall_256x128_amsgrad_ep150_stp60_lr0.0015_b64_fb10_softmax_labelsmooth_flip_jitter.pth"
** The following layers are discarded due to unmatched keys or layer size: ['classifier.weight', 'classifier.bias']
Is this an error, and how can I fix it?
Help will be appreciated. | closed | 2019-09-08T22:10:08Z | 2019-10-22T21:31:34Z | https://github.com/KaiyangZhou/deep-person-reid/issues/225 | [] | prathameshnetake | 2 |
littlecodersh/ItChat | api | 586 | How can I get the account information of everyone in a specific group chat? | Before submitting, please make sure you have checked the following!
- [x] You can log in to the WeChat account in a browser, but cannot log in with `itchat`
- [x] I have read the [documentation][document] and followed the instructions there
- [x] Your problem has not been reported in [issues][issues]; otherwise, please report it under the existing issue
- [x] This question is really about `itchat`, not another project.
- [x] If your question concerns stability, consider trying the [itchatmp][itchatmp] project, which has extremely low requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
[paste the full log here]
```
Your itchat version: `[fill in the version number here]`. (Can be obtained via `python -c "import itchat;print(itchat.__version__)"`)
Other content or a more detailed description of the problem can be added below:
> [your content]
[document]: http://itchat.readthedocs.io/zh/latest/
[issues]: https://github.com/littlecodersh/itchat/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
| closed | 2018-01-29T15:00:37Z | 2018-02-28T03:06:19Z | https://github.com/littlecodersh/ItChat/issues/586 | [
"question"
] | ghost | 1 |
rougier/numpy-100 | numpy | 55 | The answer to question 45 is not exactly correct. | For example:
`z = np.array([1, 2, 3, 4, 5, 5], dtype=int)`
`z[z.argmax()] = 0`
`print(z)`
will output:
`[1 2 3 4 0 5]`
But for this question, the correct answer should be:
`[1 2 3 4 0 0]` | open | 2017-12-27T00:39:26Z | 2020-09-15T05:37:11Z | https://github.com/rougier/numpy-100/issues/55 | [] | i5cnc | 5 |
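For illustration, a sketch of an answer that matches the expected output in this report — a boolean mask zeroes every occurrence of the maximum, whereas `argmax` only returns the index of the first one:

```python
import numpy as np

# argmax returns only the FIRST index of the maximum, so duplicate
# maxima survive the assignment.
z = np.array([1, 2, 3, 4, 5, 5], dtype=int)

# A boolean mask hits every occurrence of the maximum instead.
z[z == z.max()] = 0
print(z)  # → [1 2 3 4 0 0]
```

The mask-based form also degrades gracefully when the maximum is unique, so it can replace the `argmax` answer unconditionally.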
dpgaspar/Flask-AppBuilder | flask | 2,058 | ModuleNotFoundError: No module named 'config' | ### Environment
Flask-Appbuilder version:
3.4.5
pip freeze output:
apispec==3.3.2
attrs==22.2.0
Babel==2.11.0
click==7.1.2
colorama==0.4.5
dataclasses==0.8
defusedxml==0.7.1
dnspython==2.2.1
email-validator==1.3.1
Flask==1.1.4
Flask-AppBuilder==3.4.5
Flask-Babel==2.0.0
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.3.0
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
greenlet==2.0.2
idna==3.4
importlib-metadata==4.8.3
itsdangerous==1.1.0
Jinja2==2.11.3
jsonschema==3.2.0
MarkupSafe==2.0.1
marshmallow==3.14.1
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.26.1
prison==0.2.1
PyJWT==1.7.1
pyrsistent==0.18.0
python-dateutil==2.8.2
python3-openid==3.2.0
pytz==2023.3
PyYAML==6.0
six==1.16.0
SQLAlchemy==1.4.48
SQLAlchemy-Utils==0.41.1
typing_extensions==4.1.1
Werkzeug==1.0.1
WTForms==2.3.3
zipp==3.6.0
### Describe the expected results
It should create the admin
```python
(venv) C:\Users\coope\PycharmProjects\flaskProject\first_app>flask fab create-admin
Username [admin]:
User first name [admin]:
User last name [user]:
Email [admin@fab.org]:
Password:
Repeat for confirmation:
Usage: flask fab create-admin [OPTIONS]
Error: While importing "app", an ImportError was raised:
```
### Describe the actual results
There is no module named 'config'
### Steps to reproduce
Follow the [documentation](https://flask-appbuilder.readthedocs.io/en/latest/installation.html)
```pytb
venv) C:\Users\coope\PycharmProjects\flaskProject>flask fab create-app
Your new app name: first_app
Your engine type, SQLAlchemy or MongoEngine (SQLAlchemy, MongoEngine) [SQLAlchemy]:
Downloaded the skeleton app, good coding!
(venv) C:\Users\coope\PycharmProjects\flaskProject>cd first_app
(venv) C:\Users\coope\PycharmProjects\flaskProject\first_app>set FLASK_APP=app
(venv) C:\Users\coope\PycharmProjects\flaskProject\first_app>flask fab create-admin
Username [admin]:
User first name [admin]:
User last name [user]:
Email [admin@fab.org]:
Password:
Repeat for confirmation:
Usage: flask fab create-admin [OPTIONS]
Error: While importing "app", an ImportError was raised:
Traceback (most recent call last):
File "C:\Users\coope\PycharmProjects\flaskProject\venv\lib\site-packages\werkzeug\utils.py", line 568, in import_string
__import__(import_name)
ModuleNotFoundError: No module named 'config'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\coope\PycharmProjects\flaskProject\venv\lib\site-packages\flask\cli.py", line 240, in locate_app
__import__(module_name)
File "C:\Users\coope\PycharmProjects\flaskProject\first_app\app\__init__.py", line 14, in <module>
app.config.from_object("config")
File "C:\Users\coope\PycharmProjects\flaskProject\venv\lib\site-packages\flask\config.py", line 174, in from_object
obj = import_string(obj)
File "C:\Users\coope\PycharmProjects\flaskProject\venv\lib\site-packages\werkzeug\utils.py", line 585, in import_string
ImportStringError, ImportStringError(import_name, e), sys.exc_info()[2]
File "C:\Users\coope\PycharmProjects\flaskProject\venv\lib\site-packages\werkzeug\_compat.py", line 147, in reraise
raise value.with_traceback(tb)
File "C:\Users\coope\PycharmProjects\flaskProject\venv\lib\site-packages\werkzeug\utils.py", line 568, in import_string
__import__(import_name)
werkzeug.utils.ImportStringError: import_string() failed for 'config'. Possible reasons are:
- missing __init__.py in a package;
- package or module path not included in sys.path;
- duplicated package or module name taking precedence in sys.path;
- missing module, class, function or variable;
Debugged import:
- 'config' not found.
Original exception:
ModuleNotFoundError: No module named 'config'
``` | closed | 2023-06-12T13:16:21Z | 2023-06-13T15:34:33Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2058 | [] | coopzr | 3 |
pytest-dev/pytest-django | pytest | 595 | django_db_setup runs inside transactional_db transaction | If the first db test that gets run happens to have `transactional_db` (or `django_db(transaction=True)`) then all subsequent db tests will fail.
This appears to be because the db setup (migrations etc.) are all rolled-back and will not run again because `django_db_setup` is session-scoped. | open | 2018-05-11T10:39:28Z | 2021-12-22T00:11:11Z | https://github.com/pytest-dev/pytest-django/issues/595 | [] | OrangeDog | 16 |
plotly/dash | flask | 2,592 | Be compatible with Flask 2.3 | dash dependency of end of support **flask** branch
```Flask>=1.0.4,<2.3.0```
since https://github.com/plotly/dash/commit/7bd5b7ebec72ffbfca85a57d0d4c19b595371a5a
The 2.3.x branch is now the supported fix branch, the 2.2.x branch will become a tag marking the end of support for that branch.
https://github.com/pallets/flask/releases
| closed | 2023-07-07T22:57:24Z | 2023-10-26T21:01:54Z | https://github.com/plotly/dash/issues/2592 | [] | VelizarVESSELINOV | 1 |
pydata/xarray | numpy | 9,424 | Numpy 2.0 (and 2.1): np.linspace(DataArray) does not work any more | ### What happened?
I'm going through our test suite trying to unblock numpy 2.
We likely have many strange uses of xarray; I can work around them, but I figured I would report the issues with an MCVE, if that is OK with you all:
```
# 2.0 or 2.1 cause the issue
mamba create --name xr netcdf4 xarray numpy=2.1 python=3.11 --channel conda-forge --override-channels
```
```
mamba activate xr
# Choose your version of numpy here
mamba install numpy=2.1 --yes && python -c "import numpy as np; import xarray as xr; np.linspace(0, xr.DataArray(np.full(3, fill_value=100, dtype='int8'))[0])"
mamba install numpy=2.0 --yes && python -c "import numpy as np; import xarray as xr; np.linspace(0, xr.DataArray(np.full(3, fill_value=100, dtype='int8'))[0])"
# Works below
mamba install numpy=1.26 --yes && python -c "import numpy as np; import xarray as xr; np.linspace(0, xr.DataArray(np.full(3, fill_value=100, dtype='int8'))[0])"
```
With numpy 2.0 and 2.1 the following happens
```python
File "<string>", line 1, in <module>
File "/home/mark/mambaforge/envs/xr/lib/python3.11/site-packages/numpy/_core/function_base.py", line 189, in linspace
y = conv.wrap(y.astype(dtype, copy=False))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mark/mambaforge/envs/xr/lib/python3.11/site-packages/xarray/core/dataarray.py", line 4704, in __array_wrap__
new_var = self.variable.__array_wrap__(obj, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mark/mambaforge/envs/xr/lib/python3.11/site-packages/xarray/core/variable.py", line 2295, in __array_wrap__
return Variable(self.dims, obj)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mark/mambaforge/envs/xr/lib/python3.11/site-packages/xarray/core/variable.py", line 398, in __init__
super().__init__(
File "/home/mark/mambaforge/envs/xr/lib/python3.11/site-packages/xarray/namedarray/core.py", line 264, in __init__
self._dims = self._parse_dimensions(dims)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mark/mambaforge/envs/xr/lib/python3.11/site-packages/xarray/namedarray/core.py", line 508, in _parse_dimensions
raise ValueError(
ValueError: dimensions () must have the same length as the number of data dimensions, ndim=1
```
### What did you expect to happen?
for it to work
### Minimal Complete Verifiable Example
```Python
as above
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
```
# packages in environment at /home/mark/mambaforge/envs/xr:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
blosc 1.21.6 hef167b5_0 conda-forge
bzip2 1.0.8 h4bc722e_7 conda-forge
c-ares 1.33.1 heb4867d_0 conda-forge
ca-certificates 2024.8.30 hbcca054_0 conda-forge
certifi 2024.8.30 pyhd8ed1ab_0 conda-forge
cftime 1.6.4 py311h18e1886_0 conda-forge
hdf4 4.2.15 h2a13503_7 conda-forge
hdf5 1.14.3 nompi_hdf9ad27_105 conda-forge
icu 75.1 he02047a_0 conda-forge
keyutils 1.6.1 h166bdaf_0 conda-forge
krb5 1.21.3 h659f571_0 conda-forge
ld_impl_linux-64 2.40 hf3520f5_7 conda-forge
libaec 1.1.3 h59595ed_0 conda-forge
libblas 3.9.0 23_linux64_openblas conda-forge
libcblas 3.9.0 23_linux64_openblas conda-forge
libcurl 8.9.1 hdb1bdb2_0 conda-forge
libedit 3.1.20191231 he28a2e2_2 conda-forge
libev 4.33 hd590300_2 conda-forge
libexpat 2.6.2 h59595ed_0 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc 14.1.0 h77fa898_1 conda-forge
libgcc-ng 14.1.0 h69a702a_1 conda-forge
libgfortran 14.1.0 h69a702a_1 conda-forge
libgfortran-ng 14.1.0 h69a702a_1 conda-forge
libgfortran5 14.1.0 hc5f4f2c_1 conda-forge
libgomp 14.1.0 h77fa898_1 conda-forge
libiconv 1.17 hd590300_2 conda-forge
libjpeg-turbo 3.0.0 hd590300_1 conda-forge
liblapack 3.9.0 23_linux64_openblas conda-forge
libnetcdf 4.9.2 nompi_h135f659_114 conda-forge
libnghttp2 1.58.0 h47da74e_1 conda-forge
libnsl 2.0.1 hd590300_0 conda-forge
libopenblas 0.3.27 pthreads_hac2b453_1 conda-forge
libsqlite 3.46.1 hadc24fc_0 conda-forge
libssh2 1.11.0 h0841786_0 conda-forge
libstdcxx 14.1.0 hc0a3c3a_1 conda-forge
libstdcxx-ng 14.1.0 h4852527_1 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libxcrypt 4.4.36 hd590300_1 conda-forge
libxml2 2.12.7 he7c6b58_4 conda-forge
libzip 1.10.1 h2629f0a_3 conda-forge
libzlib 1.3.1 h4ab18f5_1 conda-forge
lz4-c 1.9.4 hcb278e6_0 conda-forge
ncurses 6.5 he02047a_1 conda-forge
netcdf4 1.7.1 nompi_py311h25b3b55_101 conda-forge
numpy 2.0.2 py311h71ddf71_0 conda-forge
openssl 3.3.1 hb9d3cd8_3 conda-forge
packaging 24.1 pyhd8ed1ab_0 conda-forge
pandas 2.2.2 py311h14de704_1 conda-forge
pip 24.2 pyh8b19718_1 conda-forge
python 3.11.9 hb806964_0_cpython conda-forge
python-dateutil 2.9.0 pyhd8ed1ab_0 conda-forge
python-tzdata 2024.1 pyhd8ed1ab_0 conda-forge
python_abi 3.11 5_cp311 conda-forge
pytz 2024.1 pyhd8ed1ab_0 conda-forge
readline 8.2 h8228510_1 conda-forge
setuptools 73.0.1 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
snappy 1.2.1 ha2e4443_0 conda-forge
tk 8.6.13 noxft_h4845f30_101 conda-forge
tzdata 2024a h8827d51_1 conda-forge
wheel 0.44.0 pyhd8ed1ab_0 conda-forge
xarray 2024.7.0 pyhd8ed1ab_0 conda-forge
xz 5.2.6 h166bdaf_0 conda-forge
zlib 1.3.1 h4ab18f5_1 conda-forge
zstd 1.5.6 ha6fb4c9_0 conda-forge
```
### Environment
<details>
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 6.8.0-41-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.7.0
pandas: 2.2.2
numpy: 2.0.2
scipy: None
netCDF4: 1.7.1
pydap: None
h5netcdf: None
h5py: None
zarr: None
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 73.0.1
pip: 24.2
conda: None
pytest: None
mypy: None
IPython: None
sphinx: None
```
</details>
| closed | 2024-09-03T14:48:51Z | 2024-09-03T15:40:47Z | https://github.com/pydata/xarray/issues/9424 | [
"bug",
"needs triage"
] | hmaarrfk | 3 |
litestar-org/litestar | asyncio | 3,650 | Bug(OpenAPI): Schema generation doesn't resolve signature types for "nested" objects | ### Description
OpenAPI schema generation fails if it encounters a "nested" object with a type which is not available at runtime but could be resolved using `signature types/namespaces`.
### URL to code causing the issue
_No response_
### MCVE
main.py
```py
from litestar import Litestar, get
from external_module import Item
from schemas import ItemContainer
@get(sync_to_thread=False, signature_namespace={"Item": Item})
def handler() -> ItemContainer:
return ItemContainer(items=[])
app = Litestar(
route_handlers=[handler],
signature_types=(Item,),
debug=True,
)
```
schemas.py
```py
from __future__ import annotations
from dataclasses import dataclass
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from external_module import Item
@dataclass
class ItemContainer:
items: list[Item]
```
external_module.py
```py
from dataclasses import dataclass
@dataclass
class Item:
foo: str = "bar"
```
### Steps to reproduce
```bash
1. Run the above code
2. See error at `http://127.0.0.1:8000/schema` (`name 'Item' is not defined`)
3. `http://127.0.0.1:8000` still works.
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.10.0
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-08-02T11:26:08Z | 2025-03-20T15:54:51Z | https://github.com/litestar-org/litestar/issues/3650 | [
"Bug :bug:",
"OpenAPI"
] | floxay | 0 |
ranaroussi/yfinance | pandas | 2,100 | 0.2.47 Refactor multi.py to return single-level index when a single ticker | **Describe bug**
Refactoring multi.py to return single-level indexes when using a single ticker is breaking a lot of existing code in several applications. Was this refactor necessary?
**Debug log**
No response
**yfinance version**
0.2.47
**Python version**
3.10
**Operating system**
kubuntu 22.04
| closed | 2024-10-25T15:58:17Z | 2024-10-25T18:05:15Z | https://github.com/ranaroussi/yfinance/issues/2100 | [] | BiggRanger | 2 |
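As an application-side workaround (an assumption on my part, not an official yfinance API), downstream code that expects the old two-level columns can re-wrap a single-ticker frame with plain pandas; the ticker name `"SPY"` and the data below are purely illustrative:

```python
import pandas as pd

# Stand-in for a single-ticker frame as returned by newer yfinance
# versions: a single column level (illustrative data, not real quotes).
df = pd.DataFrame({"Close": [1.0, 2.0], "Volume": [10, 20]})

# Re-attach a ticker level so code written for the multi-ticker layout
# keeps working; swap levels if your code expects field-first ordering.
wrapped = pd.concat({"SPY": df}, axis=1)
print(wrapped.columns.nlevels)  # → 2
```

`pd.concat` with a dict on `axis=1` uses the dict keys as the new outer column level, which is the cheapest way to restore the old shape.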
flasgger/flasgger | rest-api | 552 | older flasgger required package incompatibility | Hi team, thanks for the great package!
We came across an issue where flasgger 0.9.5 pulls in a Flask/Jinja version that in turn imports a version of markupsafe with a breaking change (`soft_unicode` was removed and replaced by `soft_str`), which causes a hard failure. A current workaround is manually installing an older version of markupsafe, but we wanted to suggest pinning specific Flask/Jinja2 versions in the requirements, to avoid the package breaking for apps using older versions.
Thanks in advance for your help! | open | 2022-11-23T12:21:38Z | 2022-11-23T12:21:38Z | https://github.com/flasgger/flasgger/issues/552 | [] | adamb910 | 0 |
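Until version pins land, one possible stop-gap (a sketch on my part, not an official fix) is a small compatibility shim executed before Flask/Jinja2 import markupsafe, restoring the removed alias:

```python
# Stop-gap shim: newer markupsafe removed soft_unicode in favour of
# soft_str; alias it back so older Jinja2 releases keep importing.
try:
    import markupsafe
    if not hasattr(markupsafe, "soft_unicode"):
        markupsafe.soft_unicode = markupsafe.soft_str
except ImportError:
    # markupsafe not installed at all; nothing to patch.
    markupsafe = None
```

This must run before anything imports Jinja2, e.g. at the very top of the application entry point.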
widgetti/solara | fastapi | 572 | footgun: using a reactive var, not its value as a dependency | ```python
var = solara.reactive(1)
...
solara.use_effect(..., dependencies=[var])
```
A user probably intended to use `var.value`, since the effect should trigger when the value changes. I think we should warn when this happens, and have an opt out for this warning. | open | 2024-03-26T11:41:41Z | 2024-03-27T09:46:54Z | https://github.com/widgetti/solara/issues/572 | [
"footgun"
] | maartenbreddels | 0 |
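The footgun can be illustrated without solara at all: dependencies are compared between renders, and a reactive *object* always compares equal to itself even though its *value* changes. A minimal plain-Python sketch — the `Reactive` class and `deps_changed` helper are stand-ins, not solara's implementation:

```python
class Reactive:
    """Stand-in for a reactive container (not solara's real class)."""
    def __init__(self, value):
        self.value = value

def deps_changed(old, new):
    # An effect re-runs only when the dependency lists compare unequal.
    return old != new

var = Reactive(1)

old_deps = [var]          # footgun: the object itself, identity is stable
var.value = 2             # the value changes...
print(deps_changed(old_deps, [var]))        # → False (effect never re-runs)

old_vals = [1]            # intended: the value
print(deps_changed(old_vals, [var.value]))  # → True (effect re-runs)
```

This is why the proposed warning makes sense: passing the container silently freezes the effect.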
jupyter/nbgrader | jupyter | 1,039 | Language support | I would like to add language support for Octave kernel.
I would also like to make it easier to add more languages, and to avoid having the error-checking code in two places as it is now, in `validator._extract_error` and `utils.determine_grade`.
You can see the code in this PR I just made that describes the changes better: #1038 | open | 2018-10-30T12:41:00Z | 2022-12-02T14:46:20Z | https://github.com/jupyter/nbgrader/issues/1039 | [
"enhancement"
] | sigurdurb | 3 |
ivy-llc/ivy | tensorflow | 28,517 | Fix Frontend Failing Test: torch - math.paddle.heaviside | To-do List: https://github.com/unifyai/ivy/issues/27498 | closed | 2024-03-09T14:58:00Z | 2024-03-14T21:29:22Z | https://github.com/ivy-llc/ivy/issues/28517 | [
"Sub Task"
] | ZJay07 | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 939 | Cannot find the three pretrained models |

I intended to download these models, but found nothing.
encoder\saved_models\pretrained.pt
synthesizer\saved_models\pretrained\pretrained.pt
vocoder\saved_models\pretrained\pretrained.pt
Does anyone know why? Or can somebody just share them with me?
Thank you. | closed | 2021-12-06T18:32:20Z | 2021-12-28T12:34:19Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/939 | [] | ZSTUMathSciLab | 1 |
kevlened/pytest-parallel | pytest | 42 | cannot work with pytest‘s fixture | ```
python 3.7.4
pytest 5.2.0
pytest-parallel 0.2.2
gevent 1.4.0
```
Fixture scopes (for example `session` and `module`) do not work.
| open | 2019-10-21T12:38:24Z | 2019-11-18T10:11:40Z | https://github.com/kevlened/pytest-parallel/issues/42 | [] | bglmmz | 2 |
gunthercox/ChatterBot | machine-learning | 1,652 | ModuleNotFoundError | After installing chatterbot, this error occurs:
C:\Users\Nabeel\chatbot> py chat.py
Traceback (most recent call last):
File "chat.py", line 1, in <module>
from chatterbot import ChatBot
ModuleNotFoundError: No module named 'chatterbot' | closed | 2019-03-04T17:41:45Z | 2020-01-17T16:16:22Z | https://github.com/gunthercox/ChatterBot/issues/1652 | [] | nabeelahmedsabri | 4 |
mirumee/ariadne | graphql | 224 | Update GraphQL Core Next & Starlette | Issue for me to remember to update our core dependencies to latest versions before release. | closed | 2019-08-01T15:41:48Z | 2019-08-12T12:24:54Z | https://github.com/mirumee/ariadne/issues/224 | [
"enhancement"
] | rafalp | 0 |
supabase/supabase-py | flask | 20 | AttributeError: 'RequestBuilder' object has no attribute 'on' | # Bug report
## Describe the bug
The Python client doesn't support realtime subscription and fails with "AttributeError: 'RequestBuilder' object has no attribute 'on'".
(Update 17/01/22: This was an example originally forming part of the README)
## To Reproduce
Using the following example from the original readme:
```python
subscription = supabase.table("countries").on("*", lambda x: print(x)).subscribe()
```
## Expected behavior
For each postgres db change to be printed.
## System information
- MacOS 11.3 Beta
- Version of supabase-js: [0.0.2]
- Version of Node.js: [N/A - Using hosted]
## Additional context
Add any other context about the problem here.
| closed | 2021-04-08T13:42:41Z | 2024-06-25T08:12:55Z | https://github.com/supabase/supabase-py/issues/20 | [
"bug",
"realtime",
"Stale"
] | iwootten | 10 |
ymcui/Chinese-BERT-wwm | nlp | 210 | Confusion with the config.json in RoBerta-based Models | closed | 2022-01-08T06:57:27Z | 2022-01-17T04:25:59Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/210 | [
"stale"
] | qhd1996 | 2 | |
mwaskom/seaborn | matplotlib | 2,986 | swarmplot change point maximum displacement from center | Hi,
I am trying to plot a `violinplot` + `swarmplot` combination with multiple hues and many points, and am struggling to get optimal clarity with as few overlapping points as possible. I tried both `swarmplot` and `stripplot`, with and without `dodge`.
Since I have multiple categories on the y-axis, I have also played around with the figure size, setting it to large height values. It helps to improve the clarity of the violin plots, but the swarm/strip plots remain unchanged and crowded with massive overlap. I know that there will always be overlap with many points sharing the same/similar x-values, but I would like to maximize the use of the space available between y-values for the swarms. Is there a way I can increase the maximum displacement from center for the swarm plots? With the `stripplot` `jitter` I can disperse the points, but they still tend to overlap randomly quite a bit and also start to move over into other violin plots.
Tried with Seaborn versions: `0.11.2` and `0.12.0rc0 `
I attached a partial plot, as the original is quite large:
```
...
sns.set_theme()
sns.set(rc={"figure.figsize": (6, 18)})
...
PROPS = {'boxprops': {'edgecolor': 'black'},
'medianprops': {'color': 'black'},
'whiskerprops': {'color': 'black'},
'capprops': {'color': 'black'}}
ax = sns.violinplot(x=stat2show, y=y_cat, data=data_df, width=1.7, fliersize=0,
linewidth=0.75, order=y_order, palette=qual_colors,
scale="count", inner="quartile", **PROPS)
sns.swarmplot(x=stat2show, y=y_cat, data=data_df, size=5.2, color='white',
linewidth=0.5, hue="Data Set", edgecolor='black',
palette=data_set_palette, order=y_order, dodge=False,
hue_order=data_set_hue_order)
...
```

Thanks for any help! | closed | 2022-08-30T10:05:12Z | 2022-08-30T11:44:51Z | https://github.com/mwaskom/seaborn/issues/2986 | [] | ohickl | 4 |
torchbox/wagtail-grapple | graphql | 340 | Inconsistent error handling in the site query | I've noticed an inconsistency in how wagtail-grapple handles errors for the `site` query, which takes `id` and `hostname` as parameters.
When an incorrect `id` is provided, the query appropriately returns `null`, meaning the requested site does not exist. However, when a non-existent `hostname` is provided, it raises an unhandled `DoesNotExist` exception. This exception subsequently results in a `GraphQLError` that is returned in the "errors" response.
The current implementation adds complexity to differentiating between real unhandled errors and expected behaviours, as it behaves differently based on the input parameter.
From a conceptual standpoint, it seems incorrect to raise an unhandled error when the site does not exist, as it is an expected scenario that can occur in normal operation.
It would be beneficial for the `site` query to handle these errors consistently across both `id` and `hostname` parameters. This would make error handling more predictable and user-friendly.
| closed | 2023-07-03T12:11:17Z | 2023-07-09T16:09:24Z | https://github.com/torchbox/wagtail-grapple/issues/340 | [] | estyxx | 1 |
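The usual consistent-handling pattern here is to treat a miss as `None` (Django's `.filter(...).first()`) rather than letting `.get()` raise `DoesNotExist`. A self-contained sketch of the two semantics, with a toy site registry standing in for the ORM:

```python
class DoesNotExist(Exception):
    """Stand-in for Django's Model.DoesNotExist."""

# Toy registry standing in for the sites table.
SITES = {1: {"id": 1, "hostname": "example.com"}}

def get_site(hostname):
    # .get()-style lookup: a miss raises, which is what surfaces as a
    # GraphQLError in the "errors" response.
    for site in SITES.values():
        if site["hostname"] == hostname:
            return site
    raise DoesNotExist(hostname)

def first_site(hostname):
    # .filter(...).first()-style lookup: a miss is just None, matching
    # the behaviour of the id branch.
    return next(
        (s for s in SITES.values() if s["hostname"] == hostname), None
    )

print(first_site("missing.example"))  # → None
```

Swapping the hostname branch to the `first_site` semantics would make both parameters behave identically on a miss.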
coqui-ai/TTS | deep-learning | 3,270 | [Bug] Cant run any of the xtts models using the TTS Command Line Interface (CLI) | ### Describe the bug
Hello, I just started playing with the TTS library and I am running tests using the TTS command line interface (CLI).
I was able to try capacitron, vits (English and Portuguese) and tacotron2 successfully. But when I try any of the xtts models, I get the same error suggesting that no language option has been set.
### To Reproduce
I tried running the following commands, and each one raises the error:
`tts --text "Welcome. This is a TTS test." --model_name "tts_models/multilingual/multi-dataset/xtts_v2" --language en --out_path TTS_english_test_xtts_output2.wav`
`tts --text "Welcome. This is a TTS test." --model_name "tts_models/multilingual/multi-dataset/xtts_v1.1" --language en --out_path TTS_english_test_xtts_output2.wav`
I tried these commands on multiple systems, yet I get the same error:
AssertionError: ❗ Language None is not supported. Supported languages are ['en', 'es', 'fr', 'de', 'it', 'pt', 'pl', 'tr', 'ru', 'nl', 'cs', 'ar', 'zh-cn', 'hu', 'ko', 'ja']
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
- TTS installed from pip install TTS
- Linux OS
```
### Additional context
My guess is that `--language en` is ignored, and perhaps the xtts_v2 and xtts_v1.1 models can only be run from Python? I wanted to try a multilingual model through the command line interface (CLI); are there any steps I am missing here?
`tts --text "Welcome. This is a TTS test." --model_name "tts_models/multilingual/multi-dataset/bark" --language en --out_path TTS_english_test_bark_output2.wav` | closed | 2023-11-20T01:33:34Z | 2023-11-20T08:38:43Z | https://github.com/coqui-ai/TTS/issues/3270 | [
"bug"
] | 240db | 1 |
tqdm/tqdm | pandas | 1,015 | Progress bar always rendered in Google Colab/Jupyter Notebook | On the terminal, it is possible to disable the progress bar by not specifying `"{bar}"` in `bar_format`. For example `bar_format="{l_bar}{r_bar}"` will render the left and right sides of the bar, but not the actual progress bar itself.
On Google Colab/Jupyter Notebook, the bar will always render on the left side, even when disabled. My guess is that this happens because an `IProgress` is [always created in `status_printer` ](https://github.com/tqdm/tqdm/blob/master/tqdm/notebook.py#L100-L114). A simple fix might be to create a separate component for the description instead of implicitly using `IProgress`, and to set the `IProgress` component to invisible if no `"{bar}"` is given, e.g.:
```python
# in status_printer
desc = HTML()
pbar = IProgress(...)
ptext = HTML()
container = HBox(children=[desc, pbar, ptext])
...
# in display
desc, pbar, ptext = self.container.children
...
if we_dont_want_the_bar:
    pbar.close()  # or pbar.visible = False
...
```
- [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```
Google colab
4.41.1 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] linux
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| closed | 2020-07-29T21:36:56Z | 2020-08-02T21:22:17Z | https://github.com/tqdm/tqdm/issues/1015 | [
"to-fix ⌛",
"p2-bug-warning ⚠",
"submodule-notebook 📓",
"c1-quick 🕐"
] | EugenHotaj | 1 |
docarray/docarray | fastapi | 1,351 | HnswDocIndex cannot use the same workdir twice | # Context
Using the same `work_dir` twice leads to an error:
```python
from docarray import DocList
from docarray.documents import ImageDoc
from docarray.index import HnswDocumentIndex
import numpy as np
# create some data
dl = DocList[ImageDoc](
[
ImageDoc(
url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
tensor=np.zeros((3, 224, 224)),
embedding=np.random.random((128,)),
)
for _ in range(100)
]
)
# create a Document Index
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index2')
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index2') # second time is failing
``` | closed | 2023-04-11T09:46:33Z | 2023-04-22T09:47:25Z | https://github.com/docarray/docarray/issues/1351 | [] | samsja | 2 |
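Until this is fixed upstream, one application-side guard is to cache a single index instance per `work_dir` instead of constructing it twice. A sketch with a hypothetical stand-in class — the per-directory cache, not `HnswDocumentIndex` itself, is the point:

```python
import functools

class FakeIndex:
    """Hypothetical stand-in for an index that cannot share a work_dir
    with another live instance in the same process."""
    _open_dirs = set()

    def __init__(self, work_dir):
        if work_dir in FakeIndex._open_dirs:
            raise RuntimeError(f"work_dir already in use: {work_dir}")
        FakeIndex._open_dirs.add(work_dir)
        self.work_dir = work_dir

@functools.lru_cache(maxsize=None)
def get_index(work_dir):
    # One instance per work_dir for the whole process; repeated calls
    # return the cached object instead of constructing a second index.
    return FakeIndex(work_dir)

a = get_index("/tmp/test_index2")
b = get_index("/tmp/test_index2")
print(a is b)  # → True
```

The same `lru_cache` pattern works with the real `HnswDocumentIndex[ImageDoc]` constructor, provided all call sites go through the factory.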
keras-team/keras | machine-learning | 20,104 | Tensorflow model.fit fails on test_step: 'NoneType' object has no attribute 'items' | I am using the tf.data module to load my datasets. Although the training and validation data pipelines are almost the same, train_step works properly and training on the first epoch continues until the last batch, but in test_step I get the following error:
```shell
353 val_logs = {
--> 354 "val_" + name: val for name, val in val_logs.items()
355 }
356 epoch_logs.update(val_logs)
358 callbacks.on_epoch_end(epoch, epoch_logs)
AttributeError: 'NoneType' object has no attribute 'items'
```
Here is the code for fitting the model:
```shell
results = auto_encoder.fit(
train_data,
epochs=config['epochs'],
steps_per_epoch=(num_train // config['batch_size']),
validation_data=valid_data,
validation_steps=(num_valid // config['batch_size'])-1,
callbacks=callbacks
)
```
I should mention that I have used .repeat() on both train_data and valid_data, so the problem is not a lack of samples. | closed | 2024-08-09T15:32:39Z | 2024-08-10T17:59:05Z | https://github.com/keras-team/keras/issues/20104 | [
"stat:awaiting response from contributor",
"type:Bug"
] | JVD9kh96 | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 12,378 | Streamline Many-to-Many and One-to-Many Relationship Handling with Primary Key Lists | ### Describe the use case
Currently, SQLAlchemy requires fetching all related objects from the database to establish many-to-many or one-to-many relationships. This can be inefficient and unnecessary when only the primary keys of the related objects are known. A more efficient approach would be to allow the association to be made directly using a list of primary keys, without needing to retrieve the full objects.
### Databases / Backends / Drivers targeted
Preferably all supported databases and backends.
### Example Use
```python
from sqlalchemy import create_engine, Column, Integer, String, Table, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
Base = declarative_base()
# Association table for many-to-many relationship
bot_filegroup_association = Table('bot_filegroup', Base.metadata,
Column('bot_id', Integer, ForeignKey('bot.id')),
Column('file_group_id', Integer, ForeignKey('file_group.id'))
)
class Bot(Base):
__tablename__ = 'bot'
id = Column(Integer, primary_key=True)
name = Column(String)
file_groups = relationship('FileGroup', secondary=bot_filegroup_association, back_populates='bots')
class FileGroup(Base):
__tablename__ = 'file_group'
id = Column(Integer, primary_key=True)
name = Column(String)
bots = relationship('Bot', secondary=bot_filegroup_association, back_populates='file_groups')
# Creating a new session
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
# Example usage of the proposed feature
file_group_ids = [1, 2, 3]
bot_id = 1
# Assuming bot and file groups are already created in the database
bot = session.query(Bot).filter_by(id=bot_id).first()
bot.file_groups = file_group_ids # Directly assigning primary key list
# Commit the changes
session.commit()
# Verify the associations
bot = session.query(Bot).filter_by(id=bot_id).first()
print([fg.id for fg in bot.file_groups]) # Should output list of actual objects
```
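For comparison, the workaround this proposal would replace can be sketched with plain SQL: when only primary keys are known, the association rows can be written directly, without loading any objects. This is an illustration only (sqlite3 stands in for the engine; table and column names come from the example):

```python
import sqlite3

# Stand-in schema mirroring the example above (illustration only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bot (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE file_group (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE bot_filegroup (bot_id INTEGER, file_group_id INTEGER);
    INSERT INTO bot (id, name) VALUES (1, 'bot');
    INSERT INTO file_group (id, name) VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

bot_id, file_group_ids = 1, [1, 2, 3]
# The association is made from primary keys alone -- no ORM objects fetched.
conn.executemany(
    "INSERT INTO bot_filegroup (bot_id, file_group_id) VALUES (?, ?)",
    [(bot_id, fg_id) for fg_id in file_group_ids],
)
linked = [row[0] for row in conn.execute(
    "SELECT file_group_id FROM bot_filegroup WHERE bot_id = ? "
    "ORDER BY file_group_id", (bot_id,)
)]
print(linked)  # [1, 2, 3]
```

The request is essentially for the ORM to emit this kind of statement itself when a relationship attribute is assigned a list of primary keys.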
### Additional context
Fetching all related objects just to establish relationships can lead to unnecessary database queries and increased latency. This is particularly problematic in scenarios where the list of related object primary keys is already known, and fetching the full objects provides no additional benefit. Allowing direct association using primary key lists would streamline the process, reduce database load, and improve performance. | closed | 2025-02-27T09:11:13Z | 2025-03-03T07:45:12Z | https://github.com/sqlalchemy/sqlalchemy/issues/12378 | [
"orm",
"use case"
] | Gepardgame | 6 |
ansible/awx | django | 15,540 | docker-compose-build fails with: Unable to find a match: openssl-3.0.7 | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
```make docker-compose-build``` fails when trying to install ```openssl-3.0.7```.
### AWX version
24.6.1
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [X] Other
### Installation method
docker development environment
### Modifications
no
### Ansible version
2.17.4
### Operating system
CentOS stream 9
### Web browser
_No response_
### Steps to reproduce
with:
- awx_version=24.6.0 or 24.6.1
- awx_devel_container_version=release_4.5 or release_4.6
- awx_receptor_version=devel or latest
1. git clone -b $awx_version https://github.com/ansible/awx.git git-awx && cd git-awx
2. git switch -c $awx_devel_container_version
3. export RECEPTOR_IMAGE=quay.io/ansible/receptor:${awx_receptor_version}
4. make docker-compose-build
### Expected results
No error
### Actual results
```
ansible-playbook -e ansible_python_interpreter=python3.11 tools/ansible/dockerfile.yml \
-e dockerfile_name=Dockerfile.dev \
-e build_dev=True \
-e receptor_image=quay.io/ansible/receptor:devel
...
13.47 CentOS Stream 9 - BaseOS 27 kB/s | 16 kB 00:00
13.60 CentOS Stream 9 - AppStream 205 kB/s | 17 kB 00:00
14.10 CentOS Stream 9 - CRB 19 MB/s | 6.5 MB 00:00
16.53 No match for argument: openssl-3.0.7
16.57 Error: Unable to find a match: openssl-3.0.7
------
Dockerfile.dev:22
--------------------
21 | # Install build dependencies
22 | >>> RUN dnf -y update && dnf install -y 'dnf-command(config-manager)' && \
23 | >>> dnf config-manager --set-enabled crb && \
24 | >>> dnf -y install \
25 | >>> iputils \
26 | >>> gcc \
27 | >>> gcc-c++ \
28 | >>> git-core \
29 | >>> gettext \
30 | >>> glibc-langpack-en \
31 | >>> libffi-devel \
32 | >>> libtool-ltdl-devel \
33 | >>> make \
34 | >>> nodejs \
35 | >>> nss \
36 | >>> openldap-devel \
37 | >>> # pin to older openssl, see jira AAP-23449
38 | >>> openssl-3.0.7 \
39 | >>> patch \
40 | >>> postgresql \
41 | >>> postgresql-devel \
42 | >>> python3.11 \
43 | >>> "python3.11-devel" \
44 | >>> "python3.11-pip" \
45 | >>> "python3.11-setuptools" \
46 | >>> "python3.11-packaging" \
47 | >>> "python3.11-psycopg2" \
48 | >>> swig \
49 | >>> unzip \
50 | >>> xmlsec1-devel \
51 | >>> xmlsec1-openssl-devel
52 |
--------------------
ERROR: failed to solve: process "/bin/sh -c dnf -y update && dnf install -y 'dnf-command(config-manager)' && dnf config-manager --set-enabled crb && dnf -y install iputils gcc gcc-c++ git-core gettext glibc-langpack-en libffi-devel libtool-ltdl-devel make nodejs nss openldap-devel openssl-3.0.7 patch postgresql postgresql-devel python3.11 \"python3.11-devel\" \"python3.11-pip\" \"python3.11-setuptools\" \"python3.11-packaging\" \"python3.11-psycopg2\" swig unzip xmlsec1-devel xmlsec1-openssl-devel" did not complete successfully: exit code: 1
make: *** [Makefile:619: docker-compose-build] Error 1
```
### Additional information
_No response_ | closed | 2024-09-18T14:33:40Z | 2024-09-19T16:13:57Z | https://github.com/ansible/awx/issues/15540 | [
"type:bug",
"needs_triage",
"community"
] | jean-christophe-manciot | 3 |
JaidedAI/EasyOCR | pytorch | 457 | Update 'Open in colab' link on the Home Page for Demo/Example | closed | 2021-06-12T08:29:22Z | 2021-06-13T07:16:21Z | https://github.com/JaidedAI/EasyOCR/issues/457 | [] | the-marlabs | 1 | |
deeppavlov/DeepPavlov | tensorflow | 1,454 | Support of the Transformers>=4.0.0 version library | **DeepPavlov version** (you can look it up by running `pip show deeppavlov`):
latest
**Python version**:
3.6
**Operating system** (ubuntu linux, windows, ...):
ubuntu
**Issue**:
Starting from version 4.0.0 the interface of the Transformers library was changed and a dictionary is returned as model output. That is why issue #1355 cannot be fixed by changing the PyTorch version. It can be fixed as easily as adding the argument `return_dict=False` to the model call at [this](https://github.com/deepmipt/DeepPavlov/blob/6c8f8924628f40eab3ce6301916dc6fbd38869f0/deeppavlov/models/embedders/transformers_embedder.py#L73) line, but that breaks support for previous versions of the Transformers library.
Please add a version check for this library (or something similar), because this issue is the only thing preventing use of the latest Transformers library versions.
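A minimal sketch of the requested version check (a hypothetical helper; only the version-string comparison is illustrated, `transformers` itself is not imported here):

```python
def extra_model_kwargs(transformers_version: str) -> dict:
    """Return call kwargs so both old and new Transformers yield tuples."""
    major = int(transformers_version.split(".")[0])
    # Transformers >= 4.0.0 returns a dict-like output unless told otherwise.
    return {"return_dict": False} if major >= 4 else {}

print(extra_model_kwargs("4.5.1"))  # {'return_dict': False}
print(extra_model_kwargs("3.5.0"))  # {}
```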
| closed | 2021-05-24T18:39:42Z | 2022-04-01T11:18:09Z | https://github.com/deeppavlov/DeepPavlov/issues/1454 | [
"bug"
] | spolezhaev | 2 |
davidsandberg/facenet | computer-vision | 499 | How do you setup the project? | I am missing the setup.py file, was the installation procedure changed? | closed | 2017-10-26T10:07:22Z | 2017-11-10T23:11:56Z | https://github.com/davidsandberg/facenet/issues/499 | [] | mia-petkovic | 2 |
vitalik/django-ninja | pydantic | 1,172 | Not required fields in FilterSchema | Please describe what you are trying to achieve:
In `FilterSchema`, is it possible to use the parameter `exclude_none`?
Please include code examples (like models code, schemes code, view function) to help understand the issue
```
class Filters(FilterSchema):
limit: int = 100
offset: int = None
query: str = None
category__in: List[str] = Field(None, alias="categories")
@route.get("/filter")
def events(request, filters: Query[Filters]):
print(filters.filter)
return {"filters": filters.dict()}
```
the print output is
> <FilterSchema.filter of Filters(limit=100, offset=None, query=None, category__in=None)>
While `filters.dict()` does not include the `None` values, `filters.filter` still has them.
My question: is it possible to use `exclude_none` for the request body as well?
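To illustrate the two behaviors with a plain-Python mimic (this is not django-ninja; `FakeFilters` and its methods are made up for the example):

```python
class FakeFilters:
    """Toy stand-in for a filter schema; not the real django-ninja class."""

    def __init__(self, **fields):
        self._fields = fields

    def dict(self, exclude_none=False):
        return {k: v for k, v in self._fields.items()
                if not (exclude_none and v is None)}

    def get_filter_expression(self):
        # Filtering should skip unset (None) params by itself.
        return {k: v for k, v in self._fields.items() if v is not None}

f = FakeFilters(limit=100, offset=None, query=None, category__in=None)
print(f.dict(exclude_none=True))      # {'limit': 100}
print(f.get_filter_expression())      # {'limit': 100}
```

In other words, `dict(exclude_none=True)` drops the `None` fields, while the filter-expression side has to skip `None` values itself.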
| open | 2024-05-20T08:16:49Z | 2024-09-27T06:27:31Z | https://github.com/vitalik/django-ninja/issues/1172 | [] | horizon365 | 1 |
PokeAPI/pokeapi | api | 452 | Add possibility to get Pokemon evolution easier | Please add possibility to make requests like this:
https://pokeapi.co/api/v2/evolution/{pkmn ID or name}/
This would return:
evo_from
evo_from_reqs
evo_to
evo_to_req
evo_mega
evo_form
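For illustration only, the requested shape might look like this for pikachu (field names from the list above; the values are hand-written placeholders, not real API output):

```python
# Field names are taken from the list above; values are placeholders
# describing pikachu's evolution line, not real API output.
proposed_response = {
    "evo_from": "pichu",
    "evo_from_reqs": [{"trigger": "level-up", "min_happiness": 220}],
    "evo_to": ["raichu"],
    "evo_to_req": [{"trigger": "use-item", "item": "thunder-stone"}],
    "evo_mega": [],
    "evo_form": [],
}
print(sorted(proposed_response))
```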
Please add this. Currently it's hard to get a specific Pokémon's evolution sources and targets. | closed | 2019-10-12T10:24:05Z | 2020-08-19T10:07:31Z | https://github.com/PokeAPI/pokeapi/issues/452 | [] | ks129 | 4 |
huggingface/datasets | deep-learning | 6,973 | IndexError during training with Squad dataset and T5-small model | ### Describe the bug
I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.
### Steps to reproduce the bug
1. Install the required libraries: `!pip install transformers datasets`
2. Run the following code:

```python
!pip install transformers datasets

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TrainingArguments, Trainer, DataCollatorWithPadding

# Load a small, publicly available dataset
from datasets import load_dataset
dataset = load_dataset("squad", split="train[:100]")  # Use a small subset for testing

# Load a pre-trained model and tokenizer
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Define a basic data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

# Create a trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=data_collator,
)

# Train the model
trainer.train()
```
### Expected behavior
```text
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
[<ipython-input-23-f13a4b23c001>](https://localhost:8080/#) in <cell line: 34>()
     32
     33 # Train the model
---> 34 trainer.train()

10 frames
[/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size)
    427     if isinstance(key, int):
    428         if (key < 0 and key + size < 0) or (key >= size):
--> 429             raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
    430         return
    431     elif isinstance(key, slice):

IndexError: Invalid key: 42 is out of bounds for size 0
```
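For reference, the check that raises reduces to the following (paraphrased from the traceback above). It fires because the dataset handed to the `Trainer` effectively ends up with zero usable rows (commonly because the raw text columns are dropped before tokenization), so any index is out of bounds:

```python
def check_valid_index_key(key: int, size: int) -> None:
    """Paraphrase of the bounds check shown in the traceback."""
    if (key < 0 and key + size < 0) or (key >= size):
        raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")

try:
    check_valid_index_key(42, 0)  # what the Trainer effectively did here
    raised = False
except IndexError:
    raised = True
print(raised)  # True
```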
### Environment info
transformers version:4.41.2
datasets version:1.18.4
Python version:3.10.12
| closed | 2024-06-16T07:53:54Z | 2024-07-01T11:25:40Z | https://github.com/huggingface/datasets/issues/6973 | [] | ramtunguturi36 | 2 |
Miserlou/Zappa | django | 1,812 | How to "catch" an asynchronous task timeout? | Sorry if I missed this in the docs, but how can I "catch" that a certain asynchronous task has received a timeout? | open | 2019-03-12T17:05:51Z | 2019-03-14T13:21:24Z | https://github.com/Miserlou/Zappa/issues/1812 | [] | mojimi | 2 |
openapi-generators/openapi-python-client | rest-api | 750 | Allow tweaking configuration of black and isort via custom templates | **Is your feature request related to a problem? Please describe.**
I'm using a monorepo with multiple projects, where some of the projects are generated by openapi-python-client.
I'd like to have a single configuration of black and isort at the top level of the monorepo, but having them included in pyproject.toml of generated projects breaks that.
**Describe the solution you'd like**
I'd like to be able to use custom templates to alter the configuration of black and isort in generated projects, while not having to copy the rest of poetry's configuration into my custom templates.
**Describe alternatives you've considered**
I've considered using **generate** once, then manually tweaking pyproject.toml and doing **update** afterwards. However, I don't like this approach, since it leads to having non-reproducible manual edits.
**Additional context**
I think this can be achieved by a very simple one-line change that also reduces some code duplication; I'll be sending a PR for this shortly.
| closed | 2023-04-21T16:38:54Z | 2023-04-30T19:31:14Z | https://github.com/openapi-generators/openapi-python-client/issues/750 | [
"✨ enhancement"
] | machinehead | 1 |
RobertCraigie/prisma-client-py | pydantic | 854 | Retrying db calls? | Hey @RobertCraigie,
Is there a way to retry db calls? (or is this already happening under-the-hood)?
Users reporting failed db writes - https://github.com/BerriAI/litellm/issues/1056 | open | 2023-12-08T02:42:31Z | 2023-12-08T02:42:31Z | https://github.com/RobertCraigie/prisma-client-py/issues/854 | [] | krrishdholakia | 0 |
miguelgrinberg/python-socketio | asyncio | 528 | Good way to handle flow control | What is the best way to handle flow control with WebSockets? I am looking at a large-file-transfer case. Here is what I am currently doing:
1. Chunk the files into 10k size chunks
2. Send out 5 chunks, then call sio.sleep(0.01)
3. back to step(2) until EOF
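To make the loop concrete, the chunking and burst grouping can be sketched like this (no socket I/O here; numbers as in the steps above):

```python
def chunk_bytes(data: bytes, chunk_size: int = 10 * 1024):
    """Yield fixed-size chunks of the payload; the last may be shorter."""
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

def bursts(chunks, burst_size: int = 5):
    """Group chunks into bursts; the sender would sio.sleep() between bursts."""
    burst = []
    for chunk in chunks:
        burst.append(chunk)
        if len(burst) == burst_size:
            yield burst
            burst = []
    if burst:
        yield burst

payload = b"x" * (52 * 1024)  # a 52 KB example file
groups = list(bursts(chunk_bytes(payload)))
print([len(g) for g in groups])  # [5, 1]
```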
There are a few problems I run into, especially when the file is larger than 50 MB. I see the endpoint disconnect from the server. I have seen different errors (sometimes an exception "object has no attribute 'call_exception_handler'").
I have a few questions:
1. What is the best way to ensure I can do back-to-back messaging while still not letting the connection time out? (I am using an async client.)
2. What criteria determine the receiver side buffer size? If there is a way to determine this, I could calculate the amount of data I can send in a burst before getting an ack from the receiver - before sending out the next burst.
3. What is the largest size payload that can be sent on a WebSocket? (I am wondering if I should increase the chunk size from 10k to a larger number. I have tried larger sizes and it works but I am not sure if there is a way to determine the right size).
| closed | 2020-07-26T08:42:33Z | 2020-10-09T19:07:25Z | https://github.com/miguelgrinberg/python-socketio/issues/528 | [
"question"
] | bhakta0007 | 4 |
graphql-python/graphene-sqlalchemy | graphql | 31 | Anything on the roadmap for a relay SQLAlchemyClientIDMutation class? | I have been using this library for a project and its great so far, however it seems there should also be a class for relay ClientIDMutations. Is this on the roadmap? | closed | 2017-01-12T02:24:12Z | 2023-08-15T00:36:03Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/31 | [
":eyes: more info needed"
] | aminghadersohi | 2 |
explosion/spaCy | data-science | 13,680 | Spaces impacting tag/pos | ## How to reproduce the behaviour
Notice the double space in front of `sourire` in the first case vs. the single space in the second case
`Les publics avec un sourire chaleureux et`
<img width="1277" alt="image" src="https://github.com/user-attachments/assets/9bdb2aca-8741-41d5-995e-2333aa392158">
https://demos.explosion.ai/displacy?text=Les%20publics%20avec%20un%20%20sourire%20chaleureux%20%20et&model=fr_core_news_sm
vs.
`Les publics avec un sourire chaleureux et`
<img width="1282" alt="image" src="https://github.com/user-attachments/assets/e43870e6-115a-42ca-9d2b-39c9446ed212">
https://demos.explosion.ai/displacy?text=Les%20publics%20avec%20un%20sourire%20chaleureux%20%20et&model=fr_core_news_sm
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System:
* Python Version Used: 3.12
* spaCy Version Used: v3.5 (displacy) but also in v3.7
* Environment Information:
Semi-related: Any guidance on how to modify the tokenizer so that double spaces would be placed into `whitespace_` (i.e. `  `) and not lead to a `SPACE` token? I did take note of https://github.com/explosion/spaCy/issues/1707, though putting the additional spaces into `whitespace_` seems more logical to me.
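For what it's worth, a pre-tokenization workaround can be sketched as follows (hedged: it simply discards the extra characters instead of preserving them in `whitespace_`, so it is lossy):

```python
import re

def collapse_spaces(text: str) -> str:
    """Collapse runs of 2+ spaces so the tokenizer never emits a SPACE token."""
    return re.sub(r" {2,}", " ", text)

print(collapse_spaces("Les publics avec un  sourire chaleureux  et"))
# Les publics avec un sourire chaleureux et
```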
## Research
a) Maybe related https://github.com/explosion/spaCy/issues/621
b) Semi-related https://stephantul.github.io/spacy/2019/05/01/tokenizationspacy/
c) Semi-related https://github.com/explosion/spaCy/discussions/9978 | open | 2024-10-28T12:55:22Z | 2024-11-12T04:11:26Z | https://github.com/explosion/spaCy/issues/13680 | [] | lsmith77 | 1 |
ijl/orjson | numpy | 229 | JSON5 Format Support | Well, this issue is more of a feature request or suggestion.
Basically, I've been wondering whether JSON5 is going to be implemented in this library. I think it'd be very useful given the library's read speed, while keeping the advantages that this newer format provides.
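To make the gap concrete: JSON5's extensions (unquoted keys, trailing commas, comments) are rejected by strict JSON parsers, so support would have to be added explicitly:

```python
import json

json5_text = '{unquoted: 1, trailing: [1, 2,], /* comment */}'
try:
    json.loads(json5_text)
    parsed = True
except json.JSONDecodeError:
    parsed = False
print(parsed)  # False: a strict parser rejects JSON5 input
```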
Have a great day, looking forward to your response, cheers. | closed | 2022-01-06T19:22:18Z | 2022-01-13T00:09:37Z | https://github.com/ijl/orjson/issues/229 | [] | Vioshim | 1 |
explosion/spaCy | nlp | 13,157 | Issue when calling spacy info | Hi I am Bala. I use Spacy 3.6.1 for NLP. I am facing the following issue when calling spacy info and when loading any model. I use Python 3.8 on Windows 10.
<<
(pnlpbase) PS C:\windows\system32> python -m spacy info
Traceback (most recent call last):
File "D:\python\Anaconda3\envs\pnlpbase\lib\runpy.py", line 185, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "D:\python\Anaconda3\envs\pnlpbase\lib\runpy.py", line 144, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "D:\python\Anaconda3\envs\pnlpbase\lib\runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "D:\python\Anaconda3\envs\pnlpbase\lib\site-packages\spacy\__init__.py", line 14, in <module>
from . import pipeline # noqa: F401
File "D:\python\Anaconda3\envs\pnlpbase\lib\site-packages\spacy\pipeline\__init__.py", line 1, in <module>
from .attributeruler import AttributeRuler
File "D:\python\Anaconda3\envs\pnlpbase\lib\site-packages\spacy\pipeline\attributeruler.py", line 6, in <module>
from .pipe import Pipe
File "spacy\pipeline\pipe.pyx", line 1, in init spacy.pipeline.pipe
File "spacy\vocab.pyx", line 1, in init spacy.vocab
File "D:\python\Anaconda3\envs\pnlpbase\lib\site-packages\spacy\tokens\__init__.py", line 1, in <module>
from .doc import Doc
File "spacy\tokens\doc.pyx", line 36, in init spacy.tokens.doc
File "D:\python\Anaconda3\envs\pnlpbase\lib\site-packages\spacy\schemas.py", line 222, in <module>
class TokenPattern(BaseModel):
File "pydantic\main.py", line 205, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 491, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 421, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 537, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 634, in pydantic.fields.ModelField._type_analysis
File "pydantic\fields.py", line 641, in pydantic.fields.ModelField._type_analysis
File "D:\python\Anaconda3\envs\pnlpbase\lib\typing.py", line 774, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
(pnlpbase) PS C:\windows\system32>
>>
Also, I get a deserialization error when I load models. I would appreciate it if you let me know how to fix this. This is a bit urgent. Awaiting your reply. Thank you. Bala
| closed | 2023-11-28T01:55:43Z | 2024-01-26T08:50:07Z | https://github.com/explosion/spaCy/issues/13157 | [
"duplicate"
] | balachander1964 | 3 |
horovod/horovod | deep-learning | 3857 | Horovod with MPI and NCCL | I have installed NCCL and MPI and want to install Horovod from source, but I'm confused about some parameters.
**HOROVOD_GPU_OPERATIONS**,**HOROVOD_GPU_ALLREDUCE** and **HOROVOD_GPU_BROADCAST**
How should these three parameters be set? Which use NCCL and which use MPI? Can anyone help answer this question? Thanks a lot in advance!!! | closed | 2023-03-01T07:27:23Z | 2023-03-01T10:08:37Z | https://github.com/horovod/horovod/issues/3857 | [
"question"
] | yjiangling | 2 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 579 | Custom dataset encoder training | Hi, how do I implement a custom dataset for the encoder training? | closed | 2020-10-29T15:03:13Z | 2021-02-10T07:36:20Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/579 | [] | quirijnve | 13 |
adbar/trafilatura | web-scraping | 676 | Remove deprecations (mostly CLI) | - [x] Look for lines like `raise ValueError("...deprecated")` and remove deprecations
- [x] Check if all CLI arguments are actually used
- [x] Remove corresponding tests | closed | 2024-08-15T15:34:45Z | 2024-10-08T16:53:12Z | https://github.com/adbar/trafilatura/issues/676 | [
"maintenance"
] | adbar | 0 |
allure-framework/allure-python | pytest | 484 | Attach a ZIP or XLSX file | Hello, using the latest package, what is the right code for attaching a file .xlsx to the report?
I'm using:
```
from allure_commons.types import AttachmentType
allure.attach.file("./bin/prova-riccardo.xlsx", name="prova-riccardo.xlsx")
```
but the downloaded file has a strange name (508c27f28c7697d9.attach), which prevents opening it in MS Excel after downloading it to the Desktop.
The solution is quite simple. Please add in https://github.com/allure-framework/allure-python/blob/master/allure-python-commons/src/types.py#L36:
```python
XLS = ("application/vnd.ms-excel", "xls")
XLSX = ("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "xlsx")
ZIP = ("application/zip", "zip")
```
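Expressed as a standalone `Enum` for illustration (a stdlib mimic of allure's `(mime_type, extension)` pattern; the real change would go in `allure_commons.types.AttachmentType`):

```python
from enum import Enum

class AttachmentType(Enum):
    XLS = ("application/vnd.ms-excel", "xls")
    XLSX = ("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "xlsx")
    ZIP = ("application/zip", "zip")

    @property
    def mime_type(self):
        return self.value[0]

    @property
    def extension(self):
        return self.value[1]

print(AttachmentType.XLSX.extension)  # xlsx
```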
| closed | 2020-04-14T10:47:31Z | 2022-04-19T19:05:56Z | https://github.com/allure-framework/allure-python/issues/484 | [] | ric79 | 3 |
encode/apistar | api | 523 | Docs/Guide give error: AttributeError: type object 'list' has no attribute '__args__' | [This](https://docs.apistar.com/api-guide/routing/) routing guide gives me this error:
> (apistar) ➜ test python app.py
Traceback (most recent call last):
File "app.py", line 39, in <module>
Route('/users/', method='GET', handler=list_users),
File "/Users/atmosuwiryo/.virtualenvs/apistar/lib/python3.6/site-packages/apistar/server/core.py", line 17, in __init__
self.link = self.generate_link(url, method, handler, self.name)
File "/Users/atmosuwiryo/.virtualenvs/apistar/lib/python3.6/site-packages/apistar/server/core.py", line 21, in generate_link
response = self.generate_response(handler)
File "/Users/atmosuwiryo/.virtualenvs/apistar/lib/python3.6/site-packages/apistar/server/core.py", line 83, in generate_response
annotation = self.coerce_generics(annotation)
File "/Users/atmosuwiryo/.virtualenvs/apistar/lib/python3.6/site-packages/apistar/server/core.py", line 94, in coerce_generics
annotation.__args__ and
AttributeError: type object 'list' has no attribute '__args__'
This is the 'list_users' function, same as the guide:
> def list_users(app: App) -> list:
return [
{
'username': username,
'url': app.reverse_url('get_user', user_id=user_id)
} for user_id, username in USERS.items()
]
This is my env:
> (apistar) ➜ test python --version
Python 3.6.4
(apistar) ➜ test pip freeze
apistar==0.5.12
certifi==2018.4.16
chardet==3.0.4
idna==2.6
Jinja2==2.10
MarkupSafe==1.0
requests==2.18.4
urllib3==1.22
Werkzeug==0.14.1
whitenoise==3.3.1
Additional info:
if I change this code:
> def list_users(app: App) -> list:
to:
> def list_users(app: App) -> dict:
Then there is no error anymore. | closed | 2018-05-09T16:29:03Z | 2018-05-21T09:41:51Z | https://github.com/encode/apistar/issues/523 | [] | atmosuwiryo | 1 |
vitalik/django-ninja | pydantic | 866 | Dates in query parameters without leading zeroes lead to 422 error | Query parameters defined as date now expect to be in the format YYYY-MM-DD with leading zeroes.
Before upgrade to beta v1 it was possible to send date parameters without leading zeroes, e.g. 2023-9-29.
After upgrade this will yield 422 Unprocessable Entity.
The request works with leading zeroes, e.g. 2023-09-29
I have not tried this with dates in request body, but I expect it to behave the same.
I understand this may be due to the way pydantic v2 is parsing strings, but I wonder is this the intended behaviour? I have a pretty large application with hundreds of endpoints where JS front end is sending date parameters without leading zeroes. Do we need to refactor everything?
Thanks in advance!
**Versions (please complete the following information):**
- Python version: 3.10
- Django version: 4.2.5
- Django-Ninja version: 1.0b1
- Pydantic version: 2.4.2
| closed | 2023-09-29T16:27:10Z | 2023-10-02T07:25:02Z | https://github.com/vitalik/django-ninja/issues/866 | [] | ognjenk | 3 |
PokeAPI/pokeapi | api | 1,140 | Kanto Route 13 encounter table missing time conditions for Crystal | - Pidgeotto, Nidorina, Nidorino slots should be equally split between morning only and day only.
- Venonat, Venomoth, Noctowl, Quagsire should be night only (except Quagsire's surf slot).
- Chansey slots have no time conditions at all, so what should be 1% is reported as 3%. | open | 2024-10-08T04:11:39Z | 2024-10-08T04:11:39Z | https://github.com/PokeAPI/pokeapi/issues/1140 | [] | Pinsplash | 0 |
recommenders-team/recommenders | machine-learning | 1,810 | Use LSTUR Model with own Data | ### Description
I am trying to use the LSTUR model with a dataset of purchase data, but I don't understand what format the data has to be in to pass it to the model.
In the example:
` model = LSTURModel(hparams, iterator, seed=seed)`
Is data stored inside the "iterator" object?
| open | 2022-08-10T21:53:07Z | 2022-09-03T14:27:31Z | https://github.com/recommenders-team/recommenders/issues/1810 | [
"help wanted"
] | claraMarti | 2 |
bendichter/brokenaxes | matplotlib | 14 | How to share x axes in subplots? | I create subplots with GridSpec; how can I make the top panel share the x axis with the bottom one? | closed | 2018-04-20T02:16:04Z | 2018-04-20T16:01:14Z | https://github.com/bendichter/brokenaxes/issues/14 | [] | Kal-Elll | 3 |
huggingface/diffusers | pytorch | 10,406 | CogVideoX: RuntimeWarning: invalid value encountered in cast | Can be closed | closed | 2024-12-29T08:50:56Z | 2024-12-29T08:59:00Z | https://github.com/huggingface/diffusers/issues/10406 | [
"bug"
] | nitinmukesh | 0 |
cvat-ai/cvat | tensorflow | 9,187 | Problem with version 2.31.0 after upgrading from 2.21.2: "Could not fetch requests from the server" | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
Hello,
we upgraded vom version 2.21.2 to 2.31.0. Now we see the following message after login or after an action like creating or deleting a task.
````
Could not fetch requests from the server
<!doctype html> <html lang="en"> <head> <title>Server Error (500)</title> </head> <body> <h1>Server Error (500)</h1><p></p> </body> </html>
````
This is not a problem for us, but it make us insecure.
But another case is a problem for us: We see the functions from Nuclio as models in CVAT, but we cannot run them. In version 2.21.2 we could.
I inform about the both cases in this issue, because I believe, that they could be related.
Best Reagards
Rose
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
``` | closed | 2025-03-10T08:04:30Z | 2025-03-10T14:09:02Z | https://github.com/cvat-ai/cvat/issues/9187 | [
"bug",
"need info"
] | RoseDeSable | 7 |
FlareSolverr/FlareSolverr | api | 1,123 | The CPU and memory usage of Chromium | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.16
- Last working FlareSolverr version:
- Operating system: centos7
- Are you using Docker: [yes/no] yes
- FlareSolverr User-Agent (see log traces or / endpoint): default config
- Are you using a VPN: [yes/no] no
- Are you using a Proxy: [yes/no] no
- Are you using Captcha Solver: [yes/no] no
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
When the runtime extends, the CPU and memory usage of Chromium become unusually high. This persists even when there is no website access at the moment.
Both CPU and memory are maxed out at 100%.
The high CPU and memory usage of Chromium cause server lag.
### Logged Error Messages
```text
The logs appear to be normal.
```
### Screenshots
_No response_ | closed | 2024-03-17T07:29:55Z | 2024-03-18T11:04:42Z | https://github.com/FlareSolverr/FlareSolverr/issues/1123 | [
"duplicate"
] | nanmuye | 2 |
gevent/gevent | asyncio | 1,806 | immediate disconnection of gevent-websocket (code: 1005, reason: “”) | * gevent version: gevent==21.1.2
* Python version: Python 3.8.10
* Operating System: Ubuntu 20.04.1 LTS
### Description:
I test the following simple websocket server with wscat, but it is immediately disconnected without listening.
I tried
```
python main.py &
wscat -c ws://localhost:5000/echo
```
and the result is below without showing any prompt.
```
Connected (press CTRL+C to quit)
Disconnected (code: 1005, reason: "")
```
### What I've run:
main.py
```
# -*- coding: utf-8 -*-
from geventwebsocket.handler import WebSocketHandler
from gevent.pywsgi import WSGIServer
from flask import Flask, request
from werkzeug.exceptions import abort
app = Flask(__name__)
@app.route('/echo')
def echo():
ws = request.environ['wsgi.websocket']
if not ws:
abort(400)
while True:
message = ws.receive()
ws.send(message)
if __name__ == '__main__':
http_server = WSGIServer(('', 5000), app, handler_class=WebSocketHandler)
http_server.serve_forever()
```
| closed | 2021-07-10T21:37:40Z | 2021-07-11T11:17:33Z | https://github.com/gevent/gevent/issues/1806 | [] | nemnemnemy | 1 |
noirbizarre/flask-restplus | flask | 726 | Move to Jazzband or other shared project space | flask-restplus, even after adding multiple new maintainers, is continuing to fall behind requests. We are all donating our time and expertise, and there's still more work than available time.
We should think about moving to Jazzband or another shared project space. This gives us:
1. more possible maintainers and contributors
2. a good set of rules for adding maintainers and contributors
Jazzband may not be the right fit, but it's at least someplace to start
https://jazzband.co/about | open | 2019-10-09T14:26:47Z | 2019-10-10T18:47:17Z | https://github.com/noirbizarre/flask-restplus/issues/726 | [] | j5awry | 5 |
tensorpack/tensorpack | tensorflow | 1,200 | Add fetches tensors dynamically | Hi,
First, thanks for your wonderful work.
I'm training an object detector. At the end of my net I have two tensors, confidence and location.
I can take those two tensors to compute the cost function (it works and trains). I also want, at the end of each epoch, to take those two tensors, apply a TF function to them, and log the result to the summary.
I tried to use the ProcessTensors callback, but it says the graph is already finalized.
Should I use Callbacks? Something else?
In my own framework I build the whole graph (including the extra function) and only change the tensors in the feed dict of the session-run main loop when I want to log to the summary.
Is there anything like that in TP?
Thanks
| closed | 2019-05-20T08:15:18Z | 2019-05-24T22:43:10Z | https://github.com/tensorpack/tensorpack/issues/1200 | [
"usage"
] | MikeyLev | 2 |
MycroftAI/mycroft-core | nlp | 2,770 | Mycroft fails to start on Manjaro | **Describe the bug**
Starting Mycroft on Manjaro Linux after installing it in the official way results in errors and warnings.
**To Reproduce**
1. Go to https://github.com/MycroftAI/mycroft-core
2. Copy the steps from the Installation guide and run them in your terminal.
3. Answer all the Questions asked at the Installation
4. run ~/mycroft-core/start-mycroft.sh debug
**Expected behavior**
The CLI-Interface should show up.
**Log files**
[audio.log](https://github.com/MycroftAI/mycroft-core/files/5621446/audio.log)
[bus.log](https://github.com/MycroftAI/mycroft-core/files/5621447/bus.log)
[enclosure.log](https://github.com/MycroftAI/mycroft-core/files/5621448/enclosure.log)
[skills.log](https://github.com/MycroftAI/mycroft-core/files/5621449/skills.log)
[voice.log](https://github.com/MycroftAI/mycroft-core/files/5621450/voice.log)



**Environment (please complete the following information):**
- Device type: Desktop
- OS: Manjaro
- Mycroft-core version: 20.08.0
**Additional context**
I saw the same problem with the AUR package and the official Manjaro package, and reported the error there as well. Since it also happens with the git installation method, I am reporting it here too.
"bug"
] | 1Maxnet1 | 16 |
scikit-hep/awkward | numpy | 2,859 | GPU Tests Failed | The GPU tests failed for commit with the following pytest output:
```
``` | closed | 2023-11-30T07:02:11Z | 2023-11-30T12:59:36Z | https://github.com/scikit-hep/awkward/issues/2859 | [] | agoose77 | 0 |
pydantic/logfire | pydantic | 652 | Connecting Alternate Backend to GCP Metrics/Traces | ### Question
In the documentation below,
https://logfire.pydantic.dev/docs/guides/advanced/alternative-backends/#other-environment-variables
it is mentioned that if OTEL_TRACES_EXPORTER and/or OTEL_METRICS_EXPORTER is configured, Logfire can work with alternative backends.
Can I connect it in the same way to GCP Cloud Monitoring (Metrics & Traces)?
Can you provide an Example? | open | 2024-12-06T13:50:16Z | 2024-12-24T08:39:15Z | https://github.com/pydantic/logfire/issues/652 | [
"Question"
] | sandeep540 | 6 |
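One hedged sketch of a possible setup for the question above (an assumption, not something taken from the Logfire docs): since these are the standard OpenTelemetry SDK environment variables, you can point the OTLP exporters at an OpenTelemetry Collector and let the collector-contrib `googlecloud` exporter forward the data on to Cloud Trace / Cloud Monitoring. The endpoint below is a placeholder for your collector's address:

```python
import os

# Standard OpenTelemetry environment variables (from the OTel spec, not
# Logfire-specific). The endpoint is a placeholder for your own
# OpenTelemetry Collector, which would run the contrib `googlecloud`
# exporter to forward traces and metrics to GCP.
os.environ["OTEL_TRACES_EXPORTER"] = "otlp"
os.environ["OTEL_METRICS_EXPORTER"] = "otlp"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"  # assumed collector address
os.environ["OTEL_EXPORTER_OTLP_PROTOCOL"] = "http/protobuf"
```

In practice these would be set in the shell or deployment config before the application starts, so the SDK picks them up at initialization.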
mitmproxy/pdoc | api | 35 | Parsing Epytext | Sorry for my ignorance. Is there a way of 'forcing' pdoc to parse docstrings in Epytext format, as in the example below:
``` python
def load_config(filename, option):
    """
    Loads and tests input parameters.

    @param filename: input filename
    @param option: option name
    @return: returns a valid config value
    """
```
Thanks
| closed | 2015-02-27T17:10:11Z | 2018-06-03T03:15:23Z | https://github.com/mitmproxy/pdoc/issues/35 | [] | biomadeira | 5 |
tortoise/tortoise-orm | asyncio | 1,551 | Optional parameter in pydantic_model_creator does not work after upgrading to pydantic v2 | **Describe the bug**
- tortoise-orm = 0.20.0
- pydantic==2.5.3
- pydantic-core==2.14.6
升级至 pydantic v2 后 使用 pydantic_model_creator 创建 pydantic 模型时,pydantic_model_creator(optional=(xxx))不生效,字段 仍为必须填写
After upgrading to pydantic v2, when using pydantic_model_creator to create pydantic model, pydantic_model_creator(optional=(xxx)) does not take effect, and fields are still required.
**To Reproduce**
```python
# Imports assumed for this snippet (the original omitted them):
from tortoise import fields
from tortoise.contrib.pydantic import pydantic_model_creator
from tortoise.models import Model as BaseModel  # assumed base model class


class AuthUsers(BaseModel):
    username = fields.CharField(max_length=32, unique=True)
    password = fields.CharField(max_length=128)
    nickname = fields.CharField(max_length=32)
    phone = fields.CharField(null=True, max_length=20, unique=True)
    email = fields.CharField(max_length=128, unique=True)

    class Meta:
        table = "auth_users"
        indexes = ("username", "user_status")


class UserUpdateRequest(pydantic_model_creator(
    cls=AuthUsers,
    name="UserUpdateRequest",
    exclude=("username", "password",),
    exclude_readonly=True,
    optional=("nickname", "email",)
)):
    pass
```
<img width="533" alt="image" src="https://github.com/tortoise/tortoise-orm/assets/106720683/0fd72561-d4ab-4f15-a7e5-94337c341563">
**Expected behavior**
在UserUpdateRequest模型中,nickname和email应该为可选的,但实际为必填参数。在pydantic v1和tortoise-orm 0.19.3中是正常工作的
In the UserUpdateRequest model, nickname and email should be optional, but are actually required parameters. This is working fine in pydantic v1 and tortoise-orm 0.19.3
**Additional context**
我已经下载develop分支中的最新源码,仍然存在此问题,我在/tortoise/contrib/pydantic/creator.py文件中看到以下代码,当我添加了json_schema_extra["nullable"] = True时,工作正常
I have downloaded the latest source code in the develop branch and still have this problem, I see the following code in the /tortoise/contrib/pydantic/creator.py file and when I add json_schema_extra["nullable"] = True, it works fine
<img width="781" alt="image" src="https://github.com/tortoise/tortoise-orm/assets/106720683/39aefa97-91dc-4f36-9768-283846b1eca4">
<img width="537" alt="image" src="https://github.com/tortoise/tortoise-orm/assets/106720683/3ea6485b-10c8-45f7-9eff-c25a3e2837c3">
我认为是pydantic v2迁移指南中描述的一些更改引起的 [Pydantic 2.0 Migration Guide](https://docs.pydantic.dev/dev/migration/#required-optional-and-nullable-fields)
I think it's caused by some changes described in the pydantic v2 migration guide [Pydantic 2.0 Migration Guide](https://docs.pydantic.dev/dev/migration/#required-optional-and-nullable-fields)
| closed | 2024-01-25T09:48:13Z | 2024-05-24T07:23:25Z | https://github.com/tortoise/tortoise-orm/issues/1551 | [] | cary997 | 2 |
pyeve/eve | flask | 1,496 | Replace vs Merge Update on PATCH | ### Feature Request
Provide a header that allows a PATCH request to replace a key's value, instead of using PUT — PUT replaces fields I want to
preserve and leave unchangeable.
(Unless this can be achieved in another way.)
### Expected Behavior
```python
# Create a record
POST /api/profiles
{
    'name': 'Test',
    'fields': {
        'one': 1,
        'two': 2
    }
}
# => { _created: 'blah', _id: '123456' }

# then update fields with a PATCH request
PATCH /api/profiles/123456
{
    'fields': {
        'three': 3,
        'four': 4
    }
}

# then get the updated record
GET /api/profiles/123456

# RESPONSE
{
    '_id': '123456',
    'name': 'Test',
    'fields': {
        'three': 3,
        'four': 4
    }
}
```
### Actual Behavior
```python
# Create a record
POST /api/profiles
{
    'name': 'Test',
    'fields': {
        'one': 1,
        'two': 2
    }
}
# => { _created: 'blah', _id: '123456' }

# then update fields with a PATCH request
PATCH /api/profiles/123456
{
    'fields': {
        'three': 3,
        'four': 4
    }
}

# then get the updated record
GET /api/profiles/123456

# RESPONSE
{
    '_id': '123456',
    'name': 'Test',
    'fields': {
        'one': 1,
        'two': 2,
        'three': 3,
        'four': 4
    }
}
```
### Environment
* Python version: 3.10
* Eve version: 2.0
| open | 2023-01-26T21:00:40Z | 2023-01-26T21:00:40Z | https://github.com/pyeve/eve/issues/1496 | [] | ghost | 0 |
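The two behaviors contrasted in the issue above can be sketched in a few lines (helper names are illustrative, not Eve's API): a PATCH-style deep merge recurses into nested dicts, while the requested replace semantics overwrites the key wholesale.

```python
def deep_merge(original, patch):
    """PATCH-style merge: recurse into nested dicts, overwrite everything else."""
    merged = dict(original)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


def shallow_replace(original, patch):
    """Requested behavior: the patched key's value replaces the old one wholesale."""
    return {**original, **patch}


doc = {'name': 'Test', 'fields': {'one': 1, 'two': 2}}
patch = {'fields': {'three': 3, 'four': 4}}

print(deep_merge(doc, patch)['fields'])       # {'one': 1, 'two': 2, 'three': 3, 'four': 4}
print(shallow_replace(doc, patch)['fields'])  # {'three': 3, 'four': 4}
```

The first result matches the "Actual Behavior" above; the second matches the "Expected Behavior" the request asks a header to opt into.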
long2ice/fastapi-cache | fastapi | 30 | Include dependencies with PyPi installation | Should not `aioredis`, `memcache`, and `redis` come with the installation of this package, as they are requirements?
Regarding `redis` vs `memcache` and PyPI, this issue is related: #29 | closed | 2021-07-30T23:57:08Z | 2023-05-14T22:13:48Z | https://github.com/long2ice/fastapi-cache/issues/30 | [] | joeflack4 | 1 |
huggingface/transformers | python | 36,414 | Downloading models in distributed training | When I run distributed training, if the model is not already downloaded locally on disk, different ranks start fighting for the download and they crash.
I am looking for a fix such that:
1. If the model is not yet downloaded on disk, only one rank downloads it. The rest of the ranks are waiting until the file is downloaded
2. If the model is already on disk, all ranks load it simultaneously, no waiting for each other
3. The solution is universal. In other worlds, I still instantiate the model via `AutoModel` instead of with some wrapper function and I don't write a bunch of if-else statements every time I need to create a model
I wasn't able to find something that can achieve this right now. I guess a very simple solution could be adding lock files when downloading a model such that other ranks wait until the completion of the download and then use the downloaded files directly | closed | 2025-02-26T09:28:06Z | 2025-03-11T22:08:10Z | https://github.com/huggingface/transformers/issues/36414 | [] | nikonikolov | 3 |
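The lock-file idea mentioned in the issue above can be sketched with the standard library alone (all names here are illustrative — `download_once` is not a transformers API): one process wins an atomic lock-file race, performs the download, and writes a done-marker that every other process waits for.

```python
import os
import time


def download_once(cache_path, download_fn, timeout=600.0, poll=0.5):
    """Run download_fn exactly once across competing processes.

    The first process to create the lock file performs the download and
    then writes a '.done' marker; everyone else waits for the marker.
    """
    done_marker = cache_path + ".done"
    lock_path = cache_path + ".lock"
    if os.path.exists(done_marker):
        return cache_path  # already downloaded: load immediately, no waiting
    try:
        # O_EXCL makes creation atomic: exactly one process succeeds.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        fd = None
    if fd is not None:  # we won the race: do the download
        try:
            download_fn(cache_path)
            with open(done_marker, "w") as marker:
                marker.write("ok")
        finally:
            os.close(fd)
            os.remove(lock_path)
    else:  # another process is downloading: wait for its marker
        deadline = time.monotonic() + timeout
        while not os.path.exists(done_marker):
            if time.monotonic() > deadline:
                raise TimeoutError("timed out waiting for download")
            time.sleep(poll)
    return cache_path
```

Each rank would then call its usual `AutoModel.from_pretrained(...)` only after `download_once` returns; in a `torch.distributed` job, the same effect is often achieved by letting rank 0 download and having the other ranks wait at a barrier.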
huggingface/text-generation-inference | nlp | 2,324 | 墙内用户如何不使用梯子运行docker容器.Chinese mainland users must use a proxy software when running Docker. How can they avoid using a proxy software? | ### Feature request
我可以确认在huggingface上下载的模型权重文件没有问题,运行docker时还是需要翻墙才能正常运行。我猜测是代码中需要对文件进行检查,是否可以设置一个参数避免进行检查呢?
I can confirm that the weight files of the neural network models downloaded from the Hugging Face website are correct, but the container still needs a proxy software (to bypass the firewall) to run normally. I suspect that the code needs to check the files, and whether it is possible to set a parameter to avoid the check?
### Motivation
我下载了huggingface上的模型权重在本地,通过docker运行时还是需要开启梯子才能正常运行。我需要如何设置参数才能不用梯子呢?
I downloaded the weight parameters of the neural network model from the Hugging Face website and saved them on my local computer. However, when running the Docker container, I encountered an error related to network connection. After enabling a proxy software, I was able to successfully run the container.
docker run --gpus all -p8080:80 -v $HOME/.cache/huggingface/hub:/data ghcr.io/huggingface/text-generation-inference:2.0.4 --model-id lllyasviel/omost-llama-3-8b --max-total-tokens 9216 --cuda-memory-fraction 0.5



### Your contribution
None | closed | 2024-07-29T07:55:13Z | 2024-07-29T09:15:56Z | https://github.com/huggingface/text-generation-inference/issues/2324 | [] | zk19971101 | 5 |
unionai-oss/pandera | pandas | 851 | PyArrow as optional dependency | **Is your feature request related to a problem? Please describe.**
The PyArrow package contains some very large libraries (e.g., `libarrow.so` (50MB) and `libarrow_flight.so` (14M)). This makes it very hard to use the Pandera package in a serverless environment, since packages have strict size limits and PyArrow is required. Hence, the issue is that Pandera is practically unusable for AWS Lambda.
**Describe the solution you'd like**
It seems that PyArrow is not really part of the core of Pandera. Therefore, I would like to suggest to make pyarrow an optional dependency to allow Pandera to be used in environment with strict size constraints.
**Describe alternatives you've considered**
Not applicable.
**Additional context**
Not applicable.
| open | 2022-05-10T11:47:22Z | 2022-07-14T10:21:29Z | https://github.com/unionai-oss/pandera/issues/851 | [
"enhancement"
] | markkvdb | 6 |
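For illustration, the usual pattern for making a heavy dependency optional (a generic sketch, not Pandera's actual code) is an import guard plus a lazy, loud failure at the point of use:

```python
try:
    import pyarrow  # heavy optional dependency
    HAS_PYARROW = True
except ImportError:
    pyarrow = None
    HAS_PYARROW = False


def to_arrow_table(df):
    """Feature that needs pyarrow; fails lazily and loudly if it's absent."""
    if not HAS_PYARROW:
        raise ImportError(
            "pyarrow is required for this feature; "
            "install it with `pip install pyarrow`"
        )
    return pyarrow.Table.from_pandas(df)
```

With this structure, users who never touch the pyarrow-backed features never pay the install-size cost, which is exactly what a size-constrained Lambda deployment needs.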
jupyterlab/jupyter-ai | jupyter | 292 | Allow /generate to accept a schema or template | ### Problem
Right now, users have no control over the "structure" of notebooks generated via `/generate`.
### Proposed Solution
Offer some way for users to indicate to `/generate` the schema/template of the notebook. The specifics of how this may be implemented remain open to discussion. | open | 2023-07-24T16:31:41Z | 2024-10-23T22:09:19Z | https://github.com/jupyterlab/jupyter-ai/issues/292 | [
"enhancement",
"scope:generate"
] | dlqqq | 1 |
chainer/chainer | numpy | 8,545 | Incompatible version is released to Python 2 | `chainer>=7.0.0` releases made before #8517 are still published for Python 2, even though they don't support Python 2.
Because of this, we cannot simply use `pip install chainer` on Python 2. Instead, we always have to pin the install version, like `pip install "chainer<7.0.0"`.
```
$ pip install --user chainer --no-cache-dir -U
/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/__init__.py:83: RequestsDependencyWarning: Old version of cryptography ([1, 2, 3]) may cause slowdown.
warnings.warn(warning, RequestsDependencyWarning)
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Collecting chainer
Downloading https://files.pythonhosted.org/packages/a8/ba/32b704e077cb24b4d85260512a5af903e772f06fb58e716301dd51758869/chainer-7.0.0.tar.gz (1.0MB)
|████████████████████████████████| 1.0MB 8.1MB/s
Requirement already satisfied, skipping upgrade: setuptools in /usr/lib/python2.7/dist-packages (from chainer) (20.7.0)
Requirement already satisfied, skipping upgrade: typing_extensions in /usr/local/lib/python2.7/dist-packages (from chainer) (3.6.6)
Requirement already satisfied, skipping upgrade: filelock in /usr/local/lib/python2.7/dist-packages (from chainer) (3.0.12)
Requirement already satisfied, skipping upgrade: numpy>=1.9.0 in /usr/lib/python2.7/dist-packages (from chainer) (1.11.0)
Requirement already satisfied, skipping upgrade: protobuf>=3.0.0 in /usr/local/lib/python2.7/dist-packages (from chainer) (3.7.1)
Requirement already satisfied, skipping upgrade: six>=1.9.0 in /usr/local/lib/python2.7/dist-packages (from chainer) (1.12.0)
Requirement already satisfied, skipping upgrade: typing>=3.6.2 in /usr/local/lib/python2.7/dist-packages (from typing_extensions->chainer) (3.6.6)
Building wheels for collected packages: chainer
Building wheel for chainer (setup.py) ... done
Created wheel for chainer: filename=chainer-7.0.0-cp27-none-any.whl size=966689 sha256=5d7d792512b88770a53c52e193fd05d6d1e3d978e4c7e8f6dcd1abc6980ea5ed
Stored in directory: /tmp/pip-ephem-wheel-cache-MwgxDb/wheels/42/ab/c8/d723d9d7a08b5649c7343f113e74c729d4a1bd5d96e349294b
Successfully built chainer
Installing collected packages: chainer
Successfully installed chainer-7.0.0
WARNING: You are using pip version 19.2.3, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
$ python -c 'import chainer'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/pazeshun/.local/lib/python2.7/site-packages/chainer/__init__.py", line 10, in <module>
from chainer import backends # NOQA
File "/home/pazeshun/.local/lib/python2.7/site-packages/chainer/backends/__init__.py", line 1, in <module>
from chainer.backends import cuda # NOQA
File "/home/pazeshun/.local/lib/python2.7/site-packages/chainer/backends/cuda.py", line 77
def shape(self) -> types.Shape:
^
SyntaxError: invalid syntax
```
This situation easily leads to mistakenly installing incompatible versions.
In addition, this is a problem when using ROS.
In ROS, we want to resolve all dependencies with the `rosdep install` command, but its philosophy is like `apt`'s and it doesn't have a way to specify an install version.
http://wiki.ros.org/ROS/Tutorials/rosdep
(We are now working on making exceptions for Python packages in that philosophy, but the discussion isn't advancing:
https://github.com/ros-infrastructure/rosdep/pull/694)
I know released versions cannot be overwritten,
https://stackoverflow.com/questions/21064581/how-to-overwrite-pypi-package-when-doing-upload-from-command-line
so the solutions I know of are the following:
- Release Python 2 compatible version to `chainer>=7.0.0` (very strange solution)
- Remove `chainer>=7.0.0` released before #8517 (very drastic solution)
I know this is a difficult problem, but I hope it gets solved.
cf. https://github.com/chainer/chainer/pull/8517#issuecomment-576128957 | closed | 2020-02-11T03:00:41Z | 2021-01-13T11:37:38Z | https://github.com/chainer/chainer/issues/8545 | [
"stale",
"issue-checked"
] | pazeshun | 11 |
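For reference, the metadata fix that #8517 (mentioned in the issue above) applied is the `python_requires` argument in `setup.py`; a minimal sketch, with placeholder package name and version:

```python
# Sketch of the relevant portion of a setup.py (names are placeholders):
SETUP_KWARGS = dict(
    name="example-package",
    version="7.0.0",
    # pip >= 9 refuses releases whose python_requires excludes the
    # running interpreter, so Python 2 users automatically get the
    # last compatible release instead of a broken one.
    python_requires=">=3.5.0",
)
# A real setup.py would end with:
#   from setuptools import setup
#   setup(**SETUP_KWARGS)
```

This is why the fix only helps releases made *after* it: older uploads keep whatever metadata they shipped with.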
davidteather/TikTok-Api | api | 227 | [BUG] - Putting Spanish lang code gives me Russian hashtags and Arabic trendings | **Describe the bug**
Passing the Spanish lang code gives me Russian hashtags when using the get-trending-hashtags function. If I use the get-trending function, I get Arabic videos.

**The buggy code**
```python
@app.route('/get_tiktoks_by_hashtag')
def get_tiktoks_by_hashtag():
    hashtag = request.args.get('hashtag')
    results = 10 if request.args.get('limit') is None else int(request.args.get('limit'))
    lang = 'en' if request.args.get('lang') is None else request.args.get('lang')
    print("Getting {0} hashtag in {1} language".format(hashtag, lang))
    try:
        tiktoks = api.byHashtag(hashtag, language=lang, count=results)
    except BadStatusLine as e:
        print(str(e))
        time.sleep(30)
    return jsonify({"hashtag_tiktok": tiktoks})
```
**Expected behavior**
When I access TikTok from my country, Spain, I get this as trending.

**Desktop (please complete the following information):**
- OS: [e.g. Windows 10] Debian 9
- TikTokApi Version [e.g. 3.3.1] - if out of date upgrade before posting an issue 3.4.3
| closed | 2020-08-20T11:29:24Z | 2020-08-20T16:56:26Z | https://github.com/davidteather/TikTok-Api/issues/227 | [
"bug"
] | elblogbruno | 4 |
jupyter-incubator/sparkmagic | jupyter | 760 | [BUG] Running sparkmagic notebook in sagemaker lifecycle script | **Describe the bug**
Through sagemaker notebooks I am trying to run a sparkmagic notebook (to talk to an emr) via nbconvert inside of the lifecycle script that runs during start up. It looks like it isn't picking up the config file.
If I wait till after sagemaker has started it all connects and works fine.
I know this sounds like a sagemaker issue, but aws aren't being any help so was hoping someone has an idea here.
**To Reproduce**
Run a python script in sagemaker lifecycle that uses nbconvert to run a sparkmagic notebook that tries to talk to a emr cluster
| open | 2022-05-03T11:16:49Z | 2022-05-06T13:30:47Z | https://github.com/jupyter-incubator/sparkmagic/issues/760 | [] | byteford | 4 |
Textualize/rich | python | 2,827 | [BUG] rich progress bar display problem | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
rich print broken in some terminal, but i don`t know why.

**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
I may ask you to copy and paste the output of the following commands. It may save some time if you do it now.
If you're using Rich in a terminal:
```
python -m rich.diagnose
pip freeze | grep rich
```
If you're using Rich in a Jupyter Notebook, run the following snippet in a cell
and paste the output in your bug report.
```python
from rich.diagnose import report
report()
```
</details>
| closed | 2023-02-23T12:35:54Z | 2023-03-04T15:01:16Z | https://github.com/Textualize/rich/issues/2827 | [
"more information needed"
] | shimbay | 4 |
yt-dlp/yt-dlp | python | 12,546 | embed online subtitles directly ? | ### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
### Please make sure the question is worded well enough to be understood
For instance:
```
$ yt-dlp "~~~https://source-video-stream~~~.m3u8" --embed-subs "https://~~~source-sub-file~~~.vtt" -o "destination.mkv"
```
outputs
```
ERROR: Fixed output name but more than one file to download: destination.mkv
```
The video itself doesn't have subtitles. But the subtitle is a different stream in the form `.srt`, `.ass` or `.vtt`. Is there a way to directly embed online subtitles instead of downloading video stream and subtitle separately and merging through ffmpeg later ?
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
``` | closed | 2025-03-06T08:49:53Z | 2025-03-06T10:06:35Z | https://github.com/yt-dlp/yt-dlp/issues/12546 | [
"question",
"piracy/illegal"
] | veganomy | 8 |
unit8co/darts | data-science | 2,403 | Enhance integration of Global and Local models. | **Is your feature request related to a current problem? Please describe.**
When using a mixture of local and global models, the user needs to distinguish the model types.
Here's a list of practical examples:
- When calling the `fit` method, local models don't support lists of TimeSeries.
- Ensembles support a mixture of local and global models when calling the `historical_forecasts` method, but not when calling the `fit` method.
- It's not clear if global models are effectively trained on multiple time-series when using the `historical_forecasts` method, especially when using ensembles.
**Describe proposed solution**
- Add support for multiple time-series on local models. Under the hood, independent models should be trained.
- Allow to `fit` and `predict` ensembles of mixtures of local/global models.
- Provide a single interface wrapper to call `fit`, `predict`, and `historical_forecasts` on any kind of model. Under the hood, the interface should assign the correct args to fit each model, and raise an error if some args are missing, and possibly raise a warning if some args are unused.
- Bonus: it would be cool to have a factory that receives the name of the model and a serializable dict of args to create instances of models, or even ensembles.
| open | 2024-06-05T09:43:32Z | 2024-06-10T14:52:16Z | https://github.com/unit8co/darts/issues/2403 | [
"triage"
] | davide-burba | 2 |
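The factory mentioned in the last bullet of the issue above could be sketched like this (the registry and names are illustrative, not a Darts API):

```python
MODEL_REGISTRY = {}


def register_model(name):
    """Class decorator that adds a model class to the registry."""
    def decorator(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator


def create_model(name, params):
    """Build a model from its registered name and a serializable dict of args."""
    try:
        cls = MODEL_REGISTRY[name]
    except KeyError:
        raise ValueError(f"unknown model {name!r}; known: {sorted(MODEL_REGISTRY)}")
    return cls(**params)


@register_model("naive_mean")
class NaiveMean:
    """Toy local model used only to demonstrate the factory."""
    def __init__(self, window=3):
        self.window = window
```

Because `params` is a plain dict, the same pair `(name, params)` can be round-tripped through JSON/YAML config files, which is the serialization property the request asks for.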
recommenders-team/recommenders | data-science | 1,933 | [BUG] Issue with AzureML machines in tests. Conflict of Cornac with NumPy | ### Description
<!--- Describe your issue/bug/request in detail -->
Machines are not starting, so no tests are being triggered.
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
AzureML
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
by triggering the tests
```
"error": ***
"message": "Activity Failed:\n***\n \"error\": ***\n \"code\": \"UserError\",\n \"message\": \"Image build failed. For more details, check log file azureml-logs/20_image_build_log.txt.\",\n \"messageFormat\": \"Image build failed. For more details, check log file ***ArtifactPath***.\",\n \"messageParameters\": ***\n \"ArtifactPath\": \"azureml-logs/20_image_build_log.txt\"\n ***,\n \"details\": [],\n \"innerError\": ***\n \"code\": \"BadArgument\",\n \"innerError\": ***\n \"code\": \"ImageBuildFailure\"\n ***\n ***\n ***,\n \"correlation\": ***\n \"operation\": \"9e89362ac8454ae436aebd9cdc824dc8\",\n \"request\": \"1417867f9fdf05be\"\n ***,\n \"environment\": \"eastus\",\n \"location\": \"eastus\",\n \"time\": \"2023-06-01T00:18:06.477534Z\",\n \"componentName\": \"RunHistory\"\n***"
```
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
See logs: https://github.com/microsoft/recommenders/actions/runs/5138848873/jobs/9248618895
| closed | 2023-06-01T15:39:21Z | 2023-06-08T10:41:48Z | https://github.com/recommenders-team/recommenders/issues/1933 | [
"bug"
] | miguelgfierro | 5 |
FujiwaraChoki/MoneyPrinterV2 | automation | 65 | I optimized a version that supports Chinese(中文) and made a lot of optimizations | # I am very grateful for the MoneyPrinter project.
I found that its support for **Chinese** was not very good.
So I did a refactor and optimization to make it support **both Chinese and English** well.
It supports multiple Chinese and English speech-synthesis voices, and the subtitle rendering has been improved.
[https://github.com/harry0703/MoneyPrinterTurbo](https://github.com/harry0703/MoneyPrinterTurbo)
我优化了一个版本,支持了中文,并且做了大量的优化
非常感谢 MoneyPrinter这个项目,我发现对中文的支持不太好。
于是我做了重构和优化,使其对中英文都可以很好的支持,支持多种中英文语音合成,而且字幕的效果更好了。
## Feature Highlights 🎯
- [x] Fully implemented MVC architecture, offering clear code structure, ease of maintenance, and support for both API and web interfaces.
- [x] Supports multiple high-definition video resolutions:
- [x] Portrait mode: 9:16, 1080x1920
- [x] Landscape mode: 16:9, 1920x1080
- [x] Multilingual video script support for both Chinese and English.
- [x] Advanced voice synthesis capabilities.
- [x] Subtitle generation support, allowing customization of fonts, colors, and sizes, including outline settings for subtitles.
- [x] Background music support, with options for random selection or specifying music files.
## 功能特性 🎯
- [x] 完整的 **MVC架构**,代码 **结构清晰**,易于维护,支持API和Web界面
- [x] 支持多种 **高清视频** 尺寸
- [x] 竖屏 9:16,`1080x1920`
- [x] 横屏 16:9,`1920x1080`
- [x] 支持 **中文** 和 **英文** 视频文案
- [x] 支持 **多种语音** 合成
- [x] 支持 **字幕生成**,可以调整字体、颜色、大小,同时支持字幕描边设置
- [x] 支持 **背景音乐**,随机或者指定音乐文件 | closed | 2024-03-13T02:23:03Z | 2024-04-28T09:25:43Z | https://github.com/FujiwaraChoki/MoneyPrinterV2/issues/65 | [] | harry0703 | 1 |
collerek/ormar | sqlalchemy | 529 | JSON field isnull filter | Using a nullable JSON field and filtering with isnull produces unexpected results. Is the JSON intended to be treated differently when it comes to nullness?
**To reproduce and expected behavior:**
```python
import asyncio

import databases
import ormar
import sqlalchemy

DATABASE_URL = "sqlite:///db.sqlite"
database = databases.Database(DATABASE_URL)
metadata = sqlalchemy.MetaData()


class Author(ormar.Model):
    class Meta(ormar.ModelMeta):
        metadata = metadata
        database = database
        tablename = "authors"

    id = ormar.Integer(primary_key=True)
    text_field = ormar.Text(nullable=True)
    json_field = ormar.JSON(nullable=True)


engine = sqlalchemy.create_engine(DATABASE_URL)
metadata.drop_all(engine)
metadata.create_all(engine)


async def test():
    async with database:
        author = await Author.objects.create()
        assert author.json_field is None

        non_null_text_fields = await Author.objects.all(text_field__isnull=False)
        assert len(non_null_text_fields) == 0

        non_null_json_fields = await Author.objects.all(json_field__isnull=False)
        assert len(non_null_json_fields) == 0  # Fails


asyncio.run(test())
``` | closed | 2022-01-15T08:20:05Z | 2022-02-25T11:19:46Z | https://github.com/collerek/ormar/issues/529 | [
"bug"
] | vekkuli | 1 |
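One plausible mechanism for the behavior reported above (an assumption about the cause, not confirmed from ormar's source): if the JSON field serializes Python `None` to the JSON *text* `'null'` before storage, the column holds a four-character string rather than SQL NULL, so `isnull` filters never match. The effect is easy to demonstrate with plain `sqlite3`:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, json_field TEXT)")

# json.dumps(None) is the four-character string 'null' -- not SQL NULL:
conn.execute("INSERT INTO authors (json_field) VALUES (?)", (json.dumps(None),))

(count,) = conn.execute(
    "SELECT COUNT(*) FROM authors WHERE json_field IS NOT NULL"
).fetchone()
print(count)  # 1 -- the 'null'-string row is counted as non-null
```

If this is indeed the cause, a fix would need to store SQL NULL (skip JSON-encoding) when the Python value is `None`.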
coqui-ai/TTS | deep-learning | 3,131 | [Bug] Error occurs when resuming training a xtts model | ### Describe the bug
An error occurs when I try to resume training an XTTS model. Details are described below.
### To Reproduce
First, I train an XTTS model using the official script:
```
cd TTS/recipes/ljspeech/xtts_v1
CUDA_VISIBLE_DEVICES="0" python train_gpt_xtts.py
```
Then the training is interrupted partway through, so I try to resume it using:
```
CUDA_VISIBLE_DEVICES="0" python train_gpt_xtts.py --continue_path run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_08+42AM-0000000
```
It failed to resume the training process.
### Expected behavior
_No response_
### Logs
```shell
>> DVAE weights restored from: /nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/XTTS_v1.1_original_model_files/dvae.pth
| > Found 13100 files in /nfs2/speech/data/tts/Datasets/LJSpeech-1.1
fatal: detected dubious ownership in repository at '/nfs2/yi.liu/src/TTS'
To add an exception for this directory, call:
git config --global --add safe.directory /nfs2/yi.liu/src/TTS
> Training Environment:
| > Backend: Torch
| > Mixed precision: False
| > Precision: float32
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 64
| > Num. of Torch Threads: 1
| > Torch seed: 1
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
> Start Tensorboard: tensorboard --logdir=run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_08+42AM-0000000/
> Restoring from checkpoint_1973.pth ...
> Restoring Model...
> Restoring Optimizer...
> Model restored from step 1973
> Model has 543985103 parameters
> Restoring best loss from best_model_1622.pth ...
--- Logging error ---
Traceback (most recent call last):
File "/root/miniconda3/envs/xtts/lib/python3.9/logging/__init__.py", line 1083, in emit
msg = self.format(record)
File "/root/miniconda3/envs/xtts/lib/python3.9/logging/__init__.py", line 927, in format
return fmt.format(record)
File "/root/miniconda3/envs/xtts/lib/python3.9/logging/__init__.py", line 663, in format
record.message = record.getMessage()
File "/root/miniconda3/envs/xtts/lib/python3.9/logging/__init__.py", line 367, in getMessage
msg = msg % self.args
TypeError: must be real number, not dict
Call stack:
File "/nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py", line 182, in <module>
main()
File "/nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py", line 178, in main
trainer.fit()
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1808, in fit
self._fit()
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1746, in _fit
self._restore_best_loss()
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1710, in _restore_best_loss
logger.info(" > Starting with loaded last best loss %f", self.best_loss)
Message: ' > Starting with loaded last best loss %f'
Arguments: {'train_loss': 0.03659261970647744, 'eval_loss': None}
--- Logging error ---
Traceback (most recent call last):
File "/root/miniconda3/envs/xtts/lib/python3.9/logging/__init__.py", line 1083, in emit
msg = self.format(record)
File "/root/miniconda3/envs/xtts/lib/python3.9/logging/__init__.py", line 927, in format
return fmt.format(record)
File "/root/miniconda3/envs/xtts/lib/python3.9/logging/__init__.py", line 663, in format
record.message = record.getMessage()
File "/root/miniconda3/envs/xtts/lib/python3.9/logging/__init__.py", line 367, in getMessage
msg = msg % self.args
TypeError: must be real number, not dict
Call stack:
File "/nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py", line 182, in <module>
main()
File "/nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py", line 178, in main
trainer.fit()
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1808, in fit
self._fit()
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1746, in _fit
self._restore_best_loss()
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1710, in _restore_best_loss
logger.info(" > Starting with loaded last best loss %f", self.best_loss)
Message: ' > Starting with loaded last best loss %f'
Arguments: {'train_loss': 0.03659261970647744, 'eval_loss': None}
> EPOCH: 0/1000
--> run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_08+42AM-0000000/
! Run is kept in run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_08+42AM-0000000/
Traceback (most recent call last):
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1808, in fit
self._fit()
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1762, in _fit
self.eval_epoch()
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1610, in eval_epoch
self.get_eval_dataloader(
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 976, in get_eval_dataloader
return self._get_loader(
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 895, in _get_loader
loader = model.get_data_loader(
File "/nfs2/yi.liu/src/TTS/TTS/tts/layers/xtts/trainer/gpt_trainer.py", line 337, in get_data_loader
dataset = XTTSDataset(self.config, samples, self.xtts.tokenizer, config.audio.sample_rate, is_eval)
File "/nfs2/yi.liu/src/TTS/TTS/tts/layers/xtts/trainer/dataset.py", line 83, in __init__
self.debug_failures = model_args.debug_loading_failures
File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/coqpit/coqpit.py", line 626, in __getattribute__
value = super().__getattribute__(arg)
AttributeError: 'XttsArgs' object has no attribute 'debug_loading_failures'
```
### Environment
```shell
TTS: v0.19.1
pytorch: 2.0.1+cu117
python: 3.9.18
```
### Additional context
Please inform me if any other information is needed. | closed | 2023-11-01T10:04:46Z | 2024-01-26T11:33:37Z | https://github.com/coqui-ai/TTS/issues/3131 | [
"bug"
] | yiliu-mt | 7 |
iperov/DeepFaceLive | machine-learning | 169 | Camera Input Problem | Hello iperov,
Thanks in advance, but I have a problem. I'm using a Logitech C922 Pro webcam, an RTX 4090 graphics card, 64 GB RAM, an Intel i7-12700F, Windows 11, and your NVIDIA DFL build. I can't choose anything better than 720x480 for the camera input, and it only gives 30 FPS at 720x480, so the final output is 30 FPS too. But my webcam supports 1080p and 60 FPS. I tried updating the DirectShow and Media Foundation drivers, but it didn't change anything. I tried to install the GStreamer driver but couldn't. So I'm stuck at 720x480 and 30 FPS. Please help me: how can I set up the input for 1080p and 60 FPS?
strawberry-graphql/strawberry | fastapi | 3,481 | `strawberry.Parent` not supporting forward refs | I would like the strawberry documentation on accessing parent with function resolvers on this [page](https://strawberry.rocks/docs/guides/accessing-parent-data#accessing-parents-data-in-function-resolvers) tweaked to be more clear, or maybe corrected?
From what I understand in the docs, its suggesting you end up with the following. However, this doesn't even run? I have tried swapping the definitions both directions, they have the same issue. I had to resort to the `self` method on a method resolver, which seems less desirable to me since the docs specifically call out that it might not work quite right everywhere.
> and it works like it should in Python, but there might be cases where it doesn’t properly follow Python’s semantics
```python
import strawberry


def get_full_name(parent: strawberry.Parent[User2]) -> str:
    return f"{parent.first_name} {parent.last_name}"


@strawberry.type
class User2:
    first_name: str
    last_name: str
    full_name: str = strawberry.field(resolver=get_full_name)
```
```
Traceback (most recent call last):
File "/Users/.../Library/Application Support/JetBrains/IntelliJIdea2024.1/plugins/python/helpers/pydev/pydevd.py", line 1535, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/.../Library/Application Support/JetBrains/IntelliJIdea2024.1/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/.../src/api/python-graphql-poc/src/main.py", line 21, in <module>
def get_full_name(parent: strawberry.Parent[User2]) -> str:
^^^^^
NameError: name 'User2' is not defined
```
Perhaps there's a quirk in here where the structure of my file is part of the problem since everything is top level? I am using FastAPI, uvicorn and strawberry.
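For what it's worth, the `NameError` at import time is ordinary Python behavior rather than anything strawberry-specific: annotations are evaluated eagerly when the function is defined. Quoting the annotation (or adding `from __future__ import annotations`) defers that evaluation, as this stdlib-only sketch with a stand-in `Parent` shows; whether strawberry then actually resolves the deferred `Parent[User2]` reference is exactly what this issue reports as broken:

```python
class Parent:  # stand-in for strawberry.Parent, purely for illustration
    def __class_getitem__(cls, item):
        return cls

# Quoting the annotation defers its evaluation, so referencing User2 before
# its definition no longer raises NameError at import time.
def get_full_name(parent: "Parent[User2]") -> str:
    return f"{parent.first_name} {parent.last_name}"

class User2:
    first_name = "Ada"
    last_name = "Lovelace"

print(get_full_name(User2))  # Ada Lovelace
```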
"bug"
] | andrewkruse | 7 |
graphdeco-inria/gaussian-splatting | computer-vision | 244 | SIBR compile error in windows: There is no provided GLEW library for your version of MSVC | Hi, I tried to compile the SIBR Viewer in Windows11, but I got error below.
I just installed the MinGW, Cmake and git. Do I need to install the MSVC?
Is there any advice? And is there a more detailed instruction about what to be installed? Thank you!
```
PS C:\Users\17670\Desktop\SIBR_viewers> cmake -Bbuild .
-- Building for: Ninja
-- The C compiler identification is GNU 13.1.0
-- The CXX compiler identification is GNU 13.1.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: F:/msys64/mingw64/bin/cc.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: F:/msys64/mingw64/bin/c++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: F:/msys64/usr/bin/git.exe (found version "2.41.0")
-- Git found: F:/msys64/usr/bin/git.exe
-- SIBR version :
BRANCH
COMMIT_HASH
TAG
VERSION -
-- Install path set to C:/Users/17670/Desktop/SIBR_viewers/install.
Note you can provide default program options for Visual Studio target properties by either setting a value for the cmake cached variable 'SIBR_PROGRAMARGS' or by setting a new environment variable 'SIBR_PROGRAMARGS'
--
****************** Handling core dependencies ******************
-- Found OpenGL: opengl32
There is no provided GLEW library for your version of MSVC
CMake Error at cmake/windows/Win3rdParty.cmake:173 (if):
if given arguments:
"MSVC17" "AND" "w3p_MSVC17" "OR" "EQUAL" "143" "AND" "MSVC17" "STREQUAL" "MSVC17"
Unknown arguments specified
Call Stack (most recent call first):
cmake/windows/sibr_library.cmake:55 (win3rdParty)
cmake/windows/dependencies.cmake:50 (sibr_addlibrary)
cmake/windows/include_once.cmake:20 (include)
src/CMakeLists.txt:46 (include_once)
-- Configuring incomplete, errors occurred!
``` | closed | 2023-09-26T16:50:41Z | 2023-10-10T19:54:21Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/244 | [] | Chuan-10 | 1 |
ultralytics/yolov5 | pytorch | 13,537 | how to set label smoothing in yolov8/yolov11? | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
how to set label smoothing in yolov8/yolov11?
### Additional
_No response_ | open | 2025-03-20T07:50:55Z | 2025-03-21T10:35:15Z | https://github.com/ultralytics/yolov5/issues/13537 | [
"question"
] | xuan-xuan6 | 4 |
strawberry-graphql/strawberry | graphql | 3,130 | Confusing Getting Started Guide | Hi, I'm brand new to Graphql and Strawberry and sorry if this is obvious but I was going through the getting started guide
and starting on Step 3. [Step 3: Define your data set](https://strawberry.rocks/docs#step-3-define-your-data-set) It doesn't specify if you are suppose to start a new file, so I'm assuming you are still talking about the file we created in [Step 2: Define the schema](https://strawberry.rocks/docs#step-2-define-the-schema) - In your favorite editor create a file called schema.py, with the following contents:..., but I know that's probably not correct because then we are defining two classes called Query. So it would be helpful to know if I need to create a new file and then do I need to import that file into schema? Then in step 4. define your resolver. Where does that supposed to go, in the schema file or a new file? I realized that this is probably obvious if you already know some of the basics but maybe you could clarify? Thanks! | closed | 2023-10-02T14:14:08Z | 2025-03-20T15:56:24Z | https://github.com/strawberry-graphql/strawberry/issues/3130 | [] | JacobGoldenArt | 2 |
KaiyangZhou/deep-person-reid | computer-vision | 35 | Performance when training a model on one dataset and testing in another? | Just wondering, has anyone tried? Do you think it would be useful to try it or the results will be awful? | closed | 2018-07-13T00:26:11Z | 2018-08-31T02:43:31Z | https://github.com/KaiyangZhou/deep-person-reid/issues/35 | [] | ortegatron | 3 |
scikit-image/scikit-image | computer-vision | 7,083 | phase_cross_correlation returns tuple instead of np.array when disambiguate=True | ### Description:
According to its documentation, `ski.registration.phase_cross_correlation` returns a numpy array as a shift. However, when using the function with `disambiguate=True`, a tuple is returned.
### Way to reproduce:
```python
import numpy as np
import skimage as ski
im = np.random.randint(0, 100, ((10,10)))
r1 = ski.registration.phase_cross_correlation(im, im)
assert(isinstance(r1[0], np.ndarray)) # r1[0] is a numpy array
r2 = ski.registration.phase_cross_correlation(im, im, disambiguate=True)
assert(isinstance(r2[0], np.ndarray)) # fails: r2[0] is actually a tuple
```
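Until this is fixed upstream, callers that need a consistent type can coerce the returned shift themselves. A small, library-agnostic sketch (the tuples below just mimic the function's `(shift, error, phasediff)` return shape):

```python
import numpy as np

def as_shift_array(result):
    """Coerce the first element of a phase_cross_correlation result to an
    ndarray, whether it came back as a tuple or already as an array."""
    shift = result[0]
    return np.asarray(shift, dtype=float)

# behaves the same for both return shapes
print(as_shift_array(((3.0, -2.0), 0.1, 0.0)))            # [ 3. -2.]
print(as_shift_array((np.array([3.0, -2.0]), 0.1, 0.0)))  # [ 3. -2.]
```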
### Version information:
```Shell
3.10.12 (main, Jul 5 2023, 15:02:25) [Clang 14.0.6 ]
macOS-12.6-arm64-arm-64bit
3.10.12 (main, Jul 5 2023, 15:02:25) [Clang 14.0.6 ]
macOS-12.6-arm64-arm-64bit
scikit-image version: 0.21.0
numpy version: 1.25.2
```
| closed | 2023-08-02T16:40:24Z | 2023-09-20T10:40:30Z | https://github.com/scikit-image/scikit-image/issues/7083 | [
":bug: Bug"
] | m-albert | 5 |
jupyter/docker-stacks | jupyter | 2,043 | Conda environment not fully set in Jupyter | ### What docker image(s) are you using?
minimal-notebook
### Host OS system
CentOS
### Host architecture
x86_64
### What Docker command are you running?
sudo docker run -p8888:8888 --rm docker.io/jupyter/minimal-notebook:latest
(or standard run from JupyterHub)
### How to Reproduce the problem?
Start a Python or C++ kernel, e.g. from a notebook and run the following
command:
```
!env | grep CONDA
```
### Command output
```bash session
CONDA_DIR=/opt/conda
```
### Expected behavior
See all the usual environment variables that are set when a conda
environment is activated; e.g.:
```
CONDA_EXE=/opt/conda/bin/conda
CONDA_PREFIX=/opt/conda
CONDA_PROMPT_MODIFIER=(base)
_CE_CONDA=
CONDA_SHLVL=1
CONDA_DIR=/opt/conda
CONDA_PYTHON_EXE=/opt/conda/bin/python
CONDA_DEFAULT_ENV=base
```
### Actual behavior
If a barebone shell is launched from Jupyter (e.g. through shebang calls in a notebook), the conda environment variables are not set.
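The mechanism is plain environment inheritance rather than anything Jupyter-specific: a `!command` shell gets exactly the kernel process's environment, so whatever was (or wasn't) exported when Jupyter started is what every child shell sees. A stdlib illustration that simulates the variable without needing conda installed:

```python
import os
import subprocess

# Child shells inherit the parent's environment verbatim: if conda activation
# never exported CONDA_* into the kernel, no child shell will see it either.
env = dict(os.environ)
env.pop("CONDA_DEFAULT_ENV", None)   # simulate a kernel started without activation
unset = subprocess.run(["sh", "-c", "echo ${CONDA_DEFAULT_ENV:-unset}"],
                       env=env, capture_output=True, text=True).stdout.strip()

env["CONDA_DEFAULT_ENV"] = "base"    # simulate a kernel launched from an activated env
base = subprocess.run(["sh", "-c", "echo ${CONDA_DEFAULT_ENV:-unset}"],
                      env=env, capture_output=True, text=True).stdout.strip()

print(unset, base)  # unset base
```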
### Anything else?
## Suggestion
Launch Jupyter itself within an activated conda environment. Maybe by using an entry
point such as `conda run jupyter ...`.
For now I am using as workaround:
```
%%bash --login
...
```
but this is tricky to find for end users.
## Use case
In a C++ course, we include calls to compilation commands in the course
narrative to explain how to compile; students also include them in their
presentation materials to run the compilation and execution of their programs.
These compilation commands often need a fully configured conda environment.
### Latest Docker version
- [X] I've updated my Docker version to the latest available, and the issue persists | closed | 2023-11-24T18:03:39Z | 2023-12-04T20:52:41Z | https://github.com/jupyter/docker-stacks/issues/2043 | [
"type:Bug"
] | nthiery | 6 |
nolar/kopf | asyncio | 359 | Helm 3 change triggers create instead of update | > <a href="https://github.com/Carles-Figuerola"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/13749641?v=4"></a> An issue by [Carles-Figuerola](https://github.com/Carles-Figuerola) at _2020-05-07 20:30:30+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/359
>
## Long story short
helm 3 updates trigger `@kopf.on.create` rather than `@kopf.on.update`.
## Description
After a resource has been created using helm3 (haven't tested with helm2), and the definition changes, kopf is triggering the create function rather than update
<details><summary>These are the debug logs from helm</summary>
```
I0507 14:27:07.862467 34642 round_trippers.go:443] GET https://XXXXXXXXXXXXXXXXXXXX.us-west-2.eks.amazonaws.com/apis/my-api-domain/v1/namespaces/default/my-resources/myresource 200 OK in 108 milliseconds
I0507 14:27:07.862500 34642 round_trippers.go:449] Response Headers:
I0507 14:27:07.862510 34642 round_trippers.go:452] Audit-Id: xxxxx-xxxx-xxxx-xxxx-xxxxxx
I0507 14:27:07.862519 34642 round_trippers.go:452] Content-Type: application/json
I0507 14:27:07.862526 34642 round_trippers.go:452] Content-Length: 1615
I0507 14:27:07.862533 34642 round_trippers.go:452] Date: Thu, 07 May 2020 19:27:07 GMT
I0507 14:27:07.863041 34642 request.go:1068] Response Body: {"apiVersion":"my-api-domain/v1","kind":"MyResource","metadata":{"annotations":{"kopf.zalando.org/last-handled-configuration":"{\"spec\": \"content\"}, \"metadata\": {\"labels\": {\"app.kubernetes.io/managed-by\": \"Helm\"}, \"annotations\": {\"meta.helm.sh/release-name\": \"myapp\", \"meta.helm.sh/release-namespace\": \"default\"}}}","meta.helm.sh/release-name":"myapp","meta.helm.sh/release-namespace":"default"},"creationTimestamp":"2020-05-07T19:06:28Z","finalizers":["kopf.zalando.org/KopfFinalizerMarker"],"generation":4,"labels":{"app.kubernetes.io/managed-by":"Helm"},"name":"myresource","namespace":"default","resourceVersion":"36598","selfLink":"/apis/my-api-domain/v1/namespaces/default/my-resources/myresource","uid":"daaf8839-9095-11ea-85ab-024b3556169a"},"spec":"content"}
I0507 14:27:07.863377 34642 request.go:1068] Request Body: {"apiVersion":"my-api-domain/v1","kind":"MyResource","metadata":{"annotations":{"meta.helm.sh/release-name":"myapp","meta.helm.sh/release-namespace":"default"},"labels":{"app.kubernetes.io/managed-by":"Helm"},"name":"myresource","namespace":"default","resourceVersion":"36598"},"spec":"content"}
I0507 14:27:07.863502 34642 round_trippers.go:423] curl -k -v -XPUT -H "Accept: application/json" -H "Content-Type: application/json" 'https://XXXXXXXXXXXXXXXXXXXX.us-west-2.eks.amazonaws.com/apis/my-api-domain/v1/namespaces/default/my-resources/myresource'
I0507 14:27:07.990540 34642 round_trippers.go:443] PUT https://XXXXXXXXXXXXXXXXXXXX.us-west-2.eks.amazonaws.com/apis/my-api-domain/v1/namespaces/default/my-resources/myresource 200 OK in 127 milliseconds
I0507 14:27:07.990576 34642 round_trippers.go:449] Response Headers:
I0507 14:27:07.990584 34642 round_trippers.go:452] Audit-Id: xxxxx-xxxx-xxxx-xxxx-xxxxxx
I0507 14:27:07.990589 34642 round_trippers.go:452] Content-Type: application/json
I0507 14:27:07.990593 34642 round_trippers.go:452] Content-Length: 894
I0507 14:27:07.990597 34642 round_trippers.go:452] Date: Thu, 07 May 2020 19:27:07 GMT
I0507 14:27:07.990862 34642 request.go:1068] Response Body: {"apiVersion":"my-api-domain/v1","kind":"MyResource","metadata":{"annotations":{"meta.helm.sh/release-name":"myapp","meta.helm.sh/release-namespace":"default"},"creationTimestamp":"2020-05-07T19:06:28Z","generation":5,"labels":{"app.kubernetes.io/managed-by":"Helm"},"name":"myresource","namespace":"default","resourceVersion":"39209","selfLink":"/apis/my-api-domain/v1/namespaces/default/my-resources/myresource","uid":"daaf8839-9095-11ea-85ab-024b3556169a"},"spec":"modifiedContent"}
client.go:446: [debug] Replaced "myresource" with kind MyResource for kind MyResource
```
</details>
<details><summary>This is how my operator sees this update:</summary>
```
[2020-05-07 19:36:45,024] myclass [INFO ] This is the on_create function
[2020-05-07 19:36:45,374] kopf.objects [ERROR ] [default/myresource] Handler 'on_create' failed permanently: <redacted> already exists.
[2020-05-07 19:36:45,375] kopf.objects [INFO ] [default/myresource] All handlers succeeded for creation.
```
</details>
<details><summary>However, when I apply the same changes using `kubectl apply -f`, I get the expected result.</summary>
```
[2020-05-07 19:36:45,024] myclass [INFO ] This is the on_update function
[2020-05-07 19:30:14,944] kopf.objects [INFO ] [default/myresource] Handler 'on_update' succeeded.
[2020-05-07 19:30:14,944] kopf.objects [INFO ] [default/myresource] All handlers succeeded for update.
```
</details>
## Environment
* Kopf version: 0.24
* Kubernetes version: v1.14.9-eks-f459c0
* Python version: 3.7.6
* OS/platform: Debian 10.2
<details><summary>Python packages installed</summary>
<!-- use `pip freeze --all` -->
```
pip freeze --all
aiohttp==3.6.2
aiojobs==0.2.2
async-timeout==3.0.1
attrs==19.3.0
boto3==1.10.50
botocore==1.13.50
certifi==2019.11.28
chardet==3.0.4
Click==7.0
docutils==0.15.2
idna==2.8
iso8601==0.1.12
jmespath==0.9.4
kopf==0.24
multidict==4.7.3
pip==19.3.1
pykube-ng==19.12.1
python-dateutil==2.8.1
pytz==2019.3
PyYAML==5.3
requests==2.22.0
s3transfer==0.2.1
setuptools==44.0.0
six==1.13.0
typing-extensions==3.7.4.1
urllib3==1.25.7
wheel==0.33.6
yarl==1.4.2
```
</details>
---
> <a href="https://github.com/Carles-Figuerola"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/13749641?v=4"></a> Commented by [Carles-Figuerola](https://github.com/Carles-Figuerola) at _2020-05-11 23:14:10+00:00_
>
I think the issue here is that I'm using `--force` on my helm commands. I don't know if that changed slightly from helm 2 to helm 3, but I'm going to investigate further. For context, that's from helm's help:
```
--force force resource updates through a replacement strategy
```
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-05-12 07:58:53+00:00_
>
Thank you for reporting.
Well, a "replacement strategy" phrasing implies deletion and re-creation. Some quick googling also points to comments like https://github.com/helm/helm/issues/5281#issuecomment-581975916 saying that _"…`helm upgrade —force` is the equivalent of a `kubectl replace`…"_. [`kubectl replace` docs](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#replace) and options also hint that it goes through deletion.
So, Kopf handles what actually happens on Kubernetes level — deletion and then creation.
Internally, Kopf distinguishes between creation and upgrades by the presence or absence of an annotation named `kopf.zalando.org/last-handled-configuration`, which contains a JSON-serialised essence of the object as it was last handled, incl. at the 1st cycle of creation. I believe there is no way to transfer 3rd-party annotations on Helm upgrades, which means the continuity of the resource cannot be simulated.
So, you have to implement your own global state, accessed by e.g. resource names or namespaces+names. Then, put both on-creation and on-update handlers to the same function, and check for the existence internally.
Conceptually, something like this:
```python
import kopf
RESOURCES = {}
@kopf.on.create(...)
@kopf.on.update(...)
def myresouce_changed_fn(namespace, name, **_):
key = f'{namespace}/{name}'
if key in RESOURCES:
print('it was an upgrade')
else:
print('it was a real creation')
RESOURCES[key] = ...
@kopf.on.delete(...)
def myresource_deleted(namespace, name, **_):
key = f'{namespace}/{name}'
if key in RESOURCES:
del RESOURCES[key]
```
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-05-12 09:12:23+00:00_
>
As a slightly more advanced solution, but also slightly more complex, you can utilise the recent configurable storages for "diff-bases" aka "essences" of the resources (available since [0.27rc6](https://github.com/nolar/kopf/releases) — release candidates yet, as of 2020-05-12):
```python
import kopf
from typing import MutableMapping, Any, Optional
class ByNameDiffBaseStorage(kopf.DiffBaseStorage):
_items: MutableMapping[str, kopf.BodyEssence]
def __init__(self) -> None:
super().__init__()
self._items = {}
def fetch(self, *, body: kopf.Body) -> Optional[kopf.BodyEssence]:
key = f'{body.metadata.namespace}/{body.metadata.name}'
return self._items.get(key, None)
def store(self, *, body: kopf.Body, patch: kopf.Patch, essence: kopf.BodyEssence) -> None:
key = f'{body.metadata.namespace}/{body.metadata.name}'
self._items[key] = essence
@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
settings.persistence.diffbase_storage = kopf.MultiDiffBaseStorage([
settings.persistence.diffbase_storage, # the default Kopf's storage
ByNameDiffBaseStorage(),
])
@kopf.on.create('zalando.org', 'v1', 'kopfexamples')
def create_fn(**_):
pass
@kopf.on.update('zalando.org', 'v1', 'kopfexamples')
def update_fn(**_):
pass
@kopf.on.delete('zalando.org', 'v1', 'kopfexamples')
def delete_fn(**_):
pass
```
The magic happens in `MultiDiffBaseStorage`: if it can find a last-handled-configuration record on the resource itself (in annotations), it will use it — which is the case for the existing resources. If it is absent there —which happens when the resource is "replaced" by Helm— it will try the in-memory storage by "namespace/name" key, even if it is a different resource, and so it will detect "update" cause instead of "creation". A new last-handled-configuration will be put to both storages after the on-update handler succeeds.
**Note:** I've made only a quick-test of this code, not the thorough testing. This is only a theory how the problem can be hacked. This code can lead to unexpected consequences, so better test it in an isolated cluster (e.g. minikube or kind or so).
Two quite obvious problems:
* If the new resource replacement essentially mismatches with the original deleted one, this can confuse the business/domain logic. For example, some fields in some built-in resources (e.g. Deployment's selectors) are immutable normally, so there is probably no logic to handle their changes; but the replacement will change them.
* If the replacement happens when the operator is down or restarting, the in-memory storage will not transfer the resource's essence from the previous object (i.e. between it is stored on the old resource for the last time and it is fetched from the new resource for the first time). So, after the operator startup, it will be detected as "creation". You probably need some external persistent storage rather than memory, which will persist the essences during the operator restart/downtime — e.g. databases or redis or alike; or shadow Kubernetes objects, with Kubernetes/etcd as a database (not sure if this is a good idea).
One fancy effect:
* If the resource is deleted and created without changes, the on-update handler is not even called — because there is nothing updated (the diff is empty). Only by changing the resource or its yaml file can the on-update handler be triggered. This is as expected.
**PS:** Perhaps, you also want to include the resource's kind/singular/plural/full name into the key, in case you have multiple resources served by the same operator. Otherwise, they can have the same "namespace/name" strings, and will collide with each other in memory.
More info:
* https://kopf.readthedocs.io/en/latest/configuration/#change-detection
* https://kopf.readthedocs.io/en/latest/continuity/#persistence
* https://kopf.readthedocs.io/en/latest/packages/kopf/#kopf.DiffBaseStorage
* https://kopf.readthedocs.io/en/latest/packages/kopf/#kopf.AnnotationsDiffBaseStorage
| open | 2020-08-18T20:04:36Z | 2020-08-23T20:57:59Z | https://github.com/nolar/kopf/issues/359 | [
"question",
"archive"
] | kopf-archiver[bot] | 0 |
jupyter/docker-stacks | jupyter | 1,779 | [BUG] - Docker images for ubuntu 20.04 not updated | ### What docker image(s) are you using?
base-notebook, datascience-notebook, minimal-notebook, pyspark-notebook, r-notebook, scipy-notebook, tensorflow-notebook
### OS system and architecture running docker image
amd64
### What Docker command are you running?
On dockerhub, the last ubuntu 20.04 images are a month old. Are you planning on supporting ubuntu 20.04 images?
### How to Reproduce the problem?
n/a
### Command output
_No response_
### Expected behavior
_No response_
### Actual behavior
n/a
### Anything else?
_No response_ | closed | 2022-09-02T01:16:40Z | 2022-10-10T14:07:39Z | https://github.com/jupyter/docker-stacks/issues/1779 | [
"type:Bug"
] | tiaden | 3 |
dask/dask | pandas | 11,595 | Supporting inconsistent schemas in read_json | If you have two (jsonl) files where one contains columns `{"id", "text"}` and the other contains `{"text", "id", "meta"}` and you wish to read the two files using `dd.read_json([file1.jsonl, file2.jsonl], lines=True)` we run into an error
```
Metadata mismatch found in `from_delayed`.
Partition type: `pandas.core.frame.DataFrame`
(or it is Partition type: `cudf.core.dataframe.DataFrame` when backend=='cudf')
+---------+-------+----------+
| Column | Found | Expected |
+---------+-------+----------+
| 'meta1' | - | object |
+---------+-------+----------+
```
For what it's worth this isn't an issue in read_parquet (cpu) and for gpu the fix is in the works https://github.com/rapidsai/cudf/pull/17554/files
## Guessing the root cause
IIUC in both pandas and cudf, we call `read_json_file` ([here](https://github.com/dask/dask/blob/a9396a913c33de1d5966df9cc1901fd70107c99b/dask/dataframe/io/json.py#L315)).
In the pandas case, even if `dtype` is specified, pandas doesn't prune out the non-specified columns, while cudf does (assuming `prune_columns=True`). Therefore the pandas case continues to fail, while the `cudf` case fails on a mismatch between the column order and the metadata column order (since one file has `id, text`, while the other has `text, id`).
One possible hack could be supporting a `columns` arg and then performing `engine(.....)[columns]`. Another could be
## MRE
```python
import dask.dataframe as dd
import dask
import tempfile
import pandas as pd
import os
records = [
{"id": 123, "text": "foo"},
{
"text": "bar",
"meta1": [{"field1": "cat"}],
"id": 456,
},
]
columns = ["text", "id"]
with tempfile.TemporaryDirectory() as tmpdir:
file1 = os.path.join(tmpdir, "part.0.jsonl")
file2 = os.path.join(tmpdir, "part.1.jsonl")
pd.DataFrame(records[:1]).to_json(file1, orient="records", lines=True)
pd.DataFrame(records[1:]).to_json(file2, orient="records", lines=True)
for backend in ["pandas", "cudf"]:
read_kwargs = dict()
if backend == "cudf":
read_kwargs["dtype"] = {"id": "str", "text": "str"}
read_kwargs["prune_columns"] = True
print("="*30)
print(f"==== {backend=} ====")
print("="*30)
try:
with dask.config.set({"dataframe.backend": backend}):
df = dd.read_json(
[file1, file2],
lines=True,
**read_kwargs,
)
print(f"{df.columns=}")
print(f"{df.compute().columns=}")
print(f"{type(df.compute())=}")
display((df.compute()))
except Exception as e:
print(f"{backend=} failed due to {e} \n")
```
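Until the schema handling is fixed in dask itself, one way to work around the mismatch in user code is to normalize each partition's columns after reading, e.g. with `reindex` (a pandas-only sketch that assumes you know the full column set up front; a real fix would live inside `dd.read_json`):

```python
import os
import tempfile

import pandas as pd

def read_json_aligned(paths, columns):
    """Read jsonl files one by one, then force a common column set and order:
    reindex adds missing columns as NaN and drops extras, so every partition
    ends up schema-compatible before concatenation."""
    parts = [pd.read_json(p, lines=True).reindex(columns=columns) for p in paths]
    return pd.concat(parts, ignore_index=True)

with tempfile.TemporaryDirectory() as tmpdir:
    f1 = os.path.join(tmpdir, "part.0.jsonl")
    f2 = os.path.join(tmpdir, "part.1.jsonl")
    pd.DataFrame([{"id": 123, "text": "foo"}]).to_json(f1, orient="records", lines=True)
    pd.DataFrame([{"text": "bar", "id": 456, "meta1": [{"field1": "cat"}]}]).to_json(
        f2, orient="records", lines=True
    )
    df = read_json_aligned([f1, f2], columns=["id", "text", "meta1"])
    print(list(df.columns))  # ['id', 'text', 'meta1']
```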
cc @rjzamora
| open | 2024-12-10T18:24:48Z | 2025-02-24T02:01:24Z | https://github.com/dask/dask/issues/11595 | [
"dataframe",
"needs attention",
"feature"
] | praateekmahajan | 1 |
supabase/supabase-py | fastapi | 1 | create project base structure | Use [postgrest-py](https://github.com/supabase/postgrest-py) and [supabase-js](https://github.com/supabase/supabase-js) as reference implementations
| closed | 2020-08-28T06:38:31Z | 2021-04-01T18:44:49Z | https://github.com/supabase/supabase-py/issues/1 | [
"help wanted"
] | awalias | 0 |
netbox-community/netbox | django | 18,453 | Multiple Tunnel Terminations support | ### NetBox version
v4.1.2
### Feature type
New functionality
### Proposed functionality
I want to be able to define multiple tunnel termination points for several tunnels installed on the same device.
### Use case
Site-to-site IPsec tunnels.
### Database changes
_No response_
### External dependencies
_No response_ | closed | 2025-01-22T09:49:53Z | 2025-03-13T04:23:15Z | https://github.com/netbox-community/netbox/issues/18453 | [
"type: feature",
"status: revisions needed",
"pending closure"
] | l0rdmaster | 3 |
FlareSolverr/FlareSolverr | api | 1,375 | [yggtorrent] (testing) Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Have you ACTUALLY checked all these?
YES
### Environment
```markdown
- FlareSolverr version:
- Last working FlareSolverr version:
- Operating system:
- Are you using Docker: [yes/no]
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: [yes/no]
- Are you using a Proxy: [yes/no]
- Are you using Captcha Solver: [yes/no]
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
@21hsmw: hey, it's me again ;) - #1371 - It's down again: timeout and error 500. I put the logs below.
### Logged Error Messages
```text
Jackett :
Jackett.Common.IndexerException: Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 90.0 seconds.
[v0.22.694.0] Jackett.Common.IndexerException: Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 90.0 seconds.
---> FlareSolverrSharp.Exceptions.FlareSolverrException: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 90.0 seconds.
at FlareSolverrSharp.Solvers.FlareSolverr.<>c__DisplayClass12_0.<<SendFlareSolverrRequest>b__0>d.MoveNext()
--- End of stack trace from previous location ---
at FlareSolverrSharp.Utilities.SemaphoreLocker.LockAsync[T](Func`1 worker)
at FlareSolverrSharp.Solvers.FlareSolverr.SendFlareSolverrRequest(HttpContent flareSolverrRequest)
at FlareSolverrSharp.Solvers.FlareSolverr.Solve(HttpRequestMessage request, String sessionId)
at FlareSolverrSharp.ClearanceHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at Jackett.Common.Utils.Clients.HttpWebClient2.Run(WebRequest webRequest) in ./Jackett.Common/Utils/Clients/HttpWebClient2.cs:line 180
at Jackett.Common.Utils.Clients.WebClient.GetResultAsync(WebRequest request) in ./Jackett.Common/Utils/Clients/WebClient.cs:line 186
at Jackett.Common.Indexers.BaseWebIndexer.RequestWithCookiesAsync(String url, String cookieOverride, RequestType method, String referer, IEnumerable`1 data, Dictionary`2 headers, String rawbody, Nullable`1 emulateBrowser) in ./Jackett.Common/Indexers/BaseIndexer.cs:line 603
at Jackett.Common.Indexers.Definitions.CardigannIndexer.PerformQuery(TorznabQuery query) in ./Jackett.Common/Indexers/Definitions/CardigannIndexer.cs:line 1550
at Jackett.Common.Indexers.BaseIndexer.ResultsForQuery(TorznabQuery query, Boolean isMetaIndexer) in ./Jackett.Common/Indexers/BaseIndexer.cs:line 368
--- End of inner exception stack trace ---
at Jackett.Common.Indexers.BaseIndexer.ResultsForQuery(TorznabQuery query, Boolean isMetaIndexer) in ./Jackett.Common/Indexers/BaseIndexer.cs:line 389
at Jackett.Common.Indexers.BaseWebIndexer.ResultsForQuery(TorznabQuery query, Boolean isMetaIndexer) in ./Jackett.Common/Indexers/BaseIndexer.cs:line 802
at Jackett.Common.Services.IndexerManagerService.TestIndexer(String name) in ./Jackett.Common/Services/IndexerManagerService.cs:line 324
at Jackett.Server.Controllers.IndexerApiController.Test() in ./Jackett.Server/Controllers/IndexerApiController.cs:line 132
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfIActionResultExecutor.Execute(ActionContext actionContext, IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeInnerFilterAsync>g__Awaited|13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Jackett.Server.Middleware.CustomExceptionHandler.Invoke(HttpContext httpContext) in ./Jackett.Server/Middleware/CustomExceptionHandler.cs:line 26
FlareSolverr logs:
2024/09/30 17:16:17 stdout 2024-09-30 15:16:17 INFO 192.168.1.44 POST http://192.168.1.44:8191/v1 500 Internal Server Error
2024/09/30 17:16:17 stdout 2024-09-30 15:16:17 INFO Response in 93.508 s
2024/09/30 17:16:17 stdout 2024-09-30 15:16:17 ERROR Error: Error solving the challenge. Timeout after 90.0 seconds.
2024/09/30 17:14:56 stdout 2024-09-30 15:14:56 INFO Challenge detected. Title found: Just a moment...
2024/09/30 17:14:44 stdout 2024-09-30 15:14:44 INFO Incoming request => POST /v1 body: {'maxTimeout': 90000, 'cmd': 'request.get', 'url': 'https://www.ygg.re/engine/search?do=search&order=desc&sort=publish_date&category=all'}
```
### Screenshots
_No response_ | closed | 2024-09-30T15:21:01Z | 2024-10-05T04:14:02Z | https://github.com/FlareSolverr/FlareSolverr/issues/1375 | [] | DaGreenX | 7 |
scikit-tda/kepler-mapper | data-visualization | 211 | try different min_intersections from the visualization | I'm thinking about implementing this -- making it possible to change min_intersection before drawing an edge, directly from the JS visualization | closed | 2021-02-12T23:59:11Z | 2021-10-09T23:14:42Z | https://github.com/scikit-tda/kepler-mapper/issues/211 | [] | deargle | 0 |
qubvel-org/segmentation_models.pytorch | computer-vision | 628 | Colab notebook not found | https://github.com/googlecolab/colabtools/blob/master/examples/binary_segmentation_intro.ipynb linked on readme returns Notebook not found | closed | 2022-08-05T12:40:20Z | 2022-10-12T02:18:34Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/628 | [
"Stale"
] | robmarkcole | 5 |
pydata/pandas-datareader | pandas | 405 | NaN for the start date | Hello,
I just experienced some strange behaviour with some European equities:
```python
import pandas_datareader.data as web

start = '2017-05-25'
end = '2017-10-01'
f = web.DataReader('ASML.AS', 'yahoo', start=start, end=end)
```
The first row of the resulting DataFrame is 2017-05-24, and it has NaN for all columns (open, high, low, close, etc.).
If I move the start date back another day, the data for 2017-05-24 is included, but then the day before it has the NaNs.
On Yahoo Finance, the historical data seems to be complete for all days including 2017-05-24:
https://finance.yahoo.com/quote/ASML.AS/history?p=ASML.AS
I am not sure what's causing this. Any feedback would be appreciated!
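In the meantime, a hedged workaround — assuming the stray leading row really is entirely NaN, as it is here — is to drop all-NaN rows after fetching. A self-contained sketch with simulated data (no network call):

```python
import pandas as pd
import numpy as np

# Simulated result: the first row (the day before `start`) is all-NaN,
# as described above; the remaining rows carry real data.
f = pd.DataFrame(
    {"Open": [np.nan, 150.0, 151.2], "Close": [np.nan, 150.8, 152.0]},
    index=pd.to_datetime(["2017-05-24", "2017-05-25", "2017-05-26"]),
)

# Drop only rows where *every* column is NaN, keeping partial data intact.
f = f.dropna(how="all")
print(f.index.min())  # → 2017-05-25 00:00:00
```

This only masks the symptom, of course — it doesn't explain why the extra row appears in the first place.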
(python 3.6.2, pandas-datareader 0.5.0) | closed | 2017-10-02T20:17:55Z | 2018-01-18T16:23:42Z | https://github.com/pydata/pandas-datareader/issues/405 | [
"yahoo-finance"
] | ComeAsUAre | 1 |
allenai/allennlp | nlp | 4,823 | Add the Gaussian Error Linear Unit as an Activation option | The [Gaussian Error Linear Unit](https://arxiv.org/pdf/1606.08415.pdf) activation is currently not available in the set of registered Activations. Since this class just directly wraps the PyTorch classes, adding it is a one-line change. The motivation is that models like BART/BERT use this activation in many places, and consistent activation functions across models that are "something pretrained" + "more weights trained on AllenNLP" would be nice.
**Describe the solution you'd like**
Add the following snippet to the end of the [Activations](https://github.com/allenai/allennlp/blob/master/allennlp/nn/activations.py) class:
```
"gelu": (torch.nn.GELU, None),
```
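For reference, the exact (erf-based) GELU that `torch.nn.GELU` computes by default is easy to sketch with just the standard library — a hedged sanity check, not part of the proposed change:

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

print(gelu(0.0))            # → 0.0
print(round(gelu(1.0), 4))  # GELU(1) ≈ 0.8413
```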
**Describe alternatives you've considered**
Manually hardcoding the activation. This isn't very robust, and modules such as FeedForward complain since GELU isn't a registered activation to insert between layers (as far as I can tell).
Thanks - happy to submit a tiny PR for this | closed | 2020-11-26T18:20:18Z | 2020-12-02T04:04:05Z | https://github.com/allenai/allennlp/issues/4823 | [
"Feature request"
] | tomsherborne | 1 |
Esri/arcgis-python-api | jupyter | 1,370 | Errors Trying to Install ArcGIS in Anaconda Navigator | **Describe the bug**
I am running into a long list of errors trying to install ArcGIS into my Anaconda Navigator, but I believe the critical one is: ModuleNotFoundError: Required requests_ntlm not found.
**Platform (please complete the following information):**
- Windows 10
- Microsoft Edge
- Python 3.9
**To Reproduce**
I downloaded win-64/arcgis-2.0.1-py39_2825.tar.bz2 from the website https://anaconda.org/Esri/arcgis/files?type=conda&page=1&channel=main.
Then, I placed win-64/arcgis-2.0.1-py39_2825.tar.bz2 into my user file.
Finally, I opened the Anaconda Prompt, and input the command: conda install arcgis-2.0.1-py39_2826.tar.bz2, and this is when the errors occurred.
**Here is everything I see in the command prompt**
(base) C:\Users\Tomas>conda install arcgis-2.0.1-py39_2826.tar.bz2
Downloading and Extracting Packages
############################################################################################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: - Traceback (most recent call last):
File "C:\Users\Tomas\anaconda3\lib\runpy.py", line 188, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "C:\Users\Tomas\anaconda3\lib\runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\__init__.py", line 3, in <module>
from arcgis.auth.tools import LazyLoader
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\__init__.py", line 1, in <module>
from .api import EsriSession
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\api.py", line 30, in <module>
from ._auth import (
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\__init__.py", line 3, in <module>
from ._winauth import EsriWindowsAuth, EsriKerberosAuth
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\_winauth.py", line 39, in <module>
requests_ntlm = LazyLoader("requests_ntlm", strict=True)
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\tools\_lazy.py", line 23, in __init__
raise ModuleNotFoundError(f"Required {module_name} not found.")
ModuleNotFoundError: Required requests_ntlm not found.
Traceback (most recent call last):
File "C:\Users\Tomas\anaconda3\Scripts\jupyter-nbextension-script.py", line 10, in <module>
sys.exit(main())
File "C:\Users\Tomas\anaconda3\lib\site-packages\jupyter_core\application.py", line 269, in launch_instance
return super().launch_instance(argv=argv, **kwargs)
File "C:\Users\Tomas\anaconda3\lib\site-packages\traitlets\config\application.py", line 846, in launch_instance
app.start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 972, in start
super().start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\jupyter_core\application.py", line 258, in start
self.subapp.start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 702, in start
self.install_extensions()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 675, in install_extensions
full_dests = install(self.extra_args[0],
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 203, in install_nbextension_python
m, nbexts = _get_nbextension_metadata(module)
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 1107, in _get_nbextension_metadata
m = import_item(module)
File "C:\Users\Tomas\anaconda3\lib\site-packages\traitlets\utils\importstring.py", line 38, in import_item
return __import__(parts[0])
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\__init__.py", line 3, in <module>
from arcgis.auth.tools import LazyLoader
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\__init__.py", line 1, in <module>
from .api import EsriSession
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\api.py", line 30, in <module>
from ._auth import (
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\__init__.py", line 3, in <module>
from ._winauth import EsriWindowsAuth, EsriKerberosAuth
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\_winauth.py", line 39, in <module>
requests_ntlm = LazyLoader("requests_ntlm", strict=True)
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\tools\_lazy.py", line 23, in __init__
raise ModuleNotFoundError(f"Required {module_name} not found.")
ModuleNotFoundError: Required requests_ntlm not found.
Traceback (most recent call last):
File "C:\Users\Tomas\anaconda3\Scripts\jupyter-nbextension-script.py", line 10, in <module>
sys.exit(main())
File "C:\Users\Tomas\anaconda3\lib\site-packages\jupyter_core\application.py", line 269, in launch_instance
return super().launch_instance(argv=argv, **kwargs)
File "C:\Users\Tomas\anaconda3\lib\site-packages\traitlets\config\application.py", line 846, in launch_instance
app.start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 972, in start
super().start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\jupyter_core\application.py", line 258, in start
self.subapp.start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 882, in start
self.toggle_nbextension_python(self.extra_args[0])
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 855, in toggle_nbextension_python
return toggle(module,
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 470, in enable_nbextension_python
return _set_nbextension_state_python(True, module, user, sys_prefix,
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 368, in _set_nbextension_state_python
m, nbexts = _get_nbextension_metadata(module)
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 1107, in _get_nbextension_metadata
m = import_item(module)
File "C:\Users\Tomas\anaconda3\lib\site-packages\traitlets\utils\importstring.py", line 38, in import_item
return __import__(parts[0])
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\__init__.py", line 3, in <module>
from arcgis.auth.tools import LazyLoader
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\__init__.py", line 1, in <module>
from .api import EsriSession
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\api.py", line 30, in <module>
from ._auth import (
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\__init__.py", line 3, in <module>
from ._winauth import EsriWindowsAuth, EsriKerberosAuth
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\_winauth.py", line 39, in <module>
requests_ntlm = LazyLoader("requests_ntlm", strict=True)
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\tools\_lazy.py", line 23, in __init__
raise ModuleNotFoundError(f"Required {module_name} not found.")
ModuleNotFoundError: Required requests_ntlm not found.
done
ERROR conda.core.link:_execute(733): An error occurred while installing package '<unknown>::arcgis-2.0.1-py39_2826'.
Rolling back transaction: done
LinkError: post-link script failed for package <unknown>::arcgis-2.0.1-py39_2826
location of failed script: C:\Users\Tomas\anaconda3\Scripts\.arcgis-post-link.bat
==> script messages <==
Traceback (most recent call last):
File "C:\Users\Tomas\anaconda3\lib\runpy.py", line 188, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "C:\Users\Tomas\anaconda3\lib\runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\__init__.py", line 3, in <module>
from arcgis.auth.tools import LazyLoader
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\__init__.py", line 1, in <module>
from .api import EsriSession
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\api.py", line 30, in <module>
from ._auth import (
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\__init__.py", line 3, in <module>
from ._winauth import EsriWindowsAuth, EsriKerberosAuth
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\_winauth.py", line 39, in <module>
requests_ntlm = LazyLoader("requests_ntlm", strict=True)
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\tools\_lazy.py", line 23, in __init__
raise ModuleNotFoundError(f"Required {module_name} not found.")
ModuleNotFoundError: Required requests_ntlm not found.
Traceback (most recent call last):
File "C:\Users\Tomas\anaconda3\Scripts\jupyter-nbextension-script.py", line 10, in <module>
sys.exit(main())
File "C:\Users\Tomas\anaconda3\lib\site-packages\jupyter_core\application.py", line 269, in launch_instance
return super().launch_instance(argv=argv, **kwargs)
File "C:\Users\Tomas\anaconda3\lib\site-packages\traitlets\config\application.py", line 846, in launch_instance
app.start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 972, in start
super().start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\jupyter_core\application.py", line 258, in start
self.subapp.start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 702, in start
self.install_extensions()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 675, in install_extensions
full_dests = install(self.extra_args[0],
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 203, in install_nbextension_python
m, nbexts = _get_nbextension_metadata(module)
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 1107, in _get_nbextension_metadata
m = import_item(module)
File "C:\Users\Tomas\anaconda3\lib\site-packages\traitlets\utils\importstring.py", line 38, in import_item
return __import__(parts[0])
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\__init__.py", line 3, in <module>
from arcgis.auth.tools import LazyLoader
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\__init__.py", line 1, in <module>
from .api import EsriSession
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\api.py", line 30, in <module>
from ._auth import (
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\__init__.py", line 3, in <module>
from ._winauth import EsriWindowsAuth, EsriKerberosAuth
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\_winauth.py", line 39, in <module>
requests_ntlm = LazyLoader("requests_ntlm", strict=True)
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\tools\_lazy.py", line 23, in __init__
raise ModuleNotFoundError(f"Required {module_name} not found.")
ModuleNotFoundError: Required requests_ntlm not found.
Traceback (most recent call last):
File "C:\Users\Tomas\anaconda3\Scripts\jupyter-nbextension-script.py", line 10, in <module>
sys.exit(main())
File "C:\Users\Tomas\anaconda3\lib\site-packages\jupyter_core\application.py", line 269, in launch_instance
return super().launch_instance(argv=argv, **kwargs)
File "C:\Users\Tomas\anaconda3\lib\site-packages\traitlets\config\application.py", line 846, in launch_instance
app.start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 972, in start
super().start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\jupyter_core\application.py", line 258, in start
self.subapp.start()
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 882, in start
self.toggle_nbextension_python(self.extra_args[0])
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 855, in toggle_nbextension_python
return toggle(module,
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 470, in enable_nbextension_python
return _set_nbextension_state_python(True, module, user, sys_prefix,
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 368, in _set_nbextension_state_python
m, nbexts = _get_nbextension_metadata(module)
File "C:\Users\Tomas\anaconda3\lib\site-packages\notebook\nbextensions.py", line 1107, in _get_nbextension_metadata
m = import_item(module)
File "C:\Users\Tomas\anaconda3\lib\site-packages\traitlets\utils\importstring.py", line 38, in import_item
return __import__(parts[0])
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\__init__.py", line 3, in <module>
from arcgis.auth.tools import LazyLoader
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\__init__.py", line 1, in <module>
from .api import EsriSession
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\api.py", line 30, in <module>
from ._auth import (
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\__init__.py", line 3, in <module>
from ._winauth import EsriWindowsAuth, EsriKerberosAuth
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\_auth\_winauth.py", line 39, in <module>
requests_ntlm = LazyLoader("requests_ntlm", strict=True)
File "C:\Users\Tomas\anaconda3\lib\site-packages\arcgis\auth\tools\_lazy.py", line 23, in __init__
raise ModuleNotFoundError(f"Required {module_name} not found.")
ModuleNotFoundError: Required requests_ntlm not found.
==> script output <==
stdout: jupyter nbextension command failed: map widgets in the jupyter notebook may not work, installation continuing...
jupyter nbextension command failed: map widgets in the jupyter notebook may not work, installation continuing...
stderr:
return code: 1
()
(base) C:\Users\Tomas> | closed | 2022-10-26T22:34:05Z | 2022-10-27T18:26:42Z | https://github.com/Esri/arcgis-python-api/issues/1370 | [
"bug"
] | tomasliutsung | 1 |