| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
aleju/imgaug | deep-learning | 647 | draw_on_image only works for RGB (three channel) images? | The [source code](https://github.com/aleju/imgaug/blob/master/imgaug/augmentables/kps.py#L314) shows that the input must be a three-channel image, even though augmentation itself works on grayscale images. Should it be extended to support both? | closed | 2020-03-31T20:24:12Z | 2020-03-31T20:26:18Z | https://github.com/aleju/imgaug/issues/647 | [] | Jayzh7 | 0 |
tox-dev/tox | automation | 3435 | Prefer `tox.toml` over `tox.ini` if both exist | While migrating my repos to a new laptop, I accidentally ended up with a tox repo checkout that still had an old `tox.ini` on disk. When I tried `tox r`, it was yelling at me about some CLI args that `pytest` didn't recognize.
It took me some time to see that `tox l` outputs envs from `tox.ini` and not `tox.toml`.
Action items:
* [ ] Decide whether to warn or error out if two definitions exist
* [ ] Make the TOML definition take precedence | closed | 2024-10-31T23:45:57Z | 2024-11-01T14:22:40Z | https://github.com/tox-dev/tox/issues/3435 | [] | webknjaz | 1 |
horovod/horovod | deep-learning | 3345 | Error installing `pip install horovod[pytorch]` in VirtualBox Ubuntu 18.04 | **Environment:**
1. Framework: Pytorch
2. Framework version: torch 1.10
3. Horovod version: latest, with pip install horovod[pytorch]
4. MPI version: -
5. CUDA version: 9.1
6. NCCL version:
7. Python version: 3.6.9
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version: 7.5.
12. CMake version: 3.10
Collecting oauthlib>=3.0.0
Downloading oauthlib-3.1.1-py2.py3-none-any.whl (146 kB)
|████████████████████████████████| 146 kB 27.1 MB/s
Using legacy 'setup.py install' for horovod, since package 'wheel' is not installed.
Using legacy 'setup.py install' for psutil, since package 'wheel' is not installed.
Using legacy 'setup.py install' for future, since package 'wheel' is not installed.
Using legacy 'setup.py install' for idna-ssl, since package 'wheel' is not installed.
Installing collected packages: urllib3, pyasn1, idna, charset-normalizer, certifi, zipp, setuptools, rsa, requests, pyasn1-modules, oauthlib, multidict, frozenlist, cachetools, yarl, requests-oauthlib, importlib-metadata, idna-ssl, google-auth, attrs, asynctest, async-timeout, aiosignal, wheel, werkzeug, tensorboard-plugin-wit, tensorboard-data-server, pycparser, protobuf, packaging, markdown, grpcio, google-auth-oauthlib, fsspec, aiohttp, absl-py, tqdm, torchmetrics, tensorboard, pyDeprecate, psutil, future, cloudpickle, cffi, pytorch-lightning, horovod
Attempting uninstall: setuptools
Found existing installation: setuptools 39.0.1
Uninstalling setuptools-39.0.1:
Successfully uninstalled setuptools-39.0.1
Running setup.py install for idna-ssl ... done
Running setup.py install for psutil ... error
ERROR: Command errored out with exit status 1:
command: /home/luda1013/packnet-sfm/packnetsfm-env/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-m54bc2s2/psutil_60b7efb33a6d49118a550e4850f50b87/setup.py'"'"'; __file__='"'"'/tmp/pip-install-m54bc2s2/psutil_60b7efb33a6d49118a550e4850f50b87/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-b5_f826a/install-record.txt --single-version-externally-managed --compile --install-headers /home/luda1013/packnet-sfm/packnetsfm-env/include/site/python3.6/psutil
cwd: /tmp/pip-install-m54bc2s2/psutil_60b7efb33a6d49118a550e4850f50b87/
Complete output (47 lines):
running install
/home/luda1013/packnet-sfm/packnetsfm-env/lib/python3.6/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/psutil
copying psutil/_common.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/_psposix.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/_psbsd.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/__init__.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/_pswindows.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/_compat.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/_pslinux.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/_pssunos.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/_psosx.py -> build/lib.linux-x86_64-3.6/psutil
copying psutil/_psaix.py -> build/lib.linux-x86_64-3.6/psutil
creating build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_system.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/__init__.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_process.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/runner.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/__main__.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_memleaks.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_testutils.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-3.6/psutil/tests
copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-3.6/psutil/tests
running build_ext
building 'psutil._psutil_linux' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/psutil
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=590 -DPSUTIL_LINUX=1 -I/home/luda1013/packnet-sfm/packnetsfm-env/include -I/usr/include/python3.6m -c psutil/_psutil_common.c -o build/temp.linux-x86_64-3.6/psutil/_psutil_common.o
psutil/_psutil_common.c:9:10: fatal error: Python.h: No such file or directory
#include <Python.h>
^~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/luda1013/packnet-sfm/packnetsfm-env/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-m54bc2s2/psutil_60b7efb33a6d49118a550e4850f50b87/setup.py'"'"'; __file__='"'"'/tmp/pip-install-m54bc2s2/psutil_60b7efb33a6d49118a550e4850f50b87/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-b5_f826a/install-record.txt --single-version-externally-managed --compile --install-headers /home/luda1013/packnet-sfm/packnetsfm-env/include/site/python3.6/psutil Check the logs for full command output.
I am working on a depth-estimation project that needs Horovod. I first tried to run the project in Docker, but that failed, which is why I am now installing all the packages directly in my VirtualBox Ubuntu 18.04 VM.
I am stuck on this issue, can someone please help? Thanks
| open | 2022-01-05T12:17:30Z | 2022-09-11T12:46:26Z | https://github.com/horovod/horovod/issues/3345 | ["bug"] | luda1013 | 2 |
pytest-dev/pytest-xdist | pytest | 914 | Python 3.12 test failures due to warnings | The tests fail with Python 3.12.0b1 due to warnings coming from pytest. While the "root cause" lies in pytest, it would be nice if tests could be made resilient to warnings:
```pytb
========================================================= test session starts =========================================================
platform linux -- Python 3.12.0b1, pytest-7.3.1, pluggy-1.0.0
cachedir: .tox/py312/.pytest_cache
rootdir: /tmp/pytest-xdist
configfile: tox.ini
testpaths: testing
plugins: xdist-3.3.2.dev1+g7e1768f
collected 203 items
testing/acceptance_test.py ..............s..x.......xx..F...........s.........x.........................................F. [ 46%]
testing/test_dsession.py .................x...x............ [ 63%]
testing/test_looponfail.py ...........x.ss [ 70%]
testing/test_newhooks.py .... [ 72%]
testing/test_plugin.py ...s............... [ 82%]
testing/test_remote.py x....x........ [ 89%]
testing/test_workermanage.py ........x.......s...x. [100%]
============================================================== FAILURES ===============================================================
_____________________________________________________ test_config_initialization ______________________________________________________
pytester = <Pytester PosixPath('/tmp/pytest-of-mgorny/pytest-4/test_config_initialization0')>
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7fecae65d880>
pytestconfig = <_pytest.config.Config object at 0x7fecb007a7e0>
def test_config_initialization(
pytester: pytest.Pytester, monkeypatch: pytest.MonkeyPatch, pytestconfig
) -> None:
"""Ensure workers and controller are initialized consistently. Integration test for #445"""
pytester.makepyfile(
**{
"dir_a/test_foo.py": """
def test_1(request):
assert request.config.option.verbose == 2
"""
}
)
pytester.makefile(
".ini",
myconfig="""
[pytest]
testpaths=dir_a
""",
)
monkeypatch.setenv("PYTEST_ADDOPTS", "-v")
result = pytester.runpytest("-n2", "-c", "myconfig.ini", "-v")
> result.stdout.fnmatch_lines(["dir_a/test_foo.py::test_1*", "*= 1 passed in *"])
E Failed: nomatch: 'dir_a/test_foo.py::test_1*'
E and: '========================================================= test session starts ========================================================='
E and: 'platform linux -- Python 3.12.0b1, pytest-7.3.1, pluggy-1.0.0 -- /tmp/pytest-xdist/.tox/py312/bin/python'
E and: 'cachedir: .pytest_cache'
E and: 'rootdir: /tmp/pytest-of-mgorny/pytest-4/test_config_initialization0'
E and: 'configfile: myconfig.ini'
E and: 'testpaths: dir_a'
E and: 'plugins: xdist-3.3.2.dev1+g7e1768f'
E and: 'created: 2/2 workers'
E and: '2 workers [1 item]'
E and: ''
E and: 'scheduling tests via LoadScheduling'
E and: ''
E fnmatch: 'dir_a/test_foo.py::test_1*'
E with: 'dir_a/test_foo.py::test_1 '
E nomatch: '*= 1 passed in *'
E and: '[gw0] [100%] PASSED dir_a/test_foo.py::test_1 '
E and: ''
E and: '========================================================== warnings summary ==========================================================='
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:965'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:965'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:965: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' inlocs = ast.Compare(ast.Str(name.id), [ast.In()], [locs])'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:968'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:968: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' expr = ast.IfExp(test, self.display(name), ast.Str(name.id))'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1102'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1102: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' syms.append(ast.Str(sym))'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1104'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1104: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' expls.append(ast.Str(expl))'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' keys = [ast.Str(key) for key in current.keys()]'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' assertmsg = ast.Str("")'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation))'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' clear = ast.Assign(variables, ast.NameConstant(None))'
E and: ''
E and: '-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html'
E and: '=================================================== 1 passed, 15 warnings in 0.41s ===================================================='
E remains unmatched: '*= 1 passed in *'
/tmp/pytest-xdist/testing/acceptance_test.py:608: Failed
-------------------------------------------------------- Captured stdout call ---------------------------------------------------------
========================================================= test session starts =========================================================
platform linux -- Python 3.12.0b1, pytest-7.3.1, pluggy-1.0.0 -- /tmp/pytest-xdist/.tox/py312/bin/python
cachedir: .pytest_cache
rootdir: /tmp/pytest-of-mgorny/pytest-4/test_config_initialization0
configfile: myconfig.ini
testpaths: dir_a
plugins: xdist-3.3.2.dev1+g7e1768f
created: 2/2 workers
2 workers [1 item]
scheduling tests via LoadScheduling
dir_a/test_foo.py::test_1
[gw0] [100%] PASSED dir_a/test_foo.py::test_1
========================================================== warnings summary ===========================================================
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:965
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:965
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:965: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
inlocs = ast.Compare(ast.Str(name.id), [ast.In()], [locs])
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:968
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:968: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
expr = ast.IfExp(test, self.display(name), ast.Str(name.id))
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1102
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1102: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
syms.append(ast.Str(sym))
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1104
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1104: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
expls.append(ast.Str(expl))
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
keys = [ast.Str(key) for key in current.keys()]
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
assertmsg = ast.Str("")
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation))
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead
clear = ast.Assign(variables, ast.NameConstant(None))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=================================================== 1 passed, 15 warnings in 0.41s ====================================================
________________________________________________________ test_collection_crash ________________________________________________________
testdir = <Testdir local('/tmp/pytest-of-mgorny/pytest-4/test_collection_crash0')>
def test_collection_crash(testdir):
p1 = testdir.makepyfile(
"""
assert 0
"""
)
result = testdir.runpytest(p1, "-n1")
assert result.ret == 1
> result.stdout.fnmatch_lines(
[
"created: 1/1 worker",
"1 worker [[]0 items[]]",
"*_ ERROR collecting test_collection_crash.py _*",
"E assert 0",
"*= 1 error in *",
]
)
E Failed: nomatch: 'created: 1/1 worker'
E and: '========================================================= test session starts ========================================================='
E and: 'platform linux -- Python 3.12.0b1, pytest-7.3.1, pluggy-1.0.0'
E and: 'rootdir: /tmp/pytest-of-mgorny/pytest-4/test_collection_crash0'
E and: 'plugins: xdist-3.3.2.dev1+g7e1768f'
E exact match: 'created: 1/1 worker'
E fnmatch: '1 worker [[]0 items[]]'
E with: '1 worker [0 items]'
E nomatch: '*_ ERROR collecting test_collection_crash.py _*'
E and: ''
E and: ''
E and: '=============================================================== ERRORS ================================================================'
E fnmatch: '*_ ERROR collecting test_collection_crash.py _*'
E with: '______________________________________________ ERROR collecting test_collection_crash.py ______________________________________________'
E nomatch: 'E assert 0'
E and: 'test_collection_crash.py:1: in <module>'
E and: ' assert 0'
E exact match: 'E assert 0'
E nomatch: '*= 1 error in *'
E and: '========================================================== warnings summary ==========================================================='
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' assertmsg = ast.Str("")'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation))'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' keys = [ast.Str(key) for key in current.keys()]'
E and: ''
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941'
E and: '../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941'
E and: ' /tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead'
E and: ' clear = ast.Assign(variables, ast.NameConstant(None))'
E and: ''
E and: '-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html'
E and: '======================================================= short test summary info ======================================================='
E and: 'ERROR test_collection_crash.py - assert 0'
E and: '==================================================== 6 warnings, 1 error in 0.32s ====================================================='
E remains unmatched: '*= 1 error in *'
/tmp/pytest-xdist/testing/acceptance_test.py:1567: Failed
-------------------------------------------------------- Captured stdout call ---------------------------------------------------------
========================================================= test session starts =========================================================
platform linux -- Python 3.12.0b1, pytest-7.3.1, pluggy-1.0.0
rootdir: /tmp/pytest-of-mgorny/pytest-4/test_collection_crash0
plugins: xdist-3.3.2.dev1+g7e1768f
created: 1/1 worker
1 worker [0 items]
=============================================================== ERRORS ================================================================
______________________________________________ ERROR collecting test_collection_crash.py ______________________________________________
test_collection_crash.py:1: in <module>
assert 0
E assert 0
========================================================== warnings summary ===========================================================
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
assertmsg = ast.Str("")
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation))
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
keys = [ast.Str(key) for key in current.keys()]
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941
../../../pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead
clear = ast.Assign(variables, ast.NameConstant(None))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================= short test summary info =======================================================
ERROR test_collection_crash.py - assert 0
==================================================== 6 warnings, 1 error in 0.32s =====================================================
========================================================== warnings summary ===========================================================
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:683
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:683
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:683: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
and isinstance(item.value, ast.Str)
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:685
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:685
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:685: DeprecationWarning: Attribute s is deprecated and will be removed in Python 3.14; use value instead
doc = item.value.s
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:965: 568 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:965: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
inlocs = ast.Compare(ast.Str(name.id), [ast.In()], [locs])
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:968: 568 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:968: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
expr = ast.IfExp(test, self.display(name), ast.Str(name.id))
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1102: 303 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1102: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
syms.append(ast.Str(sym))
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1104: 303 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1104: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
expls.append(ast.Str(expl))
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817: 1884 warnings
testing/test_looponfail.py: 3 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:817: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
keys = [ast.Str(key) for key in current.keys()]
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927: 438 warnings
testing/test_looponfail.py: 3 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:927: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
assertmsg = ast.Str("")
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929: 442 warnings
testing/test_looponfail.py: 3 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:929: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
template = ast.BinOp(assertmsg, ast.Add(), ast.Str(explanation))
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941: 431 warnings
testing/test_looponfail.py: 3 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:941: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead
clear = ast.Assign(variables, ast.NameConstant(None))
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1004: 20 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1004: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
expl_format = self.pop_format_context(ast.Str(expl))
.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1016: 11 warnings
/tmp/pytest-xdist/.tox/py312/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:1016: DeprecationWarning: ast.Num is deprecated and will be removed in Python 3.14; use ast.Constant instead
expl_template = self.helper("_format_boolop", expl_list, ast.Num(is_or))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================= short test summary info =======================================================
SKIPPED [3] .tox/py312/lib/python3.12/site-packages/_pytest/pytester.py:1534: could not import 'pexpect': No module named 'pexpect'
SKIPPED [1] testing/acceptance_test.py:783: pytest 7.3.1 does not have the pytest_warning_captured hook.
SKIPPED [1] testing/test_plugin.py:104: could not import 'psutil': No module named 'psutil'
SKIPPED [1] testing/test_workermanage.py:316: no 'gspecs' option found
XFAIL testing/acceptance_test.py::TestDistEach::test_simple_diffoutput - reason: [NOTRUN] other python versions might not have pytest installed
XFAIL testing/acceptance_test.py::test_terminate_on_hangingnode
XFAIL testing/acceptance_test.py::test_session_hooks - reason: [NOTRUN] works if run outside test suite
XFAIL testing/acceptance_test.py::TestNodeFailure::test_each_multiple - #20: xdist race condition on node restart
XFAIL testing/test_dsession.py::TestDistReporter::test_rsync_printing
XFAIL testing/test_dsession.py::test_pytest_issue419 - duplicate test ids not supported yet
XFAIL testing/test_looponfail.py::TestLooponFailing::test_looponfail_removed_test - broken by pytest 3.1+
XFAIL testing/test_remote.py::test_remoteinitconfig - #59
XFAIL testing/test_remote.py::TestWorkerInteractor::test_happy_run_events_converted - reason: implement a simple test for event production
XFAIL testing/test_workermanage.py::TestNodeManager::test_rsync_roots_no_roots - reason: [NOTRUN]
XFAIL testing/test_workermanage.py::test_unserialize_warning_msg[Nested] - Nested warning classes are not supported.
FAILED testing/acceptance_test.py::test_config_initialization - Failed: nomatch: 'dir_a/test_foo.py::test_1*'
FAILED testing/acceptance_test.py::test_collection_crash - Failed: nomatch: 'created: 1/1 worker'
=========================== 2 failed, 184 passed, 6 skipped, 11 xfailed, 4984 warnings in 61.14s (0:01:01) ============================
``` | closed | 2023-05-24T06:19:07Z | 2023-05-24T07:15:59Z | https://github.com/pytest-dev/pytest-xdist/issues/914 | [] | mgorny | 2 |
QingdaoU/OnlineJudge | django | 346 | queries reference deleted fields | Hi, I noticed some of the submission queries still reference problem_id and contest_id, even though they are no longer fields in submission. Is this an error, or is there some special mechanism that prevents errors in these cases?
(line 74-80) in submission/views/oj.py
https://github.com/QingdaoU/OnlineJudge/blob/master/submission/views/oj.py#L74
(line 135) in submission/views/oj.py
https://github.com/QingdaoU/OnlineJudge/blob/master/submission/views/oj.py#L135
(line 164) in submission/views/oj.py
https://github.com/QingdaoU/OnlineJudge/blob/master/submission/views/oj.py#L164 | closed | 2020-12-29T03:38:12Z | 2020-12-29T06:02:59Z | https://github.com/QingdaoU/OnlineJudge/issues/346 | [] | sophie200 | 1 |
tensorpack/tensorpack | tensorflow | 594 | Question about implementation | Hi !
I am currently trying to add an FPN network to your FasterRCNN codebase but have encountered some problems. I would be very grateful if you could share some thoughts!
1. I am facing a CUDNN_STATUS_BAD_PARAM error that seems to be caused by an empty array reaching cuDNN. But I am already using TF 1.4.0, so it should be fixed by your pull request, right?
2. I have successfully trained FPN with RPN (the loss and predictions seem reasonable) and now I am implementing pyramid ROI align by iterating over all strides and determining the level at which to pool each RoI according to its size. My question is that I have to preserve the original order of the boxes in order to match the sampled RoIs (in the function sample_fast_rcnn_targets). I am trying to record box indices and use scatter_nd to recover the original order, but it causes a slight shape difference in decode_bbox_target at runtime (and also problem (1) above when I simply discard tf.scatter_nd and return the pooled feature).
A fragment of the code is below:
```
def roi_align_FPN(featuremaps, boxes, output_shape):
    # feature maps: [P5, P4, P3, P2]
    boxes = tf.stop_gradient(tf.expand_dims(boxes, 0))
    x1, y1, x2, y2 = tf.split(boxes, 4, axis=2)
    w = x2 - x1
    h = y2 - y1
    roi_level = tf_log2(tf.sqrt(h * w) / 224.0)
    roi_level = tf.minimum(5, tf.maximum(
        2, 4 + tf.cast(tf.round(roi_level), tf.int32)))
    roi_level = tf.squeeze(roi_level, 2)
    # limit to P5 ~ P2
    print("ROI LEVEL SHAPE:", roi_level.get_shape())
    print("BOX SHAPE:", boxes.get_shape())
    pooled = []
    box_to_level = []
    strides = [4., 8., 16., 32.]
    for level in range(2, 6):
        featuremap_to_crop = featuremaps[4 - (level - 2)]
        # order: P6 (idx 4) ~ P2 (idx 0) => index shift
        # P2 -> f[4], P3 -> f[3], P4 -> f[2], P5 -> f[1]
        id_for_box_wrt_level = tf.where(tf.equal(roi_level, level))
        level_boxes = tf.gather_nd(boxes, id_for_box_wrt_level)
        # Box indices for crop_and_resize.
        box_indices = tf.cast(id_for_box_wrt_level[:, 0], tf.int32)
        box_to_level.append(tf.cast(id_for_box_wrt_level[:, 1], tf.int32))
        level_boxes = tf.stop_gradient(level_boxes)
        box_indices = tf.stop_gradient(box_indices)
        # Perform RoI pooling (box coordinates are still w.r.t. the image size).
        level_boxes = level_boxes * (1.0 / strides[level - 2])
        ret = crop_and_resize(
            featuremap_to_crop, level_boxes,
            tf.zeros([tf.shape(level_boxes)[0]], dtype=tf.int32),
            output_shape * 2)
        ret = tf.nn.avg_pool(ret, [1, 1, 2, 2], [1, 1, 2, 2],
                             padding='SAME', data_format='NCHW')
        pooled.append(ret)
    pooled = tf.concat(pooled, axis=0)
    box_to_level = tf.concat(box_to_level, axis=0)
    pooled = tf.scatter_nd(box_to_level, pooled, tf.shape(pooled),
                           name="scatter_pool")
```
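As a side note, the level-assignment heuristic in the snippet, level = clamp(4 + round(log2(sqrt(w*h) / 224)), 2, 5), can be sanity-checked in isolation with a minimal pure-Python sketch (the function name is mine, not from the codebase):

```python
import math

def fpn_level(x1, y1, x2, y2, k0=4, canonical=224.0, k_min=2, k_max=5):
    """FPN level for a box: k0 + log2(sqrt(w*h)/canonical), rounded and clamped."""
    w, h = x2 - x1, y2 - y1
    k = k0 + math.log2(math.sqrt(w * h) / canonical)
    return min(k_max, max(k_min, int(round(k))))

assert fpn_level(0, 0, 224, 224) == 4  # canonical box pools from P4
assert fpn_level(0, 0, 448, 448) == 5  # 2x larger -> one level up
assert fpn_level(0, 0, 112, 112) == 3  # 2x smaller -> one level down
assert fpn_level(0, 0, 10, 10) == 2    # tiny boxes clamp to P2
```

Note that if no boxes land on some level, tf.where returns an empty tensor and crop_and_resize receives an empty batch, which looks like the empty-array case from question (1); handling the empty-level case explicitly may therefore sidestep the cuDNN error.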
Thanks ! | closed | 2018-01-15T15:57:16Z | 2018-05-30T20:59:32Z | https://github.com/tensorpack/tensorpack/issues/594 | [
"examples"
] | tkuanlun350 | 9 |
errbotio/errbot | automation | 1,312 | CLI: listing all available backends is broken in 6.0.0 | ### I am...
* [x] Reporting a bug
### I am running...
* Errbot version: 6.0.0a0 and 6.0.0
* OS version: Linux 4.20.7-1-default GNU/Linux
* Python version: 3.7.2
* Using a virtual environment: yes
### Issue description
The `-l` / `--list` option is broken in versions 6.0.0a0 and 6.0.0. It works correctly in version 5.2.0.
### Steps to reproduce
```shell
$ uname -o -r -s
Linux 4.20.7-1-default GNU/Linux
$ python3 -V
Python 3.7.2
$ mkdir -p /tmp/errbot
$ cd /tmp/errbot
$ python3 -m venv env
$ . env/bin/activate
(env) $ pip install -q errbot==6.0.0
(env) $ errbot -v
Errbot version 6.0.0
(env) $ errbot -i
Your Errbot directory has been correctly initialized !
Just do "errbot" and it should start in text/development mode.
(env) $ errbot -l
Traceback (most recent call last):
File "/tmp/errbot/env/bin/errbot", line 11, in <module>
sys.exit(main())
File "/tmp/errbot/env/lib64/python3.7/site-packages/errbot/cli.py", line 201, in main
from errbot.bootstrap import enumerate_backends
ImportError: cannot import name 'enumerate_backends' from 'errbot.bootstrap' (/tmp/errbot/env/lib64/python3.7/site-packages/errbot/bootstrap.py)
(env) $ pip install -q 'errbot<6.0.0'
(env) $ errbot -v
Errbot version 5.2.0
(env) $ errbot -l
Available backends:
Graphic
Hipchat
IRC
Null
Slack
Telegram
Test
Text
XMPP
(env) $
```
| closed | 2019-04-01T21:43:37Z | 2019-04-26T03:18:35Z | https://github.com/errbotio/errbot/issues/1312 | [
"#regression"
] | selurvedu | 2 |
jupyter-widgets-contrib/ipycanvas | jupyter | 191 | Use devicePixelRatio for improved rendering on Retina and other higher-density displays | On displays with a higher pixel density (e.g. MacBook Pro Retina displays), HTML canvas looks slightly fuzzy and pixellated by default.
This can be fixed by using the window's `devicePixelRatio` and scaling the context and canvas sizes accordingly - [see example](https://gist.github.com/callumlocke/cc258a193839691f60dd). | open | 2021-04-21T13:23:18Z | 2023-01-28T18:51:25Z | https://github.com/jupyter-widgets-contrib/ipycanvas/issues/191 | [
"enhancement"
] | ideoforms | 3 |
ultralytics/ultralytics | deep-learning | 19,123 | yolo benchmark command with int8=True does not recognize provided data argument | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
I am running the benchmark command with TensorRT (v8.5.5.2) and `half=True` successfully, but when I remove the half-precision argument and include `int8=True`, a warning appears indicating that the command is not picking up the provided `data` argument:
```
$ yolo benchmark model=last.pt imgsz=1280 batch=1 int8=True format=engine data=data.yaml
# WARNING ⚠️ INT8 export requires a missing 'data' arg for calibration. Using default 'data=coco8.yaml'.
# ...
```
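For reference, the `data` argument for INT8 calibration points at a standard dataset YAML; a minimal sketch of the expected structure (all paths and class names below are placeholders, per the Ultralytics detection-dataset format):

```yaml
# data.yaml (placeholder paths and class names)
path: /datasets/mydata   # dataset root
train: images/train
val: images/val
names:
  0: class_a
  1: class_b
```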
### Environment
Ultralytics 8.3.70 🚀 Python-3.8.10 torch-2.0.0+nv23.05 CUDA:0 (Orin, 7337MiB)
Setup complete ✅ (6 CPUs, 7.2 GB RAM, 139.7/233.7 GB disk)
OS Linux-5.10.104-tegra-aarch64-with-glibc2.29
Environment Linux
Python 3.8.10
Install git
RAM 7.16 GB
Disk 139.7/233.7 GB
CPU ARMv8 Processor rev 1 (v8l)
CPU count 6
GPU Orin, 7337MiB
GPU count 1
CUDA 11.4
numpy ✅ 1.23.5<=2.1.1,>=1.23.0
matplotlib ✅ 3.7.5>=3.3.0
opencv-python ✅ 4.9.0.80>=4.6.0
pillow ✅ 9.4.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.28.2>=2.23.0
scipy ✅ 1.10.1>=1.4.1
torch ✅ 2.0.0+nv23.5>=1.8.0
torch ✅ 2.0.0+nv23.5!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.15.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
```
yolo benchmark model=last.pt imgsz=1280 batch=1 int8=True format=engine data=data.yaml
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-07T08:48:50Z | 2025-02-07T10:59:33Z | https://github.com/ultralytics/ultralytics/issues/19123 | [
"bug",
"fixed",
"exports"
] | iokarkan | 6 |
sammchardy/python-binance | api | 1,265 | BinanceAPIException when using coin-m futures get_historical_klines | **Describe the bug**
Using COIN-M methods in python-binance causes weird errors, while the analogous USD-M calls with the same parameters work fine.
**To Reproduce**
This works:

```python
client.get_historical_klines(
    symbol="SOLBUSD",
    interval="1m",
    start_str="2022-09-27 11:54:17.480347+00:00",
    end_str="2022-09-27 13:54:17.480347+00:00",
    klines_type=HistoricalKlinesType.FUTURES,
)
```

This throws `BinanceAPIException: APIError(code=-4088): Maximum time interval is 200 days.`:

```python
client.get_historical_klines(
    symbol="SOLUSD_PERP",
    interval="1m",
    start_str="2022-09-27 11:54:17.480347+00:00",
    end_str="2022-09-27 12:54:17.480347+00:00",
    klines_type=HistoricalKlinesType.FUTURES_COIN,
)
```
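One way to narrow this down is to take the wrapper's datetime parsing out of the equation: convert the window to epoch milliseconds by hand and pass integers. The conversion itself is plain Python (a sketch; the commented-out call is illustrative only, check the python-binance docs for the accepted timestamp types):

```python
from datetime import datetime

def to_ms(ts: str) -> int:
    """ISO-8601 string (with UTC offset) -> epoch milliseconds, as Binance expects."""
    return int(round(datetime.fromisoformat(ts).timestamp() * 1000))

start_ms = to_ms("2022-09-27 11:54:17.480347+00:00")
end_ms = to_ms("2022-09-27 12:54:17.480347+00:00")
assert end_ms - start_ms == 60 * 60 * 1000  # exactly one hour, nowhere near 200 days

# Illustrative (not executed here):
# client.get_historical_klines("SOLUSD_PERP", "1m", start_ms, end_ms,
#                              klines_type=HistoricalKlinesType.FUTURES_COIN)
```

If the integer-timestamp call succeeds, the bug is likely in how the wrapper derives start/end for the COIN-M code path rather than on the exchange side.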
**Expected behavior**
Don't throw an error; return the candlestick array.
**Environment (please complete the following information):**
- Python version: 3.10.6
- Virtual Env: conda
- OS: Mac
- python-binance version 1.0.16
| open | 2022-11-01T11:02:54Z | 2023-01-13T08:09:41Z | https://github.com/sammchardy/python-binance/issues/1265 | [] | dignitas123 | 2 |
mljar/mercury | data-visualization | 288 | fix OutputDir docs | Docs for `OutputDir` have non-working code: https://runmercury.com/docs/output-widgets/outputdir/ | closed | 2023-05-22T07:21:10Z | 2023-05-23T09:54:40Z | https://github.com/mljar/mercury/issues/288 | [] | pplonski | 0 |
microsoft/nni | tensorflow | 5,624 | Node server crash | 
Besides, when I use NNI to connect to another machine (it can connect to itself with the "remote" platform), the same problem occurs:
"FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory"
**Environment**:
- NNI version: master
- Training service (local|remote|pai|aml|etc): local, and reusemode=False
- Client OS: windows 10.0.19042.1466
- Server OS (for remote mode only): windows 10.0.19042.1466
- Python version: 3.8
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**: | open | 2023-06-29T07:28:52Z | 2023-10-07T17:03:01Z | https://github.com/microsoft/nni/issues/5624 | [] | XiaoXiao-Woo | 6 |
huggingface/diffusers | pytorch | 11,008 | Support wan2.1 video model? | ### Did you like the remote VAE solution?
Yes.
### What can be improved about the current solution?
Wan2.1 video model support is appreciated!
### What other VAEs you would like to see if the pilot goes well?
Wan2.1 video model support is appreciated!
### Notify the members of the team
@hlky @sayakpaul | open | 2025-03-08T04:21:33Z | 2025-03-12T11:07:57Z | https://github.com/huggingface/diffusers/issues/11008 | [] | kexul | 3 |
wkentaro/labelme | computer-vision | 923 | [Feature] Import VGG Annotator .json to Labelme | I am trying to use labelme2voc.py to generate mask PNGs, but I already finished creating polygon annotations for 1000+ pictures in VGG Annotator. I don't see any way to import them into Labelme to edit, or to export them as Labelme's JSON so I can use labelme2voc :(
Is there any way to import VGG's JSON into Labelme?
I tried to convert VGG's JSON to COCO format and then generate mask PNGs from it, but that didn't work either.
Even then, VGG's COCO export settings are different from labelme2coco's settings, so I couldn't convert the VGG COCO JSON back to Labelme format either.
If there are any suggestions or an import method I am not aware of, please let me know.
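On the conversion question: VGG (VIA) polygon regions map fairly directly onto Labelme's shape format, so a small script can do the import. A hedged sketch (field names follow the common VIA 2.x export and Labelme's JSON as I understand them; verify against one of your own files before converting all 1000+ images):

```python
def via_region_to_shape(region):
    """Map one VIA polygon region to a Labelme shape dict."""
    attrs = region["shape_attributes"]
    assert attrs["name"] == "polygon", "only polygon regions handled here"
    points = [[float(x), float(y)]
              for x, y in zip(attrs["all_points_x"], attrs["all_points_y"])]
    # Label taken from the first region attribute value, if any.
    label = next(iter(region.get("region_attributes", {}).values()), "unknown")
    return {"label": label, "points": points, "group_id": None,
            "shape_type": "polygon", "flags": {}}

def via_record_to_labelme(record, height=None, width=None):
    """Map one VIA image record to a Labelme-style annotation dict."""
    return {
        "version": "4.5.9",     # whatever Labelme version you run; adjust
        "flags": {},
        "shapes": [via_region_to_shape(r) for r in record["regions"]],
        "imagePath": record["filename"],
        "imageData": None,      # Labelme can load pixels from imagePath
        "imageHeight": height,  # fill from the real image before saving
        "imageWidth": width,
    }
```

Writing each result to `<image>.json` next to the image should then let both Labelme and labelme2voc.py pick the annotations up.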
| closed | 2021-09-26T09:46:22Z | 2021-10-23T21:08:19Z | https://github.com/wkentaro/labelme/issues/923 | [] | GhamdiOmar | 4 |
youfou/wxpy | api | 450 | Logged out after less than a day, what should I do | open | 2020-04-12T04:52:31Z | 2020-05-27T14:08:57Z | https://github.com/youfou/wxpy/issues/450 | [] | wtf-boy | 6 | |
fastapi/fastapi | api | 12,425 | Installing uvicorn fails on Python 3.13 | ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
Installing dependencies from Pipfile.lock (527371)...
: Collecting click==8.1.7 (from -r c:\users\starlee\appdata\local\temp\pipenv-dbv5y_0a-requirements\pipenv-2599a94r-hashed-reqs.txt (line 1))
: Using cached click-8.1.7-py3-none-any.whl (97 kB)
: Collecting h11==0.14.0 (from -r c:\users\starlee\appdata\local\temp\pipenv-dbv5y_0a-requirements\pipenv-2599a94r-hashed-reqs.txt (line 2))
: Using cached h11-0.14.0-py3-none-any.whl (58 kB)
: Collecting httptools==0.6.1 (from -r c:\users\starlee\appdata\local\temp\pipenv-dbv5y_0a-requirements\pipenv-2599a94r-hashed-reqs.txt (line 3))
: Using cached httptools-0.6.1.tar.gz (191 kB)
: Preparing metadata (setup.py): started
: Preparing metadata (setup.py): finished with status 'done'
: Collecting python-dotenv==1.0.1 (from -r c:\users\starlee\appdata\local\temp\pipenv-dbv5y_0a-requirements\pipenv-2599a94r-hashed-reqs.txt (line 4))
: Using cached python_dotenv-1.0.1-py3-none-any.whl (19 kB)
: Collecting pyyaml==6.0.2 (from -r c:\users\starlee\appdata\local\temp\pipenv-dbv5y_0a-requirements\pipenv-2599a94r-hashed-reqs.txt (line 5))
: Using cached PyYAML-6.0.2-cp313-cp313-win_amd64.whl (156 kB)
: Collecting uvicorn==0.31.1 (from uvicorn==0.31.1->-r c:\users\starlee\appdata\local\temp\pipenv-dbv5y_0a-requirements\pipenv-2599a94r-hashed-reqs.txt (line 6))
: Using cached uvicorn-0.31.1-py3-none-any.whl (63 kB)
: Collecting watchfiles==0.24.0 (from -r c:\users\starlee\appdata\local\temp\pipenv-dbv5y_0a-requirements\pipenv-2599a94r-hashed-reqs.txt (line 7))
: Using cached watchfiles-0.24.0-cp313-none-win_amd64.whl (276 kB)
: Collecting websockets==13.1 (from -r c:\users\starlee\appdata\local\temp\pipenv-dbv5y_0a-requirements\pipenv-2599a94r-hashed-reqs.txt (line 8))
: Using cached websockets-13.1-cp313-cp313-win_amd64.whl (159 kB)
: Building wheels for collected packages: httptools
: Building wheel for httptools (setup.py): started
: Building wheel for httptools (setup.py): finished with status 'error'
: Running setup.py clean for httptools
: Failed to build httptools
: error: subprocess-exited-with-error
:
: × python setup.py bdist_wheel did not run successfully.
: │ exit code: 1
: ╰─> [66 lines of output]
: C:\Users\starlee\.virtualenvs\test-uSjHFhYi\Lib\site-packages\setuptools\_distutils\dist.py:261: UserWarning: Unknown distribution option: 'test_suite'
: warnings.warn(msg)
: running bdist_wheel
: running build
: running build_py
: creating build\lib.win-amd64-cpython-313\httptools
: copying httptools\_version.py -> build\lib.win-amd64-cpython-313\httptools
: copying httptools\__init__.py -> build\lib.win-amd64-cpython-313\httptools
: creating build\lib.win-amd64-cpython-313\httptools\parser
: copying httptools\parser\errors.py -> build\lib.win-amd64-cpython-313\httptools\parser
: copying httptools\parser\__init__.py -> build\lib.win-amd64-cpython-313\httptools\parser
: running egg_info
: writing httptools.egg-info\PKG-INFO
: writing dependency_links to httptools.egg-info\dependency_links.txt
: writing requirements to httptools.egg-info\requires.txt
: writing top-level names to httptools.egg-info\top_level.txt
: reading manifest file 'httptools.egg-info\SOURCES.txt'
: reading manifest template 'MANIFEST.in'
: adding license file 'LICENSE'
: writing manifest file 'httptools.egg-info\SOURCES.txt'
: C:\Users\starlee\.virtualenvs\test-uSjHFhYi\Lib\site-packages\setuptools\command\build_py.py:218: _Warning: Package 'httptools.parser' is absent from the `packages` configuration.
: !!
:
: ********************************************************************************
: ############################
: # Package would be ignored #
: ############################
: Python recognizes 'httptools.parser' as an importable package[^1],
: but it is absent from setuptools' `packages` configuration.
:
: This leads to an ambiguous overall configuration. If you want to distribute this
: package, please make sure that 'httptools.parser' is explicitly added
: to the `packages` configuration field.
:
: Alternatively, you can also rely on setuptools' discovery methods
: (for example by using `find_namespace_packages(...)`/`find_namespace:`
: instead of `find_packages(...)`/`find:`).
:
: You can read more about "package discovery" on setuptools documentation page:
:
: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
:
: If you don't want 'httptools.parser' to be distributed and are
: already explicitly excluding 'httptools.parser' via
: `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
: you can try to use `exclude_package_data`, or `include-package-data=False` in
: combination with a more fine grained `package-data` configuration.
:
: You can read more about "package data files" on setuptools documentation page:
:
: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html
:
:
: [^1]: For Python, any directory (with suitable naming) can be imported,
: even if it does not contain any `.py` files.
: On the other hand, currently there is no concept of package data
: ********************************************************************************
:
: !!
: check.warn(importable)
:
: running build_ext
: building 'httptools.parser.parser' extension
: error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
:
:
: note: This error originates from a subprocess, and is likely not a problem with pip.
: ERROR: Failed building wheel for httptools
: ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (httptools)
ERROR: Couldn't install package: {}
Package installation failed... | closed | 2024-10-11T07:36:16Z | 2024-10-11T07:39:41Z | https://github.com/fastapi/fastapi/issues/12425 | [] | zaochayeming | 0 |
pywinauto/pywinauto | automation | 522 | Selecting a Color from a Color Palette | I have a Color Selection window as below.

I use win32 backend
The properties from Swapy are as below -

If I want to select a color, is there a way to do that?
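If keyboard navigation is the only handle, it can at least be made deterministic: treat the palette as a row-major grid and compute the arrow presses for a target cell. A sketch (the 8-column layout and the top-left starting cell are assumptions about this particular dialog):

```python
def keys_to_cell(index, columns=8, start=0):
    """Arrow-key string to move from cell `start` to cell `index` (row-major grid)."""
    r0, c0 = divmod(start, columns)
    r1, c1 = divmod(index, columns)
    keys = ["{DOWN}" if r1 > r0 else "{UP}"] * abs(r1 - r0)
    keys += ["{RIGHT}" if c1 > c0 else "{LEFT}"] * abs(c1 - c0)
    return "".join(keys)

# Cell 10 in an 8-wide grid is row 1, column 2: one DOWN, two RIGHTs.
assert keys_to_cell(10) == "{DOWN}{RIGHT}{RIGHT}"
# Then send it, e.g.: dlg.type_keys(keys_to_cell(10) + "{ENTER}")
```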
My workaround so far is to use sendkeys to send arrow keys, e.g. {UP} or {RIGHT}. Is there a direct way to select the color? | closed | 2018-07-22T07:13:12Z | 2018-07-24T00:08:54Z | https://github.com/pywinauto/pywinauto/issues/522 | [
"question",
"wontfix"
] | madhavankumar | 3 |
Lightning-AI/pytorch-lightning | deep-learning | 19,764 | Resume from mid steps inside an epoch | ### Description & Motivation
LLMs are trained on corpora of growing size, so resuming only at epoch boundaries is not enough: models may be trained for only a few epochs, and one epoch may take days. Currently Lightning prints the following warning when trying to resume from mid-epoch steps and asks for a resumable dataloader:

However, I can't find any examples of resuming from mid-epoch steps in the docs or blogs (maybe my bad). And it seems strange to me to implement a dataloader with state_dict/load_state_dict methods: a dataloader cannot hold state by design; it's the iterator derived from the dataloader that is resumable and should hold the necessary state. Besides, we may not need state_dict and load_state_dict to save/load dataloaders at all, as the epoch/step indices carry enough information to restore the necessary training-batch state.
I propose a possible hack to work around this issue, taking inspiration from the [hugging face train script](https://github.com/huggingface/transformers/blob/edf0935dca9189db599ac6c3f3ef714160acbbd8/examples/pytorch/language-modeling/run_clm_no_trainer.py#L617).
### Pitch
_No response_
### Alternatives
Here is an ugly hack (via callbacks in a LightningModule) that I currently use to resume from a specific batch:
```
class SkipBatchSampler(BatchSampler):
r"""
Modified from huggingface accelerate/data_loader.py
"""
def __init__(self, batch_sampler: BatchSampler, skip_batches: int = 0):
self.batch_sampler = batch_sampler
self.skip_batches = skip_batches
def __iter__(self):
for i, batch in enumerate(self.batch_sampler):
if i >= self.skip_batches:
yield batch
def __len__(self):
return len(self.batch_sampler) # - self.skip_batches, due to in loops.training_epoch_loop.py on_run_start(), which will set fetched value, ugly hackin here
_PYTORCH_DATALOADER_KWARGS_SUBSTITUTE = {
"num_workers": 0,
"collate_fn": None,
"pin_memory": False,
"timeout": 0,
"worker_init_fn": None,
"multiprocessing_context": None,
"generator": None,
"prefetch_factor": 2,
"persistent_workers": False,
}
def resume_dataloader(dataloader: DataLoader, steps_in_epoch: int) -> DataLoader:
r"""
We don't want to directly iterate on dataloader (which will cause data
preprocessing overhead), we iterate on sampler
"""
#TODO, currently not support iterable dataset, DataLoaderDispatcher, DataLoaderShard
assert not isinstance(dataloader.dataset, IterableDataset)
new_batch_sampler = SkipBatchSampler(dataloader.batch_sampler, steps_in_epoch)
kwargs = {k: getattr(dataloader, k, _PYTORCH_DATALOADER_KWARGS_SUBSTITUTE[k])
for k in _PYTORCH_DATALOADER_KWARGS_SUBSTITUTE}
return DataLoader(dataloader.dataset, batch_sampler=new_batch_sampler, **kwargs)
class LightningModel(L.LightningModule):
# hackins
def on_train_start(self):
self.restarted_run = False
def on_train_epoch_start(self):
# modify train dataloader
if self.trainer.fit_loop.restarting:
self.restarted_run = True
self.trainer.fit_loop.backup_dataloaders = self.trainer.fit_loop._combined_loader.flattened
self.trainer.fit_loop._combined_loader.flattened = [
resume_dataloader(dl, self.trainer.fit_loop.epoch_loop.batch_progress.current.completed)
for dl in self.trainer.fit_loop._combined_loader.flattened
]
# need to call iter to rebuild data_fetcher.iterator (which is originally
# set in setup_data)
self.trainer.fit_loop._data_fetcher.setup(self.trainer.fit_loop._combined_loader)
with isolate_rng():
iter(self.trainer.fit_loop._data_fetcher)
else:
if self.restarted_run:
self.trainer.fit_loop._combined_loader.flattened = self.trainer.fit_loop.backup_dataloaders
# set epoch again, cause the epoch right after restarting one will have problems
for dl in self.trainer.fit_loop._combined_loader.flattened:
_set_sampler_epoch(dl, self.trainer.current_epoch)
self.trainer.fit_loop._data_fetcher.setup(self.trainer.fit_loop._combined_loader)
# no need to rebuild iterator, already in epoch_loop.on_run_start
# iter(self.trainer.fit_loop._data_fetcher)
```
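For reference, the skip logic at the heart of `SkipBatchSampler` is framework-agnostic and easy to sanity-check without torch; a stripped-down sketch (the batch sampler replaced by any iterable of batches):

```python
class SkipBatches:
    """Yield batches from an underlying batch iterable, skipping the first n."""

    def __init__(self, batch_iterable, skip_batches=0):
        self.batch_iterable = batch_iterable
        self.skip_batches = skip_batches

    def __iter__(self):
        for i, batch in enumerate(self.batch_iterable):
            if i >= self.skip_batches:
                yield batch

batches = [[0, 1], [2, 3], [4, 5], [6, 7]]
resumed = list(SkipBatches(batches, skip_batches=2))
assert resumed == [[4, 5], [6, 7]]  # resumes at the third batch of the epoch
```

The length bookkeeping is the delicate part, as the comment in the `__len__` above notes: the epoch loop's fetched counter assumes the full epoch length rather than the shortened one.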
### Additional context
_No response_
cc @borda | open | 2024-04-11T15:58:21Z | 2024-11-22T19:34:26Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19764 | [
"feature",
"needs triage"
] | xiaosuyu1997 | 2 |
deepspeedai/DeepSpeed | deep-learning | 5,618 | [BUG] ZeRO optimizer with MoE Expert Parallelism | **Describe the bug**
Just like in this PR: https://github.com/microsoft/DeepSpeed/pull/5259, the ZeRO optimizer also needs to be fixed:
1. partition logic of expert params.
<img width="808" alt="image" src="https://github.com/microsoft/DeepSpeed/assets/1720972/b9554638-2224-4510-866c-6e6c416d0b08">
2. average_tensor used in gradient reduction in ZeRO-2
<img width="1143" alt="image" src="https://github.com/microsoft/DeepSpeed/assets/1720972/1b8f0d0c-cfc8-4cfe-b6f7-ddd11117c070">
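A toy illustration of the arithmetic (numbers chosen to show the ep factor, not a model of DeepSpeed internals): with expert parallelism, the all-reduce group for an expert parameter has world_size / ep ranks, so dividing the reduced sum by that smaller group size instead of by the full data-parallel world size inflates the average by exactly ep:

```python
world_size, ep = 4, 4
per_rank_grad = 1.0
expert_group = world_size // ep          # ranks reducing this expert param: 1

summed = per_rank_grad * expert_group    # all-reduce(SUM) over the expert group
buggy = summed / expert_group            # divides by the small group   -> 1.0
fixed = summed / world_size              # divides by the full DP world -> 0.25

assert buggy == fixed * ep               # exactly ep (= 4) times too large
```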
**To Reproduce**
Steps to reproduce the behavior:
Use ep=4 and the AdamW optimizer to train an LLM.
**Expected behavior**
Expert gradients should be equal under ep=4 and ep=1, but currently they are 4 times bigger under ep=4. | closed | 2024-06-05T11:19:21Z | 2024-09-16T20:52:49Z | https://github.com/deepspeedai/DeepSpeed/issues/5618 | [
"bug",
"training"
] | Jack47 | 2 |
healthchecks/healthchecks | django | 1,042 | C#/.NET API Wrapper | Hi, I've started building out a new wrapper here: https://github.com/FizzBuzz791/NHealthCheck
I'm happy to submit a PR to have it added to the third-party resources list if you'd like. Otherwise, I would appreciate it if you could add it on my behalf.
Thanks. | closed | 2024-08-05T07:32:10Z | 2024-10-25T07:08:00Z | https://github.com/healthchecks/healthchecks/issues/1042 | [] | FizzBuzz791 | 1 |
wkentaro/labelme | deep-learning | 469 | how to use "flags" label ? | closed | 2019-08-15T08:54:11Z | 2020-01-27T01:29:59Z | https://github.com/wkentaro/labelme/issues/469 | [] | dlml | 6 | |
deepfakes/faceswap | machine-learning | 1,300 | Traning crash | *Note: For general usage questions and help, please use either our [FaceSwap Forum](https://faceswap.dev/forum)
or [FaceSwap Discord server](https://discord.gg/FC54sYg). General usage questions are liable to be closed without
response.*
**Crash reports MUST be included when reporting bugs.**
**Describe the bug**
Hello everyone. I use a GTX 1070 with 8 GB of memory for training. I used the "extract" option to generate two folders of training material, A1 and B2; the images are all 128 x 128 in size. Then I switch to the training tab and enter the parameters, but no matter which trainer I select, training ends a few dozen seconds after it starts.
**Desktop (please complete the following information):**
- OS: windows10 with faceswap installer version downloaded from web
**Crash Report**
The crash report generated in the root of your Faceswap folder
```
01/28/2023 04:18:15 MainProcess _training _base set_timelapse_feed DEBUG Setting preview feed: (side: 'a', images: 498)
01/28/2023 04:18:15 MainProcess _training _base _load_generator DEBUG Loading generator, side: a, is_display: True, batch_size: 14
01/28/2023 04:18:15 MainProcess _training generator __init__ DEBUG Initializing PreviewDataGenerator: (model: villain, side: a, images: 498 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
01/28/2023 04:18:15 MainProcess _training generator _get_output_sizes DEBUG side: a, model output shapes: [(None, 128, 128, 3), (None, 128, 128, 3)], output sizes: [128]
01/28/2023 04:18:15 MainProcess _training cache __init__ DEBUG Initializing: RingBuffer (batch_size: 14, image_shape: (128, 128, 6), buffer_size: 2, dtype: uint8
01/28/2023 04:18:15 MainProcess _training cache __init__ DEBUG Initialized: RingBuffer
01/28/2023 04:18:15 MainProcess _training generator __init__ DEBUG Initialized PreviewDataGenerator
01/28/2023 04:18:15 MainProcess _training generator minibatch_ab DEBUG do_shuffle: False
01/28/2023 04:18:15 MainProcess _training multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run_3', thread_count: 1)
01/28/2023 04:18:15 MainProcess _training multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run_3'
01/28/2023 04:18:15 MainProcess _training multithreading start DEBUG Starting thread(s): '_run_3'
01/28/2023 04:18:15 MainProcess _training multithreading start DEBUG Starting thread 1 of 1: '_run_3'
01/28/2023 04:18:15 MainProcess _run_3 generator _minibatch DEBUG Loading minibatch generator: (image_count: 498, do_shuffle: False)
01/28/2023 04:18:15 MainProcess _training multithreading start DEBUG Started all threads '_run_3': 1
01/28/2023 04:18:15 MainProcess _training _base set_timelapse_feed DEBUG Setting preview feed: (side: 'b', images: 167)
01/28/2023 04:18:15 MainProcess _training _base _load_generator DEBUG Loading generator, side: b, is_display: True, batch_size: 14
01/28/2023 04:18:15 MainProcess _training generator __init__ DEBUG Initializing PreviewDataGenerator: (model: villain, side: b, images: 167 , batch_size: 14, config: {'centering': 'face', 'coverage': 87.5, 'icnr_init': False, 'conv_aware_init': False, 'optimizer': 'adam', 'learning_rate': 5e-05, 'epsilon_exponent': -7, 'autoclip': False, 'reflect_padding': False, 'allow_growth': False, 'mixed_precision': False, 'nan_protection': True, 'convert_batchsize': 16, 'loss_function': 'ssim', 'loss_function_2': 'mse', 'loss_weight_2': 100, 'loss_function_3': None, 'loss_weight_3': 0, 'loss_function_4': None, 'loss_weight_4': 0, 'mask_loss_function': 'mse', 'eye_multiplier': 3, 'mouth_multiplier': 2, 'penalized_mask_loss': True, 'mask_type': 'extended', 'mask_blur_kernel': 3, 'mask_threshold': 4, 'learn_mask': False, 'preview_images': 14, 'zoom_amount': 5, 'rotation_range': 10, 'shift_range': 5, 'flip_chance': 50, 'color_lightness': 30, 'color_ab': 8, 'color_clahe_chance': 50, 'color_clahe_max_size': 4})
01/28/2023 04:18:15 MainProcess _training generator _get_output_sizes DEBUG side: b, model output shapes: [(None, 128, 128, 3), (None, 128, 128, 3)], output sizes: [128]
01/28/2023 04:18:15 MainProcess _training cache __init__ DEBUG Initializing: RingBuffer (batch_size: 14, image_shape: (128, 128, 6), buffer_size: 2, dtype: uint8
01/28/2023 04:18:15 MainProcess _training cache __init__ DEBUG Initialized: RingBuffer
01/28/2023 04:18:15 MainProcess _training generator __init__ DEBUG Initialized PreviewDataGenerator
01/28/2023 04:18:15 MainProcess _training generator minibatch_ab DEBUG do_shuffle: False
01/28/2023 04:18:15 MainProcess _training multithreading __init__ DEBUG Initializing BackgroundGenerator: (target: '_run_4', thread_count: 1)
01/28/2023 04:18:15 MainProcess _training multithreading __init__ DEBUG Initialized BackgroundGenerator: '_run_4'
01/28/2023 04:18:15 MainProcess _training multithreading start DEBUG Starting thread(s): '_run_4'
01/28/2023 04:18:15 MainProcess _training multithreading start DEBUG Starting thread 1 of 1: '_run_4'
01/28/2023 04:18:15 MainProcess _run_4 generator _minibatch DEBUG Loading minibatch generator: (image_count: 167, do_shuffle: False)
01/28/2023 04:18:15 MainProcess _training multithreading start DEBUG Started all threads '_run_4': 1
01/28/2023 04:18:15 MainProcess _training _base set_timelapse_feed DEBUG Set time-lapse feed: {'a': <generator object BackgroundGenerator.iterator at 0x000001CC7156C2E0>, 'b': <generator object BackgroundGenerator.iterator at 0x000001CC7156C740>}
01/28/2023 04:18:15 MainProcess _training _base _setup DEBUG Set up time-lapse
01/28/2023 04:18:15 MainProcess _training _base output_timelapse DEBUG Getting time-lapse samples
01/28/2023 04:18:15 MainProcess _training _base generate_preview DEBUG Generating preview (is_timelapse: True)
01/28/2023 04:18:15 MainProcess _run_3 multithreading run DEBUG Error in thread (_run_3): 'NoneType' object is not subscriptable
01/28/2023 04:18:15 MainProcess _run_4 multithreading run DEBUG Error in thread (_run_4): 'NoneType' object is not subscriptable
01/28/2023 04:18:15 MainProcess _training multithreading check_and_raise_error DEBUG Thread error caught: [(<class 'TypeError'>, TypeError("'NoneType' object is not subscriptable"), <traceback object at 0x000001CAD465C2C0>)]
01/28/2023 04:18:15 MainProcess _training multithreading run DEBUG Error in thread (_training): 'NoneType' object is not subscriptable
01/28/2023 04:18:16 MainProcess MainThread train _monitor DEBUG Thread error detected
01/28/2023 04:18:16 MainProcess MainThread train _monitor DEBUG Closed Monitor
01/28/2023 04:18:16 MainProcess MainThread train _end_thread DEBUG Ending Training thread
01/28/2023 04:18:16 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
01/28/2023 04:18:16 MainProcess MainThread multithreading join DEBUG Joining Threads: '_training'
01/28/2023 04:18:16 MainProcess MainThread multithreading join DEBUG Joining Thread: '_training'
01/28/2023 04:18:16 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training'
Traceback (most recent call last):
File "C:\Users\redmond\faceswap\lib\cli\launcher.py", line 230, in execute_script
process.process()
File "C:\Users\redmond\faceswap\scripts\train.py", line 213, in process
self._end_thread(thread, err)
File "C:\Users\redmond\faceswap\scripts\train.py", line 253, in _end_thread
thread.join()
File "C:\Users\redmond\faceswap\lib\multithreading.py", line 217, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\redmond\faceswap\lib\multithreading.py", line 96, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\redmond\faceswap\scripts\train.py", line 275, in _training
raise err
File "C:\Users\redmond\faceswap\scripts\train.py", line 265, in _training
self._run_training_cycle(model, trainer)
File "C:\Users\redmond\faceswap\scripts\train.py", line 353, in _run_training_cycle
trainer.train_one_step(viewer, timelapse)
File "C:\Users\redmond\faceswap\plugins\train\trainer\_base.py", line 246, in train_one_step
self._update_viewers(viewer, timelapse_kwargs)
File "C:\Users\redmond\faceswap\plugins\train\trainer\_base.py", line 354, in _update_viewers
self._timelapse.output_timelapse(timelapse_kwargs)
File "C:\Users\redmond\faceswap\plugins\train\trainer\_base.py", line 1070, in output_timelapse
self._samples.images = self._feeder.generate_preview(is_timelapse=True)
File "C:\Users\redmond\faceswap\plugins\train\trainer\_base.py", line 510, in generate_preview
side_feed, side_samples = next(iterator[side])
File "C:\Users\redmond\faceswap\lib\multithreading.py", line 287, in iterator
self.check_and_raise_error()
File "C:\Users\redmond\faceswap\lib\multithreading.py", line 169, in check_and_raise_error
raise error[1].with_traceback(error[2])
File "C:\Users\redmond\faceswap\lib\multithreading.py", line 96, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\redmond\faceswap\lib\multithreading.py", line 270, in _run
for item in self.generator(*self._gen_args, **self._gen_kwargs):
File "C:\Users\redmond\faceswap\lib\training\generator.py", line 221, in _minibatch
retval = self._process_batch(img_paths)
File "C:\Users\redmond\faceswap\lib\training\generator.py", line 334, in _process_batch
raw_faces, detected_faces = self._get_images_with_meta(filenames)
File "C:\Users\redmond\faceswap\lib\training\generator.py", line 245, in _get_images_with_meta
raw_faces = self._face_cache.cache_metadata(filenames)
File "C:\Users\redmond\faceswap\lib\training\cache.py", line 252, in cache_metadata
self._validate_version(meta, filename)
File "C:\Users\redmond\faceswap\lib\training\cache.py", line 312, in _validate_version
alignment_version = png_meta["source"]["alignments_version"]
TypeError: 'NoneType' object is not subscriptable
============ System Information ============
backend: nvidia
encoding: cp936
git_branch: master
git_commits: a1ef5ed tests: - unit test: tools.alignments.media - Add mypy test - Typing fixes
gpu_cuda: 11.3
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: NVIDIA GeForce GTX 1070
gpu_devices_active: GPU_0
gpu_driver: 527.56
gpu_vram: GPU_0: 8192MB (371MB free)
os_machine: AMD64
os_platform: Windows-10-10.0.19044-SP0
os_release: 10
py_command: C:\Users\redmond\faceswap\faceswap.py train -A D:/NetdiskDownload/ds/A1 -B D:/NetdiskDownload/ds/B2 -m D:/NetdiskDownload/ds/C -t villain -bs 1 -it 1000000 -D default -s 250 -ss 25000 -tia D:/NetdiskDownload/ds/A1 -tib D:/NetdiskDownload/ds/B2 -to D:/NetdiskDownload/ds/D -L INFO -gui
py_conda_version: conda 23.1.0
py_implementation: CPython
py_version: 3.9.16
py_virtual_env: True
sys_cores: 16
sys_processor: AMD64 Family 23 Model 1 Stepping 1, AuthenticAMD
sys_ram: Total: 16316MB, Available: 5597MB, Used: 10718MB, Free: 5597MB
``` | closed | 2023-01-27T20:30:53Z | 2023-01-29T03:23:50Z | https://github.com/deepfakes/faceswap/issues/1300 | [] | RedmondLee | 1 |
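The `TypeError` at the bottom of the traceback above occurs because `png_meta` is `None`: the training images carry no embedded alignment metadata, so `png_meta["source"]` ends up subscripting `None`. The sketch below illustrates the failure mode with a defensive check; the function name and error message are assumptions, not faceswap's actual code:

```python
def get_alignments_version(png_meta, filename):
    """Return the embedded alignments version, failing loudly when metadata is absent."""
    if not png_meta or "source" not in png_meta:
        # Subscripting None here is exactly what raises the TypeError in the log above.
        raise ValueError(
            f"{filename} has no embedded alignment metadata; "
            "re-extract the faces before training"
        )
    return png_meta["source"]["alignments_version"]

print(get_alignments_version({"source": {"alignments_version": "2.2"}}, "face_0.png"))  # → 2.2
```

A check like this would turn the opaque `'NoneType' object is not subscriptable` crash into an actionable message.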
Miserlou/Zappa | django | 1,821 | Django HTTP_HOST not valid according to RFC | <!--- Provide a general summary of the issue in the Title above -->
## Context
My `requirements.txt`:
```
Django~=1.11
django-countries-plus==1.2.1
djangorestframework==3.9.0
-e git+https://github.com/cuda-networks/django-eb-sqs.git#egg=django-eb-sqs
pymysql
```
My `settings.py`:
```python
ALLOWED_HOSTS = [
'api.example.com',
'localhost',
]
```
`[DEBUG]` 2019-03-19T07:04:06.153Z **** Zappa Event:
```javascript
{
'resource':'/',
'path':'/',
'httpMethod':'GET',
'headers':{
'accept':'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'accept-encoding':'br, gzip, deflate',
'accept-language':'en-us',
'cookie':'_ga=GA1.2.****',
'Host':'api.example.com',
'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.3 Safari/605.1.15',
'X-Amzn-Trace-Id':'Root=****',
'X-Forwarded-For':'46.183.****',
'X-Forwarded-Port':'443',
'X-Forwarded-Proto':'https'
},
'multiValueHeaders':{
'accept':[
'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
],
'accept-encoding':[
'br, gzip, deflate'
],
'accept-language':[
'en-us'
],
'cookie':[
'_ga=GA1.2.****'
],
'Host':[
'api.example.com'
],
'User-Agent':[
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.3 Safari/605.1.15'
],
'X-Amzn-Trace-Id':[
'Root=****'
],
'X-Forwarded-For':[
'46.183.****'
],
'X-Forwarded-Port':[
'443'
],
'X-Forwarded-Proto':[
'https'
]
},
'queryStringParameters':None,
'multiValueQueryStringParameters':None,
'pathParameters':None,
'stageVariables':None,
'requestContext':{
'resourceId':'****',
'resourcePath':'/',
'httpMethod':'GET',
'extendedRequestId':'****',
'requestTime':'19/Mar/2019:07:04:06 +0000',
'path':'/',
'accountId':'****',
'protocol':'HTTP/1.1',
'stage':'prd',
'domainPrefix':'api',
'requestTimeEpoch':1552979046090,
'requestId':'****',
'identity':{
'cognitoIdentityPoolId':None,
'accountId':None,
'cognitoIdentityId':None,
'caller':None,
'sourceIp':'****',
'accessKey':None,
'cognitoAuthenticationType':None,
'cognitoAuthenticationProvider':None,
'userArn':None,
'userAgent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.3 Safari/605.1.15',
'user':None
},
'domainName':'api.example.com',
'apiId':'****'
},
'body':None,
'isBase64Encoded':False
}
```
## Expected Behavior
<!--- Tell us what should happen -->
I am not certain if I am mistaken, but I would hope that this error doesn't show.
## Actual Behavior
<!--- Tell us what happens instead -->
For some reason my *custom domain name* appears three times in the HTTP_HOST variable:
```
DisallowedHost
Invalid HTTP_HOST header: 'api.example.com, api.example.com, api.example.com'. The domain name provided is not valid according to RFC 1034/1035.
```
I did all the necessary setup in API Gateway, and have the domain point to the *Target Domain Name* via `CNAME`.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
I did try to change the Django `ALLOWED_HOSTS` as an ugly workaround, which didn't work:
```python
ALLOWED_HOSTS = [
'api.example.com',
'api.example.com, api.example.com, api.example.com',
'localhost',
]
```
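Rather than widening `ALLOWED_HOSTS`, a different (hypothetical, not from this issue) workaround is to collapse the duplicated header before Django validates it, e.g. in a small WSGI wrapper around the handler. The helper below is a minimal sketch; the function name and the wrapping idea are assumptions:

```python
def normalize_host_header(value):
    """Collapse 'h, h, h' into 'h' when all comma-separated parts are identical."""
    parts = {part.strip() for part in value.split(",")}
    # Only collapse when every duplicate matches; otherwise leave the value untouched.
    return parts.pop() if len(parts) == 1 else value

print(normalize_host_header("api.example.com, api.example.com, api.example.com"))  # → api.example.com
```

Applied to `environ["HTTP_HOST"]` before Django's host validation runs, this would sidestep the RFC 1034/1035 rejection without loosening `ALLOWED_HOSTS`.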
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
n / a
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: master (waiting for https://github.com/Miserlou/Zappa/pull/1762 to become available as a released version)
* Operating System and Python version: Python 3.7 on AWS Lambda
* The output of `pip freeze`:
```
argcomplete==1.9.3
awscli==1.16.125
awsebcli==3.14.13
blessed==1.15.0
boto3==1.9.115
botocore==1.12.115
cached-property==1.5.1
cement==2.8.2
certifi==2019.3.9
cfn-flip==1.1.0.post1
chardet==3.0.4
Click==7.0
colorama==0.3.9
coreapi==2.3.3
coreschema==0.0.4
Django==1.11.20
django-countries-plus==1.2.1
-e git+https://github.com/cuda-networks/django-eb-sqs.git@2c5dff0392fad2ed383e4991791239e1052437da#egg=django_eb_sqs
djangorestframework==3.9.0
docker==3.7.0
docker-compose==1.23.2
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
docutils==0.14
durationpy==0.5
future==0.16.0
hjson==3.0.1
idna==2.7
inflection==0.3.1
itypes==1.1.0
Jinja2==2.10
jmespath==0.9.3
jsonschema==2.6.0
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.1
pathspec==0.5.9
placebo==0.9.0
protobuf==3.7.0
pyasn1==0.4.5
PyMySQL==0.9.3
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2018.9
PyYAML==3.13
redis==3.2.1
requests==2.20.1
rsa==3.4.2
ruamel.yaml==0.15.89
s3transfer==0.2.0
semantic-version==2.5.0
six==1.12.0
termcolor==1.1.0
texttable==0.9.1
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.5
Unidecode==1.0.23
uritemplate==3.0.0
urllib3==1.24.1
wcwidth==0.1.7
websocket-client==0.55.0
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
-e git+https://github.com/Miserlou/Zappa.git@c925275b990527d534a6ac265f5a3472fdcf36fb#egg=zappa
```
* Link to your project (optional):
* Your `zappa_settings.py`:
```javascript
{
"prd": {
"aws_region": "eu-central-1",
"django_settings": "com****api.settingsprd",
"exclude": [
"*.sqlite3"
],
"profile_name": "eb-cli",
"project_name": "com_****_api",
"runtime": "python3.7",
"s3_bucket": "com.****.api.prd.zappa",
"certificate_arn": "arn:aws:acm:us-east-1:****:certificate/****",
"domain": "api.example.com",
"vpc_config": {
"SubnetIds": [
"subnet-****",
"subnet-****",
"subnet-****"
],
"SecurityGroupIds": [
"sg-****"
]
}
}
}
``` | open | 2019-03-19T07:03:38Z | 2019-03-21T14:04:58Z | https://github.com/Miserlou/Zappa/issues/1821 | [] | marvoloe | 9 |
Skyvern-AI/skyvern | api | 1,271 | Why is the task running so slowly? | Why is the task running so slowly? It needs 4 minutes for the fill-the-form scenario. | open | 2024-11-27T06:07:51Z | 2024-11-29T04:02:23Z | https://github.com/Skyvern-AI/skyvern/issues/1271 | [] | CloudZou | 1 |
jina-ai/serve | machine-learning | 5,585 | Change documentation for `CONTEXT` environment variables | **Describe your proposal/problem**
<!-- A clear and concise description of what the proposal is. -->
The [docs](https://docs.jina.ai/concepts/flow/yaml-spec/#context-variables) don't specify how to use context variables in a Flow YAML.
It should be made clear that, when defining a Flow using the YAML specification, `VALUE_A` and `VALUE_B` should appear under the `env` key.
---
**Flow.yml**
```
jtype: Flow
executors:
- name: executor1
uses: executor1/config.yml
env:
VALUE_A: 123
VALUE_B: hello
uses_with:
var_a: ${{ CONTEXT.VALUE_A }}
var_b: ${{ CONTEXT.VALUE_B }}
``` | closed | 2023-01-09T16:05:59Z | 2023-04-24T00:18:00Z | https://github.com/jina-ai/serve/issues/5585 | [
"Stale",
"area/docs"
] | npitsillos | 2 |
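To make the requested documentation concrete, here is a toy illustration of how `${{ CONTEXT.NAME }}` placeholders can be resolved from an `env` mapping. This is illustrative only and not Jina's actual parser:

```python
import re

# Pattern matching context placeholders of the form ${{ CONTEXT.NAME }}
_CONTEXT_RE = re.compile(r"\$\{\{\s*CONTEXT\.(\w+)\s*\}\}")

def substitute_context(text, env):
    """Replace ${{ CONTEXT.NAME }} placeholders with values from an env mapping."""
    return _CONTEXT_RE.sub(lambda m: str(env.get(m.group(1), m.group(0))), text)

print(substitute_context("var_a: ${{ CONTEXT.VALUE_A }}", {"VALUE_A": 123}))  # → var_a: 123
```

Unknown names are left untouched, which mirrors the intuition that `uses_with` values can only reference variables declared under the executor's `env` key.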
marcomusy/vedo | numpy | 183 | plotting time-series data over a network | Hi @marcomusy
I am trying to plot time series data related to nodes of a graph
```
import networkx as nx
from vedo import *
G = nx.gnm_random_graph(n=10, m=15, seed=1)
nxpos = nx.spring_layout(G)
nxpts = [nxpos[pt] for pt in sorted(nxpos)]
nx_lines = []
for i, j in G.edges():
p1 = nxpos[i].tolist() + [0] # add z-coord
p2 = nxpos[j].tolist() + [0]
nx_lines.append([p1, p2])
nx_pts = Points(nxpts, r=12)
nx_edg = Lines(nx_lines).lw(2)
# node values
values = [[100, .80, .10, .79, .70, .60, .75, .78, .65, .90],[1, .80, .10, .79, .70, .60, .75, .78, .65, .10],[1000, .30, .10, .79, .70, .60, .75, .78, .65, .90]]
time = [0, 0.1, 0.2] # in seconds
for val,t in zip(values, time):
nx_pts.pointColors(val, cmap='YlGn', vmin=min(val), vmax=max(val)).addScalarBar()
show(nx_pts, nx_edg, nx_pts.labels('id'), interactive=True, bg='black', title=f'{t} seconds')
```
I intend to create multiple frames with time stamp displayed in each frame. I couldn't achieve this using the above code. Suggestions on how to visualize the data over time will be highly appreciated. | closed | 2020-07-26T15:35:12Z | 2020-09-15T20:06:50Z | https://github.com/marcomusy/vedo/issues/183 | [] | DeepaMahm | 14 |
tensorflow/tensor2tensor | machine-learning | 1,789 | How to clone a SimulatedBatchGymEnv | ### Description
How can a `SimulatedBatchGymEnv` be cloned? For example, in order to compare instant rewards between different actions.
### Environment information
```ubuntu
OS: Ubuntu
$ pip freeze | grep tensor
# your output here
$ python -V
# your output here
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| open | 2020-02-20T09:19:22Z | 2020-02-20T09:19:22Z | https://github.com/tensorflow/tensor2tensor/issues/1789 | [] | ZaneH1992 | 0 |
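A generic way to branch a Python object is `copy.deepcopy`, though an env backed by TensorFlow session state generally cannot be deep-copied; the toy class below only illustrates the branching idea of snapshotting before trying different actions (it is not the real `SimulatedBatchGymEnv`):

```python
import copy

class ToyBatchEnv:
    """Stand-in for an env with mutable state (not the real SimulatedBatchGymEnv)."""
    def __init__(self):
        self.state = [0, 0]
    def step(self, action):
        self.state = [s + action for s in self.state]
        return sum(self.state)  # toy "reward"

env = ToyBatchEnv()
branch = copy.deepcopy(env)   # snapshot before trying an action
r_a = env.step(1)             # reward for action 1 on the original
r_b = branch.step(2)          # reward for action 2 on the independent copy
print(r_a, r_b)  # → 2 4
```

For the real simulated env, the equivalent effect usually requires checkpoint/restore of the underlying model state rather than object copying.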
Anjok07/ultimatevocalremovergui | pytorch | 660 | When running on an M2 Mac, the app keeps reporting errors | Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
ValueError: "range() arg 3 must not be zero"
Traceback Error: "
File "UVR.py", line 4719, in process_start
File "separate.py", line 286, in seperate
File "separate.py", line 908, in prepare_mix
File "separate.py", line 894, in get_segmented_mix
"
Error Time Stamp [2023-07-13 13:24:49]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 1
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: False
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-07-13T05:26:02Z | 2023-07-13T05:26:02Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/660 | [] | Peaceminds | 0 |
yeongpin/cursor-free-vip | automation | 31 | ⏳ Attempt 1 did not get a token; retrying in 2 seconds | ==================================================
🚀 Cursor Registration Tool
==================================================
🚀 Launching browser...
ℹ️ Navigating to https://yopmail.com/zh/email-generator...
📧 Generating new email address...
✅ Email address generated successfully
📧 Selecting email domain...
📧 Selecting email domain: @nomes.fr.nf
✅ Email domain selected successfully
📧 Generating new email address...
📧 Got mailbox name: soibassogeque-2137
📧 Got email address: soibassogeque-2137@nomes.fr.nf
📧 Got email address: soibassogeque-2137@nomes.fr.nf
📧 Opening mailbox...
✅ Mailbox opened successfully
🚀 Starting the registration flow...
📝 Filling in registration info...
✅ Basic info submitted...
🔄 Handling Turnstile verification...
✅ Verification passed
🔑 Setting password...
🔄 Handling Turnstile verification...
✅ Verification passed
⏳ Starting to fetch the verification code; will keep trying for 60 seconds...
❌ control.no_valid_verification_code
⏳ Attempt 1 did not get a verification code; time remaining: 18 seconds...
✅ Found verification code: 610431
✅ Successfully got verification code: 610431
✅ Verification code entered...
🔄 Handling Turnstile verification...
✅ Verification passed
⏳ Fetching Cursor Session Token...
⏳ Attempt 1 did not get a token; retrying in 2 seconds
⏳ Attempt 2 did not get a token; retrying in 2 seconds
⏳ Attempt 3 did not get a token; retrying in 2 seconds
⏳ Attempt 4 did not get a token; retrying in 2 seconds
⏳ Attempt 5 did not get a token; retrying in 2 seconds
⏳ Attempt 6 did not get a token; retrying in 2 seconds
⏳ Attempt 7 did not get a token; retrying in 2 seconds
⏳ Attempt 8 did not get a token; retrying in 2 seconds
⏳ Attempt 9 did not get a token; retrying in 2 seconds
⏳ Attempt 10 did not get a token; retrying in 2 seconds
⏳ Attempt 11 did not get a token; retrying in 2 seconds
⏳ Attempt 12 did not get a token; retrying in 2 seconds
⏳ Attempt 13 did not get a token; retrying in 2 seconds
⏳ Attempt 14 did not get a token; retrying in 2 seconds
⏳ Attempt 15 did not get a token; retrying in 2 seconds
⏳ Attempt 16 did not get a token; retrying in 2 seconds
⏳ Attempt 17 did not get a token; retrying in 2 seconds
⏳ Attempt 18 did not get a token; retrying in 2 seconds
⏳ Attempt 19 did not get a token; retrying in 2 seconds
⏳ Attempt 20 did not get a token; retrying in 2 seconds
⏳ Attempt 21 did not get a token; retrying in 2 seconds
⏳ Attempt 22 did not get a token; retrying in 2 seconds
⏳ Attempt 23 did not get a token; retrying in 2 seconds
⏳ Attempt 24 did not get a token; retrying in 2 seconds
⏳ Attempt 25 did not get a token; retrying in 2 seconds
⏳ Attempt 26 did not get a token; retrying in 2 seconds
⏳ Attempt 27 did not get a token; retrying in 2 seconds
⏳ Attempt 28 did not get a token; retrying in 2 seconds
⏳ Attempt 29 did not get a token; retrying in 2 seconds
❌ Reached the maximum number of attempts (30); failed to get a token | closed | 2025-01-16T02:57:18Z | 2025-01-17T01:29:16Z | https://github.com/yeongpin/cursor-free-vip/issues/31 | [] | sevenokey | 1 |
ultralytics/ultralytics | computer-vision | 19,449 | how to turn a custom finetuned segment model into a detect model? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
# Change the nc in the yaml file to reflect the number of classes in the pt file before doing this.
from ultralytics import YOLO

model = YOLO("yolov8n.yaml").load("yolov8n-seg.pt")
model.ckpt["model"] = model.model
del model.ckpt["ema"]

# Save as a detect model
model.save("detect.pt")
I found this code here https://www.reddit.com/r/Ultralytics/comments/1enaswt/dyk_you_can_turn_a_segment_or_pose_model_into_a/?rdt=57462
Is it possible to use this with a different imgsz than the default?
### Additional
_No response_ | closed | 2025-02-26T18:51:10Z | 2025-03-14T01:01:41Z | https://github.com/ultralytics/ultralytics/issues/19449 | [
"question",
"segment",
"detect"
] | soniaeratt | 14 |
minivision-ai/photo2cartoon | computer-vision | 78 | The download speed is too slow; is there a solution? | Downloading: "https://www.adrianbulat.com/downloads/python-fan/2DFAN4-cd938726ad.zip" to C:\Users\Administrator/.cache\torch\hub\checkpoints\2DFAN4-cd938726ad.zip
0%| | 24.0k/91.9M [00:30<20:01:57, 1.34kB/s] | open | 2023-05-15T13:30:42Z | 2023-05-15T13:30:42Z | https://github.com/minivision-ai/photo2cartoon/issues/78 | [] | xiaoshiyaonuli | 0 |
alteryx/featuretools | data-science | 1,883 | Add black linting package and remove autopep8 for Featuretools | - Add black linting package and remove autopep8
- Update Makefile commands
- Add a notebook cleaner
- This should be very similar to this PR:
- https://github.com/alteryx/woodwork/pull/1164 | closed | 2022-02-07T19:23:00Z | 2022-03-28T17:00:55Z | https://github.com/alteryx/featuretools/issues/1883 | [] | gsheni | 0 |
biolab/orange3 | data-visualization | 6,746 | Report isn't displayed and I can't download the report |
When I press the report button, I first receive errors, and after that I see nothing in the report preview window. I also can't download the report in HTML or PDF form: after I press the save button nothing happens, and I can only save an x.report file.

Click on report button.
**What's your environment?**
Orange Version 3.36.2
```
PRETTY_NAME="LMDE 6 (faye)"
NAME="LMDE"
VERSION_ID="6"
VERSION="6 (faye)"
VERSION_CODENAME=faye
ID=linuxmint
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.linuxmint.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
ID_LIKE=debian
DEBIAN_CODENAME=bookworm
```
How you installed Orange:
```
pipx install orange3
pipx inject orange3 PyQt5
pipx inject orange3 PyQtWebEngine
#run orange3
orange-canvas
```
```
pipx list --include-injected
venvs are in /home/lol/.local/pipx/venvs
apps are exposed on your $PATH at /home/lol/.local/bin
package orange3 3.36.2, installed using Python 3.11.2
- orange-canvas
Injected Packages:
- pyqt5 5.15.10
- pyqtwebengine 5.15.6
package pyqt5 5.15.10, installed using Python 3.11.2
- pylupdate5
- pyrcc5
- pyuic5
``` | open | 2024-02-25T17:14:57Z | 2024-11-29T16:11:59Z | https://github.com/biolab/orange3/issues/6746 | [
"bug",
"bug report"
] | DevopsDmytro | 4 |
AutoGPTQ/AutoGPTQ | nlp | 196 | [BUG]torch._C._LinAlgError: linalg.cholesky: The factorization could not be completed because the input is not positive-definite (the leading minor of order 18163 is not positive-definite). | Error while quantising pretrained_model_dir = "tiiuae/falcon-7b":
2023-07-18 10:48:21 INFO [auto_gptq.modeling._base] Quantizing mlp.dense_4h_to_h in layer 2/32...
Traceback (most recent call last):
File "/home/intel-spc/Documents/tarun/auto-gpt/run.py", line 29, in <module>
model.quantize(examples)
File "/home/intel-spc/Documents/tarun/auto-gpt/qptq/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/intel-spc/Documents/tarun/auto-gpt/qptq/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 361, in quantize
scale, zero, g_idx = gptq[name].fasterquant(
File "/home/intel-spc/Documents/tarun/auto-gpt/qptq/lib/python3.10/site-packages/auto_gptq/quantization/gptq.py", line 96, in fasterquant
H = torch.linalg.cholesky(H, upper=True)
torch._C._LinAlgError: linalg.cholesky: The factorization could not be completed because the input is not positive-definite (the leading minor of order 18163 is not positive-definite).
| open | 2023-07-18T17:50:57Z | 2024-08-14T03:21:26Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/196 | [
"bug"
] | tarunmcom | 9 |
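This error means the Hessian built during quantization is not positive-definite (often caused by dead or duplicate columns in the calibration activations). A common remedy in GPTQ-style code is to increase the diagonal damping; the toy 2x2 example below (pure Python, not AutoGPTQ's code) shows how adding a small damping term to the diagonal restores positive-definiteness:

```python
import math

def cholesky_2x2(a, b, c):
    """Cholesky factor of the symmetric matrix [[a, b], [b, c]]."""
    if a <= 0 or a * c - b * b <= 0:
        raise ValueError("matrix is not positive-definite")
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(c - l21 * l21)
    return l11, l21, l22

# [[1, 1], [1, 1]] is singular, so its Cholesky factorization fails, just like in the log.
damp = 0.01  # analogous to a percdamp-scaled diagonal term in GPTQ implementations
print(cholesky_2x2(1.0 + damp, 1.0, 1.0 + damp))
```

In practice this corresponds to raising the damping fraction (or using more/varied calibration data) before the `torch.linalg.cholesky` call.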
hyperspy/hyperspy | data-visualization | 2,719 | Automatically populating s.navigator when loading data | With the addition of a `navigator` attribute for lazy signals (https://github.com/hyperspy/hyperspy/pull/2631), it would be nice to automatically populate this `navigator` if it is present in the data file.
One example of this is the nexus file format, which sometimes includes navigation images. | open | 2021-04-23T13:16:38Z | 2021-09-10T09:43:45Z | https://github.com/hyperspy/hyperspy/issues/2719 | [] | magnunor | 3 |
dgtlmoon/changedetection.io | web-scraping | 2,197 | Request Headers are not used in Browser Steps | ## Description
When using "Browser Steps", the custom HTTP request headers from the "Request" tab are not sent.
Version: v0.45.14 (self-hosted via Docker on an x86_64 Debian Linux)
## How to reproduce
1. Create a new watch for https://www.deviceinfo.me/ or https://myhttpheader.com/
2. At *Request*, choose *Playwright Chromium*.
3. Still at *Request*, click on *Show advanced options*.
4. Add a custom User-Agent in the *Request headers* field.
5. Go to *Browser Steps* and press Play.
6. Scroll down to where it shows both the User-Agent and the HTTP headers.
The custom headers are applied when doing a normal diff/watch. But they are not applied in the *Browser steps*. They seem to be applied to the *Visual Filter Selector* (or maybe the *Visual Filter Selector* uses a cached version that was fetched with the correct custom headers).
Unfortunately, [some sites break when using the *HeadlessChrome* User-Agent](https://github.com/microsoft/playwright/issues/27600) (see #2051). Thus, not only do I need this feature, but I also expected the custom headers to be sent on all kinds of requests.
## Desktop
- OS: Manjaro Linux (not relevant)
- Browser: Firefox (not relevant)
(Bonus mini-question: Is there any other place to set a custom User-Agent?) | closed | 2024-02-16T10:25:13Z | 2024-05-26T14:58:21Z | https://github.com/dgtlmoon/changedetection.io/issues/2197 | [
"bug"
] | denilsonsa | 11 |
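For context on the reproduction steps, the "Request headers" field takes one header per line; below is a minimal sketch of how such a field is typically parsed into a header mapping. The one-per-line `Name: value` format here is an assumption about the UI, and this is not changedetection.io's code:

```python
def parse_header_lines(text):
    """Parse 'Name: value' lines (one header per line) into a dict."""
    headers = {}
    for line in text.splitlines():
        name, sep, value = line.partition(":")
        if sep and name.strip():
            headers[name.strip()] = value.strip()
    return headers

field = "User-Agent: Mozilla/5.0 (X11; Linux x86_64)\nAccept-Language: en-US"
print(parse_header_lines(field))
```

The bug is that a mapping like this reaches the normal fetcher but evidently not the browser-steps session.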
ageitgey/face_recognition | python | 627 | Miss classifications by spectacles | * face_recognition version:1.2.3
* Python version:3.5
* Operating System:Ubuntu
### Description
issues with persons wearing spectacles
### What I Did
I had trained a model on persons without spectacles. While running the model, there are misclassifications for the same person when wearing spectacles.
I assume that the eyebrow encodings are hidden by the spectacles and that this is the reason for the misclassifications.
Is there any workaround for this problem?
```
| open | 2018-09-21T11:28:18Z | 2018-09-21T11:28:18Z | https://github.com/ageitgey/face_recognition/issues/627 | [] | hemanthkumar3111 | 0 |
plotly/dash-table | plotly | 779 | timedelta64 in column | I've searched quite a while everywhere, but when I have a column of type timedelta64, DataTable displays it in an ISO format. How can I display it as days, hours, minutes, etc.?
Thanks for any help | open | 2020-05-12T16:23:08Z | 2020-05-12T16:23:08Z | https://github.com/plotly/dash-table/issues/779 | [] | tschinz | 0 |
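One common approach (a sketch, not a built-in dash-table feature) is to format the timedelta column into strings before handing the data to `DataTable`; the helper name below is an invention:

```python
from datetime import timedelta

def format_timedelta(td):
    """Render a timedelta as 'Dd HH:MM:SS' instead of the ISO form."""
    total = int(td.total_seconds())
    sign = "-" if total < 0 else ""
    total = abs(total)
    days, rem = divmod(total, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{sign}{days}d {hours:02d}:{minutes:02d}:{seconds:02d}"

print(format_timedelta(timedelta(days=1, hours=2, minutes=3, seconds=4)))  # → 1d 02:03:04
```

With pandas, the same formatting can be applied column-wise (e.g. `df["col"].apply(format_timedelta)`) before building the table.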
coqui-ai/TTS | deep-learning | 2,677 | [Bug] Failed to load yourtts pth | ### Describe the bug
> Restoring from model_file.pth ...
> Restoring Model...
> Partial model initialization...
| > Layer missing in the model definition: speaker_encoder.conv1.weight
| > Layer missing in the model definition: speaker_encoder.conv1.bias
| > Layer missing in the model definition: speaker_encoder.bn1.weight
| > Layer missing in the model definition: speaker_encoder.bn1.bias
| > Layer missing in the model definition: speaker_encoder.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer1.0.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer1.0.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer1.0.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer1.0.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer1.0.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer1.0.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.0.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer1.1.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer1.1.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer1.1.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer1.1.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer1.1.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer1.1.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.1.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer1.2.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer1.2.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer1.2.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer1.2.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer1.2.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer1.2.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer1.2.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer2.0.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.0.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.0.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.0.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer2.0.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer2.0.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.0.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer2.0.downsample.0.weight
| > Layer missing in the model definition: speaker_encoder.layer2.0.downsample.1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.0.downsample.1.bias
| > Layer missing in the model definition: speaker_encoder.layer2.0.downsample.1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.0.downsample.1.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.0.downsample.1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.1.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.1.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.1.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.1.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer2.1.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer2.1.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.1.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer2.2.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.2.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.2.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.2.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer2.2.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer2.2.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.2.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer2.3.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.3.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer2.3.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer2.3.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer2.3.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer2.3.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer2.3.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.0.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.0.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.0.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.0.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer3.0.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer3.0.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.0.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.0.downsample.0.weight
| > Layer missing in the model definition: speaker_encoder.layer3.0.downsample.1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.0.downsample.1.bias
| > Layer missing in the model definition: speaker_encoder.layer3.0.downsample.1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.0.downsample.1.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.0.downsample.1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.1.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.1.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.1.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.1.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer3.1.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer3.1.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.1.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.2.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.2.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.2.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.2.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer3.2.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer3.2.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.2.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.3.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.3.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.3.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.3.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer3.3.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer3.3.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.3.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.4.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.4.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.4.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.4.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer3.4.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer3.4.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.4.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.5.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.5.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer3.5.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer3.5.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer3.5.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer3.5.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer3.5.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer4.0.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer4.0.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer4.0.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer4.0.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer4.0.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer4.0.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.0.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer4.0.downsample.0.weight
| > Layer missing in the model definition: speaker_encoder.layer4.0.downsample.1.weight
| > Layer missing in the model definition: speaker_encoder.layer4.0.downsample.1.bias
| > Layer missing in the model definition: speaker_encoder.layer4.0.downsample.1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer4.0.downsample.1.running_var
| > Layer missing in the model definition: speaker_encoder.layer4.0.downsample.1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer4.1.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer4.1.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer4.1.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer4.1.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer4.1.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer4.1.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.1.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.layer4.2.conv1.weight
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn1.weight
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn1.bias
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn1.running_mean
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn1.running_var
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn1.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer4.2.conv2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn2.bias
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn2.running_mean
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn2.running_var
| > Layer missing in the model definition: speaker_encoder.layer4.2.bn2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.layer4.2.se.fc.0.weight
| > Layer missing in the model definition: speaker_encoder.layer4.2.se.fc.0.bias
| > Layer missing in the model definition: speaker_encoder.layer4.2.se.fc.2.weight
| > Layer missing in the model definition: speaker_encoder.layer4.2.se.fc.2.bias
| > Layer missing in the model definition: speaker_encoder.torch_spec.0.filter
| > Layer missing in the model definition: speaker_encoder.torch_spec.1.spectrogram.window
| > Layer missing in the model definition: speaker_encoder.torch_spec.1.mel_scale.fb
| > Layer missing in the model definition: speaker_encoder.attention.0.weight
| > Layer missing in the model definition: speaker_encoder.attention.0.bias
| > Layer missing in the model definition: speaker_encoder.attention.2.weight
| > Layer missing in the model definition: speaker_encoder.attention.2.bias
| > Layer missing in the model definition: speaker_encoder.attention.2.running_mean
| > Layer missing in the model definition: speaker_encoder.attention.2.running_var
| > Layer missing in the model definition: speaker_encoder.attention.2.num_batches_tracked
| > Layer missing in the model definition: speaker_encoder.attention.3.weight
| > Layer missing in the model definition: speaker_encoder.attention.3.bias
| > Layer missing in the model definition: speaker_encoder.fc.weight
| > Layer missing in the model definition: speaker_encoder.fc.bias
| > Layer missing in the model definition: emb_l.weight
| > Layer missing in the model definition: duration_predictor.cond_lang.weight
| > Layer missing in the model definition: duration_predictor.cond_lang.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.0.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.1.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.2.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.3.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.4.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.5.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.6.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.7.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.8.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.emb_rel_k
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.emb_rel_v
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.conv_q.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.conv_q.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.conv_k.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.conv_k.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.conv_v.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.conv_v.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.conv_o.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.attn_layers.9.conv_o.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.0.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.0.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.1.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.1.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.2.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.2.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.3.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.3.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.4.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.4.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.5.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.5.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.6.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.6.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.7.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.7.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.8.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.8.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.9.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_1.9.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.0.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.0.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.0.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.1.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.1.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.1.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.2.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.2.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.2.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.3.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.3.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.3.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.4.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.4.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.4.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.5.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.5.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.5.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.6.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.6.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.6.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.7.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.7.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.7.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.8.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.8.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.8.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.9.conv_1.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.9.conv_2.weight
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.ffn_layers.9.conv_2.bias
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.0.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.0.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.1.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.1.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.2.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.2.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.3.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.3.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.4.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.4.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.5.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.5.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.6.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.6.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.7.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.7.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.8.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.8.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.9.gamma
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.encoder.norm_layers_2.9.beta
| > Layer dimention missmatch between model definition and checkpoint: text_encoder.proj.weight
| > Layer dimention missmatch between model definition and checkpoint: duration_predictor.pre.weight
| > 724 / 896 layers are restored.
> Model restored from step 0
> Model has 86565676 parameters
### To Reproduce
Modified from trained_yourtts.py for fine-tuning.
```
import os
import torch
from trainer import Trainer, TrainerArgs
from TTS.bin.compute_embeddings import compute_embeddings
from TTS.bin.resample import resample_files
from TTS.config.shared_configs import BaseDatasetConfig
from TTS.tts.configs.vits_config import VitsConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.vits import CharactersConfig, Vits, VitsArgs, VitsAudioConfig
from TTS.utils.downloaders import download_vctk
torch.set_num_threads(24)
# pylint: disable=W0105
"""
This recipe replicates the first experiment proposed in the YourTTS paper (https://arxiv.org/abs/2112.02418).
YourTTS model is based on the VITS model however it uses external speaker embeddings extracted from a pre-trained speaker encoder and has small architecture changes.
In addition, YourTTS can be trained in multilingual data, however, this recipe replicates the single language training using the VCTK dataset.
If you are interested in multilingual training, we have commented on parameters on the VitsArgs class instance that should be enabled for multilingual training.
In addition, you will need to add the extra datasets following the VCTK as an example.
"""
CURRENT_PATH = os.path.dirname(os.path.abspath(__file__))
# Name of the run for the Trainer
RUN_NAME = "YourTTS-EN-VCTK-FT"
# Path where you want to save the models outputs (configs, checkpoints and tensorboard logs)
OUT_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "YourTTS_FT") # "/raid/coqui/Checkpoints/original-YourTTS/"
# If you want to do transfer learning and speedup your training you can set here the path to the original YourTTS model
RESTORE_PATH = "/root/.local/share/tts/tts_models--multilingual--multi-dataset--your_tts/model_file.pth" # "/root/.local/share/tts/tts_models--multilingual--multi-dataset--your_tts/model_file.pth"
# This parameter is useful for debugging: it skips the training epochs and just does the evaluation and produces the test sentences
SKIP_TRAIN_EPOCH = False
# Set here the batch size to be used in training and evaluation
BATCH_SIZE = 32
# Training Sampling rate and the target sampling rate for resampling the downloaded dataset (Note: If you change this you might need to redownload the dataset !!)
# Note: If you add new datasets, please make sure that the dataset sampling rate and this parameter are matching, otherwise resample your audios
SAMPLE_RATE = 16000
# Max audio length in seconds to be used in training (every audio bigger than it will be ignored)
MAX_AUDIO_LEN_IN_SECONDS = 10
### Download VCTK dataset
#VCTK_DOWNLOAD_PATH = os.path.join(CURRENT_PATH, "VCTK")
# Define the number of threads used during the audio resampling
NUM_RESAMPLE_THREADS = 10
# Check if VCTK dataset is not already downloaded, if not download it
#if not os.path.exists(VCTK_DOWNLOAD_PATH):
#print(">>> Downloading VCTK dataset:")
#download_vctk(VCTK_DOWNLOAD_PATH)
#resample_files(VCTK_DOWNLOAD_PATH, SAMPLE_RATE, file_ext="flac", n_jobs=NUM_RESAMPLE_THREADS)
# init configs
# dataset config for one of the pre-defined datasets
vctk_config = BaseDatasetConfig(
formatter="ljspeech", meta_file_train="metadata.txt", language="en-us", path="./MyTTSDataset")
#vctk_config = BaseDatasetConfig(
# formatter="vctk",
# dataset_name="vctk",
# meta_file_train="",
# meta_file_val="",
# path=VCTK_DOWNLOAD_PATH,
# language="en",
# ignored_speakers=[
# "p261",
# "p225",
# "p294",
# "p347",
# "p238",
# "p234",
# "p248",
# "p335",
# "p245",
# "p326",
# "p302",
# ], # Ignore the test speakers to full replicate the paper experiment
#)
# Add here all datasets configs, in our case we just want to train with the VCTK dataset then we need to add just VCTK. Note: If you want to add new datasets, just add them here and it will automatically compute the speaker embeddings (d-vectors) for this new dataset :)
DATASETS_CONFIG_LIST = [vctk_config]
### Extract speaker embeddings
SPEAKER_ENCODER_CHECKPOINT_PATH = (
"https://github.com/coqui-ai/TTS/releases/download/speaker_encoder_model/model_se.pth.tar"
)
SPEAKER_ENCODER_CONFIG_PATH = "https://github.com/coqui-ai/TTS/releases/download/speaker_encoder_model/config_se.json"
D_VECTOR_FILES = [] # List of speaker embeddings/d-vectors to be used during the training
# Iterate over all the dataset configs, checking whether the speaker embeddings are already computed; if not, compute them
for dataset_conf in DATASETS_CONFIG_LIST:
# Check if the embeddings weren't already computed, if not compute it
embeddings_file = os.path.join(dataset_conf.path, "speakers.pth")
if not os.path.isfile(embeddings_file):
print(f">>> Computing the speaker embeddings for the {dataset_conf.dataset_name} dataset")
compute_embeddings(
SPEAKER_ENCODER_CHECKPOINT_PATH,
SPEAKER_ENCODER_CONFIG_PATH,
embeddings_file,
old_spakers_file=None,
config_dataset_path=None,
formatter_name=dataset_conf.formatter,
dataset_name=dataset_conf.dataset_name,
dataset_path=dataset_conf.path,
meta_file_train=dataset_conf.meta_file_train,
meta_file_val=dataset_conf.meta_file_val,
disable_cuda=False,
no_eval=False,
)
D_VECTOR_FILES.append(embeddings_file)
# Audio config used in training.
audio_config = VitsAudioConfig(
sample_rate=SAMPLE_RATE,
hop_length=256,
win_length=1024,
fft_size=1024,
mel_fmin=0.0,
mel_fmax=None,
num_mels=80,
)
# Init VITSArgs setting the arguments that are needed for the YourTTS model
model_args = VitsArgs(
d_vector_file=D_VECTOR_FILES,
use_d_vector_file=True,
d_vector_dim=512,
num_layers_text_encoder=10,
speaker_encoder_model_path=SPEAKER_ENCODER_CHECKPOINT_PATH,
speaker_encoder_config_path=SPEAKER_ENCODER_CONFIG_PATH,
resblock_type_decoder="2", # In the paper, we accidentally trained the YourTTS using ResNet blocks type 2, if you like you can use the ResNet blocks type 1 like the VITS model
# Useful parameters to enable the Speaker Consistency Loss (SCL) described in the paper
# use_speaker_encoder_as_loss=True,
# Useful parameters to enable multilingual training
# use_language_embedding=True,
# embedded_language_dim=4,
)
# General training config, here you can change the batch size and others useful parameters
config = VitsConfig(
output_path=OUT_PATH,
model_args=model_args,
run_name=RUN_NAME,
project_name="YourTTS",
run_description="""
- Original YourTTS trained using VCTK dataset
""",
dashboard_logger="tensorboard",
logger_uri=None,
audio=audio_config,
batch_size=BATCH_SIZE,
batch_group_size=48,
eval_batch_size=BATCH_SIZE,
num_loader_workers=8,
eval_split_max_size=256,
print_step=50,
plot_step=100,
log_model_step=1000,
save_step=5000,
save_n_checkpoints=2,
save_checkpoints=True,
target_loss="loss_1",
print_eval=False,
use_phonemes=False,
phonemizer="espeak",
phoneme_language="en",
compute_input_seq_cache=True,
add_blank=True,
text_cleaner="multilingual_cleaners",
characters=CharactersConfig(
characters_class="TTS.tts.models.vits.VitsCharacters",
pad="_",
eos="&",
bos="*",
blank=None,
characters="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz\u00af\u00b7\u00df\u00e0\u00e1\u00e2\u00e3\u00e4\u00e6\u00e7\u00e8\u00e9\u00ea\u00eb\u00ec\u00ed\u00ee\u00ef\u00f1\u00f2\u00f3\u00f4\u00f5\u00f6\u00f9\u00fa\u00fb\u00fc\u00ff\u0101\u0105\u0107\u0113\u0119\u011b\u012b\u0131\u0142\u0144\u014d\u0151\u0153\u015b\u016b\u0171\u017a\u017c\u01ce\u01d0\u01d2\u01d4\u0430\u0431\u0432\u0433\u0434\u0435\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\u0451\u0454\u0456\u0457\u0491\u2013!'(),-.:;? ",
punctuations="!'(),-.:;? ",
phonemes="",
is_unique=True,
is_sorted=True,
),
phoneme_cache_path=None,
precompute_num_workers=12,
start_by_longest=True,
datasets=DATASETS_CONFIG_LIST,
cudnn_benchmark=False,
max_audio_len=SAMPLE_RATE * MAX_AUDIO_LEN_IN_SECONDS,
mixed_precision=False,
test_sentences=[
[
"It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
#"VCTK_p277",
"VCTK_MK",
None,
"en",
],
[
"Be a voice, not an echo.",
#"VCTK_p239",
"VCTK_MK",
None,
"en",
],
[
"I'm sorry Dave. I'm afraid I can't do that.",
#"VCTK_p258",
"VCTK_MK",
None,
"en",
],
[
"This cake is great. It's so delicious and moist.",
#"VCTK_p244",
"VCTK_MK",
None,
"en",
],
[
"Prior to November 22, 1963.",
#"VCTK_p305",
"VCTK_MK",
None,
"en",
],
],
# Enable the weighted sampler
use_weighted_sampler=True,
# Ensures that all speakers are seen in the training batch equally no matter how many samples each speaker has
weighted_sampler_attrs={"speaker_name": 1.0},
weighted_sampler_multipliers={},
# It defines the Speaker Consistency Loss (SCL) α to 9 like the paper
speaker_encoder_loss_alpha=9.0,
)
# Load all the dataset samples and split training and evaluation sets
#train_samples, eval_samples = load_tts_samples(
# config.datasets,
# eval_split=True,
# eval_split_max_size=config.eval_split_max_size,
# eval_split_size=config.eval_split_size,
#)
train_samples, eval_samples = load_tts_samples(
config.datasets,
eval_split=True,
eval_split_max_size=config.eval_split_max_size,
eval_split_size=config.eval_split_size,
)
# Init the model
model = Vits.init_from_config(config)
# Init the trainer and 🚀
trainer = Trainer(
TrainerArgs(restore_path=RESTORE_PATH, skip_train_epoch=SKIP_TRAIN_EPOCH),
config,
output_path=OUT_PATH,
model=model,
train_samples=train_samples,
eval_samples=eval_samples,
)
trainer.fit()
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A10G"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "1.13.1",
"TTS": "0.14.3",
"numpy": "1.21.6"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
""
],
"processor": "x86_64",
"python": "3.7.0",
"version": "#40~20.04.1-Ubuntu SMP Mon Apr 24 00:21:13 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2023-06-15T04:42:29Z | 2024-03-22T08:36:03Z | https://github.com/coqui-ai/TTS/issues/2677 | [
"bug"
] | ZhichaoWang970201 | 10 |
microsoft/nni | machine-learning | 4,810 | TPE tuner failed to load nested search space json | **Describe the issue**:
**Environment**:
- NNI version: 2.7
- Training service (local|remote|pai|aml|etc): local
- Client OS: ubuntu 18
- Server OS (for remote mode only):
- Python version:python3.7
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?: yes
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
```json
{
"C": { "_type": "choice", "_value": [0.01, 0.1, 1, 10, 100] },
"kernel": { "_type": "choice", "_value": [
{ "_name": "linear" },
{ "_name": "poly",
"degree": { "_type": "randint", "_value": [1, 11] },
"gamma": { "_type": "choice", "_value": ["scale", "auto"] }
},
{ "_name": "rbf",
"gamma": { "_type": "choice", "_value": ["scale", "auto"] }
},
{ "_name": "sigmoid",
"gamma": { "_type": "choice", "_value": ["scale", "auto"] }
}
]},
"probability": {"_type":"choice", "_value": [true]}
}
```
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
- trail log:
```
[2022-04-26 15:16:58] PRINT {'C': 0.1, 'kernel': OrderedDict([('_name', 'sigmoid'), ('gamma', 'auto')]), 'probability': True}
[2022-04-26 15:16:58] ERROR :
File "/usr/local/lib/python3.7/dist-packages/sklearn/svm/_base.py", line 255, in fit
fit(X, y, sample_weight, solver_type, kernel, random_seed=seed)
File "/usr/local/lib/python3.7/dist-packages/sklearn/svm/_base.py", line 333, in _dense_fit
random_seed=random_seed,
File "sklearn/svm/_libsvm.pyx", line 173, in sklearn.svm._libsvm.fit
ValueError: OrderedDict([('_name', 'sigmoid'), ('gamma', 'auto')]) is not in list
```
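Until the tuner handles nested search spaces natively, a common workaround is to flatten the nested choice inside the trial code before passing the parameters to `sklearn.svm.SVC`. A minimal sketch (the helper name is illustrative, not part of the NNI API):

```python
from collections import OrderedDict

def flatten_nested_choice(params):
    """Unpack a nested-choice dict (with an '_name' key) into flat kwargs.

    Workaround sketch for the trial code; the helper name is illustrative
    and not part of the NNI API.
    """
    flat = dict(params)
    kernel = flat.pop("kernel")
    if isinstance(kernel, dict):  # the nested choice arrives as an OrderedDict
        kernel = dict(kernel)
        flat["kernel"] = kernel.pop("_name")
        flat.update(kernel)       # lift 'degree'/'gamma' to the top level
    else:
        flat["kernel"] = kernel
    return flat

params = OrderedDict(
    C=0.1,
    kernel=OrderedDict([("_name", "sigmoid"), ("gamma", "auto")]),
    probability=True,
)
print(flatten_nested_choice(params))
# {'C': 0.1, 'probability': True, 'kernel': 'sigmoid', 'gamma': 'auto'}
```

The `OrderedDict` received for `kernel` is replaced by its `_name`, and any sub-parameters such as `degree` or `gamma` become top-level keyword arguments.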
**How to reproduce it?**:
Use this search space to train the sklearn.svm.SVC model
It is executable in version 2.5, but this problem occurs in version 2.7 | closed | 2022-04-26T07:32:07Z | 2022-09-08T03:10:20Z | https://github.com/microsoft/nni/issues/4810 | [] | tjlin-github | 3 |
BayesWitnesses/m2cgen | scikit-learn | 589 | Cannot export XGBClassifier model: TypeError: unsupported operand type(s) for *: 'int' and 'NoneType' | ```
from sklearn.datasets import load_iris
from xgboost.sklearn import XGBClassifier
from xgboost import plot_importance
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load the sample dataset
iris = load_iris()
x,y = iris.data,iris.target
# Split the dataset
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=123457)
xgb_clf = XGBClassifier(
booster = 'gbtree',
objective = 'multi:softmax',
num_class = 3,
gamma = 0.1,
max_depth = 6,
reg_lambda = 2,
subsample = 0.7,
colsample_bytree = 0.7,
min_child_weight = 3,
eta = 0.1,
seed = 1000,
nthread = 4,
)
# Train the model
xgb_clf.fit(x_train,y_train,eval_metric='auc')
import m2cgen as m2c
xgb_clf.base_score = 0
code = m2c.export_to_c(xgb_clf)
with open ('model.c', 'w') as f:
f.write(code)
```
Full trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[24], line 3
1 import m2cgen as m2c
2 xgb_clf.base_score = 0
----> 3 code = m2c.export_to_c(xgb_clf)
4 with open ('model.c', 'w') as f:
5 f.write(code)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\m2cgen\exporters.py:81, in export_to_c(model, indent, function_name)
61 """
62 Generates a C code representation of the given model.
63
(...)
75 code : string
76 """
77 interpreter = interpreters.CInterpreter(
78 indent=indent,
79 function_name=function_name
80 )
---> 81 return _export(model, interpreter)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\m2cgen\exporters.py:459, in _export(model, interpreter)
457 def _export(model, interpreter):
458 assembler_cls = get_assembler_cls(model)
--> 459 model_ast = assembler_cls(model).assemble()
460 return interpreter.interpret(model_ast)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\m2cgen\assemblers\boosting.py:214, in XGBoostModelAssemblerSelector.assemble(self)
213 def assemble(self):
--> 214 return self.assembler.assemble()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\m2cgen\assemblers\boosting.py:36, in BaseBoostingAssembler.assemble(self)
34 return self._assemble_bin_class_output(self._all_estimator_params)
35 else:
---> 36 return self._assemble_multi_class_output(self._all_estimator_params)
37 else:
38 result_ast = self._assemble_single_output(self._all_estimator_params, base_score=self._base_score)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\m2cgen\assemblers\boosting.py:62, in BaseBoostingAssembler._assemble_multi_class_output(self, estimator_params)
58 def _assemble_multi_class_output(self, estimator_params):
59 # Multi-class output is calculated based on discussion in
60 # https://github.com/dmlc/xgboost/issues/1746#issuecomment-295962863
61 # and the enhancement to support boosted forests in XGBoost.
---> 62 splits = _split_estimator_params_by_classes(
63 estimator_params, self._output_size,
64 self.multiclass_params_seq_len)
66 base_score = self._base_score
67 exprs = [
68 self._assemble_single_output(e, base_score=base_score, split_idx=i)
69 for i, e in enumerate(splits)
70 ]
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\m2cgen\assemblers\boosting.py:347, in _split_estimator_params_by_classes(values, n_classes, params_seq_len)
342 def _split_estimator_params_by_classes(values, n_classes, params_seq_len):
343 # Splits are computed based on a comment
344 # https://github.com/dmlc/xgboost/issues/1746#issuecomment-267400592
345 # and the enhancement to support boosted forests in XGBoost.
346 values_len = len(values)
--> 347 block_len = n_classes * params_seq_len
348 indices = list(range(values_len))
349 indices_by_class = np.array(
350 [[indices[i:i + params_seq_len]
351 for i in range(j, values_len, block_len)]
352 for j in range(0, block_len, params_seq_len)]
353 ).reshape(n_classes, -1)
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
```
xgboost version '2.0.3' | open | 2024-04-08T11:14:28Z | 2024-07-26T19:59:39Z | https://github.com/BayesWitnesses/m2cgen/issues/589 | [] | git2621 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,049 | How to change the max epoch when training? | The default number of epochs is 200, but how do I change it? I can't find which parameter controls it.
What if I want to train for 400 epochs?
Thank you | closed | 2020-05-29T16:42:37Z | 2020-05-30T08:30:25Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1049 | [] | zhouyingji | 2 |
ITCoders/Human-detection-and-Tracking | numpy | 25 | OpenCV Error: Unspecified error (The node is neither a map nor an empty collection) | OpenCV Error: Unspecified error (The node is neither a map nor an empty collection) in cvGetFileNodeByName, file C:\projects\opencv-python\opencv\modules\core\src\persistence.cpp, line 891
Traceback (most recent call last):
File "main.py", line 137, in <module>
recognizer.read("model.yaml")
cv2.error: C:\projects\opencv-python\opencv\modules\core\src\persistence.cpp:891: error: (-2) The node is neither a map nor an empty collection in function cvGetFileNodeByName
Earlier the original code used ' recognizer.load("model.yaml") ', but there is no 'load' attribute in the current version of OpenCV; it has been replaced by 'read'. | closed | 2018-01-15T08:20:23Z | 2018-09-19T07:17:08Z | https://github.com/ITCoders/Human-detection-and-Tracking/issues/25 | [
"version_conflict"
] | punnu97 | 6 |
art049/odmantic | pydantic | 507 | `model_validate_doc` considers `field(default=...)` but not `field(default_factory=...)` | # Bug
When defining fields with a `default` parameter, the `model_validate_doc` function works as it should, but when using `default_factory` the function treats it as a mandatory field.
### Current Behavior
```python
from typing import Dict
from bson import ObjectId
from odmantic import Model, Field
class Person(Model):
phones: Dict = Field(default_factory=list)
```
```python
In [1]: Person.model_validate_doc(dict(_id=ObjectId()))
---------------------------------------------------------------------------
DocumentParsingError Traceback (most recent call last)
Cell In[134], line 1
----> 1 Person.model_validate_doc(dict(_id=ObjectId()))
File .venv/lib/python3.12/site-packages/odmantic/model.py:805, in _BaseODMModel.model_validate_doc(cls, raw_doc)
803 errors, obj = cls._parse_doc_to_obj(raw_doc)
804 if len(errors) > 0:
--> 805 raise DocumentParsingError(
806 errors=errors,
807 model=cls,
808 )
809 try:
810 instance = cls.model_validate(obj)
DocumentParsingError: 1 validation error for Person
phones
Key 'phones' not found in document [type=odmantic::key_not_found_in_document, input_value={'_id': ObjectId('6723d87aee92661091c3c8bc')}, input_type=dict]
```
### Expected behavior
```python
In [1]: Person.model_validate_doc(dict(_id=ObjectId()))
Out[1]: Person(phones=[])
```
### Environment
- ODMantic version: 1.0.2
- Pydantic infos (output of `python -c "import pydantic.utils; print(pydantic.utils.version_info())`):
```yaml
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /home/lelzin/Projects/tests/git/odmantic/.venv/lib/python3.8/site-packages/pydantic
python version: 3.8.17 (default, Jul 8 2023, 21:39:49) [GCC 13.1.1 20230429]
platform: Linux-6.6.50-2-lts-x86_64-with-glibc2.34
related packages: mypy-1.4.1 fastapi-0.115.4 typing_extensions-4.12.2
commit: unknown
```
| open | 2024-10-31T19:24:54Z | 2024-11-01T01:44:18Z | https://github.com/art049/odmantic/issues/507 | [
"bug"
] | d3cryptofc | 1 |
syrupy-project/syrupy | pytest | 218 | A "no formatting" mode for CI runs which don't support text formatting | **Is your feature request related to a problem? Please describe.**
Snapshot failures in my Jenkins runs show the colour codes which make it hard to read the diffs.

**Describe the solution you'd like**
An opt-in `--no-formatting` (`--no-colors`) mode.
**Describe alternatives you've considered**
Using a better CI runner.
**Additional context**
jest has a `--no-colors` mode. | closed | 2020-05-08T22:22:33Z | 2020-09-20T00:28:03Z | https://github.com/syrupy-project/syrupy/issues/218 | [
"feature request",
"good first issue",
"released"
] | noahnu | 2 |
Urinx/WeixinBot | api | 249 | Can it send video files? | Can the WeChat bot send video files? Is there a relevant API?
Thanks a lot! | open | 2018-01-31T03:41:23Z | 2018-01-31T03:41:23Z | https://github.com/Urinx/WeixinBot/issues/249 | [] | maxdylan | 0 |
plotly/dash | dash | 2,463 | [BUG] Internalerror when using pytest, needs quick fix | When using pytest with the dash plugin, the following error appears with version 2.9; it is resolved by pinning 2.8.1:
```
INTERNALERROR> File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/opt/hostedtoolcache/Python/3.8.16/x[64](https://github.com/huggingface/autotrain-backend/actions/runs/4446983814/jobs/7808039865#step:8:65)/lib/python3.8/site-packages/pluggy/_callers.py", line 37, in _multicall
INTERNALERROR> _raise_wrapfail(gen, "did not yield")
INTERNALERROR> File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages/pluggy/_result.py", line 9, in _raise_wrapfail
INTERNALERROR> raise RuntimeError(
INTERNALERROR> RuntimeError: wrap_controller at 'pytest_runtest_makereport' /opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages/dash/testing/plugin.py:106 did not yield
``` | closed | 2023-03-17T12:44:33Z | 2023-03-18T21:29:02Z | https://github.com/plotly/dash/issues/2463 | [] | abhishekkrthakur | 4 |
ExpDev07/coronavirus-tracker-api | fastapi | 98 | Storing data in a database | Right now the data is just stored in cache. Is it perhaps better to sync the data to an actual MySQL database? It would allow for fast querying. | closed | 2020-03-19T21:39:00Z | 2025-03-13T15:46:50Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/98 | [
"enhancement",
"performance"
] | ExpDev07 | 15 |
matterport/Mask_RCNN | tensorflow | 2,434 | 3080 | closed | 2020-12-01T06:33:45Z | 2020-12-01T06:34:18Z | https://github.com/matterport/Mask_RCNN/issues/2434 | [] | moonplay85 | 0 | |
allenai/allennlp | nlp | 4,826 | TrackEpochCallback is not an EpochCallback | Is there any reason why `TrackEpochCallback` should not inherit from `EpochCallback`?
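For context, the change being asked about amounts to a one-line subclassing tweak. The sketch below uses simplified stand-ins rather than the real allennlp classes; the `__call__` signature and the epoch bookkeeping are assumptions based on the snippet referenced in this issue:

```python
from types import SimpleNamespace

# Simplified stand-ins for the allennlp classes; signatures are assumptions,
# not the actual allennlp source.
class EpochCallback:
    def __call__(self, trainer, metrics, epoch, is_master=True):
        raise NotImplementedError

class TrackEpochCallback(EpochCallback):  # now inherits from EpochCallback
    def __call__(self, trainer, metrics, epoch, is_master=True):
        trainer.model.epoch = epoch + 1   # expose the current epoch on the model

trainer = SimpleNamespace(model=SimpleNamespace(epoch=0))
TrackEpochCallback()(trainer, metrics={}, epoch=0)
print(trainer.model.epoch)  # 1
```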
https://github.com/allenai/allennlp/blob/5b30658514a00e11000e648fec23be11a998bd92/allennlp/training/trainer.py#L179-L188 | closed | 2020-11-30T15:44:24Z | 2021-01-18T18:48:00Z | https://github.com/allenai/allennlp/issues/4826 | [
"bug"
] | mahnerak | 15 |
Avaiga/taipy | automation | 2,386 | [OTHER] Optional tasks / task logic | ### What would you like to share or ask?
Is there a way to make data nodes optional?
In my execution graph, I have two ways to annotate my data. I would like to give the user the possibility to execute one or both of the tasks and still be able to execute the next one, which is currently linked to both output data nodes. As I understand it, `skippable=True` only means that the task need not be re-executed; there is no way to make a task's input data node optional.
I see two possibilities to address this (I would prefer the first one):
- Implementation of OR / AND /... logic gate nodes that could be linked to data nodes
- Make data nodes optional (not really what I want, because if I make both optional, the task would still be executable even if neither upstream task has run)
A workaround would be to handle the logic internally (use just one annotation task that may execute one or both methods and pass one output). This would be sufficient, but it limits the transparency of the execution graph and the display of the individual results as data nodes.
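That workaround can be sketched as a single task function with boolean inputs; all names below are hypothetical, and the Taipy `Config`/scenario wiring is omitted:

```python
# Sketch of the "handle the logic internally" workaround. The function and
# flag names are hypothetical, not Taipy API.
def annotate_a(data):
    return {**data, "a": "annotated-by-a"}

def annotate_b(data):
    return {**data, "b": "annotated-by-b"}

def annotate(data, run_a: bool = True, run_b: bool = False):
    """Single task wrapping both optional annotation steps, so the
    downstream task always consumes exactly one output data node."""
    result = dict(data)
    if run_a:
        result = annotate_a(result)
    if run_b:
        result = annotate_b(result)
    return result
```

The downstream task then always consumes exactly one output data node, at the cost of hiding the two annotation steps inside one task.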
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2025-01-09T09:18:08Z | 2025-01-09T14:02:49Z | https://github.com/Avaiga/taipy/issues/2386 | [
"❓ Question",
"Core: ⚙️ Configuration",
"Core: Job & Orchestrator",
"Core: 📁 Data node",
"💬 Discussion"
] | JosuaCarl | 4 |
pyeve/eve | flask | 1,151 | Only display the version number on the docs homepage | 
With 87a12259aa88e171fdc1761542253e6e378dcaf7 we changed the dev release format to avoid a warning at build time. Since then, `sphinx-build` fails at parsing the correct version out of the release number.
In the picture above, it should print "Version 0.8.1" - skipping the .dev0 part. | closed | 2018-05-15T14:27:34Z | 2018-05-15T14:33:27Z | https://github.com/pyeve/eve/issues/1151 | [
"documentation"
] | nicolaiarocci | 0 |
aleju/imgaug | deep-learning | 415 | [Feature Request] Augmentation config file | It would be great to be able to define augmentation using some kind of config file. I imagine YAML would work well. For example, a config file could look like:
``` yaml
sequential:
fliplr:
arg: 0.5
flipud:
arg: 0.2
sometimes:
cropandpad:
percent: (-0.05, 0.1)
pad_mode: all
pad_cval: (0, 255)
...
```
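For what it's worth, the loading side could be a small name-to-constructor registry. A library-independent sketch (the toy "augmenters" below just record their construction, standing in for imgaug classes, and the dict mirrors what a YAML loader such as PyYAML would produce):

```python
# Library-independent sketch: map augmenter names from a parsed config to
# constructors. The toy "augmenters" just record their construction; in
# practice the registry would point at imgaug classes (iaa.Fliplr, ...).
def fliplr(arg):
    return ("Fliplr", arg)

def flipud(arg):
    return ("Flipud", arg)

REGISTRY = {"fliplr": fliplr, "flipud": flipud}

def build_pipeline(config):
    """Turn {'sequential': {name: {param: value}, ...}} into augmenter objects."""
    return [REGISTRY[name](**params)
            for name, params in config["sequential"].items()]

# What yaml.safe_load() would produce for the flip entries of the snippet above:
config = {"sequential": {"fliplr": {"arg": 0.5}, "flipud": {"arg": 0.2}}}
print(build_pipeline(config))  # -> [('Fliplr', 0.5), ('Flipud', 0.2)]
```

Nested constructs like `sometimes` would recurse through the same registry; the sketch only covers the flat `sequential` case.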
| open | 2019-09-12T18:45:40Z | 2020-11-23T16:22:19Z | https://github.com/aleju/imgaug/issues/415 | [
"TODO"
] | bkanuka | 8 |
chainer/chainer | numpy | 8,315 | Flaky test: tests/chainerx_tests/unit_tests/routines_tests/test_loss.py::test_SoftmaxCrossEntropy | https://travis-ci.org/chainer/chainer/jobs/601719725#L4484

Occurred in #8295.

```
___________________ test_SoftmaxCrossEntropy_param_3_{t_dtype='int16', x_dtype='float16'}[native:0] ____________________

device = native:0, args = (), kwargs = {}
backend_config = <BackendConfig use_chainerx=True chainerx_device='native:0' use_cuda=False cuda_device=None use_cudnn='never' cudnn_deterministic=False autotune=False cudnn_fast_batch_normalization=False use_ideep='never'>
obj = <chainer.testing._bundle.TestSoftmaxCrossEntropy_param_3_{t_dtype='int16', x_dtype='float16'} object at 0x7f102941a7f0>

...

E   chainer.testing.function_link.FunctionTestError: double backward is not implemented correctly
E
E   (caused by)
E   AssertionError: check_double_backward failed (eps=0.001 atol=0.0001 rtol=0.001)
E   input[0]:
E   array([[-0.78076172, -1.33984375],
E          [-1.2578125 , -0.56689453]], shape=(2, 2), dtype=float16, device='native:0')
E   grad_output[0]:
E   array([0.04290771, 0.39892578], shape=(2,), dtype=float16, device='native:0')
E   grad_grad_input[0]:
E   array([[0.71972656, 0.72851562],
E          [0.38500977, 0.64404297]], shape=(2, 2), dtype=float16, device='native:0')
E
E   check_backward failed (eps=0.001 atol=0.0001 rtol=0.001)
E   inputs[0]:
E   array([[-0.78076172, -1.33984375],
E          [-1.2578125 , -0.56689453]], shape=(2, 2), dtype=float16, device='native:0')
E   inputs[1]:
E   array([0.04290771, 0.39892578], shape=(2,), dtype=float16, device='native:0')
E   grad_outputs[0]:
E   array([[0.71972656, 0.72851562],
E          [0.38500977, 0.64404297]], shape=(2, 2), dtype=float16, device='native:0')
E   directions[0]:
E   array([[ 0.19774412,  0.24381235],
E          [ 0.34787308, -0.21185603]], shape=(2, 2), dtype=float64, device='native:0')
E   directions[1]:
E   array([0.7028306 , 0.49151123], shape=(2,), dtype=float64, device='native:0')
E   gradients (numeric):  array(0.06802642, shape=(), dtype=float64, device='native:0')
E   gradients (backward): array(0.06819657, shape=(), dtype=float64, device='native:0')
E
E   x: numeric gradient, y: backward gradient
E   Not equal to tolerance rtol=0.001, atol=0.0001
E
E   (mismatch 100.0%)
E    x: array(0.068026)
E    y: array(0.068197)
E
E   assert_allclose failed:
E     shape: () ()
E     dtype: float64 float64
E     i: (0,)
E     x[i]: 0.06802642484513102
E     y[i]: 0.06819657403670369
E     relative error[i]: 0.00249498151448332
E     absolute error[i]: 0.00017014919157266883
E     relative tolerance * |y[i]|: 6.819657403670369e-05
E     absolute tolerance: 0.0001
E     total tolerance: 0.0001681965740367037
E     x: 0.06802642
E     y: 0.06819657

../../../virtualenv/python3.5.6/lib/python3.5/site-packages/chainer/gradient_check.py:536: FunctionTestError
``` | closed | 2019-10-23T10:57:19Z | 2019-10-30T03:58:23Z | https://github.com/chainer/chainer/issues/8315 | [
"cat:test",
"prio:high"
] | asi1024 | 5 |
CatchTheTornado/text-extract-api | api | 82 | [feat] Ollama concurrent requests | Ollama by default doesn't support concurrent requests. We need to work on it as it's pretty huge bottleneck for now.
More info: https://www.reddit.com/r/LocalLLaMA/comments/1dt5n6l/ollama_now_runs_inference_concurrently_by_default/?rdt=57766
Maybe we'll need to migrate to vllm - https://github.com/vllm-project/vllm | open | 2025-01-13T17:21:22Z | 2025-01-19T16:54:35Z | https://github.com/CatchTheTornado/text-extract-api/issues/82 | [] | pkarw | 1 |
robotframework/robotframework | automation | 4,568 | Add optional typed base classes for listener API | Issue #4567 proposes adding a base class for the dynamic library API and having similar base classes for the listener API would be convenient as well. The usage would be something like this:
```python
from robot.api.interfaces import ListenerV3
class Listener(ListenerV3):
...
```
The base class should have all available listener methods with documentation and appropriate type information. We should have base classes both for listener v2 and for v3 and they should have `ROBOT_LISTENER_API_VERSION` set accordingly.
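A minimal sketch of what the v3 base could look like (`start_suite`/`end_suite` follow the existing listener v3 hook names; the class body and everything else are illustrative, not the final design):

```python
# Illustrative only: a no-op base class for listener API v3. The hook names
# (start_suite/end_suite) follow the existing listener v3 interface; the rest
# is an assumption sketched from this proposal.
class ListenerV3:
    ROBOT_LISTENER_API_VERSION = 3

    def start_suite(self, data, result):
        """Called when a suite starts. Override to react."""

    def end_suite(self, data, result):
        """Called when a suite ends. Override to react."""


class MyListener(ListenerV3):
    def start_suite(self, data, result):
        print(f"suite started: {data}")


MyListener().start_suite("Example suite", None)
```

Users would then override only the hooks they care about, and type checkers could verify the signatures against the base class.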
As with #4567, this would be easy to implement and could be done already in RF 6.1. We mainly need to agree on naming and on where these base classes should be imported from.
"enhancement",
"priority: medium",
"alpha 1",
"effort: medium"
] | pekkaklarck | 0 |
huggingface/transformers | pytorch | 36,900 | groot n1 | ### Model description
NVIDIA recently introduced the Isaac GR00T N1, an open-source foundation model designed to enhance humanoid robot reasoning and skills. This model features a dual-system architecture inspired by human cognition:
- **System 1:** A fast-thinking action model that mirrors human reflexes or intuition.
- **System 2:** A slow-thinking model responsible for deliberate, methodical decision-making.
GR00T N1 is trained on a diverse dataset, including real robot trajectories, human videos, and synthetic data generated using NVIDIA's Omniverse platform. It enables humanoid robots to perform tasks such as grasping, bimanual manipulation, and complex multistep operations.
## Useful Links for Implementation
- [Transformers-compatible implementation](https://github.com/NVIDIA/Isaac-GR00T)
- [Model weights](https://huggingface.co/nvidia/GR00T-N1-2B)
- [Research paper](https://arxiv.org/abs/2503.14734)
## Additional Resources
- [NVIDIA's official announcement](https://nvidianews.nvidia.com/news/nvidia-isaac-gr00t-n1-open-humanoid-robot-foundation-model-simulation-frameworks)
- [Demonstration video](https://www.youtube.com/watch?v=m1CH-mgpdYg)
Upstreaming the GR00T N1 code into the main Transformers repository would eliminate the need for `trust_remote_code` and facilitate adaptation to Transformers' processors, enhancing compatibility with systems like VLLM.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
| open | 2025-03-22T07:28:19Z | 2025-03-22T07:46:01Z | https://github.com/huggingface/transformers/issues/36900 | [
"New model"
] | sushmanthreddy | 1 |
kizniche/Mycodo | automation | 1,041 | Camera stream issue | ### Describe the problem/bug
Streaming from the pi camera freezes the whole system
### Versions:
- Mycodo Version: [8.11.0]
- Raspberry Pi Version: [4]
- Raspbian OS Version: [Raspian]
### Reproducibility
Please list specific setup details that are involved and the steps to reproduce the behavior:
1. Setup pi camera for streaming
2. start streaming
3. try to navigate somewhere
4. Nothing happens and at some point the web seems to lose the communication with the daemon
5. the system becomes completely irresponsive
### Expected behavior
being able to navigate.
### Screenshots


### Additional context
Is there anything that should be added to make it easier to address this issue?
| closed | 2021-06-27T16:31:24Z | 2021-08-30T02:47:47Z | https://github.com/kizniche/Mycodo/issues/1041 | [] | joaozorro | 5 |
modelscope/modelscope | nlp | 665 | Changing the encoding format allows the generated videos to be viewed in regular players | Hi,
I noticed that the generated videos can only be viewed in the VLC player because of the "mp4v" encoding format in https://github.com/modelscope/modelscope/blob/master/modelscope/pipelines/multi_modal/text_to_video_synthesis_pipeline.py#L78

I changed the encoding format to "avc1" and the generated videos work in regular video players too. I googled which codecs support the "mp4" extension and arrived at "avc1".
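For reference, the fourcc tag is just four ASCII bytes packed little-endian; a stdlib sketch of the packing that `cv2.VideoWriter_fourcc` performs (note that whether a given OpenCV build can actually encode "avc1" depends on the codecs it was built with):

```python
def fourcc(code):
    """Pack a 4-character codec tag the way cv2.VideoWriter_fourcc(*code) does."""
    assert len(code) == 4
    return sum(ord(ch) << (8 * i) for i, ch in enumerate(code))

# 'avc1' (H.264) plays in stock players; 'mp4v' (MPEG-4 Part 2) often needs VLC.
print(hex(fourcc("avc1")))  # -> 0x31637661
print(hex(fourcc("mp4v")))  # -> 0x7634706d
```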
Is there a specific reason for choosing "mp4v" and not "avc1"? | closed | 2023-12-07T08:34:48Z | 2023-12-18T09:08:21Z | https://github.com/modelscope/modelscope/issues/665 | [] | Hritikbansal | 0 |
django-import-export/django-import-export | django | 2,020 | Export does not support `QuerySet` `values()` | **Describe the bug**
If a queryset is defined which calls [`values()`](https://docs.djangoproject.com/en/5.1/ref/models/querysets/#values), then the resulting export will be empty.
This is because the Resource definition cannot map instance attributes to dict entries produced by the values() call.
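The mechanism is easy to see without Django: `values()` yields plain dicts, and attribute-style field lookups on a dict come back empty. A minimal illustration (`Product` and its fields are made up):

```python
# qs.values() yields plain dicts, while a normal queryset yields model
# instances; export code that extracts fields with attribute access finds
# nothing on a dict. (Product and its fields are made up for illustration.)
row = {"id": 1, "name": "widget"}           # shape of a values() row
print(getattr(row, "name", None))           # -> None: dicts have no attributes

class Product:
    def __init__(self, id, name):
        self.id, self.name = id, name

print(getattr(Product(1, "widget"), "name", None))  # -> widget
```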
**To Reproduce**
TBD
**Versions (please complete the following information):**
- Django Import Export: 4.3.3
- Python 3.12
- Django 5.1
**Expected behavior**
It should be possible to export when the resource uses values()
| closed | 2024-12-10T15:10:56Z | 2024-12-10T15:59:13Z | https://github.com/django-import-export/django-import-export/issues/2020 | [
"bug"
] | matthewhegarty | 0 |
gradio-app/gradio | python | 10,526 | Cropped Image from gr.ImageEditor Retains Transparent Background | ### Describe the bug
When using gr.ImageEditor to crop an image, the cropped output still retains the original canvas size, including the transparent/blank background. Instead of returning only the selected cropped region, the exported image contains transparency padding around it.
This behavior makes it difficult to directly use the cropped image for further processing or inference without additional post-processing to remove the blank areas.
Expected Behavior:
The cropped output should only include the selected region without any additional transparent padding.
The output image dimensions should match the cropped selection rather than keeping the original canvas size.
Actual Behavior:
The exported image maintains the original canvas size, filling the non-cropped areas with transparency.
Proposed Fix:
Adjust the gr.ImageEditor cropping functionality to return only the selected portion as a new image instead of maintaining the original canvas.
Provide an option or flag (e.g., trim_background=True) to automatically remove transparent areas from the output.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```
import gradio as gr
def process_image(image):
return image # Just returning the cropped output for debugging
crop_image = gr.ImageEditor(layers=False, interactive=True, brush=True, show_download_button=False, scale=3, min_width=500)
demo = gr.Interface(fn=process_image, inputs=crop_image, outputs="image")
demo.launch()
```
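As a workaround until the editor trims the canvas itself, the transparent padding can be cropped off after export. A library-independent sketch of the bounding-box step (with Pillow, `img.crop(img.getbbox())` typically achieves the same for RGBA images):

```python
def alpha_bbox(pixels):
    """Bounding box (left, top, right, bottom) of non-transparent pixels.

    `pixels` is a row-major grid of (r, g, b, a) tuples. Returns None if
    everything is transparent.
    """
    rows = [y for y, row in enumerate(pixels) if any(px[3] > 0 for px in row)]
    if not rows:
        return None
    cols = [x for x in range(len(pixels[0]))
            if any(row[x][3] > 0 for row in pixels)]
    return (min(cols), min(rows), max(cols) + 1, max(rows) + 1)

# 4x4 fully transparent canvas with a single opaque pixel at x=1, y=2:
grid = [[(0, 0, 0, 0)] * 4 for _ in range(4)]
grid[2][1] = (255, 0, 0, 255)
print(alpha_bbox(grid))  # -> (1, 2, 2, 3)
```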
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Version: 5.13.2
Python Version: 3.10.0
OS: windows
```
### Severity
I can work around it | open | 2025-02-06T12:50:23Z | 2025-02-07T03:27:53Z | https://github.com/gradio-app/gradio/issues/10526 | [
"bug",
"🖼️ ImageEditor"
] | suhaniquantanite | 3 |
piskvorky/gensim | data-science | 2,844 | Conflicts between hyperparameters for negative sampling? | Hi,
I wonder whether there are possible interactions/conflicts when negative sampling is enabled (`negative>0`) while hierarchical softmax is accidentally activated (`hs=1`). The docs say that negative sampling (`negative>0`) is only used if `hs=0`. So can I assume that with `hs=1` and `negative>0`, _**no**_ negative sampling is used?
Python 3.6
Win 10
NumPy 1.18.1
SciPy 1.1.0
gensim 3.8.1
| closed | 2020-05-18T16:30:41Z | 2020-10-28T02:08:32Z | https://github.com/piskvorky/gensim/issues/2844 | [
"question"
] | datistiquo | 1 |
holoviz/panel | matplotlib | 6,864 | 500 internal server error with Panel 1.4.3 | Thanks for contacting us! Please read and follow these instructions carefully, then delete this introductory text to keep your issue easy to read. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately.
#### ALL software version info
Panel 1.4.3
Python 3.11.9
Bokeh 3.4.1
Tornado 6.4
OS: AlmaLinux 8.9
nginx/1.14.1
#### Description of expected behavior and the observed behavior
#### Complete, minimal, self-contained example code that reproduces the issue
`test_app_servable.py`:
```
import panel as pn
pn.extension()
file_input = pn.widgets.FileInput()
pn.Column(file_input).servable()
```
Run it with the following command:
```
panel serve ./test_app_servable.py \
--port 10080 \
--allow-websocket-origin="example.com" \
--use-xheaders \
--prefix="test_app" \
--websocket-max-message-size=104857600
```
#### Stack traceback and/or browser JavaScript console output
```
$ bash ./run_test_app_servable.bash
2024-05-24 00:51:42,369 Starting Bokeh server version 3.4.1 (running on Tornado 6.4)
2024-05-24 00:51:42,370 Torndado websocket_max_message_size set to 104857600 bytes (100.00 MB)
2024-05-24 00:51:42,370 User authentication hooks NOT provided (default user enabled)
2024-05-24 00:51:42,372 Bokeh app running at: http://localhost:10080/test_app/test_app_servable
2024-05-24 00:51:42,372 Starting Bokeh server with process id: 232069
2024-05-24 00:52:09,224 Uncaught exception GET /test_app/test_app_servable (ip_address)
HTTPServerRequest(protocol='https', host='example.com:443', method='GET', uri='/test_app/test_app_servable', version='HTTP/1.1', remote_ip='ip_address')
Traceback (most recent call last):
File "/work/monodera/test/test_app/.venv/lib/python3.11/site-packages/tornado/web.py", line 1790, in _execute
result = await result
^^^^^^^^^^^^
File "/work/monodera/test/test_app/.venv/lib/python3.11/site-packages/panel/io/server.py", line 508, in get
payload = self._generate_token_payload()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/work/monodera/test/test_app/.venv/lib/python3.11/site-packages/panel/io/server.py", line 452, in _generate_token_payload
payload.update(self.application_context.application.process_request(self.request))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/work/monodera/test/test_app/.venv/lib/python3.11/site-packages/panel/io/application.py", line 103, in process_request
user = decode_signed_value(config.cookie_secret, 'user', user.value).decode('utf-8')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/work/monodera/test/test_app/.venv/lib/python3.11/site-packages/tornado/web.py", line 3589, in decode_signed_value
return _decode_signed_value_v2(secret, name, value, max_age_days, clock)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/work/monodera/test/test_app/.venv/lib/python3.11/site-packages/tornado/web.py", line 3674, in _decode_signed_value_v2
expected_sig = _create_signature_v2(secret, signed_string)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/work/monodera/test/test_app/.venv/lib/python3.11/site-packages/tornado/web.py", line 3710, in _create_signature_v2
hash = hmac.new(utf8(secret), digestmod=hashlib.sha256)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/monodera/.pyenv/versions/3.11.9/lib/python3.11/hmac.py", line 184, in new
return HMAC(key, msg, digestmod)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/monodera/.pyenv/versions/3.11.9/lib/python3.11/hmac.py", line 53, in __init__
raise TypeError("key: expected bytes or bytearray, but got %r" % type(key).__name__)
TypeError: key: expected bytes or bytearray, but got 'NoneType'
2024-05-24 00:52:09,226 500 GET /test_app/test_app_servable (ip_address) 2.43ms
```
The browser displays "500: Internal Server Error". This error does not happen `Panel<1.4.3`.
- [x] I may be interested in making a pull request to address this, but so far I have no idea where to look at.
| closed | 2024-05-24T10:57:17Z | 2024-05-25T06:33:13Z | https://github.com/holoviz/panel/issues/6864 | [] | monodera | 2 |
3b1b/manim | python | 2,146 | `TexText` Bug: Some Chinese characters get hollowed | # English
### Describe the bug
Some of the Chinese characters in `TexText` get hollowed, even when using `SVGMoject` with compiled SVG files (nothing wrong with SVG files). I thought it was a rendering bug of ManimGL.
**Code**:
```python
from manimlib import *
class TestScene(Scene):
def construct(self):
self.add(TexText("TexText: 始终").to_edge(UP),
Text("Text: 始终").to_edge(DOWN))
self.wait()
```
**Wrong display or Error traceback**:

### Additional context
Someone else (@起床王王王 in Manim Kindergarten's QQ group) found the same problem in Feb this year. He solved by changing typeface, but other characters can also get a display error in other typefaces on my computer. This seemed to have different effects on different computers. E.g. “边” and “代” got hollowed on @起床王王王's computer, but it worked well for me. And I got hollowed “始” and “终” in Song typeface, deformed “奇” in Kai typeface.
The picture shows @起床王王王's result.

# 中文
### 描述
`TexText` 中有些汉字显示为空心,哪怕把相应的 SVG 放到 `SVGMobject` 里去也是同样效果,但编译出的 SVG 没有任何异常,我觉得应该是 ManimGL 渲染的问题。
**图片和源码见上文**
### 附加信息
今年 2 月的时候 mk 群里面也有人(群昵称 @起床王王王)发现了同样问题,他当时是用更换字体的方式解决的,但我这里更换了字体就会有其他汉字显示异常。这个问题貌似在不同电脑上表现不同,在 @起床王王王 的电脑上“边”“代”有问题,但在我的电脑上显示正常,在我的电脑上用宋体会导致“始”“终”空心,用楷体会导致“奇”字形扭曲。
图(见上文)为 @起床王王王 的显示效果。 | open | 2024-07-16T11:57:04Z | 2024-10-24T14:19:52Z | https://github.com/3b1b/manim/issues/2146 | [
"bug"
] | osMrPigHead | 6 |
pyqtgraph/pyqtgraph | numpy | 2,410 | How to set key bindings properly | Hello,
I wanted to bind some keys (space bar, e.g. to pause the program), during overwriting the method:
```
class Plotting(pg.GraphicsLayoutWidget):
pressed_key = pyqtSignal(int)
def __init__(self, receiver):
self.pressed_key.connect(self.on_key)
def keyPressEvent(self, ev):
self.pressed_key.emit()
self.pressed_key(ev)
print("here")
def on_key(self, key):
print("key pressed")
```
However, after clicking the keys, nothing happened. How should I bind the keys properly?
Thanks in advance. | closed | 2022-09-08T07:19:00Z | 2022-09-10T18:01:29Z | https://github.com/pyqtgraph/pyqtgraph/issues/2410 | [
"answered?"
] | bilaljo | 2 |
alteryx/featuretools | scikit-learn | 2,462 | `NumCharacters` does not have test suite | We should add a test suite for the `NumCharacters` primitive | closed | 2023-01-24T23:14:32Z | 2023-01-27T22:53:33Z | https://github.com/alteryx/featuretools/issues/2462 | [
"testing"
] | sbadithe | 0 |
ultralytics/ultralytics | python | 19,093 | Triton inference bug | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
Hello
I have been using an old version of the ultralytics library with my Triton server without any issue.
I tried to use `nms=True` when exporting my ONNX model,
so I updated the library to the latest version.
Now I am getting an error during inference;
when I switch back to the old version,
I don't get the error.
```
Traceback (most recent call last):
File "/mnt/1T/Safaei/social_internal/object_detector_service/update_es_yolo.py", line 130, in <module>
process_files()
File "/mnt/1T/Safaei/social_internal/object_detector_service/update_es_yolo.py", line 120, in process_files
process_and_save_yolo(batch_images, batch_file_ids, batch_doc_ids, batch_accounts)
File "/mnt/1T/Safaei/social_internal/object_detector_service/update_es_yolo.py", line 72, in process_and_save_yolo
results = model(batch_images, imgsz=320, batch=99)
File "/mnt/1T/anaconda3/envs/td/lib/python3.10/site-packages/ultralytics/engine/model.py", line 180, in __call__
return self.predict(source, stream, **kwargs)
File "/mnt/1T/anaconda3/envs/td/lib/python3.10/site-packages/ultralytics/engine/model.py", line 558, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "/mnt/1T/anaconda3/envs/td/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 175, in __call__
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
File "/mnt/1T/anaconda3/envs/td/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 36, in generator_context
response = gen.send(None)
File "/mnt/1T/anaconda3/envs/td/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 268, in stream_inference
self.results = self.postprocess(preds, im, im0s)
File "/mnt/1T/anaconda3/envs/td/lib/python3.10/site-packages/ultralytics/models/yolo/detect/predict.py", line 25, in postprocess
preds = ops.non_max_suppression(
File "/mnt/1T/anaconda3/envs/td/lib/python3.10/site-packages/ultralytics/utils/ops.py", line 265, in non_max_suppression
output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs
RuntimeError: Trying to create tensor with negative dimension -913: [0, -913]
```
### Environment
```
Ultralytics 8.3.71 🚀 Python-3.10.15 torch-2.5.1 CUDA:0 (NVIDIA GeForce RTX 4090, 24210MiB)
Setup complete ✅ (112 CPUs, 377.6 GB RAM, 234.2/467.4 GB disk)
OS Linux-5.15.0-97-generic-x86_64-with-glibc2.31
Environment Linux
Python 3.10.15
Install pip
RAM 377.56 GB
Disk 234.4/467.4 GB
CPU Intel Xeon Platinum 8176 2.10GHz
CPU count 112
GPU NVIDIA GeForce RTX 4090, 24210MiB
GPU count 3
CUDA 11.8
numpy ✅ 1.24.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.9.2>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.0>=4.64.0
psutil ✅ 6.1.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.11>=2.0.0
```
### Minimal Reproducible Example
```
from ultralytics import YOLO
# Load the Triton Server model
model = YOLO("http://localhost:8000/yolo", task="detect")
# Run inference on the server
results = model("path/to/image.jpg")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-06T06:50:20Z | 2025-02-06T15:10:02Z | https://github.com/ultralytics/ultralytics/issues/19093 | [
"detect",
"exports"
] | mohamad-tohidi | 5 |
encode/databases | asyncio | 118 | MySQL - asyncio.Lock get error | when use databases and aiomysql I get `Task <Task pending coro=<RequestResponseCycle.run_asgi() running at /usr/local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py:370> cb=[set.discard()]> got Future <Future pending> attached to a different loop`
detail:
```
Traceback (most recent call last):
File "/app/services/currency.py", line 120, in update_currency_ticker_by_id
return await database.execute(q)
File "/usr/local/lib/python3.7/site-packages/databases/core.py", line 122, in execute
return await connection.execute(query, values)
File "/usr/local/lib/python3.7/site-packages/databases/core.py", line 209, in execute
async with self._query_lock:
File "/usr/local/lib/python3.7/asyncio/locks.py", line 92, in __aenter__
await self.acquire()
File "/usr/local/lib/python3.7/asyncio/locks.py", line 192, in acquire
await fut
RuntimeError: Task <Task pending coro=<RequestResponseCycle.run_asgi() running at /usr/local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py:370> cb=[set.discard()]> got Future <Future pending> attached to a different loop
```
packages:
```
aiomysql==0.0.20
databases==0.2.3
alembic==1.0.11
fastapi==0.31.0
```
db.py
```python
from databases import Database
database = Database(DB_URL, min_size=5, max_size=20)
```
execute code snippet
```python
async def update_currency_ticker_by_id(currency_id: int, ticker: CurrencyTickerUpdateRequest):
tbl_c = currency
values = {k: v for k, v in ticker.dict().items() if v is not None}
if values:
q = tbl_c.update().where(tbl_c.c.id == currency_id).values(values)
try:
return await database.execute(q)
except Exception as e:
print('update error', e, "detail\n", traceback.format_exc())
```
run in docker with base image: python 3.7.3-alpine.
Why did this happen, and how can I fix it? It's urgent.
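The message in the traceback comes from an asyncio object being awaited under a different event loop than the one it was created for. The mechanism can be reproduced with a bare `Future`, no database involved:

```python
import asyncio

# An awaitable created under one event loop cannot be awaited from another:
# this is exactly the "got Future ... attached to a different loop" error.

async def make_future():
    return asyncio.get_running_loop().create_future()

fut = asyncio.run(make_future())      # belongs to loop #1 (now closed)

async def await_foreign_future():
    return await fut                  # loop #2 awaits loop #1's future

try:
    asyncio.run(await_foreign_future())
    err = None
except RuntimeError as exc:
    err = exc

print(err)
```

In practice this usually means the database connection (and its internal locks) was set up under a different loop than the one serving requests; connecting inside the running loop (for FastAPI, in a startup event handler that awaits `database.connect()`) is the usual remedy, though the exact wiring depends on your app.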
| open | 2019-07-02T16:05:30Z | 2020-09-16T10:47:29Z | https://github.com/encode/databases/issues/118 | [] | watsy0007 | 9 |
rthalley/dnspython | asyncio | 1,103 | RFC 9567: EDNS0 Report-Channel (DNS Error Reporting) | **Motivation**
I'd like to be able to send messages with the Report-Channel option from RFC 9567.
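For reference, the option's payload is a single agent domain in uncompressed DNS wire format (18 is the IANA-assigned EDNS option code from RFC 9567). A stdlib sketch of that encoding; the eventual `Option` subclass would wrap this in dnspython's wire-format hooks, with the exact integration up to the maintainers:

```python
def encode_dns_name(name):
    """Uncompressed DNS wire format: length-prefixed labels plus the root byte."""
    labels = name.rstrip(".").split(".")
    return b"".join(bytes([len(l)]) + l.encode("ascii") for l in labels) + b"\x00"

# The Report-Channel option carries exactly one agent domain as its payload.
OPT_REPORT_CHANNEL = 18
payload = encode_dns_name("agent.example.")
print(OPT_REPORT_CHANNEL, payload.hex())  # -> 18 056167656e74076578616d706c6500
```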
**Describe the solution you'd like.**
It would be great if the corresponding `Option` subclass could be added to `edns.py`. | closed | 2024-07-21T07:37:03Z | 2024-07-24T18:29:36Z | https://github.com/rthalley/dnspython/issues/1103 | [
"Enhancement Request"
] | peterthomassen | 3 |
adbar/trafilatura | web-scraping | 70 | bare_extraction URL empty on success | First of all thank you for creating a very promising scraping library. I like your attention to detail and benchmarks.
Here is example code:
trafilatura.bare_extraction(trafilatura.fetch_url('http://www.k2.t.u-tokyo.ac.jp/vision/DPM/'), include_comments=False)
Returned is:
{'title': 'System Vision Design: High Speed Image Processing', 'author': None, **'url': None,** 'hostname': None, 'description': None,
(URL is empty) although the lib correctly redirects to http://ishikawa-vision.org/vision/DPM/ and extracts title/content from it.
The lib should always return the URL of the page it landed at.
Also what would be very useful is to have HTTP status code of the landing page (200, 404, 403 etc..) for additional context. Maybe there is already a way to get status code with the response?
Thank you!!
| closed | 2021-04-28T20:02:20Z | 2021-05-18T15:50:26Z | https://github.com/adbar/trafilatura/issues/70 | [
"enhancement"
] | vprelovac | 9 |
ivy-llc/ivy | pytorch | 27,902 | Fix Ivy Failing Test: numpy - statistical.mean | closed | 2024-01-11T17:42:02Z | 2024-01-17T20:28:24Z | https://github.com/ivy-llc/ivy/issues/27902 | [
"Sub Task"
] | samthakur587 | 1 | |
autokey/autokey | automation | 917 | API access to more of the abbreviation properties for a phrase, when using engine.create_phrase | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
X11
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [X] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [X] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
Ubuntu 22.04.3 LTS / Gnome
### Which AutoKey GUI did you use?
GTK
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
pip3
### Can you briefly describe the issue?
I'm modifying the Abbreviation from selection script, and want to make the scripts it creates have the abbreviation.immediate = True. There doesn't seem to be a way to do this with the API. It would be good if you could set all the abbreviation properties, like backspace, ignoreCase, triggerInside etc from the engine.create_phrase() method.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | open | 2023-10-11T05:39:05Z | 2023-10-12T23:24:23Z | https://github.com/autokey/autokey/issues/917 | [
"enhancement",
"help-wanted",
"scripting"
] | stibinator | 3 |
sanic-org/sanic | asyncio | 2,441 | sanic.response.file() could work a bit harder wrt caching | I'm using `file()` to send static files from my application.
Initially, my code was the obvious approach, e.g.:
```python
@app.get("/images/<name>")
async def image(request, name):
path = Path(os.getcwd()) / "images" / name
return await file(path)
```
but I noticed that the images were fetched by the browser on each page view.
Digging in, I found the code for `file()` didn't deal with the modification time of the files, and provided very little support for caching.
As a quick workaround, I wrote a function similar to this:
```python
import os

from sanic.response import file

async def send_file(path, max_age):
stat = os.stat(path)
size = stat.st_size
mtime = stat.st_mtime
headers = {
"last-modified": str(mtime),
"content-length": str(size),
"cache-control": f"max-age={max_age}",
}
return await file(path, headers=headers)
```
but that's just a small part of what would be useful.
I believe functionalities and code similar to this: <https://github.com/pallets/werkzeug/blob/main/src/werkzeug/utils.py#L329> would be useful, especially to support common use cases around caching.
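For the conditional-request half of that, a stdlib-only sketch is below (hedged: Sanic would still need to wire the `If-Modified-Since` request header and a 304 response around these helpers):

```python
from email.utils import formatdate, parsedate_to_datetime
from typing import Optional

def last_modified_header(mtime: float) -> str:
    """Format a file mtime as an HTTP date, e.g. 'Thu, 01 Jan 1970 00:00:00 GMT'."""
    return formatdate(mtime, usegmt=True)

def not_modified(if_modified_since: Optional[str], mtime: float) -> bool:
    """True when the client's cached copy is still fresh, i.e. a 304 is enough."""
    if not if_modified_since:
        return False
    try:
        cached = parsedate_to_datetime(if_modified_since).timestamp()
    except (TypeError, ValueError):
        return False
    return int(mtime) <= int(cached)
```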
| closed | 2022-04-27T10:33:25Z | 2022-06-21T02:26:42Z | https://github.com/sanic-org/sanic/issues/2441 | [
"enhancement",
"help wanted",
"beginner",
"feature request"
] | sfermigier | 3 |
darrenburns/posting | automation | 31 | unicode characters in response body | https://github.com/darrenburns/posting/blob/7b1d0ae86d2990fa89d52b612284af3aaf590b55/src/posting/widgets/response/response_area.py#L80
In some JSON API responses, the returned content may contain Unicode characters; they show up as `\uxxxx...` escapes, making the response body unreadable.
Could you please add an `ensure_ascii=False` parameter to the `json.dumps` call to resolve this kind of issue?

| closed | 2024-07-12T02:41:58Z | 2024-07-12T09:14:22Z | https://github.com/darrenburns/posting/issues/31 | [
"bug"
] | breakstring | 1 |
ray-project/ray | data-science | 51,169 | [telemetry] Using only library APIs should not cause "core" usage to be reported | See https://github.com/ray-project/ray/pull/51161, all of the libraries currently report "core" usage | closed | 2025-03-07T17:40:27Z | 2025-03-07T18:00:43Z | https://github.com/ray-project/ray/issues/51169 | [
"P1"
] | edoakes | 0 |
yuka-friends/Windrecorder | streamlit | 268 | Delete old video files directly instead of moving them to the trash | Windrecorder seems to be moving old files to the recycle bin during automatic maintenance at the moment, but this is putting a lot of pressure on my recycle bin.
This is my recycle.bin folder.↓

I think the file should be deleted directly. | open | 2025-03-09T12:23:08Z | 2025-03-12T07:17:35Z | https://github.com/yuka-friends/Windrecorder/issues/268 | [] | mitac-31709 | 1 |
gradio-app/gradio | data-visualization | 10300 | Unify frontend build | I've attempted this once and it is difficult to find a generic solution; we need to either build something custom, hack around extensively, or wait for a generic framework to cover this use case.
We have pretty complex needs when it comes to the built output, both in terms of capability and 'shape'.
We have 4 'modes of consumption':
| Name | Architecture | Language | API | Notes |
|-------|--------|--------|--------|--------|
| Gradio (SSR) | SSR + CSR | `python` | `app.launch(ssr_mode=True)` | Node server serves the HTML file via a python proxy |
| Gradio (SPA) | SPA | `python` | `app.launch(ssr_mode=False)` | Python server serves the HTML entrypoint directly |
| Gradio Embed | SPA | `js`/ `html` | `<gradio-app>` | This the SPA but with a programmatic/ CE interface |
| Gradio Lite | SPA + custom runtime | `js`/ `html` | `<gradio-lite>` | This is essentially the custom element SPA app but with a modified JS runtime and injected wasm-gradio runtime in place of the python |
We currently cover these cases by doing three separate builds:
1. Gradio SSR via Svelte Kit
2. Gradio SPA via custom vite app
3. Gradio Lite via custom vite app
We additionally parse the SPA's generated HTML page to create a programmatic way to initialise the SPA, in order to support embed/web-component use cases.
This comes with a number of tradeoffs. It triples our frontend build times, balloons the repo with multiple apps, complicates our tooling, introduces inconsistencies between the 3/4 app types, and increases the maintenance burden as we need to update some things in three different places!
While we are sharing a lot of core logic which reduces the maintenance burden somewhat, it still isn't an ideal situation.
We really need some way to have a single application and build it for different use cases:
- Generate node app that serves SSR/CSR version
- Generate an HTML entrypoint that serves the SPA (which we can then serve without needing a Node server)
- Generate a programmatic interface for launching the SPA
- Generate a programmatic interface for launching the SPA with modified `lite` runtime.
The programmatic interface is the major issue right now; the SPA HTML entrypoint is also not quite there, but we can kind of solve it. If we had that, we could probably make all of the other cases work, although SvelteKit does come with certain opinions about how things are meant to work.
This is half tracking / half brainstorming. I'm also chatting with the Svelte team about this; will link the issue.
"brainstorming",
"svelte",
"needs designing"
] | pngwn | 1 |
apachecn/ailearning | nlp | 363 | Chapter 4, Classification methods based on probability theory: a question about naive Bayes | In the code block of the naive Bayes classifier training function:
why do ``p1Denom += sum(trainMatrix[i])`` and ``p0Denom`` not simply add one each time? | closed | 2018-04-18T15:51:06Z | 2018-04-20T01:40:43Z | https://github.com/apachecn/ailearning/issues/363 | [] | TronYY | 5 |
JaidedAI/EasyOCR | machine-learning | 1123 | Update EasyOcr from 1.4.0 to 1.7.0 | Hello, I am trying to update easyocr to a more recent version, but it seems there are some performance issues.
On a Windows PC, simple single-image recognition runs around 40% slower, and on Linux it is several times slower. Does anybody have a suggestion on how to fix this problem?
It is run on an Intel CPU. | open | 2023-08-24T11:33:49Z | 2023-08-24T11:35:52Z | https://github.com/JaidedAI/EasyOCR/issues/1123 | [] | Nitramdroll | 0 |
mwaskom/seaborn | pandas | 3,432 | Migrate examples to use seaborn objects | Hi there! love the library.
I've wanted to start using the objects interface as it seems a lot more powerful, but I've often been stymied by the (relative) lack of examples when compared to the existing `sns` examples. For example, the [swarmplot](https://seaborn.pydata.org/generated/seaborn.swarmplot.html) docs have a nice walk through with lots of example code that a user could probably just copy-paste and have it "just work". I think some equivalent would be invaluable for the `so` interface.
Would the project be open to PRs which attempt to reproduce existing examples but using the `so` interface instead of the `sns` interface? I'm not sure how this would be best accomplished, but some approaches might be:
- Reproducing the [`examples`](https://seaborn.pydata.org/examples/different_scatter_variables.html) gallery using `so`
- Including code in the existing `sns` functions (such as [swarmplot](https://seaborn.pydata.org/generated/seaborn.swarmplot.html)) showing the user how to accomplish an equivalent function but using the `so` interface
- Rewriting the [tutorial](https://seaborn.pydata.org/tutorial/introduction.html) to use the `so` interface.
I've assumed that the long term goal is to deprecate the `sns` interface in favour of the `so` interface, although I'm realising now that this isn't actually specified anywhere, so apologies if I've made a mistaken assumption. Looking forward to hearing feedback and helping out if possible. | closed | 2023-08-01T11:10:00Z | 2023-08-10T07:47:43Z | https://github.com/mwaskom/seaborn/issues/3432 | [
"docs"
] | beyarkay | 2 |
ipython/ipython | jupyter | 13,846 | Enforce shell.configurables to be configurables. | See https://github.com/ipython/ipykernel/issues/1038
| open | 2022-11-28T17:57:06Z | 2022-11-28T17:57:06Z | https://github.com/ipython/ipython/issues/13846 | [] | Carreau | 0 |
plotly/dash | jupyter | 3,076 | add support for ignoring specific folders under the assets directory | The existing parameter `assets_ignore` can ignore files under the assets folder that match a specified regular expression pattern, but it cannot ignore specific folders under the assets folder as a whole, which would be useful in many scenarios.
| open | 2024-11-15T01:58:40Z | 2024-11-20T13:30:18Z | https://github.com/plotly/dash/issues/3076 | [
"feature",
"P3"
] | CNFeffery | 3 |
cvat-ai/cvat | pytorch | 8,880 | Add Right-Click Mouse Functionality for Brightness/Contrast Adjustment and Image Zoom | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
Yes, while annotating and reviewing medical images, it is crucial to have tools that enhance the visibility of image details, especially for images with low contrast or brightness. Currently, there is a lack of functionality to adjust the brightness, contrast, and zoom for the related images (e.g., additional views or related scans). This limitation can hinder efficient image annotation and review, making it difficult to identify important details, which is crucial for training AI models on medical images. Additionally, doctors, who are the primary users, are accustomed to adjusting brightness and contrast by holding the right mouse button, and find using sliders cumbersome. Sliders obscure the image and require additional steps, such as clicking on the slider window, adjusting the parameters, and then closing the window.
### Describe the solution you'd like
I propose adding functionality to adjust the brightness and contrast of both the primary and related images directly via the mouse (by holding the right button, for instance), without the need for obstructive sliders. Zooming capabilities should also be integrated for the related images to allow detailed examination of image areas. This would enable users to fine-tune the image settings seamlessly and efficiently, providing a smoother and more intuitive workflow.
### Describe alternatives you've considered
_No response_
### Additional context
Adjusting brightness and contrast is particularly important in medical imaging where subtle variations in contrast may carry significant diagnostic value. Additionally, zooming can help focus on particular areas of an image for detailed annotation or analysis. This feature would enhance the platform's ability to support medical image annotation tasks for AI training. | open | 2024-12-26T07:49:27Z | 2024-12-27T07:33:24Z | https://github.com/cvat-ai/cvat/issues/8880 | [
"enhancement"
] | kosan0k | 2 |
deepspeedai/DeepSpeed | pytorch | 6,666 | [REQUEST] Support 8bit optimizer | Hi thanks for the library! It would be great if the AdamW cpu optimizer can have 8bit variant. For example, I would like to try adamw_8bit to full-finetune a 8B model on a 24GB GPU card (RTX4090). With deepspeed offload, the GPU memory is OK, but the CPU memory requirement is still very huge, partially because it uses normal adamw, thus needs 8x8=64GB for the optimizer itself. If we have 8bit adamw, then this part is reduced to 8x2=16GB. | closed | 2024-10-25T01:55:02Z | 2024-10-31T18:45:48Z | https://github.com/deepspeedai/DeepSpeed/issues/6666 | [
"enhancement"
] | fzyzcjy | 3 |
zihangdai/xlnet | nlp | 184 | no lr_layer_decay_rate for embedding | Thanks for your work.
I found that there is no lr_layer_decay_rate applied to the embedding layer, which is odd because the embedding actually sits below the transformer layers. | open | 2019-07-23T02:09:45Z | 2019-09-26T02:36:14Z | https://github.com/zihangdai/xlnet/issues/184 | [] | fyubang | 3 |
ultralytics/ultralytics | deep-learning | 18,677 | FP16 of ONNX format export | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
When I exported the model using the export function and specified the `--half` parameter:
```
from ultralytics import YOLO
# Load a model
model = YOLO("yolov10l.pt") # load an official model
# Export the model
model.export(format="onnx", imgsz=(640, 640), half=True, device="cuda:0")
```
I received the following information:
```
PyTorch: starting from 'yolov10l.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 300, 6) (50.0 MB)
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.47...
ONNX: export success ✅ 5.7s, saved as 'yolov10l.onnx' (46.7 MB)
Export complete (10.3s)
Results saved to F:\zdj\code\ultralytics-yolo11-jingning
Predict: yolo predict task=detect model=yolov10l.onnx imgsz=640 half
Validate: yolo val task=detect model=yolov10l.onnx imgsz=640 data=None half
Visualize: https://netron.app
```
The code I predict with this fp16 onnx model:
```
from ultralytics import YOLO
# Load a pretrained YOLO11n model
model = YOLO(r"yolov10l.onnx", task="detect")
# Define path to the image file
source = r"000000001268.jpg"
# Run inference on the source
results = model(source, imgsz=640, half=True, device=0) # list of Results objects
```
The following error message was reported:
```
Loading yolov10l.onnx for ONNX Runtime inference...
Using ONNX Runtime CUDAExecutionProvider
2025-01-14 17:05:21.3839491 [W:onnxruntime:, transformer_memcpy.cc:74 onnxruntime::MemcpyTransformer::ApplyImpl] 4 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
Traceback (most recent call last):
File "D:\ultralytics\temp2.py", line 19, in <module>
results = model(source, imgsz=640, half=True, device=0) # list of Results objects
File "D:\ultralytics\ultralytics\engine\model.py", line 180, in __call__
return self.predict(source, stream, **kwargs)
File "D:\ultralytics\ultralytics\engine\model.py", line 558, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File "D:\ultralytics\ultralytics\engine\predictor.py", line 173, in __call__
return list(self.stream_inference(source, model, *args, **kwargs)) # merge list of Result into one
File "C:\Users\xxx\miniconda3\envs\yolo\lib\site-packages\torch\utils\_contextlib.py", line 36, in generator_context
response = gen.send(None)
File "D:\ultralytics\ultralytics\engine\predictor.py", line 239, in stream_inference
self.model.warmup(imgsz=(1 if self.model.pt or self.model.triton else self.dataset.bs, 3, *self.imgsz))
File "D:\ultralytics\ultralytics\nn\autobackend.py", line 732, in warmup
self.forward(im) # warmup
File "D:\ultralytics\ultralytics\nn\autobackend.py", line 552, in forward
self.session.run_with_iobinding(self.io)
File "C:\Users\xxx\miniconda3\envs\yolo\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 377, in run_with_iobinding
self._sess.run_with_iobinding(iobinding._iobinding, run_options)
RuntimeError: Error in execution: Unexpected output data type. Actual: (tensor(float16)) , expected: (tensor(float))
```
The error message indicates that the operation expected a tensor of type `float`, which suggests the `--half` parameter did not take effect during the export of the ONNX model. However, when I attempted to perform inference without using the `--half` parameter:
```
results = model(source, imgsz=640, half=False, device=0) # list of Results objects
```
the following error occurred:
```
RuntimeError: Error in execution: Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(float16))
```
What could be the reason for this, and how can it be resolved?
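For background on the dtype mismatch itself: once a graph is exported in FP16, its tensors really are 16-bit floats, so input arrays have to actually be converted to half precision rather than merely flagged. A stdlib-only illustration of the FP32↔FP16 round trip involved (illustrative, not Ultralytics' preprocessing code):

```python
import struct

def roundtrip_fp16(x: float) -> float:
    """Pack a Python float into IEEE-754 half precision and unpack it again."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# 1.0 is exactly representable in fp16; 0.1 is not, so a little precision is lost.
```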
### Environment
Ultralytics 8.3.58 🚀 Python-3.10.16 torch-2.5.0 CUDA:0 (NVIDIA GeForce RTX 3090, 24576MiB)
Setup complete ✅ (20 CPUs, 127.7 GB RAM, 1896.9/3725.7 GB disk)
OS Windows-10-10.0.19045-SP0
Environment Windows
Python 3.10.16
Install pip
RAM 127.69 GB
Disk 1896.9/3725.7 GB
CPU Intel Core(TM) i9-10900K 3.70GHz
CPU count 20
GPU NVIDIA GeForce RTX 3090, 24576MiB
GPU count 1
CUDA 12.1
numpy ✅ 2.0.1>=1.23.0
numpy ✅ 2.0.1<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.0>=1.4.1
torch ✅ 2.5.0>=1.8.0
torch ✅ 2.5.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.13>=2.0.0
{'OS': 'Windows-10-10.0.19045-SP0', 'Environment': 'Windows', 'Python': '3.10.16', 'Install': 'pip', 'RAM': '127.69 GB', 'Disk': '1896.9/3725.7 GB', 'CPU': 'Intel Core(TM) i9-10900K 3.70GHz', 'CPU count': 20, 'GPU': 'NVIDIA GeForce RTX 3090, 24576MiB', 'GPU count': 1, 'CUDA': '12.1', 'Package Info': {'numpy': '✅ 2.0.1<2.0.0; sys_platform == "darwin"', 'matplotlib': '✅ 3.10.0>=3.3.0', 'opencv-python': '✅ 4.10.0.84>=4.6.0', 'pillow': '✅ 11.0.0>=7.1.2', 'pyyaml': '✅ 6.0.2>=5.3.1', 'requests': '✅ 2.32.3>=2.23.0', 'scipy': '✅ 1.15.0>=1.4.1', 'torch': '✅ 2.5.0!=2.4.0,>=1.8.0; sys_platform == "win32"', 'torchvision': '✅ 0.20.0>=0.9.0', 'tqdm': '✅ 4.67.1>=4.64.0', 'psutil': '✅ 6.1.1', 'py-cpuinfo': '✅ 9.0.0', 'pandas': '✅ 2.2.3>=1.1.4', 'seaborn': '✅ 0.13.2>=0.11.0', 'ultralytics-thop': '✅ 2.0.13>=2.0.0'}}
### Minimal Reproducible Example
```
from ultralytics import YOLO
# Load a model
model = YOLO("yolov10l.pt") # load an official model
# Export the model
model.export(format="onnx", imgsz=(640, 640), half=True, device="cuda:0")
model = YOLO(r"yolov10l.onnx", task="detect")
# Define path to the image file
source = r"000000001268.jpg"
# Run inference on the source
results = model(source, imgsz=640, half=True, device=0) # list of Results objects
```
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | open | 2025-01-14T09:19:14Z | 2025-02-14T00:19:55Z | https://github.com/ultralytics/ultralytics/issues/18677 | [
"bug",
"Stale",
"exports"
] | kongyjjj | 4 |
vitalik/django-ninja | rest-api | 1,216 | Pagination does not work with code response - [BUG] | **Describe the bug**
If you add the `@paginate` decorator, you can't return a tuple whose first element is the response code, because it throws a `ValidationError`.
Both return paths in the following code throw `ValidationError`:
```
@router.get("/{post_id}/replies/", response={200: List[ReplyResponse], 400: Error, 404: Error})
@paginate
def list_replies_for_post(request, post_id: str):
try:
post = get_post_by_id(post_id)
if post is None:
raise Post.DoesNotExist(f"Post with id {post_id} does not exist")
replies = Post.approved.filter(parent=post)
return 200, replies
except Post.DoesNotExist as e:
return 404, {"message": str(e)}
```
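A toy model of why the tuple form breaks — this is not Ninja's actual implementation, just a sketch of the mechanism: the pagination wrapper treats the view's entire return value as the item collection, so a `(status, body)` tuple gets sliced like data and the status code ends up as an "item":

```python
def toy_paginate(view):
    """Sketch of a pagination wrapper that slices whatever the view returns."""
    def wrapper(*args, limit=2, offset=0, **kwargs):
        result = view(*args, **kwargs)
        return result[offset:offset + limit]
    return wrapper

@toy_paginate
def ok_view():
    return [1, 2, 3, 4]

@toy_paginate
def error_view():
    return 404, {"message": "not found"}
```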
**Versions (please complete the following information):**
- Python version: 3.12
- Django version: 5.0.6
- Django-Ninja version: 1.1.0
- Pydantic version: 2.7.4
Note you can quickly get this by runninng in `./manage.py shell` this line:
```
import django; import pydantic; import ninja; django.__version__; ninja.__version__; pydantic.__version__
```
| closed | 2024-07-03T00:18:16Z | 2024-07-05T00:14:30Z | https://github.com/vitalik/django-ninja/issues/1216 | [] | pertile | 1 |
xinntao/Real-ESRGAN | pytorch | 48 | Will GFPGAN be supported in the portable NCNN builds? | Or does it rely on Pytorch? | open | 2021-08-22T18:59:45Z | 2021-08-29T16:05:06Z | https://github.com/xinntao/Real-ESRGAN/issues/48 | [
"helper wanted"
] | n00mkrad | 1 |
flasgger/flasgger | flask | 378 | Unable to validate formData when spec is in docstring | In the project I'm working on, the endpoint specifications are located in the docstrings of the endpoints. For example:
```py
@app.route("/foo")
def get_foo():
"""
Get foo.
---
tags: [Foo]
parameters:
# ...
```
I have been able to use Flasgger to validate POST body content **that is described by a schema**; by using the `@swagger.validate(...)` decorator. For example:
```py
@app.route("/foo")
@swagger.validate('MySchemaId')
def get_foo():
```
...where the schema whose ID is `MySchemaId` is defined in either the same or a different endpoint specification.
However, I have not been able to use Flasgger to validate **form data**. 🔥
For example, here's the endpoint specification I'm dealing with now (minus my employer's proprietary parts). Note: In reality, this is still in the docstring of an endpoint; I'm just formatting it as a YAML snippet here to facilitate reading.
```yaml
Store a file in an Amazon S3 bucket.
---
tags: [SomeTag]
consumes:
- multipart/form-data
parameters:
- name: file
in: formData
required: true
type: file
description: The file you want to store
- name: file_name
in: formData
description: The name you want the file to have while in storage
required: true
type: string
enum:
- data.csv
- summary.txt
responses:
200: {description: The file was stored successfully.}
```
Notice there is no schema in this case.
1) Can Flasgger validate `formData` described in a docstring (as opposed to described in a dedicated YAML file)?
2) As I wrote above, I am familiar with getting Flasgger to validate things that are described by a schema. Is it possible to define a schema that describes those two parameters, neither of which is an object that has properties? | open | 2020-04-06T22:14:32Z | 2020-04-06T22:21:51Z | https://github.com/flasgger/flasgger/issues/378 | [] | ongopongo | 0 |
pytorch/vision | computer-vision | 8,047 | '_OpNamespace' 'image' object has no attribute 'decode_jpeg | ### 🐛 Describe the bug
The method `decode_jpeg ` failed.
```python
File "/cache/lightning/src/lightning/data/cache/serializers.py", line 111, in deserialize
return decode_jpeg(array)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torchvision/io/image.py", line 170, in decode_jpeg
output = torch.ops.image.decode_jpeg(input, mode.value)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/_ops.py", line 761, in __getattr__
raise AttributeError(
AttributeError: '_OpNamespace' 'image' object has no attribute 'decode_jpeg
```
```python
Original Traceback (most recent call last):
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/_ops.py", line 757, in __getattr__
op, overload_names = torch._C._jit_get_operation(qualified_op_name)
RuntimeError: No such operator image::decode_jpeg
```
### Versions
```python
collect_env.py 100%[======================================================================>] 21.23K --.-KB/s in 0s
2023-10-13 09:26:19 (55.4 MB/s) - ‘collect_env.py’ saved [21737/21737]
⚡ ~ python collect_env.py
Collecting environment information...
PyTorch version: 2.1.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1047-aws-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.996
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB
L1i cache: 64 KiB
L2 cache: 2 MiB
L3 cache: 35.8 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] pytorch-lightning==2.0.5
[pip3] torch==2.1.0+cpu
[pip3] torchaudio==2.1.0+cpu
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.24.3 pypi_0 pypi
[conda] pytorch-lightning 2.0.5 pypi_0 pypi
[conda] torch 2.1.0+cpu pypi_0 pypi
[conda] torchaudio 2.1.0+cpu pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
``` | closed | 2023-10-13T09:26:52Z | 2023-10-18T18:18:29Z | https://github.com/pytorch/vision/issues/8047 | [] | tchaton | 3 |
encode/databases | asyncio | 325 | Raised dialect-specific exception instead of generic ones | Hi.
I'm writing a very basic CRUD api server.
I'm noticing that exceptions raised by `database.execute` are dialect-specific:
```python
@database.transaction()
async def db_store_user(user: User):
query = users.insert().values(**user.dict(exclude_none=True))
try:
await database.execute(query)
    except sqlalchemy.exc.IntegrityError:
        raise ValueError("User already present")
    except sqlalchemy.exc.SQLAlchemyError:
        raise ValueError("User already present")
except Exception as e:
print(type(e))
--> <class 'sqlite3.IntegrityError'>
```
Am I doing something wrong? Is there a way to catch the more generic `IntegrityError`?
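Until a unified exception hierarchy exists, one pragmatic workaround (a hedged sketch, not a `databases` API) is to match on the class name anywhere in the exception's MRO, which catches each driver's own `IntegrityError` without importing every driver:

```python
import sqlite3

def is_integrity_error(exc: BaseException) -> bool:
    """True if any class in the exception's MRO is named 'IntegrityError'."""
    return any(t.__name__ == "IntegrityError" for t in type(exc).__mro__)

# Matches sqlite3.IntegrityError; by the same name-based rule it would also
# match the IntegrityError types raised by other drivers.
```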
Thanks for the support. | closed | 2021-04-24T13:50:47Z | 2021-11-24T15:09:44Z | https://github.com/encode/databases/issues/325 | [] | Pedrozena | 3 |
serengil/deepface | machine-learning | 1,452 | [Classification] regarding the DeepFace library | ### Suggested Changes
Hi there,
I'm using the `DeepFace.represent` function, and I set `detector_backend` to `yolov8`.
Can I use that for commercial purposes?
| closed | 2025-03-12T08:45:41Z | 2025-03-12T09:36:34Z | https://github.com/serengil/deepface/issues/1452 | [
"documentation",
"question"
] | Mohankrish08 | 1 |
dask/dask | numpy | 11,515 | ⚠️ Upstream CI failed ⚠️ | [Workflow Run URL](https://github.com/dask/dask/actions/runs/11808903001)
<details><summary>Python 3.12 Test Summary</summary>
```
dask/array/tests/test_array_core.py::test_zarr_pass_store: TypeError: LocalStore.__init__() got an unexpected keyword argument 'mode'
dask/tests/test_order.py::test_array_store_final_order: TypeError: LocalStore.__init__() got an unexpected keyword argument 'mode'
dask/tests/test_distributed.py::test_zarr_distributed_with_explicit_directory_store: TypeError: LocalStore.__init__() got an unexpected keyword argument 'mode'
dask/tests/test_distributed.py::test_zarr_distributed_with_explicit_memory_store: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode'
```
</details>
| closed | 2024-11-13T01:58:24Z | 2024-11-13T16:12:30Z | https://github.com/dask/dask/issues/11515 | [
"upstream"
] | github-actions[bot] | 0 |
tableau/server-client-python | rest-api | 643 | View filters for PDF request options are not working | I am getting pdf with filters which is applied during publishing.
It would be good if you guide me to apply multiple filters
```
pdf_req_option = TSC.PDFRequestOptions(page_type=TSC.PDFRequestOptions.PageType.A4,orientation=TSC.PDFRequestOptions.Orientation.Landscape)
pdf_req_option.vf("P_Unit","ABCD")
server.workbooks.populate_pdf(workbook,options)
``` | open | 2020-07-08T07:06:28Z | 2024-09-19T21:48:57Z | https://github.com/tableau/server-client-python/issues/643 | [
"help wanted"
] | vignesh1609 | 3 |
frappe/frappe | rest-api | 29,887 | Print Format HTML Table unexpected spacing | ## Description of the issue
I am hitting a bug with wkhtmltopdf which causes extra spacing in some cells; it can be seen in rows #12 and #24.
I checked and found that the spacing does not come from the data, and it does not appear in the print preview either, but it somehow shows up in the PDF.
wkhtmltopdf: wkhtmltopdf 0.12.6 (with patched qt)
Frappe Framework: v15.27.0
ERPNext: v15.24.0
**Output of `bench version`**
```
erpnext 15.24.0
frappe 15.27.0
payments 0.0.1
```
## Steps to reproduce the issue
1. Create custom print format with HTML and CSS provided below
```HTML
<div id="header-html">
<div style="">
<table width="100%" style="border: 2px solid black;">
<tr>
<td width="49%" style="border-right:2px solid black;">
<div>
<b>{{ _("Lade-Liste") }}</b><br>
{{ _("Test") }}<br>
{{ _("Kunde") }}: {{ doc.customer }}
</div>
</td>
<td width="23%"style="border-right:2px solid black;">
<div>
{{ _("BM = Bestell - Menge") }}<br>
{{ _("GM = Gelieferte - Menge") }}<br>
{{ _("OM = Offene - Menge") }}<br>
</div>
</td>
<td width="28%" style="border-right:2px solid black;">
<div>
{{ _("Datum") }}: {{ doc.get_formatted("delivery_date")}}<br>
{{ _("Seite: {0}/{1}").format('<span class="page"></span>', '<span class="topage"></span>') }}
</div>
</td>
</tr>
</table>
</div>
</div>
<div>
<table width="100%" class="table-bordered" >
<thead>
<tr>
<td width="4%">{{ _("LS") }}</td>
<td width="4%">{{ _("vst.") }}</td>
<td width="13%">{{ _("Menge") }}</td>
<td width="7%">{{ _("Palette") }}</td>
<td width="8%">{{ _("Auftrag") }}</td>
<td width="11%">{{ _("Intern") }}</td>
<td width="14%">{{ _("Artikel-Nr.") }}</td>
<td width="19%">{{ _("Charge") }}</td>
<td width="5%">{{ _("BM") }}</td>
<td width="5%">{{ _("GM") }}</td>
<td width="5%">{{ _("OM") }}</td>
<td width="5%">{{ _("Lief.Termin") }}</td>
</tr>
</thead>
<tbody>
{% for item in doc.items %}
<tr>
<td><div style="height: 68px;">{{ item.idx }}</div></td>
<td></td>
<td></td>
<td></td>
<td style="word-wrap:break-word;"><div style="width:40px">{{ doc.po_no or "" }}</div></td>
<td style="word-wrap:break-word;"><div style="width:66px">{{ doc.name }}</div></td>
            <td style="word-wrap:break-word;"><div style="width:105px">{{ item.item_code }}</div></td>
<td></td>
<td>{{ item.get_formatted("qty") }}</td>
<td>{{ item.get_formatted("delivered_qty") }}</td>
<td>{{ frappe.utils.cint(item.qty - item.delivered_qty) }}</td>
<td>{{ frappe.utils.get_datetime(doc.delivery_date).strftime("%d.%m.%y") }}</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
```
```CSS
.print-format {
font-family: 'ArialGreek';
font-size: 3.5mm;
padding-left: 6mm;
padding-right: 6mm;
margin-left: 0mm;
margin-right: 0mm;
margin-top: 33mm;
margin-bottom: 1mm;
}
.table-bordered > thead > tr > th, .table-bordered > tbody > tr > th, .table-bordered > tfoot > tr > th, .table-bordered > thead > tr > td, .table-bordered > tbody > tr > td, .table-bordered > tfoot > tr > td {
border: 2px solid black !important;
}
```
### Observed result
Unexpected height of HTML table rows
PDF link: [AU25-0027.pdf](https://drive.google.com/file/d/1iyzVu27Ci-84crYPq-q4bIyB-7IHC8tC/view)
### Expected result
All rows should have equal height
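For what it's worth, wkhtmltopdf's patched-QT WebKit engine often mis-computes row heights when only one cell in a row carries an explicit height (here, the inner `div` in the first column). A hedged CSS sketch that tends to stabilize row heights under wkhtmltopdf 0.12.x — the `68px` value mirrors the inline style in the markup above and is illustrative, not a verified fix:

```CSS
.table-bordered {
    /* fixed layout makes old WebKit distribute column widths and
       row heights predictably instead of re-measuring wrapped content */
    table-layout: fixed;
    border-collapse: collapse;
}
.table-bordered > tbody > tr > td {
    /* pin every data cell, not just the first column's inner div */
    height: 68px;
    overflow: hidden;
    vertical-align: top;
    word-wrap: break-word;
}
```

If rows still differ after this, checking the generated HTML for unclosed tags is also worthwhile, since wkhtmltopdf recovers from malformed markup less gracefully than modern browsers do.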
## Additional information
https://discuss.frappe.io/t/print-format-html-table-unpredicted-spacing/140971 | open | 2025-01-22T09:18:07Z | 2025-01-25T05:50:53Z | https://github.com/frappe/frappe/issues/29887 | [
"bug"
] | beingeek | 1 |
deepinsight/insightface | pytorch | 1,831 | arcface_torch resume training | Hi, thanks for your great repo and implementation of arcface_torch.
I am training resnet18 on the WebFace dataset; after a keyboard interrupt, resuming training starts over from epoch 0 instead of the saved epoch.
Here is the part I changed to make resuming work (I have one GPU, so I only tested it on a single GPU).
The changed lines are marked with `# changed`.
main.py
```python
import argparse
import logging
import os
import torch
import torch.distributed as dist
import torch.nn.functional as F
import torch.utils.data.distributed
from torch.nn.utils import clip_grad_norm_
import losses
from backbones import get_model
from dataset import MXFaceDataset, SyntheticDataset, DataLoaderX
from partial_fc import PartialFC
from utils.utils_amp import MaxClipGradScaler
# changed
from utils.utils_callbacks import CallBackVerification, CallBackLogging, CallBackLoggingResume, CallBackModelCheckpoint, CallBackModelCheckpointResume
from utils.utils_config import get_config
from utils.utils_logging import AverageMeter, init_logging
def main(args):
cfg = get_config(args.config)
try:
world_size = int(os.environ['WORLD_SIZE'])
rank = int(os.environ['RANK'])
dist.init_process_group('nccl')
except KeyError:
world_size = 1
rank = 0
dist.init_process_group(backend='nccl', init_method="tcp://127.0.0.1:12584", rank=rank, world_size=world_size)
local_rank = args.local_rank
torch.cuda.set_device(local_rank)
os.makedirs(cfg.output, exist_ok=True)
init_logging(rank, cfg.output)
if cfg.rec == "synthetic":
train_set = SyntheticDataset(local_rank=local_rank)
else:
train_set = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank)
num_image = len(train_set)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True)
train_loader = DataLoaderX(
local_rank=local_rank, dataset=train_set, batch_size=cfg.batch_size,
sampler=train_sampler, num_workers=2, pin_memory=True, drop_last=True)
backbone = get_model(cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).to(local_rank)
total_batch_size = cfg.batch_size * world_size
cfg.warmup_step = num_image // total_batch_size * cfg.warmup_epoch
cfg.total_step = num_image // total_batch_size * cfg.num_epoch
def lr_step_func(current_step):
cfg.decay_step = [x * num_image // total_batch_size for x in cfg.decay_epoch]
if current_step < cfg.warmup_step:
return current_step / cfg.warmup_step
else:
return 0.1 ** len([m for m in cfg.decay_step if m <= current_step])
# changed
if cfg.resume:
try:
backbone_pth = os.path.join(cfg.output, "savedckpt.pth")
savedckpt = torch.load(backbone_pth, map_location=torch.device(local_rank))
start_epoch = savedckpt['epoch'] + 1
global_step = int(num_image/cfg.batch_size) * (savedckpt['epoch']+1) + 1
backbone.load_state_dict(savedckpt['backbone'].module.state_dict())
if rank == 0:
logging.info("backbone resume successfully!")
except (FileNotFoundError, KeyError, IndexError, RuntimeError):
if rank == 0:
logging.info("resume fail, backbone init successfully!")
else:
start_epoch = 0
global_step = 0
backbone = torch.nn.parallel.DistributedDataParallel(
module=backbone, broadcast_buffers=False, device_ids=[local_rank])
backbone.train()
margin_softmax = losses.get_loss(cfg.loss)
module_partial_fc = PartialFC(
rank=rank, local_rank=local_rank, world_size=world_size, resume=cfg.resume,
batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes,
sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output)
opt_backbone = torch.optim.SGD(
params=[{'params': backbone.parameters()}],
lr=cfg.lr / 512 * cfg.batch_size * world_size,
momentum=0.9, weight_decay=cfg.weight_decay)
opt_pfc = torch.optim.SGD(
params=[{'params': module_partial_fc.parameters()}],
lr=cfg.lr / 512 * cfg.batch_size * world_size,
momentum=0.9, weight_decay=cfg.weight_decay)
scheduler_backbone = torch.optim.lr_scheduler.LambdaLR(
optimizer=opt_backbone, lr_lambda=lr_step_func)
scheduler_pfc = torch.optim.lr_scheduler.LambdaLR(
optimizer=opt_pfc, lr_lambda=lr_step_func)
for key, value in cfg.items():
num_space = 25 - len(key)
logging.info(": " + key + " " * num_space + str(value))
val_target = cfg.val_targets
callback_verification = CallBackVerification(2000, rank, val_target, cfg.rec)
# changed
callback_logging = CallBackLoggingResume(50, rank, cfg.total_step, global_step, cfg.batch_size, world_size, None)
callback_checkpoint = CallBackModelCheckpointResume(rank, cfg.output)
loss = AverageMeter()
grad_amp = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None
for epoch in range(start_epoch, cfg.num_epoch):
train_sampler.set_epoch(epoch)
for step, (img, label) in enumerate(train_loader):
global_step += 1
features = F.normalize(backbone(img))
x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc)
if cfg.fp16:
features.backward(grad_amp.scale(x_grad))
grad_amp.unscale_(opt_backbone)
clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2)
grad_amp.step(opt_backbone)
grad_amp.update()
else:
features.backward(x_grad)
clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2)
opt_backbone.step()
opt_pfc.step()
module_partial_fc.update()
opt_backbone.zero_grad()
opt_pfc.zero_grad()
loss.update(loss_v, 1)
callback_logging(global_step, loss, epoch, cfg.fp16, scheduler_backbone.get_last_lr()[0], grad_amp)
callback_verification(global_step, backbone)
scheduler_backbone.step()
scheduler_pfc.step()
callback_checkpoint(global_step, epoch, backbone, module_partial_fc)
dist.destroy_process_group()
if __name__ == "__main__":
torch.backends.cudnn.benchmark = True
parser = argparse.ArgumentParser(description='PyTorch ArcFace Training')
parser.add_argument('config', type=str, help='py config file')
parser.add_argument('--local_rank', type=int, default=0, help='local_rank')
main(parser.parse_args())
```
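The resume bookkeeping in `main.py` above boils down to recomputing the counters from the saved epoch. A pure-Python sketch of that arithmetic (function and variable names are illustrative, not from the repo):

```python
def resume_counters(saved_epoch, num_images, batch_size):
    """Recompute training counters after loading a checkpoint.

    Mirrors the logic above: training restarts at the epoch after the
    last completed one, and global_step counts the batches already
    consumed, plus one to match the original code's convention.
    """
    steps_per_epoch = num_images // batch_size
    start_epoch = saved_epoch + 1
    global_step = steps_per_epoch * (saved_epoch + 1) + 1
    return start_epoch, global_step

# e.g. a checkpoint saved at the end of epoch 4:
start_epoch, global_step = resume_counters(4, 490_623, 128)
```

Note that the `+ 1` comes straight from the patch above; whether it belongs there is a design choice carried over from the original code rather than a requirement.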
In utils_callbacks.py, I changed the following two callbacks (only adding one or two lines each):
CallBackLoggingResume
```python
class CallBackLoggingResume(object):
def __init__(self, frequent, rank, total_step, start_step, batch_size, world_size, writer=None):
self.frequent: int = frequent
self.rank: int = rank
self.time_start = time.time()
# added
self.start_step: int = start_step
self.total_step: int = total_step
self.batch_size: int = batch_size
self.world_size: int = world_size
self.writer = writer
self.init = False
self.tic = 0
def __call__(self,
global_step: int,
loss: AverageMeter,
epoch: int,
fp16: bool,
learning_rate: float,
grad_scaler: torch.cuda.amp.GradScaler):
if self.rank == 0 and global_step > 0 and global_step % self.frequent == 0:
if self.init:
try:
speed: float = self.frequent * self.batch_size / (time.time() - self.tic)
speed_total = speed * self.world_size
except ZeroDivisionError:
speed_total = float('inf')
time_now = (time.time() - self.time_start) / 3600
# changed
time_total = time_now / ((global_step-self.start_step + 1) / self.total_step)
time_for_end = time_total - time_now
if self.writer is not None:
self.writer.add_scalar('time_for_end', time_for_end, global_step)
self.writer.add_scalar('learning_rate', learning_rate, global_step)
self.writer.add_scalar('loss', loss.avg, global_step)
if fp16:
msg = "Speed %.2f samples/sec Loss %.4f LearningRate %.4f Epoch: %d Global Step: %d " \
"Fp16 Grad Scale: %2.f Required: %1.f hours" % (
speed_total, loss.avg, learning_rate, epoch, global_step,
grad_scaler.get_scale(), time_for_end
)
else:
msg = "Speed %.2f samples/sec Loss %.4f LearningRate %.4f Epoch: %d Global Step: %d " \
"Required: %1.f hours" % (
speed_total, loss.avg, learning_rate, epoch, global_step, time_for_end
)
logging.info(msg)
loss.reset()
self.tic = time.time()
else:
self.init = True
self.tic = time.time()
```
and CallBackModelCheckpointResume
```python
class CallBackModelCheckpointResume(object):
def __init__(self, rank, output="./"):
self.rank: int = rank
self.output: str = output
def __call__(self, global_step, epoch, backbone, partial_fc):
if global_step > 100 and self.rank == 0:
path_module = os.path.join(self.output, "savedckpt.pth")
# changed
state = {'epoch': epoch, 'backbone': backbone}
torch.save(state, path_module)
logging.info("Pytorch Model Saved in '{}'".format(path_module))
if global_step > 100 and partial_fc is not None:
partial_fc.save_params()
```
| closed | 2021-11-17T07:31:47Z | 2022-05-05T02:13:44Z | https://github.com/deepinsight/insightface/issues/1831 | [] | lizhenstat | 2 |
amidaware/tacticalrmm | django | 1,076 | Take Control not connecting | **Server Info:**
- OS: Ubuntu 20.04
- Browser: chrome
- RMM Version: v0.12.4
**Installation Method:**
- [X] Standard
- [ ] Docker
**Agent Info:**
- Agent version: Agent v2.0.2
- Agent OS: Windows 10/11
**Describe the bug**
The agent works just fine, but when I try to use "Take Control" it never connects. I believe this could be a DNS/reverse-proxy issue, as we are running the RMM server behind an Nginx reverse proxy.
NGINX config:
```
server {
listen 80;
server_name
remotermm.site.com api.site.com rmm.site.com;
location / {
proxy_pass http://xxx.xxx.xxx.xxx:80/;
proxy_http_version 1.1;
# Inform MeshCentral about the real host, port and protocol
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# HTTPS server. In this example, we use a wildcard as server name.
server {
listen 443 ssl;
server_name
remotermm.site.com api.site.com rmm.site.com;
# MeshCentral uses long standing web socket connections, set longer timeouts.
proxy_send_timeout 330s;
proxy_read_timeout 330s;
# We can use the MeshCentral generated certificate & key
ssl_certificate /www/server/panel/vhost/cert/remotermm.site.com/fullchain.pem;
ssl_certificate_key /www/server/panel/vhost/cert/remotermm.site.com/privkey.pem;
ssl_session_cache shared:WEBSSL:10m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
proxy_pass https://xxx.xxx.xxx.xxx:443/;
proxy_http_version 1.1;
# Allows websockets over HTTPS.
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
# Inform MeshCentral about the real host, port and protocol
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
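If agent or browser traffic ever arrives over plain HTTP, note that the port-80 server block above proxies without the websocket `Upgrade`/`Connection` headers that the 443 block sets, and Take Control relies on long-lived websocket connections. A hedged sketch of the same `location` block with websocket support added (the upstream address is the placeholder from the config above):

```
    location / {
        proxy_pass http://xxx.xxx.xxx.xxx:80/;
        proxy_http_version 1.1;
        # Allow websocket upgrades over HTTP too, mirroring the 443 block
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
```

Alternatively, redirecting HTTP to HTTPS (`return 301 https://$host$request_uri;`) in the port-80 block sidesteps the question entirely.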
**To Reproduce**
Steps to reproduce the behavior:
1. Login
2. Navigate to the computer
3. Click "Take Control"
The pop-up opens but never connects.

**Expected behavior**
Should open the remote screen
**Screenshots**

**Additional context**
This is a new installation, got the agents to work, but mesh is not connecting apparently
| closed | 2022-04-20T19:47:36Z | 2022-04-20T19:49:56Z | https://github.com/amidaware/tacticalrmm/issues/1076 | [] | ccasalicchio | 0 |
apify/crawlee-python | automation | 388 | accept patch/minor/major as the release type | https://github.com/orhun/git-cliff/pull/744 introduced this functionality; let's use it after it's released. | closed | 2024-08-01T20:18:55Z | 2024-11-13T14:18:46Z | https://github.com/apify/crawlee-python/issues/388 | [
"t-tooling"
] | janbuchar | 0 |
proplot-dev/proplot | data-visualization | 222 | Proplot in jupyterlab generate continuous cpu usage | <!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
Proplot plots keep JupyterLab busy all the time.
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
import proplot as pp
%matplotlib widget
f, axs = pp.subplots()
axs.plot([1, 5], [6, 7])
```
**Expected behavior**: Once the plot is drawn, JupyterLab should return to "idle" mode (when the mouse is not moving).
**Actual behavior**: Once the plot is drawn, JupyterLab switches between idle and busy very rapidly and keeps the CPU loaded.
### Equivalent steps in matplotlib
Please make sure this bug is related to a specific proplot feature. If you're not sure, try to replicate it with the [native matplotlib API](https://matplotlib.org/3.1.1/api/index.html). Matplotlib bugs belong on the [matplotlib github page](https://github.com/matplotlib/matplotlib).
```python
import matplotlib.pyplot as plt
%matplotlib widget
plt.plot([1, 5], [6, 7])
```
### Proplot version
Proplot: 0.6.1
Matplotlib: 3.3.1
jupyterlab: 2.2.6
ipympl: 0.5.7
| closed | 2020-08-26T07:54:48Z | 2021-06-30T02:39:40Z | https://github.com/proplot-dev/proplot/issues/222 | [
"bug"
] | ghislainp | 1 |
slackapi/python-slack-sdk | asyncio | 1,325 | files_upload_v2 is not working when bot is installed in multiple workspace | We are using `files_upload_v2` to upload a file to a private channel that belongs to one workspace, while our Slack bot is installed in multiple workspaces. The call fails with a `channel_not_found` error; it seems `files.completeUploadExternal` is looking in a different workspace.
Is there an argument to specify the team ID?
```python
return self._client.files_upload_v2(**param)
File "/usr/local/lib/python3.8/site-packages/slack_sdk/web/client.py", line 3100, in files_upload_v2
completion = self.files_completeUploadExternal(
File "/usr/local/lib/python3.8/site-packages/slack_sdk/web/client.py", line 3157, in files_completeUploadExternal
return self.api_call("files.completeUploadExternal", params=kwargs)
File "/usr/local/lib/python3.8/site-packages/slack_sdk/web/base_client.py", line 156, in api_call
return self._sync_send(api_url=api_url, req_args=req_args)
File "/usr/local/lib/python3.8/site-packages/slack_sdk/web/base_client.py", line 187, in _sync_send
return self._urllib_api_call(
File "/usr/local/lib/python3.8/site-packages/slack_sdk/web/base_client.py", line 309, in _urllib_api_call
return SlackResponse(
File "/usr/local/lib/python3.8/site-packages/slack_sdk/web/slack_response.py", line 189, in validate
raise e.SlackApiError(message=msg, response=self)
slack_sdk.errors.SlackApiError: The request to the Slack API failed. (url: https://www.slack.com/api/files.completeUploadExternal)
The server responded with: {'ok': False, 'error': 'channel_not_found'}
```
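For context on the question above: in a multi-workspace install, each workspace has its own bot token, and the usual way to make `files.completeUploadExternal` resolve the channel in the right workspace is to construct the `WebClient` with that workspace's token before uploading. A minimal sketch — the token store and team IDs are illustrative, not part of the SDK:

```python
# Hypothetical per-workspace token store; in a real multi-workspace
# install these come from your OAuth installation store.
TOKENS_BY_TEAM = {
    "T0001": "xoxb-token-for-workspace-one",
    "T0002": "xoxb-token-for-workspace-two",
}

def token_for_team(team_id):
    """Return the bot token for the workspace that owns the channel."""
    try:
        return TOKENS_BY_TEAM[team_id]
    except KeyError:
        raise KeyError(f"bot is not installed in workspace {team_id}")

def upload_for_team(team_id, channel_id, path):
    # Imported here so the token-selection logic above stays
    # importable without slack_sdk installed.
    from slack_sdk import WebClient

    # Building the client with the owning workspace's token makes the
    # follow-up files.completeUploadExternal call look up the channel
    # in that workspace rather than a different one.
    client = WebClient(token=token_for_team(team_id))
    return client.files_upload_v2(channel=channel_id, file=path)
```

With this in place, `upload_for_team("T0001", "C12345", "report.pdf")` would upload using workspace T0001's token rather than whichever token the shared client happened to hold.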
| closed | 2023-02-08T09:58:18Z | 2023-02-09T22:51:34Z | https://github.com/slackapi/python-slack-sdk/issues/1325 | [
"question",
"duplicate"
] | maariselvamm | 7 |