| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pyjanitor-devs/pyjanitor | pandas | 544 | Welcome Zijie to the team! | On the basis of his contribution to extend pyjanitor to be compatible with pyspark, I have added @zjpoh to the team. Welcome on board!
Zijie and @anzelpwj have the most experience amongst us with pyspark, and I'm looking forward to seeing their contributions!
cc: @zbarry @szuckerman @hectormz @shandou @sallyhong @anzelpwj @jk3587 | closed | 2019-08-23T21:57:33Z | 2019-09-06T23:16:42Z | https://github.com/pyjanitor-devs/pyjanitor/issues/544 | [] | ericmjl | 2 |
Anjok07/ultimatevocalremovergui | pytorch | 724 | Performance decreases with high batch size | If I set the batch size to a value like 10, I get higher RAM usage (around 100 GB of 128 GB), but performance actually seems to get worse. Is this expected? | open | 2023-08-07T18:51:38Z | 2023-08-08T00:07:00Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/724 | [] | GilbertoRodrigues | 0 |
flasgger/flasgger | rest-api | 623 | Can syntax highlighting be supported? | In future planning, can syntax highlighting be used in description? | open | 2024-08-13T04:21:10Z | 2024-08-13T04:23:01Z | https://github.com/flasgger/flasgger/issues/623 | [] | rice0524168 | 0 |
OpenInterpreter/open-interpreter | python | 1,109 | Is the %% [commands] command not implemented yet? Or is it a bug? | ### Describe the bug
I found the following message when I looked at %help.
<img width="401" alt="image" src="https://github.com/OpenInterpreter/open-interpreter/assets/2724312/ae75d188-8ffa-413d-a5c7-3f61dcc9be44">
According to this Help, `%% [commands]` seems to allow you to execute shell commands in the Interpreter console.
I found this to be a very nice feature.
However, when I typed the following command on Open Interpreter, nothing was executed.
Could this possibly not yet be implemented?

Or is it a bug?
If anyone knows anything about the behavior of this command, please let me know.
### Reproduce
1. Open Interpreter
2. Execute command `%% ls`
### Expected behavior
1. A list of files in the current directory is displayed
### Screenshots
_No response_
### Open Interpreter version
0.2.3
### Python version
3.10.12
### Operating System name and version
Windows 11 WSL/Ubuntu 22.04.4 LTS
### Additional context
_No response_ | open | 2024-03-22T10:30:08Z | 2024-03-31T02:12:42Z | https://github.com/OpenInterpreter/open-interpreter/issues/1109 | [
"Bug",
"Help Required",
"Triaged"
] | ynott | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,201 | No gradient penalty in WGAN-GP | When computing WGAN-GP loss, the cal_gradient_penalty function is not called, and gradient penalty is not applied. | open | 2020-11-29T14:58:05Z | 2020-11-29T14:58:05Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1201 | [] | GwangPyo | 0 |
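For context, the standard WGAN-GP penalty term that `cal_gradient_penalty` is supposed to contribute can be sketched in PyTorch roughly as below. This is an illustrative version only, not the repository's actual implementation; the function name, signature, and variable names are all hypothetical.

```python
def gradient_penalty(netD, real, fake, device="cpu", lambda_gp=10.0):
    """Illustrative WGAN-GP term; names are hypothetical, not the repo's API."""
    import torch  # local import so the sketch is importable without torch installed

    # Random interpolation between real and fake samples (per-sample alpha)
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)

    d_interpolates = netD(interpolates)

    # Gradient of the critic score w.r.t. the interpolated inputs
    grads = torch.autograd.grad(
        outputs=d_interpolates, inputs=interpolates,
        grad_outputs=torch.ones_like(d_interpolates),
        create_graph=True, retain_graph=True, only_inputs=True,
    )[0]
    grads = grads.view(grads.size(0), -1)

    # Penalize deviation of the gradient norm from 1
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()
```

If the penalty is computed but never added to the critic loss (the situation this issue describes), the training effectively degenerates to plain WGAN weight behavior without the Lipschitz constraint.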
JaidedAI/EasyOCR | deep-learning | 320 | Can it be used in php? | Can it be used in php? | closed | 2020-12-02T06:33:20Z | 2022-03-02T09:24:09Z | https://github.com/JaidedAI/EasyOCR/issues/320 | [] | netwons | 0 |
hankcs/HanLP | nlp | 1,904 | pip install hanlp[full] pulls in tensorflow>=2.8.0 and installs the cached tensorflow-2.17.0-cp39-cp39-win_amd64.whl (2.0 kB) | <!--
Thanks for finding this bug. Please fill in the form below carefully:
-->
**Describe the bug**
A clear and concise description of what the bug is.
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
```python
# 在Windows 11 23H2系统上运行以下命令
pip install hanlp[full]
```
**Describe the current behavior**
On Windows 11 23H2 with Python 3.9.13, installing HanLP[full] automatically installs tensorflow==2.17.0, which does not match the version required by the project's setup.py, so the program cannot run. In addition, attempting to downgrade tensorflow runs into conflicts between multiple dependencies.
**Expected behavior**
Ideally, installing HanLP[full] would automatically pick a compatible tensorflow version, or at least offer guidance on resolving the dependency conflicts.
**System information**
-OS Platform and Distribution: Windows 11 23H2
-Python version: 3.9.13
-HanLP version: 2.1.0b59
**Other info / logs**
D:\>cd d:\Code-Compile\Language\Python39\Scripts
d:\Code-Compile\Language\Python39\Scripts>pip -V
pip 22.0.4 from D:\Code-Compile\Language\Python39\lib\site-packages\pip (python 3.9)
d:\Code-Compile\Language\Python39\Scripts>pip install hanlp[full]
Collecting hanlp[full]
Downloading hanlp-2.1.0b59-py3-none-any.whl (651 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 651.6/651.6 KB 42.8 MB/s eta 0:00:00
Collecting pynvml
Using cached pynvml-11.5.3-py3-none-any.whl (53 kB)
Collecting sentencepiece>=0.1.91
Using cached sentencepiece-0.2.0-cp39-cp39-win_amd64.whl (991 kB)
Collecting torch>=1.6.0
Using cached torch-2.4.0-cp39-cp39-win_amd64.whl (198.0 MB)
Collecting hanlp-downloader
Using cached hanlp_downloader-0.0.25-py3-none-any.whl
Collecting hanlp-trie>=0.0.4
Using cached hanlp_trie-0.0.5-py3-none-any.whl
Collecting termcolor
Using cached termcolor-2.4.0-py3-none-any.whl (7.7 kB)
Collecting hanlp-common>=0.0.20
Using cached hanlp_common-0.0.20-py3-none-any.whl
Collecting toposort==1.5
Using cached toposort-1.5-py2.py3-none-any.whl (7.6 kB)
Collecting transformers>=4.1.1
Downloading transformers-4.44.1-py3-none-any.whl (9.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.5/9.5 MB 28.7 MB/s eta 0:00:00
Collecting tensorflow>=2.8.0
Using cached tensorflow-2.17.0-cp39-cp39-win_amd64.whl (2.0 kB)
Collecting perin-parser>=0.0.12
Using cached perin_parser-0.0.14-py3-none-any.whl
Collecting penman==1.2.1
Using cached Penman-1.2.1-py3-none-any.whl (43 kB)
Collecting fasttext-wheel==0.9.2
Using cached fasttext_wheel-0.9.2-cp39-cp39-win_amd64.whl (225 kB)
Collecting networkx>=2.5.1
Using cached networkx-3.2.1-py3-none-any.whl (1.6 MB)
Collecting pybind11>=2.2
Using cached pybind11-2.13.4-py3-none-any.whl (240 kB)
Requirement already satisfied: setuptools>=0.7.0 in d:\code-compile\language\python39\lib\site-packages (from fasttext-wheel==0.9.2->hanlp[full]) (58.1.0)
Collecting numpy
Using cached numpy-2.0.1-cp39-cp39-win_amd64.whl (16.6 MB)
Collecting phrasetree>=0.0.9
Using cached phrasetree-0.0.9-py3-none-any.whl
Collecting scipy
Using cached scipy-1.13.1-cp39-cp39-win_amd64.whl (46.2 MB)
Collecting tensorflow-intel==2.17.0
Using cached tensorflow_intel-2.17.0-cp39-cp39-win_amd64.whl (385.0 MB)
Collecting libclang>=13.0.0
Using cached libclang-18.1.1-py2.py3-none-win_amd64.whl (26.4 MB)
Collecting google-pasta>=0.1.1
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting grpcio<2.0,>=1.24.3
Downloading grpcio-1.65.5-cp39-cp39-win_amd64.whl (4.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.1/4.1 MB 53.0 MB/s eta 0:00:00
Collecting astunparse>=1.6.0
Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting absl-py>=1.0.0
Using cached absl_py-2.1.0-py3-none-any.whl (133 kB)
Collecting protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3
Using cached protobuf-4.25.4-cp39-cp39-win_amd64.whl (413 kB)
Collecting wrapt>=1.11.0
Using cached wrapt-1.16.0-cp39-cp39-win_amd64.whl (37 kB)
Collecting h5py>=3.10.0
Using cached h5py-3.11.0-cp39-cp39-win_amd64.whl (3.0 MB)
Collecting ml-dtypes<0.5.0,>=0.3.1
Using cached ml_dtypes-0.4.0-cp39-cp39-win_amd64.whl (126 kB)
Collecting numpy
Using cached numpy-1.26.4-cp39-cp39-win_amd64.whl (15.8 MB)
Collecting keras>=3.2.0
Using cached keras-3.5.0-py3-none-any.whl (1.1 MB)
Collecting tensorboard<2.18,>=2.17
Using cached tensorboard-2.17.1-py3-none-any.whl (5.5 MB)
Collecting gast!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1
Using cached gast-0.6.0-py3-none-any.whl (21 kB)
Collecting opt-einsum>=2.3.2
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting typing-extensions>=3.6.6
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting flatbuffers>=24.3.25
Using cached flatbuffers-24.3.25-py2.py3-none-any.whl (26 kB)
Collecting requests<3,>=2.21.0
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Collecting packaging
Using cached packaging-24.1-py3-none-any.whl (53 kB)
Collecting tensorflow-io-gcs-filesystem>=0.23.1
Using cached tensorflow_io_gcs_filesystem-0.31.0-cp39-cp39-win_amd64.whl (1.5 MB)
Collecting six>=1.12.0
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting fsspec
Using cached fsspec-2024.6.1-py3-none-any.whl (177 kB)
Collecting sympy
Using cached sympy-1.13.2-py3-none-any.whl (6.2 MB)
Collecting filelock
Using cached filelock-3.15.4-py3-none-any.whl (16 kB)
Collecting jinja2
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Collecting pyyaml>=5.1
Using cached PyYAML-6.0.2-cp39-cp39-win_amd64.whl (162 kB)
Collecting safetensors>=0.4.1
Using cached safetensors-0.4.4-cp39-none-win_amd64.whl (286 kB)
Collecting tqdm>=4.27
Using cached tqdm-4.66.5-py3-none-any.whl (78 kB)
Collecting regex!=2019.12.17
Using cached regex-2024.7.24-cp39-cp39-win_amd64.whl (269 kB)
Collecting huggingface-hub<1.0,>=0.23.2
Downloading huggingface_hub-0.24.6-py3-none-any.whl (417 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 417.5/417.5 KB 25.5 MB/s eta 0:00:00
Collecting tokenizers<0.20,>=0.19
Using cached tokenizers-0.19.1-cp39-none-win_amd64.whl (2.2 MB)
Collecting idna<4,>=2.5
Using cached idna-3.7-py3-none-any.whl (66 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2024.7.4-py3-none-any.whl (162 kB)
Collecting urllib3<3,>=1.21.1
Using cached urllib3-2.2.2-py3-none-any.whl (121 kB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.3.2-cp39-cp39-win_amd64.whl (100 kB)
Collecting colorama
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.1.5-cp39-cp39-win_amd64.whl (17 kB)
Collecting mpmath<1.4,>=1.1.0
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Collecting wheel<1.0,>=0.23.0
Using cached wheel-0.44.0-py3-none-any.whl (67 kB)
Collecting rich
Using cached rich-13.7.1-py3-none-any.whl (240 kB)
Collecting namex
Using cached namex-0.0.8-py3-none-any.whl (5.8 kB)
Collecting optree
Using cached optree-0.12.1-cp39-cp39-win_amd64.whl (263 kB)
Collecting tensorboard-data-server<0.8.0,>=0.7.0
Using cached tensorboard_data_server-0.7.2-py3-none-any.whl (2.4 kB)
Collecting markdown>=2.6.8
Downloading Markdown-3.7-py3-none-any.whl (106 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 106.3/106.3 KB 6.0 MB/s eta 0:00:00
Collecting werkzeug>=1.0.1
Using cached werkzeug-3.0.3-py3-none-any.whl (227 kB)
Collecting importlib-metadata>=4.4
Downloading importlib_metadata-8.4.0-py3-none-any.whl (26 kB)
Collecting pygments<3.0.0,>=2.13.0
Using cached pygments-2.18.0-py3-none-any.whl (1.2 MB)
Collecting markdown-it-py>=2.2.0
Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
Collecting zipp>=0.5
Using cached zipp-3.20.0-py3-none-any.whl (9.4 kB)
Collecting mdurl~=0.1
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Installing collected packages: toposort, sentencepiece, phrasetree, penman, namex, mpmath, libclang, flatbuffers, zipp, wrapt, wheel, urllib3, typing-extensions, termcolor, tensorflow-io-gcs-filesystem, tensorboard-data-server, sympy, six, safetensors, regex, pyyaml, pynvml, pygments, pybind11, protobuf, packaging, numpy, networkx, mdurl, MarkupSafe, idna, hanlp-common, grpcio, gast, fsspec, filelock, colorama, charset-normalizer, certifi, absl-py, werkzeug, tqdm, scipy, requests, optree, opt-einsum, ml-dtypes, markdown-it-py, jinja2, importlib-metadata, hanlp-trie, h5py, google-pasta, fasttext-wheel, astunparse, torch, rich, perin-parser, markdown, huggingface-hub, hanlp-downloader, tokenizers, tensorboard, keras, transformers, tensorflow-intel, tensorflow, hanlp
WARNING: The script penman.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script wheel.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script isympy.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script pygmentize.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script pybind11-config.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script f2py.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script normalizer.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script tqdm.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script markdown-it.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts convert-caffe2-to-onnx.exe, convert-onnx-to-caffe2.exe and torchrun.exe are installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script markdown_py.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script huggingface-cli.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script tensorboard.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script transformers-cli.exe is installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts import_pb_to_tensorboard.exe, saved_model_cli.exe, tensorboard.exe, tf_upgrade_v2.exe, tflite_convert.exe, toco.exe and toco_from_protos.exe are installed in 'D:\Code-Compile\Language\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed MarkupSafe-2.1.5 absl-py-2.1.0 astunparse-1.6.3 certifi-2024.7.4 charset-normalizer-3.3.2 colorama-0.4.6 fasttext-wheel-0.9.2 filelock-3.15.4 flatbuffers-24.3.25 fsspec-2024.6.1 gast-0.6.0 google-pasta-0.2.0 grpcio-1.65.5 h5py-3.11.0 hanlp-2.1.0b59 hanlp-common-0.0.20 hanlp-downloader-0.0.25 hanlp-trie-0.0.5 huggingface-hub-0.24.6 idna-3.7 importlib-metadata-8.4.0 jinja2-3.1.4 keras-3.5.0 libclang-18.1.1 markdown-3.7 markdown-it-py-3.0.0 mdurl-0.1.2 ml-dtypes-0.4.0 mpmath-1.3.0 namex-0.0.8 networkx-3.2.1 numpy-1.26.4 opt-einsum-3.3.0 optree-0.12.1 packaging-24.1 penman-1.2.1 perin-parser-0.0.14 phrasetree-0.0.9 protobuf-4.25.4 pybind11-2.13.4 pygments-2.18.0 pynvml-11.5.3 pyyaml-6.0.2 regex-2024.7.24 requests-2.32.3 rich-13.7.1 safetensors-0.4.4 scipy-1.13.1 sentencepiece-0.2.0 six-1.16.0 sympy-1.13.2 tensorboard-2.17.1 tensorboard-data-server-0.7.2 tensorflow-2.17.0 tensorflow-intel-2.17.0 tensorflow-io-gcs-filesystem-0.31.0 termcolor-2.4.0 tokenizers-0.19.1 toposort-1.5 torch-2.4.0 tqdm-4.66.5 transformers-4.44.1 typing-extensions-4.12.2 urllib3-2.2.2 werkzeug-3.0.3 wheel-0.44.0 wrapt-1.16.0 zipp-3.20.0
WARNING: You are using pip version 22.0.4; however, version 24.2 is available.
You should consider upgrading via the 'D:\Code-Compile\Language\Python39\python.exe -m pip install --upgrade pip' command.
* [x] I've completed this form and searched the web for solutions.
<!-- ⬆️ This box must be checked, otherwise your issue will be deleted automatically by the bot! --> | closed | 2024-08-21T12:57:55Z | 2024-08-22T01:16:28Z | https://github.com/hankcs/HanLP/issues/1904 | [
"bug"
] | qingting04 | 1 |
mirumee/ariadne-codegen | graphql | 308 | include_all_inputs = false leads to IndexError in isort | `ariadne-codegen --config ariadne-codegen.toml` runs fine unless I add `include_all_inputs = false` to my `ariadne-codegen.toml` file. With `include_all_inputs = false`, the command fails with this error:
```
File "/Users/user/my-project/venv/bin/ariadne-codegen", line 8, in <module>
sys.exit(main())
^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/ariadne_codegen/main.py", line 37, in main
client(config_dict)
File "/Users/user/my-project/venv/lib/python3.12/site-packages/ariadne_codegen/main.py", line 81, in client
generated_files = package_generator.generate()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/ariadne_codegen/client_generators/package.py", line 152, in generate
self._generate_input_types()
File "/Users/user/my-project/venv/lib/python3.12/site-packages/ariadne_codegen/client_generators/package.py", line 307, in _generate_input_types
code = self._add_comments_to_code(ast_to_str(module), self.schema_source)
^^^^^^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/ariadne_codegen/utils.py", line 33, in ast_to_str
return format_str(isort.code(code), mode=Mode())
^^^^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/isort/api.py", line 92, in sort_code_string
sort_stream(
File "/Users/user/my-project/venv/lib/python3.12/site-packages/isort/api.py", line 210, in sort_stream
changed = core.process(
^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/isort/core.py", line 422, in process
parsed_content = parse.file_contents(import_section, config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/my-project/venv/lib/python3.12/site-packages/isort/parse.py", line 522, in file_contents
if "," in import_string.split(just_imports[-1])[-1]:
~~~~~~~~~~~~^^^^
IndexError: list index out of range
```
BTW, nothing is wrong with `include_all_enums = false`; it doesn't cause any issues.
UPD:
```
$ ariadne-codegen --version
ariadne-codegen, version 0.14.0
$ isort --version
_ _
(_) ___ ___ _ __| |_
| |/ _/ / _ \/ '__ _/
| |\__ \/\_\/| | | |_
|_|\___/\___/\_/ \_/
isort your imports, so you don't have to.
VERSION 5.12.0
``` | open | 2024-08-13T17:07:40Z | 2025-01-08T18:54:46Z | https://github.com/mirumee/ariadne-codegen/issues/308 | [] | weblab-misha | 7 |
MaartenGr/BERTopic | nlp | 2,304 | Lightweight installation: use safetensors without torch | ### Feature request
Remove dependency on `torch` when loading the topic model saved with safetensors.
### Motivation
I was happy to find [the guide](https://maartengr.github.io/BERTopic/getting_started/tips_and_tricks/tips_and_tricks.html#lightweight-installation) to lightweight bertopic installation (without `torch`), however, BERTopic.load() seems to depend on `torch` through safetensors.
Removing torch from the requirements gives a 2x speedup to my Docker container build. Since I am hosting the embedding model on another service, I would really like to avoid this dependency if possible.
Specifically, the problem is [here](https://github.com/MaartenGr/BERTopic/blob/master/bertopic/_save_utils.py#L514) in _save_utils.py:
``` python
def load_safetensors(path):
"""Load safetensors and check whether it is installed."""
try:
import safetensors.torch # <----
import safetensors
return safetensors.torch.load_file(path, device="cpu")
except ImportError:
raise ValueError("`pip install safetensors` to load .safetensors")
```
### Your contribution
I suggest using the function `safetensors.safe_open()` with `framework='numpy'` instead. This way the load() does not have the unnecessary requirement for torch to be installed. | closed | 2025-03-13T12:05:01Z | 2025-03-17T07:40:01Z | https://github.com/MaartenGr/BERTopic/issues/2304 | [] | hedgeho | 1 |
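A sketch of how the patched `load_safetensors` could look under the suggestion above, assuming the installed safetensors version accepts `framework="numpy"` in `safe_open` (this should be verified against the safetensors documentation; `safetensors.numpy.load_file` is a possible alternative). The code below is illustrative, not a tested patch:

```python
def load_safetensors(path):
    """Load a .safetensors file as numpy arrays, without importing torch."""
    try:
        from safetensors import safe_open
    except ImportError:
        raise ValueError("`pip install safetensors` to load .safetensors")
    tensors = {}
    # framework="numpy" returns numpy arrays instead of torch tensors
    with safe_open(path, framework="numpy") as f:
        for key in f.keys():
            tensors[key] = f.get_tensor(key)
    return tensors
```

Since the import of `safetensors.torch` is what drags in torch, keeping all imports inside the function and on the numpy path avoids the dependency entirely.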
PokeAPI/pokeapi | api | 495 | Wurmple evolution chain error. |
In https://pokeapi.co/api/v2/evolution-chain/135/, Cascoon is set as an evolution of Silcoon instead of Wurmple. | closed | 2020-05-25T10:18:14Z | 2020-06-03T18:23:11Z | https://github.com/PokeAPI/pokeapi/issues/495 | [
"duplicate"
] | ESSutherland | 6 |
netbox-community/netbox | django | 18,585 | Filtering circuits by location not working | ### Deployment Type
NetBox Cloud
### NetBox Version
v4.2.3
### Python Version
3.11
### Steps to Reproduce
1. Attach a location to a circuit as a termination point.
2. Go to the location and under "Related objects" click on circuits (or circuits terminations).
3. `https://netbox.local/circuits/circuits/?location_id=<id>`
### Expected Behavior
Only the attached circuit(s) show(s) up.
### Observed Behavior
Filter not working, all circuits are being displayed. | closed | 2025-02-06T10:39:02Z | 2025-02-18T18:33:08Z | https://github.com/netbox-community/netbox/issues/18585 | [
"type: bug",
"status: accepted",
"severity: low"
] | Azmodeszer | 0 |
DistrictDataLabs/yellowbrick | matplotlib | 949 | Some plot directive visualizers not rendering in Read the Docs | Currently on Read the Docs (develop branch), a few of our visualizers that use the plot directive (#687) are not rendering the plots:
- [Classification Report](http://www.scikit-yb.org/en/develop/api/classifier/classification_report.html)
- [Silhouette Scores](http://www.scikit-yb.org/en/develop/api/cluster/silhouette.html)
- [ScatterPlot](http://www.scikit-yb.org/en/develop/api/contrib/scatter.html)
- [JointPlot](http://www.scikit-yb.org/en/develop/api/features/jointplot.html)
| closed | 2019-08-15T20:58:39Z | 2019-08-29T00:03:24Z | https://github.com/DistrictDataLabs/yellowbrick/issues/949 | [
"type: bug",
"type: documentation"
] | rebeccabilbro | 1 |
ets-labs/python-dependency-injector | asyncio | 31 | Make Objects compatible with Python 3.3 | Acceptance criteria:
- Tests on Python 3.3 passed.
- Badge with supported version added to README.md
| closed | 2015-03-17T13:02:30Z | 2015-03-26T08:03:49Z | https://github.com/ets-labs/python-dependency-injector/issues/31 | [
"enhancement"
] | rmk135 | 0 |
qwj/python-proxy | asyncio | 126 | reconnect ssh proxy session | Hi,
Please add the ability to reconnect an SSH session after a disconnect. Currently, after a failed or dropped SSH session, all you get is a session-state notification in the log:
```
May 03 22:21:48 debian jumphost.py[1520]: ERROR:asyncio: Task exception was never retrieved
May 03 22:21:48 debian jumphost.py[1520]: future: <Task finished coro=<ProxySSH.patch_stream.<locals>.channel() done, defined at /usr/local/lib/python3.7/dist-packages/pproxy/server.py:610> exception=ConnectionLost('Connection lost')>
May 03 22:21:48 debian jumphost.py[1520]: Traceback (most recent call last):
May 03 22:21:48 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/pproxy/server.py", line 612, in channel
May 03 22:21:48 debian jumphost.py[1520]: buf = await ssh_reader.read(65536)
May 03 22:21:48 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/asyncssh/stream.py", line 131, in read
May 03 22:21:48 debian jumphost.py[1520]: return await self._session.read(n, self._datatype, exact=False)
May 03 22:21:48 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/asyncssh/stream.py", line 495, in read
May 03 22:21:48 debian jumphost.py[1520]: raise exc
May 03 22:21:48 debian jumphost.py[1520]: asyncssh.misc.ConnectionLost: Connection lost
May 03 22:23:34 debian jumphost.py[1520]: DEBUG:jumphost: socks5 z.z.z.z:65418 -> sshtunnel x.x.x.x:22 -> y.y.y.y:22
May 03 22:23:34 debian jumphost.py[1520]: Traceback (most recent call last):
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/pproxy/server.py", line 87, in stream_handler
May 03 22:23:34 debian jumphost.py[1520]: reader_remote, writer_remote = await roption.open_connection(host_name, port, local_addr, lbind)
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/pproxy/server.py", line 227, in open_connection
May 03 22:23:34 debian jumphost.py[1520]: reader, writer = await asyncio.wait_for(wait, timeout=timeout)
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/lib/python3.7/asyncio/tasks.py", line 416, in wait_for
May 03 22:23:34 debian jumphost.py[1520]: return fut.result()
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/pproxy/server.py", line 649, in wait_open_connection
May 03 22:23:34 debian jumphost.py[1520]: reader, writer = await conn.open_connection(host, port)
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/asyncssh/connection.py", line 3537, in open_connection
May 03 22:23:34 debian jumphost.py[1520]: *args, **kwargs)
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/asyncssh/connection.py", line 3508, in create_connection
May 03 22:23:34 debian jumphost.py[1520]: chan = self.create_tcp_channel(encoding, errors, window, max_pktsize)
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/asyncssh/connection.py", line 2277, in create_tcp_channel
May 03 22:23:34 debian jumphost.py[1520]: errors, window, max_pktsize)
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/asyncssh/channel.py", line 110, in __init__
May 03 22:23:34 debian jumphost.py[1520]: self._recv_chan = conn.add_channel(self)
May 03 22:23:34 debian jumphost.py[1520]: File "/usr/local/lib/python3.7/dist-packages/asyncssh/connection.py", line 939, in add_channel
May 03 22:23:34 debian jumphost.py[1520]: 'SSH connection closed')
May 03 22:23:34 debian jumphost.py[1520]: asyncssh.misc.ChannelOpenError: SSH connection closed
May 03 22:23:34 debian jumphost.py[1520]: DEBUG:jumphost: SSH connection closed from z.z.z.z
```
My monkeypatch as a workaround:
```python
class ProxySSH(pproxy.server.ProxySSH):
async def wait_open_connection(self, host, port, local_addr, family, tunnel=None):
if self.sshconn is not None and self.sshconn.cancelled():
self.sshconn = None
try:
await self.wait_ssh_connection(local_addr, family, tunnel)
conn = self.sshconn.result()
if isinstance(self.jump, pproxy.server.ProxySSH):
reader, writer = await self.jump.wait_open_connection(host, port, None, None, conn)
else:
host, port = self.jump.destination(host, port)
if self.jump.unix:
reader, writer = await conn.open_unix_connection(self.jump.bind)
else:
reader, writer = await conn.open_connection(host, port)
reader, writer = self.patch_stream(reader, writer, host, port)
return reader, writer
except Exception as ex:
if not self.sshconn.done():
self.sshconn.set_exception(ex)
self.sshconn = None
raise
pproxy.server.ProxySSH = ProxySSH
``` | closed | 2021-05-03T19:32:33Z | 2021-05-11T23:04:18Z | https://github.com/qwj/python-proxy/issues/126 | [] | keenser | 2 |
biolab/orange3 | pandas | 6,129 | Orange installed from conda/pip does not have an icon (on Mac) | ### Discussed in https://github.com/biolab/orange3/discussions/6122
<div type='discussions-op-text'>
<sup>Originally posted by **DylanZDD** September 4, 2022</sup>
<img width="144" alt="Screen Shot 2022-09-04 at 12 26 04" src="https://user-images.githubusercontent.com/44270787/188297386-c463907c-9e7f-45ea-b46f-b0ad9b6f8f23.png">
<img width="1431" alt="Screen Shot 2022-09-04 at 12 26 13" src="https://user-images.githubusercontent.com/44270787/188297398-7584db2e-be45-4b6b-839a-f20de5185e50.png">
</div>
A quick search led me here:
https://stackoverflow.com/questions/33134594/set-tkinter-python-application-icon-in-mac-os-x
I think other platforms have similar problems. | closed | 2022-09-05T11:49:28Z | 2023-01-20T08:39:42Z | https://github.com/biolab/orange3/issues/6129 | [
"bug",
"snack"
] | markotoplak | 0 |
glumpy/glumpy | numpy | 6 | ffmpeg dependency | Every gloo-*.py example seems to depend on ffmpeg. The problem is that python executes the package `__init__.py` file (`glumpy/ext/__init__.py`) even if one wants to import only from a sub-package (`from glumpy.ext.inputhook import inputhook_manager, stdin_ready`). Seems to me this "always import ffmpeg" feature is not intentional.
```
fdkz@woueao:~/fdkz/extlibsrc/glumpy/examples$ python gloo-terminal.py
Traceback (most recent call last):
File "gloo-terminal.py", line 7, in <module>
import glumpy
File "/Library/Python/2.7/site-packages/glumpy/__init__.py", line 8, in <module>
from . app import run
File "/Library/Python/2.7/site-packages/glumpy/app/__init__.py", line 17, in <module>
from glumpy.ext.inputhook import inputhook_manager, stdin_ready
File "/Library/Python/2.7/site-packages/glumpy/ext/__init__.py", line 12, in <module>
from . import ffmpeg_reader
File "/Library/Python/2.7/site-packages/glumpy/ext/ffmpeg_reader.py", line 37, in <module>
from . ffmpeg_conf import FFMPEG_BINARY # ffmpeg, ffmpeg.exe, etc...
File "/Library/Python/2.7/site-packages/glumpy/ext/ffmpeg_conf.py", line 86, in <module>
raise IOError("FFMPEG binary not found. Try installing MoviePy"
IOError: FFMPEG binary not found. Try installing MoviePy manually and specify the path to the binary in the file conf.py
```
| closed | 2014-11-18T18:45:38Z | 2014-11-27T08:45:38Z | https://github.com/glumpy/glumpy/issues/6 | [] | fdkz | 1 |
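The eager-import problem described in this issue is commonly worked around by deferring the import until first use, so that a missing optional dependency only fails for callers who actually need it. A stdlib-only sketch of the pattern (the module names here are placeholders, not glumpy's real ones):

```python
import importlib

def lazy_import(name):
    """Return a zero-argument loader that imports `name` only on first call."""
    cache = {}
    def load():
        if "mod" not in cache:
            cache["mod"] = importlib.import_module(name)
        return cache["mod"]
    return load

# An eager `from . import ffmpeg_reader` in __init__.py fails at package import
# time if ffmpeg is missing; a lazy loader defers the failure to first real use.
load_json = lazy_import("json")                 # stands in for an optional dep
load_missing = lazy_import("no_such_ffmpeg_module")

print(load_json().dumps({"ok": True}))          # -> {"ok": true}
try:
    load_missing()                              # only now does the missing dep matter
except ImportError as e:
    print("deferred failure:", type(e).__name__)
```

With this structure, `from glumpy.ext.inputhook import ...` would no longer trigger the ffmpeg configuration check at all.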
davidsandberg/facenet | computer-vision | 821 | AttributeError: 'dict' object has no attribute 'iteritems' | Epoch: [1][993/1000] Time 0.462 Loss nan Xent nan RegLoss nan Accuracy 0.456 Lr 0.00005 Cl nan
Epoch: [1][994/1000] Time 0.447 Loss nan Xent nan RegLoss nan Accuracy 0.556 Lr 0.00005 Cl nan
Epoch: [1][995/1000] Time 0.471 Loss nan Xent nan RegLoss nan Accuracy 0.489 Lr 0.00005 Cl nan
Epoch: [1][996/1000] Time 0.469 Loss nan Xent nan RegLoss nan Accuracy 0.556 Lr 0.00005 Cl nan
Epoch: [1][997/1000] Time 0.457 Loss nan Xent nan RegLoss nan Accuracy 0.589 Lr 0.00005 Cl nan
Epoch: [1][998/1000] Time 0.469 Loss nan Xent nan RegLoss nan Accuracy 0.456 Lr 0.00005 Cl nan
Epoch: [1][999/1000] Time 0.474 Loss nan Xent nan RegLoss nan Accuracy 0.500 Lr 0.00005 Cl nan
Epoch: [1][1000/1000] Time 0.460 Loss nan Xent nan RegLoss nan Accuracy 0.444 Lr 0.00005 Cl nan
Saving variables
Variables saved in 0.74 seconds
Saving metagraph
Metagraph saved in 3.01 seconds
Saving statistics
Traceback (most recent call last):
File "src/train_softmax.py", line 580, in <module>
main(parse_arguments(sys.argv[1:]))
File "src/train_softmax.py", line 260, in main
for key, value in stat.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'
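For context, `dict.iteritems()` exists only in Python 2 and was removed in Python 3; a minimal sketch of the fix (the variable name `stat` follows the traceback above, the values are illustrative):

```python
stat = {"loss": [0.5], "accuracy": [0.9]}

# Python 2's dict.iteritems() no longer exists in Python 3;
# dict.items() works in both Python 2.7 and Python 3.
for key, value in stat.items():
    print(key, value)
```

In `train_softmax.py` that means replacing `stat.iteritems()` with `stat.items()` on the failing line.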
| closed | 2018-07-27T02:32:16Z | 2021-12-20T15:04:53Z | https://github.com/davidsandberg/facenet/issues/821 | [] | alanMachineLeraning | 3 |
 tflearn/tflearn | data-science | 741 | how to build ResNet 152-layer model and extract the penultimate hidden layer's image feature | I have a pre-trained ResNet 152-layer model ([resnet_v1_152](http://download.tensorflow.org/models/resnet_v1_152_2016_08_28.tar.gz)).
I just want to use this 152-layer model to extract image features; specifically, I want the features of the penultimate hidden layer
(just as shown in the code below).
1. The main question is: how do I build the 152-layer ResNet model? (I only see that setting n = 18 makes the ResNet
110 layers.) Or how do I build a 50-layer ResNet model?
2. Is my code below for extracting the penultimate hidden layer's image feature correct?
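On question 1, a quick arithmetic check of the layer counts (the block counts [3, 4, 6, 3] and [3, 8, 36, 3] are the standard ones from the ResNet paper; the helper names are illustrative). Reaching 50 or 152 layers requires bottleneck blocks of three convolutions each, rather than stacking more of the basic blocks used in the script below:

```python
# Depth of the CIFAR-style ResNet built in the script: 6*n + 2 layers.
def cifar_resnet_depth(n: int) -> int:
    return 6 * n + 2

# ImageNet-style bottleneck ResNets: 3 convolutions per bottleneck block,
# plus the stem convolution and the final fully connected layer.
def bottleneck_resnet_depth(blocks_per_stage) -> int:
    return 3 * sum(blocks_per_stage) + 2

depth_110 = cifar_resnet_depth(18)                  # the n = 18 case: 110 layers
depth_50 = bottleneck_resnet_depth([3, 4, 6, 3])    # ResNet-50
depth_152 = bottleneck_resnet_depth([3, 8, 36, 3])  # ResNet-152
```

So a 152-layer model cannot be reached by raising n in the CIFAR-style script; it needs a bottleneck layer (tflearn's `residual_bottleneck`, if your version provides it).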
```
from __future__ import division, print_function, absolute_import
import tflearn
from PIL import Image
import numpy as np
# Residual blocks
# 32 layers: n=5, 56 layers: n=9, 110 layers: n=18
n = 18
# Data loading
from tflearn.datasets import cifar10
(X, Y), (testX, testY) = cifar10.load_data()
Y = tflearn.data_utils.to_categorical(Y, 10)
testY = tflearn.data_utils.to_categorical(testY, 10)
# Real-time data preprocessing
img_prep = tflearn.ImagePreprocessing()
img_prep.add_featurewise_zero_center(per_channel=True)
# Real-time data augmentation
img_aug = tflearn.ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_crop([32, 32], padding=4)
# Building Residual Network
net = tflearn.input_data(shape=[None, 32, 32, 3],
data_preprocessing=img_prep,
data_augmentation=img_aug)
net = tflearn.conv_2d(net, 16, 3, regularizer='L2', weight_decay=0.0001)
net = tflearn.residual_block(net, n, 16)
net = tflearn.residual_block(net, 1, 32, downsample=True)
net = tflearn.residual_block(net, n-1, 32)
net = tflearn.residual_block(net, 1, 64, downsample=True)
net = tflearn.residual_block(net, n-1, 64)
net = tflearn.residual_block(net, 1, 64, downsample=True)
net = tflearn.residual_block(net, n-1, 64)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
output_layer = tflearn.global_avg_pool(net)
# Regression
net = tflearn.fully_connected(output_layer, 10, activation='softmax')
mom = tflearn.Momentum(0.1, lr_decay=0.1, decay_step=32000, staircase=True)
net = tflearn.regression(net, optimizer=mom,
loss='categorical_crossentropy')
# Training
model = tflearn.DNN(net, checkpoint_path='resnet_v1_152.ckpt',
max_checkpoints=10, tensorboard_verbose=0,
clip_gradients=0.)
model.fit(X, Y, n_epoch=200, validation_set=(testX, testY),
snapshot_epoch=False, snapshot_step=500,
show_metric=True, batch_size=128, shuffle=True,
run_id='resnet_cifar10')
model.save('./resnet_v1_152.ckpt')
#---------------
# now extract the penultimate hidden layer's image feature
img = Image.open(file_path)
img = img.resize((32, 32), Image.ANTIALIAS)
img = np.asarray(img, dtype="float32")
imgs = np.asarray([img])
model_test = tflearn.DNN(output_layer, session = model.session)
model_test.load('resnet_v1_152.ckpt', weights_only = True)
predict_y = model_test.predict(imgs)
print('layer\'s feature: {}'.format(predict_y))
``` | open | 2017-05-05T12:05:01Z | 2017-07-26T08:49:35Z | https://github.com/tflearn/tflearn/issues/741 | [] | willduan | 3 |
jupyterlab/jupyter-ai | jupyter | 485 | Empty string config fields cause confusing errors | ## Description
If a user types into a config field, then deletes what was written, then clicks save, sometimes the field is saved as an empty string `""`. This can cause confusing behavior because this will then be passed as a keyword argument to the underlying LangChain provider.
Next steps are to
1. Set up test case coverage, preferably E2E if possible
2. Implement changes in the frontend and backend to address this | open | 2023-11-21T18:22:40Z | 2023-11-21T18:23:09Z | https://github.com/jupyterlab/jupyter-ai/issues/485 | [
"bug"
] | dlqqq | 0 |
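A sketch of the backend-side mitigation described in step 2 of the next steps above (`drop_empty_fields` is a hypothetical helper, not Jupyter AI's actual code): treat empty strings the same as unset fields before forwarding the config to the provider.

```python
def drop_empty_fields(config: dict) -> dict:
    # Treat "" the same as "not provided" so it is never passed
    # through as a keyword argument to the underlying provider.
    return {key: value for key, value in config.items() if value != ""}

provider_kwargs = drop_empty_fields({"api_base": "", "model_id": "my-model"})
```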
 vastsa/FileCodeBox | fastapi | 74 | Could compatibility with the S3 object storage protocol be added? | I hope to integrate with MinIO through its S3-compatible interface; it should be similar to OSS. Is it possible to use the OSS integration to connect to MinIO? With even a little S3 compatibility, many more storage backends could be supported. | closed | 2023-07-06T09:19:48Z | 2023-08-15T09:26:20Z | https://github.com/vastsa/FileCodeBox/issues/74 | [] | Oldming1 | 2 |
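Regarding the request above: MinIO speaks the S3 wire protocol, so an OSS/S3 backend can usually be pointed at it by overriding the endpoint URL. An illustrative configuration fragment (all key names and values here are hypothetical, not FileCodeBox's actual settings):

```ini
; Point an S3-compatible storage backend at a local MinIO instance
s3_endpoint_url = http://127.0.0.1:9000
s3_access_key_id = minioadmin
s3_secret_access_key = minioadmin
s3_bucket_name = filecodebox
```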
gradio-app/gradio | data-visualization | 10,399 | TabbedInterface does not work with Chatbot defined in ChatInterface | ### Describe the bug
When defining a `Chatbot` in `ChatInterface`, the `TabbedInterface` does not render it properly.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def chat():
return "Hello"
chat_ui = gr.ChatInterface(
fn=chat,
type="messages",
chatbot=gr.Chatbot(type="messages"),
)
demo = gr.TabbedInterface([chat_ui], ["Tab 1"])
demo.launch()
```
### Screenshot

### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.27.2
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.2
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.5
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.4.10
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.27.2
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio | open | 2025-01-21T17:11:08Z | 2025-01-22T17:19:36Z | https://github.com/gradio-app/gradio/issues/10399 | [
"bug"
] | arnaldog12 | 2 |
 babysor/MockingBird | deep-learning | 381 | The two preset vocoders each have pros and cons; in which direction should they be improved? | Of the two preset vocoders, g_hifigan and pretained:
audio generated with g_hifigan matches the target timbre very accurately, but carries an electronic, metallic artifact;
audio generated with pretained is less accurate in timbre and also quieter, but it is free of that electronic artifact.
In which direction should this be improved, so that the result combines the strengths of both?
Is it a problem with vocoder training, with the source audio, or with the synthesizer?
| open | 2022-02-10T10:36:47Z | 2022-02-10T12:51:42Z | https://github.com/babysor/MockingBird/issues/381 | [] | funboomen | 1 |
ivy-llc/ivy | pytorch | 28,344 | Fix Frontend Failing Test: tensorflow - math.tensorflow.math.argmax | To-do List: https://github.com/unifyai/ivy/issues/27499 | closed | 2024-02-20T09:49:40Z | 2024-02-20T15:36:42Z | https://github.com/ivy-llc/ivy/issues/28344 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
 gradio-app/gradio | data-visualization | 10,160 | gr.BrowserState first variable entry is not value, it's default_value | ### Describe the bug
One cannot simply replace gr.State with gr.BrowserState, because assigning `value` as a keyword causes an error: the first parameter is `default_value`, not `value`. This causes multiple headaches when swapping one for the other, and having `default_value` as the first parameter also breaks the convention of all other components.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
This doesn't work.
```python
import gradio as gr
local_storage=gr.BrowserState(value="my Thing")
```
### Screenshot
not relevant.
### Logs
```shell
running gradio 5.8.0,
```
### System Info
```shell
running on Mac M2,
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.8.0
gradio_client version: 1.5.1
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 22.1.0
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.3.2
gradio-client==1.5.1 is not installed.
httpx: 0.27.0
huggingface-hub: 0.25.2
jinja2: 3.1.2
markupsafe: 2.0.1
numpy: 1.26.4
orjson: 3.10.5
packaging: 23.2
pandas: 2.2.3
pillow: 10.4.0
pydantic: 2.7.4
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0
ruff: 0.5.0
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.12.0
typer: 0.12.5
typing-extensions: 4.12.2
urllib3: 2.2.2
uvicorn: 0.30.5
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2023.10.0
httpx: 0.27.0
huggingface-hub: 0.25.2
packaging: 23.2
typing-extensions: 4.12.2
websockets: 11.0.3
```
### Severity
I can work around it | closed | 2024-12-09T17:43:14Z | 2024-12-12T15:34:17Z | https://github.com/gradio-app/gradio/issues/10160 | [
"bug",
"pending clarification"
] | robwsinnott | 2 |
 koaning/scikit-lego | scikit-learn | 221 | add --pre-commit features | Our CI jobs are taking longer now, and since 25% of the issues are flake8-related, it may be a good idea to add black to this project and enforce it with a pre-commit hook.
@MBrouns objections? | closed | 2019-10-18T14:09:08Z | 2020-01-24T21:41:41Z | https://github.com/koaning/scikit-lego/issues/221 | [
"enhancement"
] | koaning | 2 |
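A minimal `.pre-commit-config.yaml` for the black hook proposed above might look like this (the `rev` pin is illustrative):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0
    hooks:
      - id: black
        language_version: python3
```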
roboflow/supervision | deep-learning | 1,038 | Opencv channel swap in ImageSinks | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
Hello, I realize OpenCV uses BGR instead of RGB, and therefore the following code causes a channel swap:
```python
with sv.ImageSink(target_dir_path=output_dir, overwrite=True) as sink:
annotated_img = box_annotator.annotate(
scene=np.array(Image.open(img_dir).convert("RGB")),
detections=results,
labels=labels,
)
sink.save_image(
image=annotated_img, image_name="test.jpg"
)
```
Unless I use `cv2.cvtColor(annotated_img, cv2.COLOR_RGB2BGR)`. This also happens with video sinks and other places that use OpenCV for image writing.
I wonder if it is possible to add this conversion by default or at least mention this in the docs? Thanks a lot!
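A minimal sketch of the conversion in question, using plain NumPy; for 3-channel images, reversing the channel axis gives the same values as `cv2.cvtColor(image, cv2.COLOR_RGB2BGR)`:

```python
import numpy as np

def rgb_to_bgr(image: np.ndarray) -> np.ndarray:
    # Swap the R and B channels by reversing the channel axis;
    # OpenCV writers such as cv2.imwrite expect BGR input.
    return image[..., ::-1]

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255  # pure red in RGB order
bgr = rgb_to_bgr(rgb)
```

Note that the reversed view is non-contiguous, so some OpenCV writers may want `np.ascontiguousarray(bgr)` first.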
### Use case
For savings with `ImageSink()`
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-03-24T21:48:00Z | 2024-03-26T01:46:31Z | https://github.com/roboflow/supervision/issues/1038 | [
"enhancement"
] | zhmiao | 3 |
 dot-agent/nextpy | pydantic | 157 | Why hasn't this project been updated for a while | This is a good project, but it hasn't been updated for a long time. Why is that? | open | 2024-09-23T09:25:54Z | 2025-01-13T12:36:11Z | https://github.com/dot-agent/nextpy/issues/157 | [] | redpintings | 2 |
scikit-learn/scikit-learn | machine-learning | 30,594 | DOC: Example of `train_test_split` with `pandas` DataFrames | ### Describe the issue linked to the documentation
Currently, the example [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) only illustrates the use case of `train_test_split` for `numpy` arrays. I think an additional example featuring a `pandas` DataFrame would make this page more beginner-friendly. Would you guys be interested?
### Suggest a potential alternative/fix
The modification in [`model_selection/_split`](https://github.com/scikit-learn/scikit-learn/blob/d666202a9349893c1bd106cc9ee0ff0a807c7cf3/sklearn/model_selection/_split.py) would be the following:
```
"""
Example: Data are a `numpy` array
--------
>>> Current example
Example: Data are a `pandas` DataFrame
--------
>>> from sklearn import datasets
>>> from sklearn.model_selection import train_test_split
>>> iris = datasets.load_iris(as_frame=True)
>>> X, y = iris['data'], iris['target']
>>> X.head()
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
>>> y.head()
0 0
1 0
2 0
3 0
4 0
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, test_size=0.33, random_state=42) # rows will be shuffled
>>> X_train.head()
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
96 5.7 2.9 4.2 1.3
105 7.6 3.0 6.6 2.1
66 5.6 3.0 4.5 1.5
0 5.1 3.5 1.4 0.2
122 7.7 2.8 6.7 2.0
>>> y_train.head()
96 1
105 2
66 1
0 0
122 2
>>> X_test.head()
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
73 6.1 2.8 4.7 1.2
18 5.7 3.8 1.7 0.3
118 7.7 2.6 6.9 2.3
78 6.0 2.9 4.5 1.5
76 6.8 2.8 4.8 1.4
>>> y_test.head()
73 1
18 0
118 2
78 1
76 1
"""
``` | closed | 2025-01-06T11:53:30Z | 2025-02-06T10:44:52Z | https://github.com/scikit-learn/scikit-learn/issues/30594 | [
"Documentation"
] | victoris93 | 2 |
 horovod/horovod | tensorflow | 3,181 | Horovod stalled ranks when using hvd.SyncBatchNorm in pytorch amp mode | Excuse me! Recently I used hvd.SyncBatchNorm to train a PyTorch ResNet-50, following [this pr](https://github.com/horovod/horovod/pull/3018/files), and found that Horovod stalls ranks when PyTorch AMP mode is enabled. When AMP is disabled, it works normally. All experiments above ran on a `single machine 8gpus`.

### Environment
horovod: 0.19.2
pytorch: 1.7.0
nccl: 2.7.8
### The relevant code
```python
import torch
import horovod.torch as hvd
from torch.cuda import amp
from torchvision import models
model = models.resnet50(norm_layer=hvd.SyncBatchNorm)
optimizer = torch.optim.SGD(model.parameters(), lr=args.learning_rate, momentum=args.momentum, weight_decay=args.weight_decay)
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
# amp code
scaler = amp.grad_scaler.GradScaler()
...
with amp.autocast_mode.autocast():
output = model(data)
loss = loss_fn(output)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```
Is the reason an incompatibility between PyTorch AMP and Horovod?
| closed | 2021-09-27T07:33:08Z | 2021-12-18T14:28:20Z | https://github.com/horovod/horovod/issues/3181 | [
"wontfix"
] | wuyujiji | 6 |
scikit-image/scikit-image | computer-vision | 7,331 | The function skimage.util.compare_images fails silently if called with integers matrices | ### Description:
I was trying to call `skimage.util.compare_images` with integer matrices (the documentation does not prevent this).
The function does return a value, but not the expected one.
I found out that the function `skimage.util.img_as_float32`, called inside `skimage.util.compare_images`, does not convert an int matrix to a float matrix as expected, resulting in a wrong return value for `compare_images`.
### Way to reproduce:
```python
import skimage.util as skiu
mat_a = skiu.img_as_float32([[1.0, 1.0], [1.0, 1.0]])
mat_b = skiu.img_as_float32([[1, 1], [1, 1]])
mat_c = skiu.compare_images(mat_a, mat_b, method='diff')
assert mat_a[0][0] == 1.0 # OK
assert mat_b[0][0] == 1.0 # Will fail
assert mat_c[0][0] == 0 #Will fail too
```
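The behaviour is consistent with dtype-range scaling: the conversion routines interpret integer inputs as spanning the full range of their dtype, so the integer value 1 maps to a tiny float rather than to 1.0. A NumPy-only illustration of the idea (conceptual, not scikit-image's exact code):

```python
import numpy as np

int_img = np.array([[1, 1], [1, 1]])  # default integer dtype (e.g. int64)

# Dtype-range scaling: the value 1 is divided by the dtype's maximum,
# which is why the converted matrix is nowhere near 1.0.
scaled = int_img / np.iinfo(int_img.dtype).max

# A plain cast preserves the numeric values instead.
explicit = int_img.astype(np.float32)
```

An explicit `astype(np.float32)` (or passing float input, as with `mat_a` above) sidesteps the rescaling.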
### Version information:
```Shell
3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0]
Linux-4.4.0-22621-Microsoft-x86_64-with-glibc2.36
scikit-image version: 0.22
```
| open | 2024-02-29T12:46:04Z | 2024-09-08T02:37:35Z | https://github.com/scikit-image/scikit-image/issues/7331 | [
":sleeping: Dormant",
":bug: Bug"
] | Hish15 | 2 |
 sigmavirus24/github3.py | rest-api | 936 | Allow preventing file contents retrieval to enable updating large files | Please allow creating a `file_contents` object without retrieving the current contents. This is because [GitHub's contents API only supports retrieving files up to 1mb](https://developer.github.com/v3/repos/contents/#get-contents), and I need to update a file that is larger than 1mb but not necessarily read it. I would suggest the following syntax: `repo.file_contents('folder/myfile.txt', retrieve=False)`. Attempting to access file contents on an object without its contents retrieved yet could either attempt to retrieve and return the contents, return `None`, or throw an exception. Manual retrieval could be attempted with `.retrieve()`. #741 is a slightly related issue. Thanks! | open | 2019-04-16T01:00:07Z | 2019-04-16T01:00:07Z | https://github.com/sigmavirus24/github3.py/issues/936 | [] | stevennyman | 0
nolar/kopf | asyncio | 353 | [PR] Force annotations to end strictly with alphanumeric characters | > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2020-04-28 11:05:15+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/353
> Merged by [nolar](https://github.com/nolar) at _2020-04-28 12:29:33+00:00_
## What do these changes do?
Force annotation names to end strictly with alphanumeric characters, not only to fit into 63 chars.
_"Learning Kubernetes the hard way."_
## Description
The annotation names are not only limited to 63 characters, and not only to a specific alphabet, but also to the beginning/ending characters. Otherwise, it fails to patch:
```
$ kubectl patch … --type=merge -p '{"metadata": {"annotations": {"kopf.zalando.org/long.handler.id.here-WumEzA--": "{}"}}}'
The … is invalid: metadata.annotations: Invalid value: "kopf.zalando.org/long.handler.id.here-WumEzA--": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
```
Since a base64-encoded digest was used to make the shortened annotation names unique in #346, a name can end with `=` characters, for example.
In this case, we do not need the actual base64'ed value, we just need a persistent and unique suffix. So, cutting those special non-alphanumeric characters is fine.
The change is backward compatible (despite the hashing function change since 0.27rc4): first, it was never released beyond RC; second, it was not working for non-alphanumeric annotations anyway. Proper alphanumeric annotations will remain the same as in 0.27rc4.
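A standalone sketch of the trimming logic described above (the helper is illustrative, not Kopf's actual implementation; the regex is the one quoted in the error message):

```python
import re

# Kubernetes' validation regex for the name part of an annotation key.
K8S_NAME_RE = re.compile(r"^([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]$")

def trim_annotation_suffix(name: str, limit: int = 63) -> str:
    # Cut to the length limit first, then strip trailing non-alphanumerics
    # (such as the '-'/'=' padding a base64-encoded digest can end with).
    name = name[:limit]
    return name.rstrip("-_.=")

candidate = trim_annotation_suffix("long.handler.id.here-WumEzA--")
```

The trimmed suffix stays persistent and unique, while now passing the validation regex.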
## Issues/PRs
> Issues: #331
> Related: #346
## Type of changes
- Bug fix (non-breaking change which fixes an issue)
## Checklist
- [x] The code addresses only the mentioned problem, and this problem only
- [x] I think the code is well written
- [x] Unit tests for the changes exist
- [x] Documentation reflects the changes
- [x] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`
<!-- Are there any questions or uncertainties left?
Any tasks that have to be done to complete the PR? -->
| closed | 2020-08-18T20:04:28Z | 2020-08-23T20:57:43Z | https://github.com/nolar/kopf/issues/353 | [
"bug",
"archive"
] | kopf-archiver[bot] | 0 |
deepfakes/faceswap | deep-learning | 800 | ValueError: Error initializing Aligner | **Describe the bug**
Extract Exception
**To Reproduce**
Steps to reproduce the behavior:
1. download Releases from [https://github.com/deepfakes/faceswap/releases/download/v1.0.0/faceswap_setup_x64.exe](url)
2. install it
3. open FaceSwap and select Extract page ,input 'input dir' & 'output dir'
4. click Extract button
5. See error
**Expected behavior**
The output folder should have many files
**Screenshots**

**Desktop (please complete the following information):**
- OS: [Windows-10-10.0.17763-SP0]
- Browser [IE]
- Version [v11]
**Additional context**
```
07/20/2019 00:57:46 MainProcess MainThread multithreading start DEBUG Started all threads 'save_faces': 1
07/20/2019 00:57:46 MainProcess MainThread extract process_item_count DEBUG Items already processed: 0
07/20/2019 00:57:47 MainProcess MainThread extract process_item_count DEBUG Items to be Processed: 5985
07/20/2019 00:57:47 MainProcess MainThread pipeline launch DEBUG Launching aligner and detector
07/20/2019 00:57:47 MainProcess MainThread pipeline launch_aligner DEBUG Launching Aligner
07/20/2019 00:57:47 MainProcess MainThread multithreading __init__ DEBUG Initializing SpawnProcess: (target: 'Aligner.run', args: (), kwargs: {})
07/20/2019 00:57:47 MainProcess MainThread multithreading __init__ DEBUG Initialized SpawnProcess: 'Aligner.run'
07/20/2019 00:57:47 MainProcess MainThread multithreading start DEBUG Spawning Process: (name: 'Aligner.run', args: (), kwargs: {'event': <multiprocessing.synchronize.Event object at 0x000001D2082895C0>, 'error': <multiprocessing.synchronize.Event object at 0x000001D2082B4748>, 'log_init': <function set_root_logger at 0x000001D27F2F2C80>, 'log_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x1d27f34c828>, 'log_level': 10, 'in_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x1d208289908>, 'out_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x1d2082896d8>}, daemon: True)
07/20/2019 00:57:47 MainProcess MainThread multithreading start DEBUG Spawned Process: (name: 'Aligner.run', PID: 9360)
07/20/2019 00:57:49 Aligner.run MainThread _base initialize DEBUG _base initialize Align: (PID: 9360, args: (), kwargs: {'event': <multiprocessing.synchronize.Event object at 0x000002745378FA20>, 'error': <multiprocessing.synchronize.Event object at 0x000002745618D438>, 'log_init': <function set_root_logger at 0x0000027453A6AB70>, 'log_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x2745c822f28>, 'log_level': 10, 'in_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x2745c824080>, 'out_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x2745c8240f0>})
07/20/2019 00:57:49 Aligner.run MainThread fan initialize INFO Initializing Face Alignment Network...
07/20/2019 00:57:49 Aligner.run MainThread fan initialize DEBUG fan initialize: (args: () kwargs: {'event': <multiprocessing.synchronize.Event object at 0x000002745378FA20>, 'error': <multiprocessing.synchronize.Event object at 0x000002745618D438>, 'log_init': <function set_root_logger at 0x0000027453A6AB70>, 'log_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x2745c822f28>, 'log_level': 10, 'in_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x2745c824080>, 'out_queue': <AutoProxy[Queue] object, typeid 'Queue' at 0x2745c8240f0>})
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats __init__ DEBUG Initializing GPUStats
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats initialize DEBUG OS is not macOS. Using pynvml
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_device_count DEBUG GPU Device count: 1
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_active_devices DEBUG Active GPU Devices: [0]
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_handles DEBUG GPU Handles found: 1
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_driver DEBUG GPU Driver: 385.54
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_devices DEBUG GPU Devices: ['GeForce GTX 1060 6GB']
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_vram DEBUG GPU VRAM: [6144.0]
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats __init__ DEBUG Initialized GPUStats
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats initialize DEBUG OS is not macOS. Using pynvml
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_device_count DEBUG GPU Device count: 1
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_active_devices DEBUG Active GPU Devices: [0]
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_handles DEBUG GPU Handles found: 1
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_free DEBUG GPU VRAM free: [5860.96484375]
07/20/2019 00:57:49 Aligner.run MainThread gpu_stats get_card_most_free DEBUG Active GPU Card with most free VRAM: {'card_id': 0, 'device': 'GeForce GTX 1060 6GB', 'free': 5860.96484375, 'total': 6144.0}
07/20/2019 00:57:49 Aligner.run MainThread _base get_vram_free VERBOSE Using device GeForce GTX 1060 6GB with 5860MB free of 6144MB
07/20/2019 00:57:49 Aligner.run MainThread fan initialize VERBOSE Reserving 2240MB for face alignments
07/20/2019 00:57:49 Aligner.run MainThread fan load_graph VERBOSE Initializing Face Alignment Network model...
07/20/2019 00:57:53 Aligner.run MainThread _base run ERROR Caught exception in child process: 9360
07/20/2019 00:57:53 Aligner.run MainThread _base run ERROR Traceback:
Traceback (most recent call last):
File "C:\faceswap\plugins\extract\align\_base.py", line 112, in run
self.align(*args, **kwargs)
File "C:\faceswap\plugins\extract\align\_base.py", line 127, in align
self.initialize(*args, **kwargs)
File "C:\faceswap\plugins\extract\align\fan.py", line 47, in initialize
raise err
File "C:\faceswap\plugins\extract\align\fan.py", line 41, in initialize
self.model = FAN(self.model_path, ratio=tf_ratio)
File "C:\faceswap\plugins\extract\align\fan.py", line 199, in __init__
self.session = self.set_session(ratio)
File "C:\faceswap\plugins\extract\align\fan.py", line 221, in set_session
session = self.tf.Session(config=config)
File "C:\Anaconda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1551, in __init__
super(Session, self).__init__(target, graph, config=config)
File "C:\Anaconda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 676, in __init__
self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
File "C:\faceswap\lib\cli.py", line 122, in execute_script
process.process()
File "C:\faceswap\scripts\extract.py", line 61, in process
self.run_extraction()
File "C:\faceswap\scripts\extract.py", line 181, in run_extraction
self.extractor.launch()
File "C:\faceswap\plugins\extract\pipeline.py", line 171, in launch
self.launch_aligner()
File "C:\faceswap\plugins\extract\pipeline.py", line 206, in launch_aligner
raise ValueError("Error initializing Aligner")
ValueError: Error initializing Aligner
============ System Information ============
encoding: cp936
git_branch: master
git_commits: 5da91b7 Add .ico back for legacy windows installs
gpu_cuda: 9.0
gpu_cudnn: 7.0.5
gpu_devices: GPU_0: GeForce GTX 1060 6GB
gpu_devices_active: GPU_0
gpu_driver: 385.54
gpu_vram: GPU_0: 6144MB
os_machine: AMD64
os_platform: Windows-10-10.0.17763-SP0
os_release: 10
py_command: C:\faceswap\faceswap.py extract -i C:/Users/Administrator/Desktop/faceswap-master/Data/input/6026016444322A7E967FAD9273213C77.mp4 -o C:/Users/Administrator/Desktop/faceswap-master/Data/output -l 0.4 --serializer json -D mtcnn -A fan -nm none -bt 0.0 -sz 256 -min 0 -een 1 -si 0 -L INFO -gui
py_conda_version: conda 4.7.5
py_implementation: CPython
py_version: 3.6.8
py_virtual_env: True
sys_cores: 4
sys_processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
sys_ram: Total: 8128MB, Available: 5726MB, Used: 2402MB, Free: 5726MB
=============== Pip Packages ===============
absl-py==0.7.1
astor==0.7.1
certifi==2019.6.16
cloudpickle==1.2.1
cycler==0.10.0
cytoolz==0.10.0
dask==2.1.0
decorator==4.4.0
fastcluster==1.1.25
ffmpy==0.2.2
gast==0.2.2
grpcio==1.16.1
h5py==2.9.0
imageio==2.5.0
imageio-ffmpeg==0.3.0
joblib==0.13.2
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
matplotlib==2.2.2
mkl-fft==1.0.12
mkl-random==1.0.2
mkl-service==2.0.2
mock==3.0.5
networkx==2.3
numpy==1.16.2
nvidia-ml-py3==7.352.0
olefile==0.46
pathlib==1.0.1
Pillow==6.1.0
protobuf==3.8.0
psutil==5.6.3
pyparsing==2.4.0
pyreadline==2.1
python-dateutil==2.8.0
pytz==2019.1
PyWavelets==1.0.3
pywin32==223
PyYAML==5.1.1
scikit-image==0.15.0
scikit-learn==0.21.2
scipy==1.2.1
six==1.12.0
tensorboard==1.13.1
tensorflow==1.13.1
tensorflow-estimator==1.13.0
termcolor==1.1.0
toolz==0.10.0
toposort==1.5
tornado==6.0.3
tqdm==4.32.1
Werkzeug==0.15.4
wincertstore==0.2
============== Conda Packages ==============
# packages in environment at C:\Anaconda3\envs\faceswap:
#
# Name Version Build Channel
_tflow_select 2.1.0 gpu
absl-py 0.7.1 py36_0
astor 0.7.1 py36_0
blas 1.0 mkl
ca-certificates 2019.5.15 0
certifi 2019.6.16 py36_0
cloudpickle 1.2.1 py_0
cudatoolkit 10.0.130 0
cudnn 7.6.0 cuda10.0_0
cycler 0.10.0 py36h009560c_0
cytoolz 0.10.0 py36he774522_0
dask-core 2.1.0 py_0
decorator 4.4.0 py36_1
fastcluster 1.1.25 py36h830ac7b_1000 conda-forge
ffmpeg 4.1.3 h6538335_0 conda-forge
ffmpy 0.2.2 pypi_0 pypi
freetype 2.9.1 ha9979f8_1
gast 0.2.2 py36_0
grpcio 1.16.1 py36h351948d_1
h5py 2.9.0 py36h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.1 vc14_0 conda-forge
imageio 2.5.0 py36_0
imageio-ffmpeg 0.3.0 py_0 conda-forge
intel-openmp 2019.4 245
joblib 0.13.2 py36_0
jpeg 9c hfa6e2cd_1001 conda-forge
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py36_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py36ha925a31_0
libblas 3.8.0 8_mkl conda-forge
libcblas 3.8.0 8_mkl conda-forge
liblapack 3.8.0 8_mkl conda-forge
liblapacke 3.8.0 8_mkl conda-forge
libmklml 2019.0.3 0
libpng 1.6.37 h7602738_0 conda-forge
libprotobuf 3.8.0 h7bd577a_0
libtiff 4.0.10 h6512ee2_1003 conda-forge
libwebp 1.0.2 hfa6e2cd_2 conda-forge
lz4-c 1.8.3 he025d50_1001 conda-forge
markdown 3.1.1 py36_0
matplotlib 2.2.2 py36had4c4a9_2
mkl 2019.4 245
mkl-service 2.0.2 py36he774522_0
mkl_fft 1.0.12 py36h14836fe_0
mkl_random 1.0.2 py36h343c172_0
mock 3.0.5 py36_0
networkx 2.3 py_0
numpy 1.16.2 py36h19fb1c0_0
numpy-base 1.16.2 py36hc3f5095_0
nvidia-ml-py3 7.352.0 pypi_0 pypi
olefile 0.46 py36_0
opencv 4.1.0 py36hb4945ee_5 conda-forge
openssl 1.1.1c he774522_1
pathlib 1.0.1 py36_1
pillow 6.1.0 py36hdc69c19_0
pip 19.1.1 py36_0
protobuf 3.8.0 py36h33f27b4_0
psutil 5.6.3 py36he774522_0
pyparsing 2.4.0 py_0
pyqt 5.9.2 py36h6538335_2
pyreadline 2.1 py36_1
python 3.6.8 h9f7ef89_7
python-dateutil 2.8.0 py36_0
pytz 2019.1 py_0
pywavelets 1.0.3 py36h8c2d366_1
pywin32 223 py36hfa6e2cd_1
pyyaml 5.1.1 py36he774522_0
qt 5.9.7 hc6833c9_1 conda-forge
scikit-image 0.15.0 py36ha925a31_0
scikit-learn 0.21.2 py36h6288b17_0
scipy 1.2.1 py36h29ff71c_0
setuptools 41.0.1 py36_0
sip 4.19.8 py36h6538335_0
six 1.12.0 py36_0
sqlite 3.29.0 he774522_0
tensorboard 1.13.1 py36h33f27b4_0
tensorflow 1.13.1 gpu_py36h9006a92_0
tensorflow-base 1.13.1 gpu_py36h871c8ca_0
tensorflow-estimator 1.13.0 py_0
tensorflow-gpu 1.13.1 h0d30ee6_0
termcolor 1.1.0 py36_1
tk 8.6.8 hfa6e2cd_0
toolz 0.10.0 py_0
toposort 1.5 py_3 conda-forge
tornado 6.0.3 py36he774522_0
tqdm 4.32.1 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.15.26706 h3a45250_4
werkzeug 0.15.4 py_0
wheel 0.33.4 py36_0
wincertstore 0.2 py36h7fe50ca_0
xz 5.2.4 h2fa13f4_1001 conda-forge
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h2fa13f4_1005 conda-forge
zstd 1.4.0 hd8a0e53_0 conda-forge
```
| closed | 2019-07-20T01:22:15Z | 2019-07-20T09:59:27Z | https://github.com/deepfakes/faceswap/issues/800 | [] | 463728946 | 1 |
 sigmavirus24/github3.py | rest-api | 523 | "422 Invalid request" when creating a commit without author and/or committer | [The document](http://github3py.readthedocs.org/en/latest/repos.html) says the `author` and `committer` parameters of the `create_commit` method are optional, but that seems not to be true.
```
>>> commit = repo.create_commit('test commit', t.sha, [])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.5/site-packages/github3/decorators.py", line 38, in auth_wrapper
return func(self, *args, **kwargs)
File "/usr/lib/python3.5/site-packages/github3/repos/repo.py", line 508, in create_commit
json = self._json(self._post(url, data=data), 201)
File "/usr/lib/python3.5/site-packages/github3/models.py", line 100, in _json
if self._boolean(response, status_code, 404) and response.content:
File "/usr/lib/python3.5/site-packages/github3/models.py", line 121, in _boolean
raise GitHubError(response)
github3.models.GitHubError: 422 Invalid request.
"email", "name" weren't supplied.
"email", "name" weren't supplied.
>>> commit = repo.create_commit('test commit', t.sha, [], {'name':'Yi EungJun', 'email':'test@mail.com'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.5/site-packages/github3/decorators.py", line 38, in auth_wrapper
return func(self, *args, **kwargs)
File "/usr/lib/python3.5/site-packages/github3/repos/repo.py", line 508, in create_commit
json = self._json(self._post(url, data=data), 201)
File "/usr/lib/python3.5/site-packages/github3/models.py", line 100, in _json
if self._boolean(response, status_code, 404) and response.content:
File "/usr/lib/python3.5/site-packages/github3/models.py", line 121, in _boolean
raise GitHubError(response)
github3.models.GitHubError: 422 Invalid request.
"email", "name" weren't supplied.
```
I am using 0.9.4.
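For anyone hitting this, a workaround that avoids the 422 is to always send one complete identity for both fields. Below is a minimal sketch of the payload shape the Git Data API appears to require (the helper name and the SHA are made up for illustration; this is not github3.py's actual implementation):

```python
def build_commit_payload(message, tree_sha, parents, author=None, committer=None):
    # The API rejects commits whose author/committer lack 'name' and 'email',
    # so fall back to one complete identity for both fields.
    identity = author or committer
    if not identity or not {"name", "email"} <= set(identity):
        raise ValueError("supply an author or committer with 'name' and 'email'")
    return {
        "message": message,
        "tree": tree_sha,
        "parents": parents,
        "author": author or identity,
        "committer": committer or identity,
    }

payload = build_commit_payload(
    "test commit", "d3adb33f", [],
    author={"name": "Yi EungJun", "email": "test@mail.com"},
)
```

Passing such a dict as both `author` and `committer` sidesteps the `"email", "name" weren't supplied.` error until the docs/library are reconciled.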
| closed | 2015-12-26T12:21:47Z | 2018-03-22T02:23:45Z | https://github.com/sigmavirus24/github3.py/issues/523 | [] | eungjun-yi | 2 |
thtrieu/darkflow | tensorflow | 461 | restore cnn in native tensorflow | Thanks for the project. It works well on my custom dataset. I want to restore only the CNN (from the input up to the last convolution layer) in native TensorFlow (or TF-Slim). How can I do that? Thanks! | open | 2017-12-05T21:03:10Z | 2018-01-08T03:55:05Z | https://github.com/thtrieu/darkflow/issues/461 | [] | ghost | 0 |
nolar/kopf | asyncio | 400 | [archival placeholder] | This is a placeholder for later issues/prs archival.
It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved. | closed | 2020-08-18T20:05:38Z | 2020-08-18T20:05:39Z | https://github.com/nolar/kopf/issues/400 | [
"archive"
] | kopf-archiver[bot] | 0 |
sammchardy/python-binance | api | 721 | Instance of 'Client' has no 'futures_coin_account_trades' | Hi,
I have been using this lib since December.
Thank you to the contributors of python-binance!
Recently I updated to version 0.7.9; however, none of the futures_coin_XXX() functions can be called.
Here is the sample code I am using:
```
# Get environment variables
api_key = os.environ.get('API_KEY')
api_secret = os.environ.get('API_SECRET')
client = Client(api_key, api_secret)
ping = client.futures_coin_ping()
print(ping)
```
Error Return:
```
Traceback (most recent call last):
File "test.py", line 15, in <module>
ping = client.futures_coin_ping()
AttributeError: 'Client' object has no attribute 'futures_coin_ping'
```
However, this works well if I replace `client.futures_coin_ping()` with `client.futures_ping()`.
It seems that only the futures_coin functions have this issue.
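While sorting out which release actually ships the coin-margined endpoints, a defensive pattern is to fail fast with a useful message when an SDK method is missing. The class below is a stand-in for `binance.client.Client`, and the "newer release" hint is an assumption (check the changelog):

```python
def require_endpoint(client, name):
    # Fail fast with a hint instead of an AttributeError deep in a strategy.
    fn = getattr(client, name, None)
    if fn is None:
        raise AttributeError(
            f"{type(client).__name__} has no {name!r}; the futures_coin_* "
            "endpoints may need a newer python-binance release"
        )
    return fn

class FakeClient:  # stand-in for binance.client.Client on 0.7.9
    def futures_ping(self):
        return {}

client = FakeClient()
require_endpoint(client, "futures_ping")()  # present -> callable is returned
```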
| open | 2021-03-06T06:54:12Z | 2021-03-12T22:33:40Z | https://github.com/sammchardy/python-binance/issues/721 | [] | zyairelai | 1 |
twelvedata/twelvedata-python | matplotlib | 1 | [Bug] get_stock_exchanges_list not working | **Describe the bug**
It is not possible to call the function td.get_stock_exchanges_list().
**To Reproduce**
When using td.get_stock_exchanges_list() python returns:
```
/usr/local/lib/python3.6/dist-packages/twelvedata/client.py in get_stock_exchanges_list(self)
44 :rtype: StockExchangesListRequestBuilder
45 """
---> 46 return StockExchangesListRequestBuilder(ctx=self.ctx)
47
48 def get_forex_pairs_list(self):
NameError: name 'StockExchangesListRequestBuilder' is not defined
```
**Cause of the error**
The reason for the error is probably that the endpoint classes are named differently:
```
from .endpoints import (
StocksListEndpoint,
StockExchangesListEndpoint,
ForexPairsListEndpoint,
CryptocurrenciesListEndpoint,
CryptocurrencyExchangesListEndpoint,
)
```
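The failure mode can be reproduced in isolation. The classes below are stand-ins, but they show why the call raises `NameError` and what the one-line fix looks like (reference the class that is actually imported):

```python
class StockExchangesListEndpoint:  # stand-in for the imported endpoint class
    def __init__(self, ctx):
        self.ctx = ctx

class Client:  # stand-in for twelvedata.client.Client
    def __init__(self, ctx):
        self.ctx = ctx

    def get_stock_exchanges_list(self):
        # The original body referenced StockExchangesListRequestBuilder, a
        # name that is never imported, so calling it raised NameError.
        # Fix: return the class that *is* imported.
        return StockExchangesListEndpoint(ctx=self.ctx)

exchanges = Client(ctx=None).get_stock_exchanges_list()
```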
**Further notes**
The same thing seems to apply for some other endpoints. | closed | 2020-01-08T13:14:33Z | 2020-01-13T00:11:29Z | https://github.com/twelvedata/twelvedata-python/issues/1 | [] | bandor151 | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 421 | Cannot allocate memory : Getting RAM full issue while using 110 GB RAM and 1024 memory queue size. | Hi Kevin,
I have around 15k images and around 6k labels for them. During the very first training epoch, I see that my 110GB of RAM fills up, and training stops with this error:
`RuntimeError: [enforce fail at CPUAllocator.cpp:68] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 51380224 bytes. Error code 12 (Cannot allocate memory)`
I have below settings:
batch_size = 8
out_embedding_size = 2048 # final embedding size
memory_size = 1024 #XBM queue size
Attaching the RAM usage snapshot.
<img width="1131" alt="Ram full" src="https://user-images.githubusercontent.com/7069488/153206740-52ff1c7a-606c-4a0a-bb91-3c5f288ab49c.png">
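For scale, the XBM queue itself cannot explain 110GB: with float32 embeddings it holds memory_size x out_embedding_size values, and the failed allocation in the traceback is also small. A quick back-of-envelope check using the numbers above:

```python
MiB = 2 ** 20
queue_bytes = 1024 * 2048 * 4   # memory_size x embedding dim x 4 bytes (float32)
failed_alloc = 51380224         # bytes, from the RuntimeError above
print(queue_bytes // MiB, "MiB queue;", failed_alloc // MiB, "MiB failed alloc")
```

So the queue is about 8 MiB and the failing allocation about 49 MiB; the growth is likely elsewhere (for example the dataset pipeline or tensors accumulated across iterations), which seems worth checking first.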
| closed | 2022-02-09T13:04:07Z | 2022-02-10T07:29:42Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/421 | [
"question"
] | abhinav3 | 6 |
explosion/spaCy | machine-learning | 12140 | ro_core_news_lg missing |
## How to reproduce the behaviour
Click on
https://spacy.io/models/ro#ro_core_news_lg
or try
python -m spacy download ro_core_news_lg
## Your Environment
* Operating System:
* Python Version Used:
* spaCy Version Used:
* Environment Information:
| closed | 2023-01-22T07:58:52Z | 2023-03-03T01:43:41Z | https://github.com/explosion/spaCy/issues/12140 | [
"lang / ro"
] | TigranI | 4 |
uriyyo/fastapi-pagination | fastapi | 842 | Pagination is not working with tortoise-orm version 0.20.0 | Hi!
I'm trying to use pagination with the latest version of `tortoise-orm (0.20.0)` according to the [official integration tortoise-orm example](https://github.com/uriyyo/fastapi-pagination/blob/main/examples/pagination_tortoise.py).
But it's not working; I get a pydantic model serialization error.
Later I tried the following, and it works with the default `paginate` method, as shown below.
```python
from typing import Any
from fastapi import Depends
from fastapi_pagination import Params, paginate, Page
from apps.contacts.models import Country
from apps.utils.cbv.routers import InferringRouter
from apps.utils.cbv.views import cbv
router = InferringRouter()
@cbv(router)
class CountryViews:
@router.get("/countries", response_model=Page[Any], tags=["address"])
async def country_list(self, params: Params = Depends()) -> Any:
return paginate(await Country.all().values(), params=params)
```
I'm using [cbv](https://fastapi-utils.davidmontague.xyz/user-guide/class-based-views/) and [InferringRouter](https://fastapi-utils.davidmontague.xyz/user-guide/inferring-router/) from [FastAPI utils](https://fastapi-utils.davidmontague.xyz/user-guide/class-based-views/).
As they don't support Pydantic 2.0 yet, I've copied the two modules into my local utility package so that I can use them with Pydantic 2.0.
Can you please tell me what I can do to use the tortoise-orm integration version of `fastapi-pagination` instead of default `paginate`?
@uriyyo | closed | 2023-09-22T17:42:26Z | 2023-09-26T15:04:42Z | https://github.com/uriyyo/fastapi-pagination/issues/842 | [
"bug"
] | xalien10 | 4 |
jonaswinkler/paperless-ng | django | 1332 | [BUG] How to debug classifier that doesn't auto classify? | The model seems to be generated according to the admin/log, yet after consumption nothing else happens to any new documents.
Before a certain(?) number of documents was reached, the log contained `classify..no model existing yet..`. Now `consumption finished` remains the last output for each document.
I've tried:
- Creating an inbox tag and adding/removing it on existing documents
- Adding type, correspondent and tag to 50 of ca. 70 documents
- `document-retagger`[^1]
I can't see any suspicious log entry to start investigating.
Where should I start debugging?
Note: Tags of type `Any` are working
[^1]: https://paperless-ng.readthedocs.io/en/latest/administration.html?highlight=inbox#document-retagger
**Relevant information**
- Host OS of the machine running paperless: [raspbian 5.10.52-v7+]
- Installation method: https://paperless-ng.readthedocs.io/en/latest/setup.html#setup-docker-script
```
[2021-09-20 23:13:25,610] [INFO] [paperless.consumer] Document 2021-02-01 scan_2021-03-01-113753_001 consumption finished
[2021-09-21 03:01:19,445] [DEBUG] [paperless.classifier] Gathering data from database...
[2021-09-21 03:01:21,376] [DEBUG] [paperless.classifier] 67 documents, 8 tag(s), 8 correspondent(s), 3 document type(s).
[2021-09-21 03:01:21,378] [DEBUG] [paperless.classifier] Vectorizing data...
[2021-09-21 03:01:33,719] [DEBUG] [paperless.classifier] Training tags classifier...
[2021-09-21 03:02:08,405] [DEBUG] [paperless.classifier] Training correspondent classifier...
[2021-09-21 03:02:30,866] [DEBUG] [paperless.classifier] Training document type classifier...
[2021-09-21 03:02:52,639] [INFO] [paperless.tasks] Saving updated classifier model to /usr/src/paperless/src/../data/classification_model.pickle...
[2021-09-21 08:30:04,520] [DEBUG] [paperless.management.consumer] Not consuming file /usr/src/paperless/src/../consume/__paperless_write_test_9826__: File has moved.
``` | open | 2021-09-21T10:05:11Z | 2021-09-21T14:54:02Z | https://github.com/jonaswinkler/paperless-ng/issues/1332 | [] | RidaAyed | 0 |
pandas-dev/pandas | python | 60,933 | BUG: Converting string of type lxml.etree._ElementUnicodeResult to a datetime using pandas.to_datetime results in a TypeError. | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
from lxml import etree
import pandas as pd
example_date = "2025-02-05 16:59:57"
default_format = "%Y-%m-%d %H:%M:%S"
xml_node = etree.XML(f"<date>{example_date}</date>")
example_date_from_xml = xml_node.xpath("/date/node()")[0]
assert isinstance(example_date, str)
assert isinstance(example_date_from_xml, str)
assert isinstance(example_date_from_xml, etree._ElementUnicodeResult)
assert not isinstance(example_date, etree._ElementUnicodeResult)
assert example_date_from_xml == example_date
pd.to_datetime(pd.Series([example_date])) # OK
pd.to_datetime(pd.Series([example_date_from_xml])) # OK
pd.to_datetime(pd.Series([example_date_from_xml]), format=default_format) # KO: TypeError: Expected unicode, got lxml.etree._ElementUnicodeResult
# Shorter way of doing this
pd.to_datetime(pd.Series([etree._ElementUnicodeResult(example_date)])) # OK
pd.to_datetime(pd.Series([etree._ElementUnicodeResult(example_date)]), format=default_format) # KO
```
### Issue Description
Hello,
When trying to convert a string that comes from an XML file parsing with `pandas.to_datetime`, I struggled with an unexpected `TypeError`.
I managed to write a reproducible example that reproduces the issue on both the latest 2.2.3 release and `3.0.0dev`, with Python 3.12.8.
It looks like when I'm trying to convert datetimes in a Series initialized from a list of `lxml.etree._ElementUnicodeResult` with the argument `format`, an error is raised.
Also, using `Series.astype(str)` does not work (the values are still `lxml.etree._ElementUnicodeResult`).
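A workaround in the meantime is to normalise the values to builtin `str` before parsing. The `astype(str)` surprise comes from `_ElementUnicodeResult` being a `str` subclass, which plain Python shows (`UnicodeResult` below is a stand-in for the lxml class); `series.map(str)` forces a real conversion:

```python
class UnicodeResult(str):  # stand-in for lxml.etree._ElementUnicodeResult
    pass

value = UnicodeResult("2025-02-05 16:59:57")
assert isinstance(value, str)     # it already *is* a str (subclass)...
assert type(value) is not str     # ...so astype(str) can leave it untouched
assert type(str(value)) is str    # str() yields a plain builtin str
# i.e. prefer pd.to_datetime(series.map(str), format=default_format)
```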
### Expected Behavior
No `TypeError`.
### Installed Versions
3.0.0dev
<details>
INSTALLED VERSIONS
------------------
commit : 19ea997815d4dadf490d7052a0a3c289be898588
python : 3.12.8
python-bits : 64
OS : Linux
OS-release : 5.15.146.1-microsoft-standard-WSL2
Version : #1 SMP Thu Jan 11 04:09:03 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 3.0.0.dev0+1943.g19ea997815
numpy : 2.3.0.dev0+git20250211.bbfb823
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : 5.3.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pytz : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
2.2.3
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.8
python-bits : 64
OS : Linux
OS-release : 5.15.146.1-microsoft-standard-WSL2
Version : #1 SMP Thu Jan 11 04:09:03 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : 8.1.3
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : 5.3.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.1
sqlalchemy : None
tables : None
tabulate : None
xarray : 2025.1.2
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details> | open | 2025-02-14T16:44:17Z | 2025-02-14T22:36:44Z | https://github.com/pandas-dev/pandas/issues/60933 | [
"Bug",
"Needs Discussion",
"datetime.date"
] | bentriom | 2 |
flavors/django-graphql-jwt | graphql | 201 | Need 'get_all_permissions' method like in django.contrib.auth | Need a way to call get_all_permissions like it's available in django.contrib.auth. | closed | 2020-05-16T06:50:30Z | 2020-08-02T07:37:10Z | https://github.com/flavors/django-graphql-jwt/issues/201 | [] | arjun2504 | 1 |
flairNLP/flair | pytorch | 3,348 | [Feature]: Add support for MobIE NER Dataset | ### Problem statement
Hey,
in my latest [blog post](https://huggingface.co/blog/stefan-it/autotrain-flair-mobie) I used the MobIE NER Dataset to show how to fine-tune models with Flair.
I wrote a custom dataset loader for the MobIE NER Dataset:
The German MobIE Dataset was introduced in the [MobIE](https://aclanthology.org/2021.konvens-1.22/) paper by Hennig, Truong and Gabryszak (2021).
It's a German-language dataset that has been human-annotated with 20 coarse- and fine-grained entity types, and it includes entity linking information for geographically linkable entities. The dataset comprises 3,232 social media texts and traffic reports, totaling 91K tokens, with 20.5K annotated entities, of which 13.1K are linked to a knowledge base. In total, 20 different named entities are annotated.
### Solution
Add MobIE support into Flair directly - example class:
https://github.com/stefan-it/autotrain-flair-mobie/blob/main/mobie_dataset.py
It also has some unit tests:
https://github.com/stefan-it/autotrain-flair-mobie/blob/main/script.py#L11-L19
### Additional Context
_No response_ | closed | 2023-10-23T11:41:14Z | 2023-10-24T13:43:35Z | https://github.com/flairNLP/flair/issues/3348 | [
"feature"
] | stefan-it | 1 |
autogluon/autogluon | data-science | 4,742 | [Feature Request]: Ensembling multiple individual `TabularPredictor`s | Sometimes we need to run several experiments one by one to gain insights for the project.
And we get many versions of Autogluon models.
Now I am interested in ensembling previous AutoGluon models together. This is different from the internal ensembling within each AutoGluon model itself.
How can I implement this? | closed | 2024-12-18T12:33:04Z | 2025-01-12T18:27:59Z | https://github.com/autogluon/autogluon/issues/4742 | [
"enhancement",
"module: tabular"
] | 2catycm | 4 |
yt-dlp/yt-dlp | python | 12,258 | ERROR: [youtube] RZ8ZwL3VY8U: Failed to extract any player response; | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Russia
### Provide a description that is worded well enough to be understood
ERROR: [youtube] RZ8ZwL3VY8U: Failed to extract any player response;
Dear Friends.
yt-dlp does not work now, but it was OK a week ago.
Please help.
[bug.txt](https://github.com/user-attachments/files/18631413/bug.txt)
Cordially,
___
Peter
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
F:\S\Загрузки\yt-dlp>yt-dlp -vU --cookies-from-browser firefox -f 136+140-0 https://youtu.be/RZ8ZwL3VY8U
[debug] Command-line config: ['-vU', '--cookies-from-browser', 'firefox', '-f', '136+140-0', 'https://youtu.be/RZ8ZwL3VY8U']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.26 from yt-dlp/yt-dlp [3b4531934] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 6.0-full_build-www.gyan.dev (setts)
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\peter\AppData\Roaming\Mozilla\Firefox\Profiles\2wajd2vt.default-release\cookies.sqlite"
Extracted 761 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.01.26 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.26 from yt-dlp/yt-dlp)
[youtube] Extracting URL: https://youtu.be/RZ8ZwL3VY8U
[youtube] RZ8ZwL3VY8U: Downloading webpage
WARNING: [youtube] Unable to download webpage: ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None))
[youtube] RZ8ZwL3VY8U: Downloading tv client config
WARNING: [youtube] Unable to download webpage: ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None))
[youtube] RZ8ZwL3VY8U: Downloading iframe API JS
WARNING: [youtube] Unable to download webpage: ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None))
[youtube] RZ8ZwL3VY8U: Downloading tv player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (1/3)...
[youtube] RZ8ZwL3VY8U: Downloading tv player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (2/3)...
[youtube] RZ8ZwL3VY8U: Downloading tv player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (3/3)...
[youtube] RZ8ZwL3VY8U: Downloading tv player API JSON
WARNING: [youtube] Unable to download API page: ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)) (caused by TransportError("('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None))"))
[youtube] RZ8ZwL3VY8U: Downloading ios player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (1/3)...
[youtube] RZ8ZwL3VY8U: Downloading ios player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (2/3)...
[youtube] RZ8ZwL3VY8U: Downloading ios player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (3/3)...
[youtube] RZ8ZwL3VY8U: Downloading ios player API JSON
WARNING: [youtube] Unable to download API page: ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)) (caused by TransportError("('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None))"))
[youtube] RZ8ZwL3VY8U: Downloading web player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (1/3)...
[youtube] RZ8ZwL3VY8U: Downloading web player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (2/3)...
[youtube] RZ8ZwL3VY8U: Downloading web player API JSON
WARNING: [youtube] ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)). Retrying (3/3)...
[youtube] RZ8ZwL3VY8U: Downloading web player API JSON
WARNING: [youtube] Unable to download API page: ('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None)) (caused by TransportError("('Connection aborted.', ConnectionResetError(10054, 'Удаленный хост принудительно разорвал существующее подключение', None, 10054, None))"))
ERROR: [youtube] RZ8ZwL3VY8U: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\youtube.py", line 4600, in _real_extract
File "yt_dlp\extractor\youtube.py", line 4564, in _download_player_responses
File "yt_dlp\extractor\youtube.py", line 4198, in _extract_player_responses
F:\S\Загрузки\yt-dlp>
``` | closed | 2025-02-02T07:04:34Z | 2025-02-02T16:09:50Z | https://github.com/yt-dlp/yt-dlp/issues/12258 | [
"question"
] | peterkurnev | 5 |
encode/uvicorn | asyncio | 1878 | uvicorn suddenly shutdown | Currently, I'm using FastAPI for my new service and uvicorn to run it.
Here is my main.py:
```python
uvicorn.run(app, port=app_config.APP_PORT, host="0.0.0.0", log_level="debug")
```
```docker
FROM python:3.11.2-alpine3.16 as build_base
RUN apk update && apk upgrade && \
apk --no-cache --update add bash git make gcc libc-dev libffi-dev
RUN python3 -m venv /venv
ENV PATH=/venv/bin:$PATH
RUN pip3 install --upgrade pip
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
#
FROM build_base AS server_builder
COPY --from=build_base /venv /venv
ENV PATH=/venv/bin:$PATH
WORKDIR /app
COPY . .
RUN ls -lh
EXPOSE 5011
CMD python3 main.py
```
It is deployed on AWS.
Sometimes there is not much activity (only health checks), but the service suddenly shuts down:
```
2023-02-28T17:45:39.654+07:00  INFO: Shutting down
2023-02-28T17:45:39.755+07:00  INFO: Waiting for application shutdown.
2023-02-28T17:45:39.755+07:00  INFO: Application shutdown complete.
2023-02-28T17:45:39.755+07:00  INFO: Finished server process [1]
``` | closed | 2023-02-28T11:07:02Z | 2023-02-28T11:08:31Z | https://github.com/encode/uvicorn/issues/1878 | [] | sjuliper7 | 0 |
gee-community/geemap | jupyter | 2231 | geemap import fails on GitHub actions |
### Environment Information
Github runner setup:
```
Current runner version: '2.322.0'
Operating System
Runner Image
Runner Image Provisioner
GITHUB_TOKEN Permissions
Secret source: Actions
Prepare workflow directory
Prepare all required actions
Getting action download info
Download action repository 'actions/checkout@v3' (SHA:f43a0e5ff2bd294095638e18286ca9a3d1956744)
Download action repository 'actions/setup-python@v2' (SHA:e9aba2c848f5ebd159c070c61ea2c4e2b122355e)
Download action repository 'pre-commit/action@v3.0.0' (SHA:646c83fcd040023954eafda54b4db0192ce70507)
Download action repository 'conda-incubator/setup-miniconda@v3' (SHA:505e6394dae86d6a5c7fbb6e3fb8938e3e863830)
Getting action download info
Download action repository 'actions/cache@v3' (SHA:2f8e54208210a422b2efd51efaa6bd6d7ca8920f)
Complete job name: test
```
### Description
It appears that importing `from IPython.core.display import display` should now be `from IPython.display import display`. I believe that this is causing the issue in Github Actions.
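Until a fixed geemap lands in the pinned environment, the usual pattern for this kind of breakage is a compatibility lookup that tries the modern import location first. This is a hedged sketch, not geemap's actual fix:

```python
import importlib

def resolve_display():
    # Prefer the modern location; fall back to the legacy one.
    for module in ("IPython.display", "IPython.core.display"):
        try:
            mod = importlib.import_module(module)
        except ImportError:
            continue
        if hasattr(mod, "display"):
            return mod.display
    raise ImportError("no usable display() found in IPython")
```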
### What I Did
Here is the error in Github Actions:
```
ImportError while loading conftest '/home/runner/work/basinscout/basinscout/tests/conftest.py'.
tests/conftest.py:5: in <module>
from .fixtures import *
tests/fixtures/__init__.py:2: in <module>
from .basinscout_fxt import (
tests/fixtures/basinscout_fxt.py:7: in <module>
from basinscout import BasinScout
basinscout/__init__.py:1: in <module>
from .basinscout import BasinScout
basinscout/basinscout.py:34: in <module>
from .features.field import Field
basinscout/features/field.py:26: in <module>
from ..models.sb_irrigate import _get_openet_dataframe, _get_prism_dataframe
basinscout/models/sb_irrigate.py:22: in <module>
from geemap import common as geemap
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/__init__.py:55: in <module>
raise e
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/__init__.py:45: in <module>
from .geemap import *
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/geemap.py:30: in <module>
from . import core
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/core.py:15: in <module>
from . import toolbar
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/toolbar.py:20: in <module>
from IPython.core.display import display
E ImportError: cannot import name 'display' from 'IPython.core.display' (/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/IPython/core/display.py)
Please restart Jupyter kernel after installation if you encounter any errors when importing geemap.
```
| closed | 2025-03-04T23:19:07Z | 2025-03-05T23:39:40Z | https://github.com/gee-community/geemap/issues/2231 | [
"bug"
] | dharp | 10 |
ageitgey/face_recognition | machine-learning | 1,621 | Getting accuracy of 94% while doing cosine simmilarity on face encodings | * face_recognition version: 1.3.0
* Python version: 3.12
* Operating System: OSX
### Description
The README states that the model has 99% accuracy. While upserting approximately 500 face encodings into a vector database, we ran a benchmark and found the accuracy to be 94%. We are trying to understand whether something went wrong on our end or whether there is a better approach to searching across a large dataset.
Dataset used: https://www.kaggle.com/datasets/jessicali9530/lfw-dataset
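One thing worth checking: the headline 99%+ figure is the LFW pairs benchmark, which (as far as I understand the library) decides matches by Euclidean distance with a default 0.6 tolerance, while cosine similarity over a retrieval index measures a different task, so the two numbers are not directly comparable. A toy stdlib sketch of the two metrics (vectors are illustrative, not real encodings):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

a, b = [1.0, 0.0], [0.8, 0.6]
match = euclidean(a, b) <= 0.6  # library-style pair decision
```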
| open | 2024-12-16T15:45:06Z | 2024-12-16T16:13:48Z | https://github.com/ageitgey/face_recognition/issues/1621 | [] | avirajkhare00 | 0 |
harry0703/MoneyPrinterTurbo | automation | 192 | 合成视频的时候报这个错误 | 
This error is reported when compositing the video: convert-im6.q16: attempt to perform an operation not allowed by the security policy
ImageMagick is already installed.
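This message usually means the ImageMagick call (used here for rendering text clips) is blocked by the default security policy in `policy.xml`. The common fix is to relax the `@*` path rule. Below is a hedged sketch of that edit; the file path is the Debian/Ubuntu default, the exact rule text varies between installs, and whether this particular rule is the blocker is an assumption:

```python
from pathlib import Path

POLICY_FILE = Path("/etc/ImageMagick-6/policy.xml")  # adjust to your install

def relax_policy(xml_text):
    # Comment out the rule that forbids the "@*" path pattern.
    rule = '<policy domain="path" rights="none" pattern="@*" />'
    return xml_text.replace(rule, f"<!-- {rule} -->")

# e.g. POLICY_FILE.write_text(relax_policy(POLICY_FILE.read_text()))  # as root
```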
| closed | 2024-04-08T06:57:07Z | 2024-04-13T13:37:15Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/192 | [] | okmija | 1 |
gradio-app/gradio | data-science | 10,122 | Unable to display video in a gr.HTML() component | ### Describe the bug
I want to display a video in gr.HTML(). It works with Gradio==4.44.0, but as soon as I upgrade to the newest version it stops working and the video is not shown.
When I downgrade to version 4.44.0 it works, so the code, paths, and permissions are correct. I also tried the static-folder and set_static_paths solutions, but they don't work. I want to show the video in an HTML component because it allows me to display the video at a specific start and end time, which is unavailable in gr.Video() as far as I know.
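On the start/end-time point: that part is standard HTML5, done with a media-fragment suffix (`#t=start,end`) on the source URL, so it is independent of the Gradio regression. A small helper sketch (the path and times are placeholders):

```python
def video_html(src, start=None, end=None):
    # Append an HTML5 media fragment so playback is clipped to [start, end].
    if start is not None or end is not None:
        fragment = f"#t={start or 0}"
        if end is not None:
            fragment += f",{end}"
        src += fragment
    return (
        "<video controls>"
        f"<source src='{src}' type='video/mp4'>"
        "Your browser does not support the video tag."
        "</video>"
    )

snippet = video_html("file/shots/output.MP4", start=5, end=12)
```

The returned string can be passed to `gr.HTML(...)` exactly like the hand-written markup in the reproduction.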
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
html = """
<div class='myVideo'>
<video controls>
<source src='file/shots/output.MP4' type='video/mp4'>
Your browser does not support the video tag.
</video>
</div>
"""
with gr.Blocks() as demo:
with gr.Row():
gr.HTML(html)
demo.launch(allowed_paths=["/Users/a612/Desktop/Hj/"])
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 4.44.0
gradio_client version: 1.3.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.3.0 is not installed.
httpx: 0.27.0
huggingface-hub: 0.25.1
importlib-resources: 6.4.5
jinja2: 3.1.4
markupsafe: 2.1.5
matplotlib: 3.9.0
numpy: 1.26.4
orjson: 3.10.7
packaging: 24.0
pandas: 2.2.2
pillow: 10.3.0
pydantic: 2.7.3
pydub: 0.25.1
python-multipart: 0.0.12
pyyaml: 6.0.2
ruff: 0.6.8
semantic-version: 2.10.0
tomlkit==0.12.0 is not installed.
typer: 0.12.5
typing-extensions: 4.11.0
urllib3: 2.2.3
uvicorn: 0.31.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.9.0
httpx: 0.27.0
huggingface-hub: 0.25.1
packaging: 24.0
typing-extensions: 4.11.0
websockets: 12.0
```
### Severity
I can work around it | closed | 2024-12-04T18:45:45Z | 2025-03-24T10:05:04Z | https://github.com/gradio-app/gradio/issues/10122 | [
"bug"
] | HamoonJafarianTR | 3 |
Kanaries/pygwalker | plotly | 423 | Data inferring - dates, datetimes, etc. | Currently Pygwalker assigns to types of data:
facts(#) and dimensions.
I would love it to recognize dates, datetimes and other time-related dimensions, so it would allow:
- Datetime filtering
- Datetime slicing with a hierarchy (year-month-day-etc.)
- Relative and absolute datetime functions
I have tried to load data in a standard datetime format but no luck. | closed | 2024-02-02T22:46:55Z | 2024-02-26T01:01:18Z | https://github.com/Kanaries/pygwalker/issues/423 | [] | Vfisa | 1 |
localstack/localstack | python | 12,171 | bug: localstack includes vulnerable python package setuptools 65.5.1 CVE-2024-6345 | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Current Behavior
Current localstack release includes vulnerable python package setuptools 65.5.1 (CVE-2024-6345)
Fixed version 75.8.0 is already available
https://nvd.nist.gov/vuln/detail/cve-2024-6345
"A vulnerability in the package_index module of pypa/setuptools versions up to 69.1.1 allows for remote code execution via its download functions. "


### Expected Behavior
No vulnerable packages should be shipped in the LocalStack image
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://mybucket
### Environment
```markdown
- OS: Debian 12.7
- LocalStack:
LocalStack version: 4.0.4.dev124
LocalStack build date: 2025-01-23
LocalStack build git hash: 20f919b3a
LocalStack Docker image sha: 4cac59e88053
```
### Anything else?
_No response_ | open | 2025-01-23T11:36:10Z | 2025-01-27T16:03:41Z | https://github.com/localstack/localstack/issues/12171 | [
"type: bug",
"status: backlog"
] | gianlucabonetti | 0 |
geopandas/geopandas | pandas | 2,899 | BUG: Possible bug in test_geodataframe.py, test_sjoin being skipped by impossible condition | - [ x] I have checked that this issue has not already been reported.
- [x ] I have confirmed this bug exists on the latest version of geopandas.
- [x ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
This is the current test in test_geodataframe.py:
```python
# Your code here
@pytest.mark.skipif(
not (compat.USE_PYGEOS and compat.HAS_RTREE and compat.USE_SHAPELY_20),
reason="sjoin needs `rtree` or `pygeos` dependency",
)
def test_sjoin(self, how, predicate):
"""
Basic test for availability of the GeoDataFrame method. Other
sjoin tests are located in /tools/tests/test_sjoin.py
"""
left = read_file(geopandas.datasets.get_path("naturalearth_cities"))
right = read_file(geopandas.datasets.get_path("naturalearth_lowres"))
expected = geopandas.sjoin(left, right, how=how, predicate=predicate)
result = left.sjoin(right, how=how, predicate=predicate)
assert_geodataframe_equal(result, expected)
```
#### Problem description
I believe that the condition ``compat.USE_PYGEOS and compat.USE_SHAPELY_20`` means that this test is ALWAYS skipped, since ``USE_PYGEOS = not(USE_SHAPELY_20)``.
#### Expected Code
```python
# Your code here
@pytest.mark.skipif(
not (compat.USE_PYGEOS or compat.HAS_RTREE or compat.USE_SHAPELY_20),
reason="sjoin needs `rtree` or `pygeos` dependency",
)
```
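The always-skip behaviour can be checked with a quick truth table, a sketch assuming, as the report says, that `USE_PYGEOS` and `USE_SHAPELY_20` are mutually exclusive:

```python
# Enumerate the only configurations allowed by USE_PYGEOS = not USE_SHAPELY_20.
results = []
for use_shapely_20 in (True, False):
    use_pygeos = not use_shapely_20
    for has_rtree in (True, False):
        skip = not (use_pygeos and has_rtree and use_shapely_20)
        results.append(skip)
print(results)  # → [True, True, True, True]: the test is skipped in every case
```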
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.11.3 | packaged by conda-forge | (main, Apr 6 2023, 09:05:00) [Clang 14.0.6 ]
executable : -
machine : macOS-13.3.1-x86_64-i386-64bit
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.11.2
GEOS lib : None
GDAL : 3.6.4
GDAL data dir: -
PROJ : 9.2.0
PROJ data dir: -
PYTHON DEPENDENCIES
-------------------
geopandas : 0+untagged.1761.g1f27f35.dirty
numpy : 1.24.3
pandas : 2.0.1
pyproj : 3.5.0
shapely : 2.0.1
fiona : 1.9.3
geoalchemy2: 0.13.2
geopy : 2.3.0
matplotlib : 3.7.1
mapclassify: 2.5.0
pygeos : None
pyogrio : None
psycopg2 : 2.9.3 (dt dec pq3 ext lo64)
pyarrow : 12.0.0
rtree : 1.0.1
</details>
| closed | 2023-05-22T08:28:00Z | 2023-06-09T09:11:10Z | https://github.com/geopandas/geopandas/issues/2899 | [
"Testing"
] | chris-hedemann | 2 |
pyjanitor-devs/pyjanitor | pandas | 636 | [ENH] Expose sparse library functions | XArray supports the [Sparse](https://github.com/pydata/sparse) package but doesn't expose the functions to convert to/from sparse objects. These functions could be nicely packaged in pyjanitor to do so:
```python
import sparse
import xarray as xr
@register_xarray_dataarray_method
def to_scipy_sparse(
da: xr.DataArray,
) -> xr.DataArray:
if isinstance(da.data, sparse.COO):
return xr.apply_ufunc(sparse.COO.to_scipy_sparse, da)
return da
@register_xarray_dataarray_method
def todense(
da: xr.DataArray,
) -> xr.DataArray:
if isinstance(da.data, sparse.COO):
return xr.apply_ufunc(sparse.COO.todense, da)
return da
@register_xarray_dataarray_method
def tocsc(
da: xr.DataArray,
) -> xr.DataArray:
if isinstance(da.data, sparse.COO):
return xr.apply_ufunc(sparse.COO.tocsc, da)
return da
@register_xarray_dataarray_method
def tocsr(
da: xr.DataArray,
) -> xr.DataArray:
if isinstance(da.data, sparse.COO):
return xr.apply_ufunc(sparse.COO.tocsr, da)
return da
@register_xarray_dataarray_method
def tosparse(
da: xr.DataArray,
) -> xr.DataArray:
if isinstance(da.data, sparse.COO):
return da
return xr.apply_ufunc(sparse.COO, da)
``` | open | 2020-02-22T16:01:10Z | 2020-02-25T14:06:20Z | https://github.com/pyjanitor-devs/pyjanitor/issues/636 | [
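Since the five wrappers above share one shape, they could also be generated by a small factory. This is a pure-Python sketch with dummy stand-ins for `sparse.COO` and the DataArray type so that it runs without those libraries installed; the names `make_conversion_method`, `FakeCOO`, and `FakeDataArray` are illustrative only:

```python
def make_conversion_method(converter, sparse_type):
    """Build a DataArray-style method that converts only sparse-backed data."""
    def method(da):
        if isinstance(da.data, sparse_type):
            return converter(da)  # real version would use xr.apply_ufunc(converter, da)
        return da
    return method

# Dummy stand-ins so the sketch is self-contained:
class FakeCOO(list):
    pass

class FakeDataArray:
    def __init__(self, data):
        self.data = data

todense = make_conversion_method(lambda da: FakeDataArray(list(da.data)), FakeCOO)

dense = todense(FakeDataArray(FakeCOO([1, 2, 3])))
assert dense.data == [1, 2, 3]                   # sparse input gets converted
already_dense = FakeDataArray([4, 5])
assert todense(already_dense) is already_dense   # dense input passes through
```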
"enhancement",
"good first issue",
"good intermediate issue",
"available for hacking"
] | zbarry | 0 |
wkentaro/labelme | computer-vision | 705 | Pre-Compiled Binaries? | The main readme states that "there are pre-built executables in the release section."
I tried downloading several releases but could not find bin files in any of them.
Could you please say which versions include executables, or add a page with downloads of the latest stable version, if possible for Mac / Windows / Linux?
I can help compile on any of the above OS. | closed | 2020-06-29T04:36:01Z | 2020-07-16T01:47:29Z | https://github.com/wkentaro/labelme/issues/705 | [] | yurikleb | 6 |
krish-adi/barfi | streamlit | 4 | Enhancement Request: Categorize Blocks | The submenu for new blocks gets really long if you have quite a few blocks. Wondering if it is possible to categorize blocks so that in the JS frontend when you create a new block, you see the main category and then the blocks that fall under that specific category. | closed | 2022-07-14T00:46:03Z | 2022-07-21T19:54:35Z | https://github.com/krish-adi/barfi/issues/4 | [
"enhancement"
] | zabrewer | 5 |
facebookresearch/fairseq | pytorch | 4,990 | The difference between 'complete' and 'complete_doc' in class TokenBlockDataset | I found that we i use 'break_mode=complete' and 'break_mode=complete_doc', i got the same results.
It seemed that `complete_doc` also crossed documents
my fairseq==0.10.0
| open | 2023-02-23T12:53:57Z | 2023-02-23T13:06:08Z | https://github.com/facebookresearch/fairseq/issues/4990 | [
"question",
"needs triage"
] | ShiyuNee | 1 |
huggingface/transformers | pytorch | 36,562 | Stop output to stdout in streamers.py methods | ### System Info
transformers 4.48.3 on MacOS 15.4 or Ubuntu 6.8.0-1019-ibm using Python 3.11.11
### Who can help?
@gante et al:
I'm using the AsyncTextIteratorStreamer (and others) in streamers.py to receive tokens as they become available. However, the code is printing a status message to stdout for each iteration, showing:
Loop x secs: y device=y input_ids device=z
I could see this being enabled with an argument "show_progress" (defaulting to False to disable them), but I don't see a reason to interrupt applications' screen displays using these methods. I've tried printing what I've received, but it gets garbled with these other outputs intervening.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Call .generate() in a thread using an AsyncTextIterationStreamer and iterate through the returned values as shown in the comments of streamers.py:
```
>>> async def main():
... # Important: AsyncTextIteratorStreamer must be initialized inside a coroutine!
... streamer = AsyncTextIteratorStreamer(tok)
... generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20)
... thread = Thread(target=model.generate, kwargs=generation_kwargs)
... thread.start()
... generated_text = ""
... async for new_text in streamer:
... generated_text += new_text
>>> print(generated_text)
>>> asyncio.run(main())
An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,
```
### Expected behavior
Unless requested via argument to enable debugging, no output should be written to stdout by library methods. | open | 2025-03-05T14:08:53Z | 2025-03-19T17:38:31Z | https://github.com/huggingface/transformers/issues/36562 | [
"bug"
] | wnm3 | 6 |
Farama-Foundation/Gymnasium | api | 899 | [Question] AssertionError: Using `env.reset(seed=123)` is non-deterministic as the observations are not equivalent. | ### Question
Hi guys,
I have a problem that I don't understand. I have the following gymnasium code and I want to solve it using stable-baselines 3
```
import gymnasium as gym
from gymnasium import Env
from gymnasium.spaces import Discrete, Box, Tuple, MultiDiscrete
import numpy as np
import pandas as pd
from stable_baselines3 import PPO
from stable_baselines3 import DQN
from stable_baselines3 import A2C
import stable_baselines3 as sb3
import os
#Define parameters of the battery model BYD B-BOX 10.0
battery_capacity = 10 # Unit: [kWh]
charing_efficiency = 95 # Unit: [%]
maximum_charging_power = 10 #Unit: [kW]
pv_peak_power = 2.5 #Unit: [kW]
time_resolution = 15 * 60 #Unit: [s]
class RL_Env(Env):
def __init__(self):
import pandas as pd
import numpy as np
# Read the CSV file into a DataFrame
file_path = 'Data_Training_Exercise_6_ESHL_May_2020.csv' # Replace with the actual file path
df = pd.read_csv(file_path, sep=';')
# Extract the values from the DataFrame into an array with the dimensionality (31,96) for the second (pv_generation) and third column (electricity consumption)
self.pv_generation_data = df.iloc[:, 1].values.reshape((31, 96))
self.electricity_consumption_data = df.iloc[:, 2].values.reshape((31, 96))
# Create a Box for the action space
self.action_space = gym.spaces.Box(low=-1 * maximum_charging_power, high=maximum_charging_power, shape=(1,))
# Define observation space
low = np.array([0, 0, 0], dtype=np.float64)
high = np.array([3500, 3500, 1], dtype=np.float64)
self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)
self.battery_state_of_charge = 0
self.index_current_day = 0
self.index_current_time_slot_of_the_day = 0
#Reset the environment
def reset(self, **kwargs):
self.battery_state_of_charge = 0
#Choose a random day from the training data
import random
random_integer_day_index = random.randint(0, 20)
self.index_current_day = random_integer_day_index
        #Reset the index for the time slot counter of the day
self.index_current_time_slot_of_the_week = 0
self.observation_space = np.array([self.electricity_consumption_data [self.index_current_day, self.index_current_time_slot_of_the_day], self.electricity_consumption_data [self.index_current_day, self.index_current_time_slot_of_the_day], self.battery_state_of_charge])
info = {}
#Call the super method
super().reset(**kwargs)
return self.observation_space, info
def render(self):
pass
def step(self, action):
# Execute the action
action_battery_charging= action[0]
#Adjust the action due to technical constraint: not enough energy in the battery for discharging with the choosen action
if action_battery_charging * time_resolution < ((-1) * self.battery_state_of_charge * battery_capacity):
action_battery_charging = ((-1) * self.battery_state_of_charge * self.battery_state_of_charge * battery_capacity)/time_resolution
# Adjust the action due to technical constraint: not enough pv generated for charging with the choosen action
if action_battery_charging > self.pv_generation_data [self.index_current_day, self.index_current_time_slot_of_the_day]:
action_battery_charging = self.pv_generation_data [self.index_current_day, self.index_current_time_slot_of_the_day]
self.battery_state_of_charge = self.battery_state_of_charge + (action_battery_charging * time_resolution * charing_efficiency ) / (battery_capacity*3600000)
energy_balance = self.electricity_consumption_data [self.index_current_day, self.index_current_time_slot_of_the_day] + action_battery_charging - self.pv_generation_data [self.index_current_day, self.index_current_time_slot_of_the_day]
required_power_from_the_grid = energy_balance
if required_power_from_the_grid < 0:
required_power_from_the_grid = 0
# calculate state
observation_space = 0
# calculate reward
reward =0
if energy_balance < 0:
reward = (-1*energy_balance)
if energy_balance >=0:
reward = energy_balance
        #Define observation space
observation_space = np.array([self.electricity_consumption_data [self.index_current_day, self.index_current_time_slot_of_the_day], self.electricity_consumption_data [self.index_current_day, self.index_current_time_slot_of_the_day], self.battery_state_of_charge])
#Update index counters
self.index_current_time_slot_of_the_day = self.index_current_time_slot_of_the_day + 1
#Check end of the day
if self.index_current_time_slot_of_the_day >= 96 - 1:
terminated = True
truncuated = True
else:
terminated = False
truncuated = False
info = {}
return observation_space, reward, terminated,truncuated, info
#Create gymnasium environment
gym.register("battery-env-v0", lambda: RL_Env())
env = gym.make("battery-env-v0")
#Check the environment
check_environment = True
if check_environment == True:
from gymnasium.utils.env_checker import check_env
check_env(env.unwrapped)
from stable_baselines3.common.env_checker import check_env
check_env(env)
#Define the model directory (PPO, A2C, TD3, DQN)
string_run_name = "test_1"
models_dir = "Trained_RL_Models/" + string_run_name + "_PPO"
logdir = "Trained_RL_Models/" + string_run_name + "_PPO"
if not os.path.exists(models_dir):
os.makedirs(models_dir)
if not os.path.exists(logdir):
os.makedirs(logdir)
#Define the model directory (PPO, A2C, TD3, DQN)
model = PPO('MlpPolicy', env, verbose=1, learning_rate= 0.0003, ent_coef= 0.2) #Default values: ent_coef= 0.0, learning_rate= 0.0003
#train and save the model
model.learn(total_timesteps=100)
model.save(os.path.join(models_dir, 'trained_PPO_model'))
```
So I have one action variable which is the action to charge a battery or discharge it ranging from `[ -1 * maximum_charging_power ,maximum_charging_power]`. Then I have a 3-dimensional state space with the first 2 values ranging from 0 to 3500 (they are read from file `observation_space = np.array([self.electricity_consumption_data [self.index_current_day, self.index_current_time_slot_of_the_day], self.electricity_consumption_data [self.index_current_day, self.index_current_time_slot_of_the_day], self.battery_state_of_charge])` while the 3rd one is caluclated in every iteration and it ranges from 0 to 1.
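One detail about the `check_env` calls above: the checker resets the environment twice with the same seed (for example `reset(seed=123)`) and asserts that the two observations match. It fails here because the day index is drawn with the module-level `random`, which the seed never touches, and `super().reset(**kwargs)` (which seeds `self.np_random`) only runs after that draw. A minimal sketch of the usual fix, using `random.Random` as a stand-in for Gymnasium's `self.np_random`:

```python
import random

class SketchEnv:
    def __init__(self):
        self._rng = random.Random()

    def reset(self, seed=None):
        if seed is not None:
            # Gymnasium's super().reset(seed=seed) (re)seeds self.np_random
            # like this; it must run *before* any random draws.
            self._rng = random.Random(seed)
        day = self._rng.randint(0, 20)  # not the module-level random.randint
        return day

env = SketchEnv()
assert env.reset(seed=123) == env.reset(seed=123)  # identical under the same seed
```

In the environment above, calling `super().reset(seed=seed)` at the top of `reset` and drawing the day with `self.np_random.integers(0, 21)` instead of `random.randint(0, 20)` should satisfy the checker.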
When running the code I get the error from the environment checker "AssertionError: Using `env.reset(seed=123)` is non-deterministic as the observations are not equivalent." It is related to the reset method and I have no clue why it occurs and how I can get rid of it. What is meant by `seed=123`. I don't generate any random seed. | closed | 2024-01-29T17:31:02Z | 2024-01-29T17:39:53Z | https://github.com/Farama-Foundation/Gymnasium/issues/899 | [
"question"
] | PBerit | 1 |
zappa/Zappa | flask | 490 | [Migrated] Implement Locked Down "Production" Mode | Originally from: https://github.com/Miserlou/Zappa/issues/1301 by [Miserlou](https://github.com/Miserlou)
- Define and document separate modes/settings for "dev" and "production" operation modes
- Define/Implement IAM-minimum execution roles/policies
- Disable any public-facing error reporting
- Check for correct VPC/Ingress permissions when using VPC | closed | 2021-02-20T09:43:24Z | 2024-04-13T16:36:20Z | https://github.com/zappa/Zappa/issues/490 | [
"production mode",
"no-activity",
"auto-closed"
] | jneves | 2 |
kennethreitz/responder | graphql | 216 | Whats the difference between this and starlette ?! | I should be completly dumb ;-) ... sorry
But I can't see the things that "responder" does more than "starlette" ?
What are the added values ?
I really think it should be clarified in the doc ...
All highlighted features comes from starlette, no ? | closed | 2018-11-08T15:30:28Z | 2018-11-29T12:39:19Z | https://github.com/kennethreitz/responder/issues/216 | [] | manatlan | 3 |
xinntao/Real-ESRGAN | pytorch | 94 | GPU not working at runtime, CPU and memory maxed out | When I run the command
python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth --input inputs
the CPU and memory immediately max out, but the GPU does no work at all. I want the GPU to do the work. How can I solve this?

| closed | 2021-09-24T15:19:48Z | 2022-04-20T14:00:29Z | https://github.com/xinntao/Real-ESRGAN/issues/94 | [] | githublhc | 2 |
ultralytics/yolov5 | deep-learning | 12,969 | Is it possible to add ShuffleNetV2 as backbone in the official repo? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
According to the information I found, ShuffleNetV2 achieves a good balance between speed and accuracy, which would help YOLOv5 run well on a variety of devices.
So, is it possible to add ShuffleNetV2 as a backbone in the official repo?
[ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](https://www.researchgate.net/publication/318205093_ShuffleNet_An_Extremely_Efficient_Convolutional_Neural_Network_for_Mobile_Devices)
[Who is the king of lightweight CNN? Comprehensive evaluation in 7 dimensions mobilenet/shufflenet/ghostnet](https://www.bilibili.com/read/cv8801259/)
### Additional
_No response_ | closed | 2024-04-28T02:57:21Z | 2024-10-20T19:44:59Z | https://github.com/ultralytics/yolov5/issues/12969 | [
"question",
"Stale"
] | superbayes | 3 |
LibreTranslate/LibreTranslate | api | 401 | New line characters are returned as "\n" in HTML translations | Edit: never mind, this is just a formatting output issue of the client. | closed | 2023-02-04T23:09:27Z | 2023-02-04T23:13:07Z | https://github.com/LibreTranslate/LibreTranslate/issues/401 | [
"bug"
] | pierotofy | 0 |
marimo-team/marimo | data-visualization | 4,058 | The python module reactable does not work in marimo it seems | ### Describe the bug
using the **reactable-py** project: https://machow.github.io/reactable-py/get-started/index.html
**I am testing this snippet of code in a marimo cell and it does not work:**

```python
from reactable import Reactable, embed_css
from reactable.data import cars_93

embed_css()  # to put css into notebooks.

Reactable(
    cars_93[["manufacturer", "model", "type", "price"]],
    default_page_size=5,
    searchable=True,
    filterable=True,
)
```
@mscolnick can you check it out?
### Environment
python 3.9
marimo>=0.11.17
### Code to reproduce
```python
from reactable import Reactable, embed_css
from reactable.data import cars_93

embed_css()  # to put css into notebooks.

Reactable(
    cars_93[["manufacturer", "model", "type", "price"]],
    default_page_size=5,
    searchable=True,
    filterable=True,
)
``` | closed | 2025-03-11T12:15:36Z | 2025-03-11T12:39:22Z | https://github.com/marimo-team/marimo/issues/4058 | [
"bug"
] | Yasin197 | 1 |
pyqtgraph/pyqtgraph | numpy | 2,913 | Obtain the coordinates of a point cloud using the mouse in GLViewWidget with pyqtgraph. | I want to obtain the coordinates of a point cloud using the mouse in GLViewWidget. The point cloud is read using Open3D and displayed using pyqtgraph. opengl. However, I don't know how to obtain the 3D world coordinates, so I can only obtain the screen coordinates. The following is my point cloud reading and visualization code. I am using pyqtgraph version 0.13.3 and pyqt5 version 5.15.9. Can anyone help me? Thank you very much.
```python
def read_pointcloud(self):
    # print("test well")
    fileName, _ = QFileDialog.getOpenFileName(None, "Select a .pcd file", "", "PCD Files (*.pcd)")
    if fileName != '':
        self.pcd = o3d.io.read_point_cloud(fileName)
        pos_view = self.pcd.get_center()
        self.textEdit.clear()
        np_points = np.asarray(self.pcd.points)
        self.textEdit.append("File path: " + str(fileName))
        self.textEdit.append("Point count: " + str(int(np_points.size / 3)))
        self.plot = gl.GLScatterPlotItem()
        self.plot.setData(pos=np_points, color=(0.0, 1.0, 1.0, 1.0), size=5, pxMode=True)  # 0.05 indicates the point size
        self.openGLWidget.addItem(self.plot)
        self.openGLWidget.setCameraPosition(QtGui.QVector3D(pos_view[0], pos_view[1], pos_view[2]))
``` | open | 2024-01-09T05:57:18Z | 2024-01-09T06:00:58Z | https://github.com/pyqtgraph/pyqtgraph/issues/2913 | [] | dx-momo | 1
gee-community/geemap | streamlit | 1,242 | Add visualization options to goes_timelapse function | <!-- Please search existing issues to avoid creating duplicates. -->
### Description
Add one parameter to the goes_timelapse function so that the output visualization bands can be changed.
Currently only true color is available:
bands = ["CMI_C02", "CMI_GREEN", "CMI_C01"]
It would also be useful to have false color, because fires are more visible using these bands:
bands = ["CMI_C05", "CMI_C03", "CMI_GREEN"]
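The requested selection logic can be sketched as a small helper; the option name `vis` and the band lists follow this proposal, not any existing geemap API:

```python
def select_bands(vis="TrueColor"):
    """Return the GOES band combination for the requested visualization."""
    if vis == "TrueColor":
        return ["CMI_C02", "CMI_GREEN", "CMI_C01"]
    elif vis == "FalseColor":
        return ["CMI_C05", "CMI_C03", "CMI_GREEN"]
    raise ValueError(f"unknown vis option: {vis}")

assert select_bands() == ["CMI_C02", "CMI_GREEN", "CMI_C01"]
assert select_bands("FalseColor") == ["CMI_C05", "CMI_C03", "CMI_GREEN"]
```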
### Source code
def goes_timelapse(
roi=None,
out_gif=None,
start_date="2021-10-24T14:00:00",
end_date="2021-10-25T01:00:00",
data="GOES-17",
scan="full_disk",
dimensions=768,
framesPerSecond=10,
date_format="YYYY-MM-dd HH:mm",
xy=("3%", "3%"),
text_sequence=None,
font_type="arial.ttf",
font_size=20,
font_color="#ffffff",
add_progress_bar=True,
progress_bar_color="white",
progress_bar_height=5,
loop=0,
crs=None,
overlay_data=None,
overlay_color="black",
overlay_width=1,
overlay_opacity=1.0,
mp4=False,
fading=False,
    vis = "TrueColor",
**kwargs,
):
"""Create a timelapse of GOES data. The code is adapted from Justin Braaten's code: https://code.earthengine.google.com/57245f2d3d04233765c42fb5ef19c1f4.
Credits to Justin Braaten. See also https://jstnbraaten.medium.com/goes-in-earth-engine-53fbc8783c16
Args:
out_gif (str): The file path to save the gif.
start_date (str, optional): The start date of the time series. Defaults to "2021-10-24T14:00:00".
end_date (str, optional): The end date of the time series. Defaults to "2021-10-25T01:00:00".
data (str, optional): The GOES satellite data to use. Defaults to "GOES-17".
scan (str, optional): The GOES scan to use. Defaults to "full_disk".
roi (ee.Geometry, optional): The region of interest. Defaults to None.
dimensions (int, optional): a number or pair of numbers in format WIDTHxHEIGHT) Maximum dimensions of the thumbnail to render, in pixels. If only one number is passed, it is used as the maximum, and the other dimension is computed by proportional scaling. Defaults to 768.
frames_per_second (int, optional): Animation speed. Defaults to 10.
date_format (str, optional): The date format to use. Defaults to "YYYY-MM-dd HH:mm".
xy (tuple, optional): Top left corner of the text. It can be formatted like this: (10, 10) or ('15%', '25%'). Defaults to None.
text_sequence (int, str, list, optional): Text to be drawn. It can be an integer number, a string, or a list of strings. Defaults to None.
font_type (str, optional): Font type. Defaults to "arial.ttf".
font_size (int, optional): Font size. Defaults to 20.
font_color (str, optional): Font color. It can be a string (e.g., 'red'), rgb tuple (e.g., (255, 127, 0)), or hex code (e.g., '#ff00ff'). Defaults to '#000000'.
add_progress_bar (bool, optional): Whether to add a progress bar at the bottom of the GIF. Defaults to True.
progress_bar_color (str, optional): Color for the progress bar. Defaults to 'white'.
progress_bar_height (int, optional): Height of the progress bar. Defaults to 5. loop (int, optional): controls how many times the animation repeats. The default, 1, means that the animation will play once and then stop (displaying the last frame). A value of 0 means that the animation will repeat forever. Defaults to 0.
crs (str, optional): The coordinate reference system to use, e.g., "EPSG:3857". Defaults to None.
overlay_data (int, str, list, optional): Administrative boundary to be drawn on the timelapse. Defaults to None.
overlay_color (str, optional): Color for the overlay data. Can be any color name or hex color code. Defaults to 'black'.
overlay_width (int, optional): Line width of the overlay. Defaults to 1.
overlay_opacity (float, optional): Opacity of the overlay. Defaults to 1.0.
mp4 (bool, optional): Whether to save the animation as an mp4 file. Defaults to False.
fading (int | bool, optional): If True, add fading effect to the timelapse. Defaults to False, no fading. To add fading effect, set it to True (1 second fading duration) or to an integer value (fading duration).
        vis (str, optional): Whether to use the TrueColor or FalseColor band combination to generate the GOES timelapse. Defaults to TrueColor.
Raises:
Exception: Raise exception.
"""
try:
if "region" in kwargs:
roi = kwargs["region"]
if out_gif is None:
out_gif = os.path.abspath(f"goes_{random_string(3)}.gif")
if vis == "TrueColor":
bands = ["CMI_C02", "CMI_GREEN", "CMI_C01"]
elif vis == "FalseColor":
bands = ["CMI_C05", "CMI_C03", "CMI_GREEN"]
visParams = {
"bands": bands,
"min": 0,
"max": 0.8,
}
col = goes_timeseries(start_date, end_date, data, scan, roi)
col = col.select(bands).map(
lambda img: img.visualize(**visParams).set(
{
"system:time_start": img.get("system:time_start"),
}
)
)
if overlay_data is not None:
col = add_overlay(
col, overlay_data, overlay_color, overlay_width, overlay_opacity
)
if roi is None:
roi = ee.Geometry.Polygon(
[
[
[-159.5954, 60.4088],
[-159.5954, 24.5178],
[-114.2438, 24.5178],
[-114.2438, 60.4088],
]
],
None,
False,
)
if crs is None:
crs = col.first().projection()
videoParams = {
"bands": ["vis-red", "vis-green", "vis-blue"],
"min": 0,
"max": 255,
"dimensions": dimensions,
"framesPerSecond": framesPerSecond,
"region": roi,
"crs": crs,
}
if text_sequence is None:
text_sequence = image_dates(col, date_format=date_format).getInfo()
download_ee_video(col, videoParams, out_gif)
if os.path.exists(out_gif):
add_text_to_gif(
out_gif,
out_gif,
xy,
text_sequence,
font_type,
font_size,
font_color,
add_progress_bar,
progress_bar_color,
progress_bar_height,
duration=1000 / framesPerSecond,
loop=loop,
)
try:
reduce_gif_size(out_gif)
if isinstance(fading, bool):
fading = int(fading)
if fading > 0:
gif_fading(out_gif, out_gif, duration=fading, verbose=False)
except Exception as _:
pass
if mp4:
out_mp4 = out_gif.replace(".gif", ".mp4")
gif_to_mp4(out_gif, out_mp4)
return out_gif
except Exception as e:
raise Exception(e)
| closed | 2022-09-02T18:51:44Z | 2022-09-02T20:46:37Z | https://github.com/gee-community/geemap/issues/1242 | [
"Feature Request"
] | SerafiniJose | 1 |
opengeos/leafmap | streamlit | 156 | Add an Inspector tool for retrieving pixel value from COG and STAC | Using the TiTiler endpoint `/stac/point/{lon},{lat}`
#137 | closed | 2021-12-27T16:17:08Z | 2021-12-28T17:02:59Z | https://github.com/opengeos/leafmap/issues/156 | [
"Feature Request"
] | giswqs | 1 |
fastapi-users/fastapi-users | fastapi | 1,053 | _get_user() fails when no user in db | ```python
async def _get_user(self, statement: Select) -> Optional[UP]:
results = await self.session.execute(statement)
user = results.first()
```
fails at `.first()` when there are no records in the user table, and then the result is None | closed | 2022-08-01T18:13:29Z | 2022-08-01T18:39:15Z | https://github.com/fastapi-users/fastapi-users/issues/1053 | [] | kamikaze | 1
Kanaries/pygwalker | plotly | 429 | pygwalker 0.4.5 error: VM1227:1 Uncaught (in promise) SyntaxError: Unexpected token '<', "<html><tit"... is not valid JSON | pygwalker 0.4.5
python 3.9.0
streamlit 1.30.0
I want to show only the chart on Streamlit, so I used this code I found:
```
renderer = get_pyg_renderer(chart_data,vis_spec)
renderer.render_pure_chart(0)
```
or
```
renderer = get_pyg_renderer(chart_data)
renderer.render_explore()
```
but it got error VM1227:1 Uncaught (in promise) SyntaxError: Unexpected token '<', "<html><tit"... is not valid JSON
When I try the different method of showing the chart below, it works, but I can't show only the chart using it:
```
pyg_html = pyg.to_html(chart_data,spec=vis_spec)
components.html(pyg_html,scrolling=True,height=500)
```



| closed | 2024-02-07T02:49:58Z | 2024-02-07T03:13:11Z | https://github.com/Kanaries/pygwalker/issues/429 | [] | Renaldy002 | 2 |
huggingface/datasets | nlp | 6,810 | Allow deleting a subset/config from a no-script dataset | As proposed by @BramVanroy, it would be neat to have this functionality through the API. | closed | 2024-04-15T07:53:26Z | 2025-01-11T18:40:40Z | https://github.com/huggingface/datasets/issues/6810 | [
"enhancement"
] | albertvillanova | 3 |
ranaroussi/yfinance | pandas | 1,356 | Exception when do ticker.info | I'm using Ubuntu 20.04, Python 3.8.10, yfinance-0.2.6
>>> import yfinance as yf
>>> ticker = yf.Ticker("TSLA")
>>> ticker.info
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/.local/lib/python3.8/site-packages/yfinance/ticker.py", line 142, in info
return self.get_info()
File "/home/user/.local/lib/python3.8/site-packages/yfinance/base.py", line 1220, in get_info
data = self._quote.info
File "/home/user/.local/lib/python3.8/site-packages/yfinance/scrapers/quote.py", line 96, in info
self._scrape(self.proxy)
File "/home/user/.local/lib/python3.8/site-packages/yfinance/scrapers/quote.py", line 125, in _scrape
json_data = self._data.get_json_data_stores(proxy=proxy)
File "/home/user/.local/lib/python3.8/site-packages/yfinance/data.py", line 40, in wrapped
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/yfinance/data.py", line 256, in get_json_data_stores
stores = decrypt_cryptojs_aes_stores(data)
File "/home/user/.local/lib/python3.8/site-packages/yfinance/data.py", line 190, in decrypt_cryptojs_aes_stores
raise Exception("yfinance failed to decrypt Yahoo data response with hardcoded keys, contact developers")
Exception: yfinance failed to decrypt Yahoo data response with hardcoded keys, contact developers
What am I doing wrong here? | closed | 2023-01-26T17:10:12Z | 2023-01-26T17:11:13Z | https://github.com/ranaroussi/yfinance/issues/1356 | [] | robinhftw | 1 |
serengil/deepface | machine-learning | 501 | Precision problem | Hi, thanks for your work. I have two questions to ask.
1. After L2-norm normalization, the Euclidean distance of a pair of vectors is directly determined by their cosine similarity. In fact, the distances in the framework conform to cosine = (euclidean_l2 * euclidean_l2) / 2. However, the thresholds do not follow this equivalence relation. For example, 'OpenFace': {'cosine': 0.10, 'euclidean': 0.55, 'euclidean_l2': 0.55}, where the cosine threshold is a bit low and should be 0.15. For example, Akhmed_Zakayev_0001.jpg and Akhmed_Zakayev_0003.jpg are true under euclidean_l2, but false under cosine.
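For point 1, the relation is easy to verify numerically; this quick sketch (not DeepFace code) also shows why an `euclidean_l2` threshold of 0.55 corresponds to about 0.15 on the cosine-distance scale:

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

a = l2_normalize([0.3, -1.2, 0.5])
b = l2_normalize([0.1, -0.9, 0.8])

cos_dist = 1.0 - sum(x * y for x, y in zip(a, b))
eucl = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# For unit vectors: ||a - b||**2 = 2 - 2*(a . b), so cosine = euclidean_l2**2 / 2
assert abs(cos_dist - eucl * eucl / 2) < 1e-12
print(0.55 ** 2 / 2)  # → 0.15125, the cosine threshold equivalent to euclidean_l2 = 0.55
```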
2. When I tested the lfw dataset with different models, I found that the accuracy was not so high. On the contrary, the accuracy rate is even a bit low. | closed | 2022-06-28T12:48:31Z | 2022-06-28T13:31:21Z | https://github.com/serengil/deepface/issues/501 | [
"question"
] | yyq-GitHub | 4 |
python-visualization/folium | data-visualization | 1,730 | SideBySideLayers plugin not working | **Describe the bug**
The SideBySideLayers plugin for folium v0.14.0 is not working properly. The slider can't be moved.
@Conengmo @fralc
**To Reproduce**
https://colab.research.google.com/github/python-visualization/folium/blob/main/examples/Plugins.ipynb
```bash
!pip install -U folium
```
```python
import folium
from folium import plugins
m = folium.Map(location=(30, 20), zoom_start=4)
layer_right = folium.TileLayer('openstreetmap')
layer_left = folium.TileLayer('cartodbpositron')
sbs = plugins.SideBySideLayers(layer_left=layer_left, layer_right=layer_right)
layer_left.add_to(m)
layer_right.add_to(m)
sbs.add_to(m)
m
```

It should work like this:
https://ipyleaflet.readthedocs.io/en/latest/controls/split_map_control.html

| closed | 2023-02-18T19:47:59Z | 2023-09-08T16:08:15Z | https://github.com/python-visualization/folium/issues/1730 | [] | giswqs | 7 |
zappa/Zappa | flask | 927 | [Migrated] Bump boto3/botocore versions | Originally from: https://github.com/Miserlou/Zappa/issues/2193 by [ian-whitestone](https://github.com/ian-whitestone)
# Description
In support of #2188, this PR bumps the versions of boto3/botocore, so that we have access to the new Docker image functionality.
# GitHub Issues
Related #2188
# Testing
I created a new virtual env with the new dependencies and ran several Zappa workflows: `deploy`, `update`, `status`, and `undeploy`.


Any other tests you'd recommend running? | closed | 2021-02-20T13:24:38Z | 2024-04-13T19:36:43Z | https://github.com/zappa/Zappa/issues/927 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
explosion/spaCy | nlp | 13,753 | Compiler cl cannot compile programs. |
### Discussed in https://github.com/explosion/spaCy/discussions/8226
## Environment
- Windows 11
- Python 3.13.2
- Pip 25.0.1
- Visual Studio 2022
- Windows 11 SDK
- MSVC v143 - VS 2022 C++ x64/x86 build tools
## How to reproduce the problem
1. `python -m venv venv`
2. `pip install spacy`
## Output of the error
```sh
Collecting spacy
Using cached spacy-3.8.2.tar.gz (1.3 MB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [96 lines of output]
Ignoring numpy: markers 'python_version < "3.9"' don't match your environment
Collecting setuptools
Using cached setuptools-75.8.0-py3-none-any.whl.metadata (6.7 kB)
Collecting cython<3.0,>=0.25
Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB)
Collecting cymem<2.1.0,>=2.0.2
Using cached cymem-2.0.11-cp313-cp313-win_amd64.whl.metadata (8.8 kB)
Collecting preshed<3.1.0,>=3.0.2
Using cached preshed-3.0.9.tar.gz (14 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting murmurhash<1.1.0,>=0.28.0
Using cached murmurhash-1.0.12-cp313-cp313-win_amd64.whl.metadata (2.2 kB)
Collecting thinc<8.4.0,>=8.3.0
Using cached thinc-8.3.2.tar.gz (193 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'error'
error: subprocess-exited-with-error
pip subprocess to install build dependencies did not run successfully.
exit code: 1
[58 lines of output]
Ignoring numpy: markers 'python_version < "3.9"' don't match your environment
Collecting setuptools
Using cached setuptools-75.8.0-py3-none-any.whl.metadata (6.7 kB)
Collecting cython<3.0,>=0.25
Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB)
Collecting murmurhash<1.1.0,>=1.0.2
Using cached murmurhash-1.0.12-cp313-cp313-win_amd64.whl.metadata (2.2 kB)
Collecting cymem<2.1.0,>=2.0.2
Using cached cymem-2.0.11-cp313-cp313-win_amd64.whl.metadata (8.8 kB)
Collecting preshed<3.1.0,>=3.0.2
Using cached preshed-3.0.9.tar.gz (14 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting blis<1.1.0,>=1.0.0
Using cached blis-1.0.2-cp313-cp313-win_amd64.whl.metadata (7.8 kB)
Collecting numpy<2.1.0,>=2.0.0
Using cached numpy-2.0.2.tar.gz (18.9 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
Preparing metadata (pyproject.toml) did not run successfully.
exit code: 1
[12 lines of output]
+ C:\Users\M361-1\Developer\TRT23\nlp\venv_spacy\Scripts\python.exe C:\Users\M361-1\AppData\Local\Temp\pip-install-mlt_1vm3\numpy_59db6106a8f04eba871fd92c04f756d5\vendored-meson\meson\meson.py setup C:\Users\M361-1\AppData\Local\Temp\pip-install-mlt_1vm3\numpy_59db6106a8f04eba871fd92c04f756d5 C:\Users\M361-1\AppData\Local\Temp\pip-install-mlt_1vm3\numpy_59db6106a8f04eba871fd92c04f756d5\.mesonpy-gf7b5yu8 -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\M361-1\AppData\Local\Temp\pip-install-mlt_1vm3\numpy_59db6106a8f04eba871fd92c04f756d5\.mesonpy-gf7b5yu8\meson-python-native-file.ini
The Meson build system
Version: 1.4.99
Source dir: C:\Users\M361-1\AppData\Local\Temp\pip-install-mlt_1vm3\numpy_59db6106a8f04eba871fd92c04f756d5
Build dir: C:\Users\M361-1\AppData\Local\Temp\pip-install-mlt_1vm3\numpy_59db6106a8f04eba871fd92c04f756d5\.mesonpy-gf7b5yu8
Build type: native build
Project name: NumPy
Project version: 2.0.2
..\meson.build:1:0: ERROR: Compiler cl cannot compile programs.
A full log can be found at C:\Users\M361-1\AppData\Local\Temp\pip-install-mlt_1vm3\numpy_59db6106a8f04eba871fd92c04f756d5\.mesonpy-gf7b5yu8\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
pip subprocess to install build dependencies did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
```
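A likely cause (an assumption, since the full Meson log isn't shown): Meson's "Compiler cl cannot compile programs" usually means `cl.exe` is not usable from the shell that ran pip, e.g. a regular prompt instead of a Visual Studio Developer Command Prompt. A small, hypothetical pre-flight check (the compiler names are per-platform assumptions):

```python
import shutil

def find_c_compiler(candidates=("cl", "cc", "gcc", "clang")):
    """Return (name, path) of the first C compiler visible on PATH, else (None, None)."""
    for name in candidates:
        path = shutil.which(name)
        if path:
            return name, path
    return None, None

name, path = find_c_compiler()
print("no C compiler on PATH; run pip from a VS Developer Command Prompt"
      if name is None else f"using {name} at {path}")
```

On Windows, running the same `pip install` from the "x64 Native Tools Command Prompt for VS 2022" is what puts `cl` on PATH.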
| closed | 2025-02-17T14:03:03Z | 2025-03-21T00:03:07Z | https://github.com/explosion/spaCy/issues/13753 | [] | MullerEsposito | 2 |
amdegroot/ssd.pytorch | computer-vision | 544 | AttributeError: 'Namespace' object has no attribute 'COCO_ROOT' | Hey guys! I got this issue:
```
Traceback (most recent call last):
  File "test.py", line 102, in <module>
    test_voc()
  File "test.py", line 92, in test_voc
    testset = COCODetection(args.COCO_ROOT, 'trainval35k', None, COCOAnnotationTransform)
AttributeError: 'Namespace' object has no attribute 'COCO_ROOT'
```
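The error means the parsed argument namespace has no attribute named `COCO_ROOT`: argparse only creates attributes for arguments the script defines, under their `dest` names. A minimal illustration (the `--coco_root` flag and its default are hypothetical, not taken from the actual script):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--coco_root", default="data/coco/")  # hypothetical flag
args = parser.parse_args([])

print(hasattr(args, "COCO_ROOT"))  # → False
print(args.coco_root)              # → data/coco/
```

So the fix is to reference whatever dest the script actually defines, or add the missing argument, rather than accessing `args.COCO_ROOT`.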
Could you please give me some advice on how to fix it? | closed | 2021-05-14T03:03:13Z | 2021-05-14T03:09:44Z | https://github.com/amdegroot/ssd.pytorch/issues/544 | [] | Zhezhefufu | 1
plotly/plotly.py | plotly | 4,655 | px.line erratic and wrong for large x value ranges | # Problem
Plotting a straight line from ($-X$, 0) to ($+X$, 1) for various (large) $X$ within a much smaller viewing window produces wrong and inconsistent results, as demonstrated in the minimal non-working examples below (run in a Jupyter notebook with plotly.js v2.27.0).
I doubt that this behavior can be explained by intrinsic rounding problems, since
```python
>>> np.interp([-0.1,0.1],[-1e17,+1e17],[0,1])
array([0.5, 0.5])
>>> np.interp([-0.1,0.1],[-1e18,+1e18],[0,1])
array([0.5, 0.5])
```
# Examples
## Data frame
```python
df = pd.DataFrame(dict(x=[-1e17,+1e17,-1e18,+1e18],
y=[0,1,0,1],
xlimit=['±1e17','±1e17','±1e18','±1e18']))
```
## Very tight plotting range (symmetric 0.1)
```python
fig = px.line(df,x='x',y='y',color='xlimit',
range_x=[-.1,.1],
range_y=[0.4,0.6],
width=400,
height=400,)
fig.update_traces(opacity=0.5)
fig.show()
```

## Slightly wider plotting range (symmetric 0.3)
```python
fig = px.line(df,x='x',y='y',color='xlimit',
range_x=[-.3,.3],
range_y=[0.4,0.6],
width=400,
height=400,)
fig.update_traces(opacity=0.5)
fig.show()
```

## Another plotting range (symmetric 1)
```python
fig = px.line(df,x='x',y='y',color='xlimit',
range_x=[-1,1],
range_y=[0.4,0.6],
width=400,
height=400,)
fig.update_traces(opacity=0.5)
fig.show()
```

## Another even wider plotting range (symmetric 3)
```python
fig = px.line(df,x='x',y='y',color='xlimit',
range_x=[-3,3],
range_y=[0.4,0.6],
width=400,
height=400,)
fig.update_traces(opacity=0.5)
fig.show()
```

## Apparently wide enough plotting range (symmetric 3.2)
```python
fig = px.line(df,x='x',y='y',color='xlimit',
range_x=[-3.2,3.2],
range_y=[0.4,0.6],
width=400,
height=400,)
fig.update_traces(opacity=0.5)
fig.show()
```

As far as my tests go, for any `range_x` that is wider than about ±3, the plotting is rendered correctly:
```python
fig = px.line(df,x='x',y='y',color='xlimit',
range_x=[-1e17,1e17],
range_y=[0,1],
width=400,
height=400,)
fig.update_traces(opacity=0.5)
fig.show()
```

| open | 2024-07-05T15:54:53Z | 2024-08-13T13:21:51Z | https://github.com/plotly/plotly.py/issues/4655 | [
"bug",
"P3"
] | eisenlohr | 0 |
strawberry-graphql/strawberry | django | 2,889 | Improve support for `Info` and similar when used with `TYPE_CHECKING` blocks | Basically a proper way to fix this: #2858 and #2857
We decided to check for `Info` via string comparison and throw a warning when `Info` is imported inside a `TYPE_CHECKING` block
I decided not to do that in #2858 because I wanted to ship the removal of the warning as soon as possible and also make sure the warning is triggered in the right place 😊 | open | 2023-06-25T20:35:15Z | 2025-03-20T15:56:15Z | https://github.com/strawberry-graphql/strawberry/issues/2889 | [] | patrick91 | 0 |
deepspeedai/DeepSpeed | pytorch | 6,972 | [BUG] libaio on amd node | Hi, I installed libaio as
`apt install libaio-dev`
And I can see both .so and .h exist
```
root@b6410ec8bb69:/code/DeepSpeed# find / -name "libaio.so*" 2>/dev/null
/usr/lib/x86_64-linux-gnu/libaio.so.1
/usr/lib/x86_64-linux-gnu/libaio.so
/usr/lib/x86_64-linux-gnu/libaio.so.1.0.1
root@b6410ec8bb69:/code/DeepSpeed# find / -name "libaio.h" 2>/dev/null
/usr/include/libaio.h
```
And I set up the flags as:
```
echo 'export CFLAGS="-I/usr/include"' >> ~/.bashrc
echo 'export LDFLAGS="-L/usr/lib"' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"' >> ~/.bashrc
source ~/.bashrc
```
but when I do ds_report, it says async_io is not compatible
```
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
[WARNING] gds is not compatible with ROCM
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn is not compatible with ROCM
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch']
torch version .................... 2.3.0a0+gitd2f9472
deepspeed install path ........... ['/code/DeepSpeed/deepspeed']
deepspeed info ................... 0.16.2+unknown, unknown, unknown
torch cuda version ............... None
torch hip version ................ 6.2.41134-65d174c3e
nvcc version ..................... None
deepspeed wheel compiled w. ...... torch 2.3, hip 6.2
shared memory (/dev/shm) size .... 910.48 GB
``` | open | 2025-01-25T01:59:00Z | 2025-02-05T16:54:27Z | https://github.com/deepspeedai/DeepSpeed/issues/6972 | [
"bug",
"training"
] | GuanhuaWang | 3 |
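A note on the report above: the exports added to `~/.bashrc` only affect shells started (or re-sourced) afterwards, so `ds_report` has to run in a shell where those variables are actually set, and the async_io check needs to find both the header and the shared object. A minimal sketch of the existence check, using the paths from the report:

```python
import os

def missing(paths):
    """Return the subset of paths that do not exist on disk."""
    return [p for p in paths if not os.path.exists(p)]

# Paths from the report; both were found there, so the remaining suspect is the
# environment (CFLAGS/LDFLAGS/LD_LIBRARY_PATH) not being set in the shell
# that ran ds_report.
required = [
    "/usr/include/libaio.h",
    "/usr/lib/x86_64-linux-gnu/libaio.so",
]
gaps = missing(required)
print(gaps or "libaio header and shared object both present")
```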
wkentaro/labelme | deep-learning | 890 | [BUG] Missing `self.update()` function in the end of `canvas.addPointToEdge` and `canvas.removeSelectedPoint` | **Describe the bug**
When using the shortcut of `add_point_to_line` (Ctrl+Shift+P), the polygon not update immediately until have mousemove event.
**Expected behavior**
Fixed by adding `self.update()` in the end of both functions:
```python
# labelme\widgets\canvas.py:
# line: 304
def addPointToEdge(self):
shape = self.prevhShape
index = self.prevhEdge
point = self.prevMovePoint
if shape is None or index is None or point is None:
return
shape.insertPoint(index, point)
shape.highlightVertex(index, shape.MOVE_VERTEX)
self.hShape = shape
self.hVertex = index
self.hEdge = None
self.movingShape = True
+++++++++++++++++
+ self.update() +
+++++++++++++++++
def removeSelectedPoint(self):
shape = self.prevhShape
point = self.prevMovePoint
if shape is None or point is None:
return
index = shape.nearestVertex(point, self.epsilon)
shape.removePoint(index)
# shape.highlightVertex(index, shape.MOVE_VERTEX)
self.hShape = shape
self.hVertex = None
self.hEdge = None
self.movingShape = True # Save changes
+++++++++++++++++
+ self.update() +
+++++++++++++++++
```
| open | 2021-07-18T05:21:10Z | 2022-09-26T14:33:47Z | https://github.com/wkentaro/labelme/issues/890 | [
"issue::bug",
"priority: medium"
] | HowcanoeWang | 0 |
microsoft/unilm | nlp | 847 | Question about DiT | I want to train on my own custom table dataset. Where do I modify the data categories?
Looking forward to your reply. | open | 2022-09-01T08:12:19Z | 2022-09-01T08:13:06Z | https://github.com/microsoft/unilm/issues/847 | [] | ganggang233 | 0 |
graphistry/pygraphistry | jupyter | 208 | [ENH] Warning cleanup | * Good news: The new GHA will reject on flake8 errors, and a bunch are on by default via the cleanup in https://github.com/graphistry/pygraphistry/pull/206 !
* Less good news: the skip list still contains some codes with larger warning counts:
- [ ] E121
- [ ] E123
- [ ] E128
- [ ] E144
- [ ] E201
- [ ] E202
- [ ] E203
- [ ] E231
- [ ] E251
- [ ] E265
- [ ] E301
- [ ] E302
- [ ] E303
- [ ] E401
- [ ] E501
- [ ] E722
- [ ] F401
- [ ] W291
- [ ] W293 | open | 2021-02-08T06:55:21Z | 2021-02-08T06:55:36Z | https://github.com/graphistry/pygraphistry/issues/208 | [
"enhancement",
"help wanted",
"good-first-issue"
] | lmeyerov | 0 |
RobertCraigie/prisma-client-py | asyncio | 473 | Ignore certain directories / files when copying the package | <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
https://github.com/RobertCraigie/prisma-client-py/discussions/470#discussioncomment-3517691
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
We should ignore `__pycache__` directories and `*.pyc` files. | closed | 2022-09-01T21:11:02Z | 2022-09-03T07:36:23Z | https://github.com/RobertCraigie/prisma-client-py/issues/473 | [
"bug/0-needs-info",
"kind/bug",
"priority/high",
"level/unknown",
"topic: generation"
] | RobertCraigie | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 273 | Abnormal inference with Alpaca Plus 7B in the web UI | Thank you for using the issue template. Please follow the steps below to provide the relevant information; issues with relatively complete information will be handled first. Thanks for your cooperation.
*Tip: put an x inside [ ] to tick a box. Delete these two lines when asking. Keep only the options that apply and delete the rest.*
### Describe the problem in detail
*Please describe the problem you encountered as specifically as possible. This will help us locate the issue faster.*
The web UI starts fine, but at inference time the answers are unrelated to the prompts.
I am running `python server.py --model llama-7b-hf --lora chinese_alpaca_plus_lora_7b`
Token generation shown in the CLI looks completely normal.
### Screenshots or logs

*(If necessary) please provide text logs or screenshots so we can better understand the details.*
### Required checklist
- [x] Which model: Alpaca **(keep only the one you are asking about)**
- [x] Issue type: **(keep only the one you are asking about)**
  - Output-quality issue
- [ ] Since the related dependencies are updated frequently, please make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [ ] I have read the [FAQ](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) section and searched existing issues without finding a similar problem or solution
- [ ] Third-party plugin issue: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); it is also recommended to look for solutions in the corresponding projects
| closed | 2023-05-08T13:25:31Z | 2023-05-18T22:02:15Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/273 | [
"stale"
] | jazzlee008 | 3 |
huggingface/datasets | deep-learning | 6,563 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py) | ### Describe the bug
Yep, it's not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_size 4 --num_train_epochs 1 --learning_rate 1.41e-5 --gradient_accumulation_steps 8 --seq_length 4096 --output_dir output --log_with wandb
Traceback (most recent call last):
File "/home/trainer/sft_train.py", line 22, in <module>
from datasets import load_dataset
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 66, in <module>
from .arrow_reader import ArrowReader
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_reader.py", line 30, in <module>
from .download.download_config import DownloadConfig
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/download_manager.py", line 31, in <module>
from ..utils import tqdm as hf_tqdm
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/__init__.py", line 19, in <module>
from .info_utils import VerificationMode
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 5, in <module>
from huggingface_hub.utils import insecure_hashlib
ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/home/trainer/llm-train/lib/python3.8/site-packages/huggingface_hub/utils/__init__.py)
```
### Steps to reproduce the bug
Using `datasets==2.16.1` and `huggingface_hub== 0.17.3`, load a dataset with `load_dataset`.
### Expected behavior
The dataset should be (downloaded - if needed - and) returned.
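One hedged way to narrow this down: check which installation of the package the failing interpreter actually resolves. A stale or shadowed copy on `sys.path`, or a `huggingface_hub` older than the minimum that `datasets` 2.16 pins, would both produce this kind of ImportError (the exact minimum version is an assumption worth checking against the `datasets` requirements).

```python
import importlib

def resolve(modname):
    """Show which file and version a module name actually resolves to here."""
    mod = importlib.import_module(modname)
    return (getattr(mod, "__version__", "unknown"),
            getattr(mod, "__file__", "built-in"))

# Demonstrated with a stdlib module; for this report you would call
# resolve("huggingface_hub") and check that the path points into the venv
# holding the version you expect.
print(resolve("json"))
```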
### Environment info
```text
trainer@a311ae86939e:/mnt$ pip show datasets
Name: datasets
Version: 2.16.1
Summary: HuggingFace community-driven open-source library of datasets
Home-page: https://github.com/huggingface/datasets
Author: HuggingFace Inc.
Author-email: thomas@huggingface.co
License: Apache 2.0
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: packaging, pyyaml, multiprocess, pyarrow-hotfix, pandas, pyarrow, xxhash, dill, numpy, aiohttp, tqdm, fsspec, requests, filelock, huggingface-hub
Required-by: trl, lm-eval, evaluate
trainer@a311ae86939e:/mnt$ pip show huggingface_hub
Name: huggingface-hub
Version: 0.17.3
Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub
Home-page: https://github.com/huggingface/huggingface_hub
Author: Hugging Face, Inc.
Author-email: julien@huggingface.co
License: Apache
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: requests, pyyaml, packaging, typing-extensions, tqdm, filelock, fsspec
Required-by: transformers, tokenizers, peft, evaluate, datasets, accelerate
trainer@a311ae86939e:/mnt$ huggingface-cli env
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.17.3
- Platform: Linux-6.5.13-7-MANJARO-x86_64-with-glibc2.29
- Python version: 3.8.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/trainer/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: wasertech
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.1.2
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 10.2.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.24.4
- pydantic: N/A
- aiohttp: 3.9.1
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/trainer/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/trainer/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/trainer/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
``` | closed | 2024-01-06T02:28:54Z | 2024-03-14T02:59:42Z | https://github.com/huggingface/datasets/issues/6563 | [] | wasertech | 7 |
andrew-hossack/dash-tools | plotly | 44 | [BUG] dashtools heroku --update error | Error creating requirements.txt with a cp950 decode error (I checked and there isn't any cp950-specific content in the file) while running "dashtools heroku --deploy". | closed | 2022-07-22T01:52:06Z | 2022-07-23T00:48:57Z | https://github.com/andrew-hossack/dash-tools/issues/44 | [] | andrew-hossack | 2
apify/crawlee-python | web-scraping | 347 | add_request does not support the label argument | The "Add Requests" helper does not support the label argument, so doing what the [refactoring](https://crawlee.dev/python/docs/introduction/refactoring) introduction describes is not possible if you have to use information from the site to generate links, rather than the links already being present on the page.
For example, suppose I use a regex to pull out a bunch of hashes that are on a page, construct another URL from each hash, and want to scrape those URLs. That is not possible with enqueue_links, since it only works for links present on the page, and you cannot attach a label with add_requests, so all of the splitting logic for them has to live in the `@crawler.router.default_handler` function. | closed | 2024-07-23T14:11:39Z | 2024-07-23T15:18:56Z | https://github.com/apify/crawlee-python/issues/347 | [] | MrTyton | 3
marcomusy/vedo | numpy | 1,203 | Reading DXF files. | Hi Marco!
Sorry if I asked this before, but is it possible to read 2D/3D meshes from a DXF file directly into vedo?
I'm currently using `ezdxf` and manually converting it into a `vedo` mesh, but if it could be read directly, it'd be great!
Thanks in advance! | closed | 2024-11-26T10:17:45Z | 2024-12-02T20:19:00Z | https://github.com/marcomusy/vedo/issues/1203 | [] | ManuGraiph | 4 |
autogluon/autogluon | scikit-learn | 4,978 | Support PyArrow data types | ## Description
Since Pandas 2.0, Pandas supports PyArrow data types.
In local benchmarks, this significantly improved read times for a large parquet from 757 ms to 139 ms.
```python
data = pd.read_parquet(
"validation.parquet",
    dtype_backend='pyarrow'  # remove this argument for the default numpy backend
)
```
However, the subsequent TabularPredictor.fit() throws warnings and fails to process these features:
```
Cannot interpret 'int8[pyarrow]' as a data type
Warning: dtype int8[pyarrow] is not recognized as a valid dtype by numpy! AutoGluon may incorrectly handle this feature...
```
Can AutoGluon support PyArrow dtypes in feature preprocessing and training/prediction?
## References
- https://pandas.pydata.org/docs/user_guide/pyarrow.html
- https://medium.com/@santiagobasulto/pandas-2-0-performance-comparison-3f56b4719f58
| open | 2025-03-12T14:32:50Z | 2025-03-12T14:34:33Z | https://github.com/autogluon/autogluon/issues/4978 | [
"enhancement"
] | james-yun | 0 |
milesmcc/shynet | django | 199 | Shynet on same server with Nginx and SSL | Is there an example or documentation on how to set up Shynet on the same server as Nginx? I installed Shynet (Docker) and it is working (I can access it on http://domain:8080). But I need a proxy config so that secure traffic on https://domain:8080 is forwarded to the working HTTP Shynet.
In my 443 SSL server block I added several combinations of:
```nginx
location /shynet/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    proxy_pass http://localhost:8080;
}
```
But that doesn't work (404 resource not found).
I did see requests coming in while testing:
```
ERROR Invalid HTTP_HOST header: '127.0.0.1:8080'. You may need to add '127.0.0.1' to ALLOWED_HOSTS.
ERROR Invalid HTTP_HOST header: '0.0.0.0:8080'. You may need to add '0.0.0.0' to ALLOWED_HOSTS.
```
I added these to ALLOWED_HOSTS and the errors are gone but I keep getting 404's
Is there an example somewhere of how to use Shynet on the same machine as the Nginx webserver?
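Not a complete answer, but one likely cause of the 404s (an assumption, since the full server block isn't shown): with `proxy_pass http://localhost:8080;` (no URI part), nginx forwards the original `/shynet/...` path unchanged, and Shynet has no routes under `/shynet/`. A trailing slash on the proxy_pass URI makes nginx replace the matched prefix:

```nginx
location /shynet/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    # Trailing slash: the matched /shynet/ prefix is replaced with /,
    # so the Django app sees / rather than /shynet/
    proxy_pass http://localhost:8080/;
}
```

Note that URLs the app itself generates may still assume it is served at the root, so a dedicated subdomain is often the simpler setup.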
| closed | 2022-01-29T23:34:48Z | 2022-02-05T12:27:07Z | https://github.com/milesmcc/shynet/issues/199 | [] | majodi | 2 |
quantumlib/Cirq | api | 7,091 | [Question]: Usage of Graph Algorithms and Any Slowdowns? | Hi there,
I'm interested in understanding if `cirq-core` depends on any graph algorithms from its usage of NetworkX? If so,
- What algorithms are used for what purpose?
- What graph sizes are they being used with?
- Have users experienced any slowdowns or issues with algorithms provided by NetworkX? (Speed, algorithm availability, etc)
Furthermore, would users be interested in accelerated nx algorithms via a GPU backend? This would involve zero code change.
Any insight into this topic would be greatly appreciated! Thank you. | open | 2025-02-24T17:17:03Z | 2025-03-19T17:44:17Z | https://github.com/quantumlib/Cirq/issues/7091 | [
"kind/question"
] | nv-rliu | 0 |
hankcs/HanLP | nlp | 763 | Running DemoTextClassificationFMeasure.java, can't understand the output | <!--
Notes and the version number are required, otherwise no reply. If you want a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer:
  - [Home page docs](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that an open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I put an x in the brackets to confirm the items above.
## Version
master branch
Current latest version: hanlp-1.5.4.jar
Version I am using: hanlp-1.5.3.jar
<!-- The above is required; free form below -->
[Question]
I ran DemoTextClassificationFMeasure.java from the open-source project (it demonstrates splitting the data into a training set and a test set for a more rigorous evaluation),
but I can't understand the output. I don't know what this test and these numbers mean. Could you explain?
[Actual output]
```
Mode: training set
Text encoding: UTF-8
Root directory: data/test/ChnSentiCorp情感分析酒店评论
Loading...
[正面 (positive)]...100.00% 1800 documents
[负面 (negative)]...100.00% 1800 documents
Took 21797 ms to load 2 categories, 3600 documents in total
Raw dataset size: 3600
Selecting features with the chi-square test... took 2504 ms, selected features: 5486 / 14103 = 38.90%
Naive Bayes statistics finished
Mode: test set
Text encoding: UTF-8
Root directory: data/test/ChnSentiCorp情感分析酒店评论
Loading...
[正面 (positive)]...100.00% 200 documents
[负面 (negative)]...100.00% 200 documents
Took 1541 ms to load 2 categories, 400 documents in total
P R F1 A
82.63 88.00 85.23 84.75 positive (正面)
87.17 81.50 84.24 84.75 negative (负面)
84.90 84.75 84.82 84.75 avg.
data size = 400, speed = 44444.44 doc/s
```
How should I understand the part below? What does it mean?

| P | R | F1 | A | class |
|-------|-------|-------|-------|----------|
| 82.63 | 88.00 | 85.23 | 84.75 | positive |
| 87.17 | 81.50 | 84.24 | 84.75 | negative |
| 84.90 | 84.75 | 84.82 | 84.75 | avg. |

data size = 400, speed = 44444.44 doc/s
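For reference, the four columns are the standard classification metrics: precision (P), recall (R), their harmonic mean F1, and accuracy (A), reported per class and as an average. The printed F1 values can be reproduced from P and R:

```python
def f1(p, r):
    """Harmonic mean of precision and recall, in percent."""
    return 2 * p * r / (p + r)

print(round(f1(82.63, 88.00), 2))  # → 85.23  (positive row)
print(round(f1(84.90, 84.75), 2))  # → 84.82  (avg. row)
```

Note that the avg. row's F1 (84.82) is computed from the averaged P and R, not by averaging the per-class F1 scores.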
| closed | 2018-02-22T09:42:49Z | 2018-02-22T10:00:32Z | https://github.com/hankcs/HanLP/issues/763 | [] | 0311sr | 0 |
opengeos/leafmap | plotly | 638 | error with pmtiles_metadata (not compatible with v3?) | https://github.com/opengeos/leafmap/blob/cbbb32c20d66a9baabe69303597626b5e12b8d5b/leafmap/common.py#L11351
does not work with current PMTiles (metadata does not have the "json" key)
These are the keys present in the header and metadata from one of the current PMTiles extracted from [protomaps daily builds](https://docs.protomaps.com/guide/getting-started#_2-find-the-latest-daily-planet):
```
{'header': ['addressed_tiles_count',
'center_lat_e7',
'center_lon_e7',
'center_zoom',
'clustered',
'internal_compression',
'leaf_directory_length',
'leaf_directory_offset',
'max_lat_e7',
'max_lon_e7',
'max_zoom',
'metadata_length',
'metadata_offset',
'min_lat_e7',
'min_lon_e7',
'min_zoom',
'root_length',
'root_offset',
'tile_compression',
'tile_contents_count',
'tile_data_length',
'tile_data_offset',
'tile_entries_count',
'tile_type',
'version'],
'metadata': ['attribution',
'description',
'name',
'planetiler:buildtime',
'planetiler:githash',
'planetiler:osm:osmosisreplicationseq',
'planetiler:osm:osmosisreplicationtime',
'planetiler:osm:osmosisreplicationurl',
'planetiler:version',
'type',
'vector_layers']}
```
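A hedged sketch of a tolerant accessor (assumption: older readers nested the TileJSON-style fields under a "json" key, which appears to be what the linked helper expects, while v3 archives return the fields directly):

```python
def metadata_dict(raw):
    """Accept both layouts: v2-style {'json': {...}} and v3-style flat dicts."""
    if isinstance(raw, dict) and isinstance(raw.get("json"), dict):
        return raw["json"]
    return raw

print(metadata_dict({"json": {"vector_layers": []}}))             # → {'vector_layers': []}
print(metadata_dict({"vector_layers": [], "name": "x"})["name"])  # → x
```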
Related links:
* https://github.com/protomaps/PMTiles/blob/main/python/bin/pmtiles-show
* https://github.com/protomaps/PMTiles/issues/111 | closed | 2023-12-14T18:39:16Z | 2023-12-17T13:58:54Z | https://github.com/opengeos/leafmap/issues/638 | [] | prusswan | 0 |
jschneier/django-storages | django | 633 | S3Boto3Storage: Getting rid of orphaned files when not overwriting | Has anyone come up with a good solution (when AWS_S3_FILE_OVERWRITE = False) to automatically get rid of the otherwise orphaned previous files? I like the random char appending feature (this way the client knows when a file has changed), but can't afford to have my S3 bucket full of old, unused images. Thanks in advance! | closed | 2018-12-11T18:15:31Z | 2019-03-27T03:32:21Z | https://github.com/jschneier/django-storages/issues/633 | [] | sushifan | 1 |
python-gino/gino | sqlalchemy | 112 | JOIN queries do not work | * GINO version: 0.5.5
* Python version: 3.6.3
* Operating System: macOS 10.13
### Description
I expected something along the following lines to work (please note that I am using the Sanic extension):
```python
await Model1.query.where(Model1.id == id).join(Model2).select().gino.first_or_404()
```
However it did not. Could you please point me to a working example on writing `JOIN` queries involving Gino models?
### What I Did
Traceback:
```
File "/usr/local/lib/python3.6/site-packages/sanic/app.py", line 556, in handle_request
response = await response
File "/data/dev/project/src/api/routes/model1.py", line 16, in model1
s = await Model1.query.where(Model1.id == id).join(Model2).select().gino.first_or_404()
File "/usr/local/lib/python3.6/site-packages/gino/ext/sanic.py", line 23, in first_or_404
rv = await self.first(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/gino/api.py", line 121, in first
conn, self._query, *multiparams, **params)
File "/usr/local/lib/python3.6/site-packages/gino/dialect.py", line 243, in do_first
connection, clause, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/gino/dialect.py", line 225, in _execute_clauseelement
prepared = await context.prepare()
File "/usr/local/lib/python3.6/site-packages/gino/dialect.py", line 118, in prepare
return await self.connection.prepare(self.statement)
File "/usr/local/lib/python3.6/site-packages/gino/dialect.py", line 63, in prepare
rv = self._stmt = await self._conn.prepare(statement)
File "/usr/local/lib/python3.6/site-packages/gino/pool.py", line 94, in wrapper
return await getattr(conn, attr)(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 332, in prepare
stmt = await self._get_statement(query, timeout, named=True)
File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 286, in _get_statement
statement = await self._protocol.prepare(stmt_name, query, timeout)
File "asyncpg/protocol/protocol.pyx", line 168, in prepare
asyncpg.exceptions.PostgresSyntaxError: subquery in FROM must have an alias
HINT: For example, FROM (SELECT ...) [AS] foo.
```
```
| closed | 2017-11-21T12:02:26Z | 2019-09-22T19:21:23Z | https://github.com/python-gino/gino/issues/112 | [] | brncsk | 10 |
roboflow/supervision | deep-learning | 724 | counter by line crossing | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hello, would you have any sample .py script for people counting that can be run in VSCode? I have been trying to adapt the scripts from Google Colab, but it's been difficult and I'm not succeeding. Could you assist me? Happy 2024 to the entire Supervision team, and thank you very much for making this incredible product.
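Not an official answer, but as a dependency-free starting point that runs fine in VSCode: the core of any line-crossing counter (including what supervision's `LineZone` does conceptually, once a tracker assigns each person a stable ID) is a signed side-of-line test on each tracked center. The sketch below is pure Python with hypothetical names; in a real pipeline you would feed it `tracker_id` and box centers from your detector/tracker each frame:

```python
def side(p, a, b):
    """Sign of the cross product: which side of the line a->b point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

class LineCrossCounter:
    """Counts tracked objects whose center crosses the line a->b."""

    def __init__(self, a, b):
        self.a, self.b = a, b
        self.last_side = {}  # tracker_id -> last known side of the line
        self.in_count = 0
        self.out_count = 0

    def update(self, tracker_id, center):
        s = side(center, self.a, self.b)
        prev = self.last_side.get(tracker_id)
        # A crossing is a sign change between consecutive observations.
        if prev is not None and s != 0 and (prev > 0) != (s > 0):
            if s > 0:
                self.in_count += 1
            else:
                self.out_count += 1
        if s != 0:
            self.last_side[tracker_id] = s

counter = LineCrossCounter((0, 5), (10, 5))  # horizontal counting line y = 5
counter.update(1, (3, 2))   # person 1 below the line
counter.update(1, (3, 8))   # crossed upward -> counted as "in"
print(counter.in_count, counter.out_count)  # → 1 0
```

Per frame you would loop over your tracked detections and call `counter.update(tid, center)` for each; the supervision library wraps this same idea together with drawing utilities.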
### Additional
_No response_ | closed | 2024-01-13T04:16:06Z | 2024-01-18T21:19:57Z | https://github.com/roboflow/supervision/issues/724 | [
"question"
] | Rasantis | 2 |
trevismd/statannotations | seaborn | 118 | Plotting of significance happens beneath Seaborn Violin Plots | Hello!
Full disclosure, I am currently using Seaborn v 0.12, which I know is not fully supported as of yet.
Currently, I have had no issues running statannotations up until trying to get the placement of the significance lines above my violin plots.
Here is how they look when loc='inside'

My question is whether anyone else has run into something similar and has a solution, even one that involves digging into the Annotator.py file.
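Not a fix from the maintainers, but one workaround I would try (an untested sketch, on the assumption that seaborn 0.12 draws the violin bodies with a higher zorder than the annotation artists): raise the zorder of the lines and text that statannotations adds, after calling `annotator.annotate()`. The snippet below stands in two plain matplotlib artists for the bracket and star that statannotations would draw:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Hypothetical stand-ins for the artists statannotations adds:
# a significance bracket and its star label.
bracket, = ax.plot([0, 0, 1, 1], [1.0, 1.05, 1.05, 1.0], color="k")
star = ax.text(0.5, 1.06, "*", ha="center")

# Workaround: push the annotation artists above the violin patches.
for artist in (bracket, star):
    artist.set_zorder(100)
```

In a real figure you would apply the same loop to the artists created after `annotator.annotate()` (e.g. the tail of `ax.lines` and `ax.texts`), rather than to hand-made stand-ins.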
There is also one thing I noticed that is unrelated to my current issue but still relevant. It is mainly a Seaborn problem, but it may affect anyone using statannotations with Seaborn 0.12 and NumPy later than 1.20: many of the old NumPy type aliases have been deprecated (e.g. `np.float` no longer works; you must use the builtin `float`, or pass `'float32'`/`'float64'` as strings, wherever a dtype was specified with the old syntax). So hopefully Seaborn updates its `categorical.py` and `utils.py` scripts, where these aliases are called.
Thank you for any assistance! I love the scripts by the way, they're a life-saver! | open | 2023-04-04T01:17:34Z | 2023-04-04T01:17:34Z | https://github.com/trevismd/statannotations/issues/118 | [] | greenmna | 0 |