| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | machine-learning | 36,208 | `modular_model_converter` cannot handle local imports with `return` | ### System Info
- `transformers` version: 4.49.0.dev0
- Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.31
- Python version: 3.11.9
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.1.1 (True)
- Tensorflow version (GPU?): 2.15.1 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. Create a new folder named `xxx_model` in `src/transformers/models/`
2. Inside this folder, create a new Python file called `modular_xxx.py` with the following content:
```python
from transformers.models.detr.image_processing_detr import DetrImageProcessor
class TmpImageProcessor(DetrImageProcessor):
    pass
```
3. Run the following command to execute the model converter:
```shell
python utils/modular_model_converter.py --files_to_parse src/transformers/models/xxx_model/modular_xxx.py
```
### Expected behavior
The expected behavior is that it creates a file `src/transformers/models/xxx_model/image_processing_xxx.py`. However, the script fails with the following traceback:
```shell
Traceback (most recent call last):
  File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1726, in <module>
    converted_files = convert_modular_file(file_name)
  File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1663, in convert_modular_file
    for file, module in create_modules(cst_transformers).items():
  File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1643, in create_modules
    needed_imports = get_needed_imports(body, all_imports)
  File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1151, in get_needed_imports
    append_new_import_node(stmt_node, unused_imports, added_names, new_statements)
  File "/Users/houxiuquan/Downloads/transformers/utils/modular_model_converter.py", line 1111, in append_new_import_node
    for name in import_node.names:
AttributeError: 'Return' object has no attribute 'names'
```
I found that the error is caused by the local imports followed by `return` statements in the following function of `transformers.models.detr.image_processing_detr`:
```python
def get_numpy_to_framework_fn(arr) -> Callable:
    """
    Returns a function that converts a numpy array to the framework of the input array.

    Args:
        arr (`np.ndarray`): The array to convert.
    """
    if isinstance(arr, np.ndarray):
        return np.array
    if is_tf_available() and is_tf_tensor(arr):
        import tensorflow as tf

        return tf.convert_to_tensor
    if is_torch_available() and is_torch_tensor(arr):
        import torch

        return torch.tensor
    if is_flax_available() and is_jax_tensor(arr):
        import jax.numpy as jnp

        return jnp.array
    raise ValueError(f"Cannot convert arrays of type {type(arr)}")
```
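The traceback comes from iterating `import_node.names` on statements that are not imports. The converter itself is built on `libcst`, but the type-guard idea can be sketched with the stdlib `ast` module (a hypothetical helper, not the actual fix):

```python
import ast

# A function body mixing local imports with `return` statements,
# mirroring the shape of get_numpy_to_framework_fn.
SOURCE = """
def get_numpy_to_framework_fn(arr):
    import tensorflow as tf
    return tf.convert_to_tensor
    import torch
    return torch.tensor
"""


def collect_local_import_names(source):
    """Collect imported module names, guarding the node type instead of
    assuming every statement in the body has a `.names` attribute."""
    names = []
    for node in ast.walk(ast.parse(source)):
        # A bare `node.names` access on a Return node raises AttributeError,
        # which is exactly the crash reported above.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names.extend(alias.name for alias in node.names)
    return names


print(collect_local_import_names(SOURCE))  # ['tensorflow', 'torch']
```

The same `isinstance` guard, expressed on `libcst` node types, would let the converter skip the `Return` statements that currently crash it.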
When the `return` lines after the local imports are commented out, the script works:
```python
def get_numpy_to_framework_fn(arr) -> Callable:
    """
    Returns a function that converts a numpy array to the framework of the input array.

    Args:
        arr (`np.ndarray`): The array to convert.
    """
    if isinstance(arr, np.ndarray):
        return np.array
    if is_tf_available() and is_tf_tensor(arr):
        import tensorflow as tf
        # return tf.convert_to_tensor
    if is_torch_available() and is_torch_tensor(arr):
        import torch
        # return torch.tensor
    if is_flax_available() and is_jax_tensor(arr):
        import jax.numpy as jnp
        # return jnp.array
    raise ValueError(f"Cannot convert arrays of type {type(arr)}")
```
If the imports are moved outside the function (global imports), the script also works:
```python
import tensorflow as tf
import torch
import jax.numpy as jnp


def get_numpy_to_framework_fn(arr) -> Callable:
    """
    Returns a function that converts a numpy array to the framework of the input array.

    Args:
        arr (`np.ndarray`): The array to convert.
    """
    if isinstance(arr, np.ndarray):
        return np.array
    if is_tf_available() and is_tf_tensor(arr):
        return tf.convert_to_tensor
    if is_torch_available() and is_torch_tensor(arr):
        return torch.tensor
    if is_flax_available() and is_jax_tensor(arr):
        return jnp.array
    raise ValueError(f"Cannot convert arrays of type {type(arr)}")
``` | closed | 2025-02-15T03:46:51Z | 2025-02-25T09:29:48Z | https://github.com/huggingface/transformers/issues/36208 | [
"bug",
"Modular"
] | xiuqhou | 2 |
miguelgrinberg/Flask-SocketIO | flask | 1,400 | Flask MQTT on_connect is never called when used with SocketIO | I am trying to implement an MQTT-to-WebSocket bridge along the lines of this example:
https://flask-mqtt.readthedocs.io/en/latest/usage.html#interact-with-socketio
In the above example, subscribing to the MQTT topic is triggered by the socket client. But I want my MQTT channel to keep communicating even if there is no socket client.
So I tried to subscribe in the `@mqtt.on_connect()` handler, but that callback is never invoked. Once a subscription is initiated via a socket event, however, MQTT messages start flowing in fine. Does SocketIO in some way interfere with the MQTT lifecycle events?
I have posted the sample code here:
https://stackoverflow.com/questions/64592277/flask-mqtt-on-connect-is-never-called-when-used-with-socketio | closed | 2020-10-30T15:53:03Z | 2021-04-06T13:19:21Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1400 | [
"question"
] | ramanraja | 2 |
koxudaxi/datamodel-code-generator | pydantic | 1,584 | Support `x-propertyNames` in OpenAPI 3.0 | **Is your feature request related to a problem? Please describe.**
OpenAPI 3.0 does not natively support the full set of the JSON-Schema specification; notably, `patternProperties` and `propertyNames` are absent. Some tools instead use the `x-` prefix to support `patternProperties` and `propertyNames` in OpenAPI 3.0.
**Describe the solution you'd like**
Would it be possible for `datamodel-code-generator` to support `x-patternProperties` and `x-propertyNames` as well? It would be great as we currently have no way of upgrading to 3.1 soon, but we would still like to benefit from pattern properties and properly typed property names.
**Describe alternatives you've considered**
There is no real alternative; the model currently generated is just empty, which is _kind of_ valid, as in it will ignore any patterned properties.
**Additional context**
https://github.com/hashintel/hash/blob/990b911d4e9a166c4aff23844cf928945cd40159/apps/hash-graph/openapi/models/shared.json#L119-L149
A real-world snippet where we make use of both.
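For readers unfamiliar with the vendor-extension spelling, a minimal OpenAPI 3.0 schema fragment using both keywords might look like this (hypothetical example, not taken from the linked file):

```json
{
  "type": "object",
  "x-propertyNames": {
    "pattern": "^[a-z][a-zA-Z0-9]*$"
  },
  "x-patternProperties": {
    "^x-": { "type": "string" }
  }
}
```

In OpenAPI 3.1 the same schema would drop the `x-` prefixes and use the native JSON Schema keywords.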
| open | 2023-10-02T09:00:06Z | 2025-02-18T22:29:08Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1584 | [
"answered"
] | indietyp | 3 |
scikit-image/scikit-image | computer-vision | 7,144 | Reported ValueError: win_size exceeds image extent. Given the win_size = 7 and Image shape (592,400,3) | ### Description:
Hi,
I am using SSIM to evaluate the image with the following code
```
from skimage.metrics import structural_similarity as compare_ssim
def _ssim(tf_img1, tf_img2):
    # NOTE: see multichannel=True for RGB images
    return compare_ssim(tf_img1, tf_img2, multichannel=True, data_range=255)
```
Given the image tf_img1 (both in 0-255):

and tf_img2:

The error reported as
```
ValueError: win_size exceeds image extent. Either ensure that your images are at least 7x7; or pass win_size explicitly in the function call, with an odd value less than or equal to the smaller side of your images. If your images are multichannel (with color channels), set channel_axis to the axis number corresponding to the channels.
```
However, the image size (592×400) is clearly larger than 7×7, and `multichannel=True` was passed.
### Way to reproduce:
_No response_
### Version information:
```Shell
3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:34:09) [GCC 12.3.0]
Linux-6.2.0-33-generic-x86_64-with-glibc2.35
scikit-image version: 0.21.0
numpy version: 1.26.0
```
| closed | 2023-09-22T07:07:54Z | 2023-09-26T05:02:36Z | https://github.com/scikit-image/scikit-image/issues/7144 | [
":bug: Bug"
] | allanchan339 | 2 |
noirbizarre/flask-restplus | flask | 704 | Documentation error for Scaling Your Project > Use With Blueprints | In this section:
https://flask-restplus.readthedocs.io/en/stable/scaling.html#use-with-blueprints
the documentation suggests that the `Api` object should be passed to `register_blueprint()` rather than the `Blueprint` object. The code as documented gives:
```
app.register_blueprint(api, url_prefix='/api/v1')
File "/Users/rjs/.virtualenvs/acquire/lib/python3.7/site-packages/flask/app.py", line 98, in wrapper_func
return f(self, *args, **kwargs)
File "/Users/rjs/.virtualenvs/acquire/lib/python3.7/site-packages/flask/app.py", line 1167, in register_blueprint
blueprint.register(self, options, first_registration)
File "/Users/rjs/.virtualenvs/acquire/lib/python3.7/site-packages/flask_restplus/api.py", line 217, in __getattr__
raise AttributeError('Api does not have {0} attribute'.format(name))
AttributeError: Api does not have register attribute
``` | closed | 2019-08-26T20:10:10Z | 2019-08-27T14:59:36Z | https://github.com/noirbizarre/flask-restplus/issues/704 | [] | rob-smallshire | 2 |
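For the flask-restplus report above, the underlying fix is to register the `Blueprint`, not the `Api`. A dependency-light sketch of the pattern (the commented line shows where flask-restplus's `Api` would normally attach; this is an illustration, not the project's docs):

```python
from flask import Flask, Blueprint

blueprint = Blueprint('api', __name__)
# With flask-restplus you would attach the Api to the blueprint here:
#     api = Api(blueprint)

app = Flask(__name__)
# Register the Blueprint object -- passing the Api here is what raises
# "Api does not have register attribute".
app.register_blueprint(blueprint, url_prefix='/api/v1')
print('api' in app.blueprints)  # True
```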
X-PLUG/MobileAgent | automation | 59 | FileNotFoundError: [Errno 2] No such file or directory: '/home/wanghaikuan/.cache/modelscope/hub/._____temp/AI-ModelScope/GroundingDINO/groundingdino/__init__.py' | An error is reported when running. | open | 2024-09-11T08:56:18Z | 2024-09-11T09:08:00Z | https://github.com/X-PLUG/MobileAgent/issues/59 | [] | whk6688 | 1 |
QingdaoU/OnlineJudge | django | 363 | [Tip] How to add python numpy. | I solved the problem can not import python package(my case is numpy).
First, add 'libatlas-base-dev' to the 'apt install' line of the Dockerfile in the JudgeServer git files. That library is required to use numpy.
Second, add 'python3-numpy' to the same line.
That's the solution for numpy.
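The two edits above amount to a Dockerfile line along these lines (paraphrased; the exact upstream `apt install` line differs):

```dockerfile
RUN apt-get update && \
    apt-get install -y libatlas-base-dev python3-numpy
```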
For other cases, you can read the error log in Docker.
Go into the JudgeServer container (image name 'judge-server') with the command 'docker exec -it judge-server bash'.
Then 'cd' to '/judger/run/{Hash code}'.
The key to solving your problem is in the '{number}.out' file.
jupyter/nbgrader | jupyter | 1,225 | Why should the root of the exchange be writable by anyone? | If the root directory of the exchange is not writable by all, nbgrader submit fails with an error `Unwritable directory, please contact your instructor:`. It seems to be a security hazard to actually have this directory writable by anyone (a malicious user could e.g. rename a course directory, making it unavailble to other users). And for the submission itself, the user only need write permissions on the `inbound` subdirectory.
Is there a use case? Shouldn't this be turned off by default? | open | 2019-09-18T21:13:15Z | 2019-11-02T10:04:16Z | https://github.com/jupyter/nbgrader/issues/1225 | [
"question"
] | nthiery | 8 |
piccolo-orm/piccolo | fastapi | 727 | Join via primary key | Let's say I have
```python
class Band(Table):
    id = Integer(primary_key=True)
    name = Varchar()


class BandExtra(Table):
    id = Integer(primary_key=True)
    extra_info = Varchar()
```
How can I select `BandExtra.extra_info` from a `Band.select` within a single query? I know I could add an additional `ForeignKey` to the `Band` table, but if possible I would like to avoid additional columns. Is there a way? Thank you
| closed | 2022-12-18T22:56:44Z | 2023-03-05T10:23:06Z | https://github.com/piccolo-orm/piccolo/issues/727 | [] | powellnorma | 6 |
opengeos/leafmap | streamlit | 75 | Converts a pandas dataframe to geojson | closed | 2021-07-07T01:10:12Z | 2021-07-07T01:18:22Z | https://github.com/opengeos/leafmap/issues/75 | [
"Feature Request"
] | giswqs | 1 | |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,417 | [Bug]: Couldn't install clip, no setuptools module, sysinfo error | ### Checklist
- [x] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I'm trying to install webui for the first time. I'm using windows 10 with an nvidia geforce gtx 1660 ti.
I downloaded sd.webui.zip, extracted it, ran update.bat, all without issue. When I ran run.bat, it told me torch couldn't use my gpu, and suggested adding --skip-torch-cuda-test to the command line args, so I did.
Now when I run run.bat, it gives me a couple of errors. First, it tells me it couldn't install clip. Then it tells me there is no setuptools module. It also warns me that pip is slightly out of date and gives me the command to upgrade it, but the command also fails when I run it. So I manually upgraded pip for my python 3.12 installation, but that seems to have no effect on the version of pip that webui's python installation is using.
For this report, I also tried to follow the troubleshooting steps at https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Troubleshooting under "torch cannot use the gpu", because I figured maybe that was the real root of my problems. However, my webui installation has no "venv" folder that I can find. So I figured maybe "venv/Scripts" was referring to my folder "sd.webui\system\python\Scripts" and I ran the command "python -m torch.utils.collect_env" from there. It gave me an error, saying there is no torch module.

### Steps to reproduce the problem
See above.
### What should have happened?
WebUI should have installed.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
I ran run.bat after adding the command line argument you said to add. It gave me even more errors, and when it was done, the only sysinfo file I could find was "E:\sd.webui\webui\modules\sysinfo.py", which probably isn't what you wanted. Instead, I'm attaching the full terminal log of the attempt, titled sysinfo.txt so nobody dismisses this report prematurely. I'm so confused. Please help.
[sysinfo.txt](https://github.com/user-attachments/files/16705041/sysinfo.txt)
Also, if you want system information, I'll try attaching some screenshots of my computer's specs.



### Console logs
```Shell
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing clip
Traceback (most recent call last):
  File "E:\sd.webui\webui\launch.py", line 48, in <module>
    main()
  File "E:\sd.webui\webui\launch.py", line 39, in main
    prepare_environment()
  File "E:\sd.webui\webui\modules\launch_utils.py", line 394, in prepare_environment
    run_pip(f"install {clip_package}", "clip")
  File "E:\sd.webui\webui\modules\launch_utils.py", line 144, in run_pip
    return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
  File "E:\sd.webui\webui\modules\launch_utils.py", line 116, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install clip.
Command: "E:\sd.webui\system\python\python.exe" -m pip install https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip --prefer-binary
Error code: 1
stdout: Collecting https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
Using cached https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip (4.3 MB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
stderr: ERROR: Command errored out with exit status 1:
command: 'E:\sd.webui\system\python\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Jesse Rose\\AppData\\Local\\Temp\\pip-req-build-lb9pwdsj\\setup.py'"'"'; __file__='"'"'C:\\Users\\Jesse Rose\\AppData\\Local\\Temp\\pip-req-build-lb9pwdsj\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Jesse Rose\AppData\Local\Temp\pip-pip-egg-info-51itrskz'
cwd: C:\Users\Jesse Rose\AppData\Local\Temp\pip-req-build-lb9pwdsj\
Complete output (3 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'setuptools'
----------------------------------------
WARNING: Discarding https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip. Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
WARNING: You are using pip version 21.3.1; however, version 24.2 is available.
You should consider upgrading via the 'E:\sd.webui\system\python\python.exe -m pip install --upgrade pip' command.
Press any key to continue . . .
```
### Additional information
I have a few different versions of Python installed, and I don't think I've ever changed the PATH variable myself, but here's what it contains: "PATH=C:\Program Files\Java\jdk-17.0.2\bin;C:\Program Files\Common Files\Oracle\Java\javapath;C:\Program Files\Python310\Scripts\;C:\Program Files\Python310\;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\Microsoft VS Code\bin;C:\Program Files\Git\cmd;C:\Program Files\dotnet\;C:\Users\Jesse Rose\AppData\Local\Programs\Python\Python312\Scripts\;C:\Users\Jesse Rose\AppData\Local\Programs\Python\Python312\;C:\Users\Jesse Rose\AppData\Local\Microsoft\WindowsApps;"
Also, I checked the two output files in sd.webui\webui\tmp. stderr was empty, but I've attached stdout, because it seems like evidence that WebUI was trying to use pip incorrectly.
[stdout.txt](https://github.com/user-attachments/files/16703734/stdout.txt)
| closed | 2024-08-22T04:13:24Z | 2024-08-30T21:26:18Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16417 | [
"bug-report"
] | GreenCauldron08 | 1 |
pytest-dev/pytest-cov | pytest | 306 | Deleted working directory in pytest-cov 2.6.0+ | For `pytest-cov>=2.6`, the following error results at the end of execution:
```
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/_pytest/main.py", line 213, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/_pytest/main.py", line 257, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/pluggy/hooks.py", line 289, in __call__
INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/pluggy/manager.py", line 87, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/pluggy/manager.py", line 81, in <lambda>
INTERNALERROR>     firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/pluggy/callers.py", line 203, in _multicall
INTERNALERROR>     gen.send(outcome)
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/pytest_cov/plugin.py", line 229, in pytest_runtestloop
INTERNALERROR>     self.cov_controller.finish()
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/pytest_cov/engine.py", line 171, in finish
INTERNALERROR>     self.cov.stop()
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/coverage/control.py", line 675, in load
INTERNALERROR>     self._init()
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/coverage/control.py", line 223, in _init
INTERNALERROR>     set_relative_directory()
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/coverage/files.py", line 28, in set_relative_directory
INTERNALERROR>     RELATIVE_DIR = os.path.normcase(abs_file(os.curdir) + os.sep)
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/site-packages/coverage/files.py", line 163, in abs_file
INTERNALERROR>     path = os.path.realpath(path)
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/posixpath.py", line 389, in realpath
INTERNALERROR>     return abspath(path)
INTERNALERROR>   File "/usr/local/miniconda/lib/python3.7/posixpath.py", line 376, in abspath
INTERNALERROR>     cwd = os.getcwd()
INTERNALERROR> FileNotFoundError: [Errno 2] No such file or directory
```
This appears to indicate that somewhere we change directories, then that directory gets deleted, and then `os.getcwd()` gets called via `coverage.coverage(...).stop()` before any `chdir` to an existing directory.
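That failure mode is easy to reproduce in isolation (a standalone sketch, independent of pytest-cov):

```python
import os
import tempfile


def cwd_vanishes():
    """Show that os.getcwd() raises FileNotFoundError once the process's
    working directory has been unlinked (Linux behavior)."""
    keep = os.getcwd()
    gone = tempfile.mkdtemp()
    os.chdir(gone)
    os.rmdir(gone)          # something deletes the directory we chdir'd into
    try:
        os.getcwd()
        return False
    except FileNotFoundError:
        return True
    finally:
        os.chdir(keep)      # restore a valid cwd


print(cwd_vanishes())  # True on Linux
```

This matches the traceback: `coverage` calls `os.getcwd()` during `stop()`, after the temporary directory some plugin chdir'd into was removed.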
Downgrading to 2.5.1 resolves the problem. For reference, our call looks like:
```
pytest --junit-xml=/tmp/pytest.xml \
--cov niworkflows --cov-report xml:/tmp/unittests.xml \
--ignore=/src/niworkflows/niworkflows/tests/ \
--ignore=/src/niworkflows/niworkflows/interfaces/ants.py \
/src/niworkflows/niworkflows
```
Related: nedbat/coveragepy#750 | closed | 2019-07-11T18:56:21Z | 2020-05-22T17:08:58Z | https://github.com/pytest-dev/pytest-cov/issues/306 | [
"bug"
] | effigies | 9 |
dpgaspar/Flask-AppBuilder | rest-api | 1,756 | related_fields example problem | Tell me please.
Using the example related_fields. When I do this




The subgroup field cannot find a value. How do you make it work?
| closed | 2021-12-06T09:25:36Z | 2022-01-31T14:24:39Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1756 | [] | vash-sa | 0 |
PeterL1n/RobustVideoMatting | computer-vision | 88 | 如果使用场景是低分辨率,去掉DGF没什么影响把? 低分辨率训练是不是只需要stage1/2/4就可以了? | closed | 2021-10-19T10:20:00Z | 2021-10-21T03:17:13Z | https://github.com/PeterL1n/RobustVideoMatting/issues/88 | [] | pxEkin | 2 | |
seleniumbase/SeleniumBase | pytest | 2,501 | One of the CAPTCHA test sites disabled their CAPTCHA (or exceeded a limit) | ## One of the CAPTCHA test sites disabled their CAPTCHA (or exceeded a limit)
This morning, https://nowsecure.nl/#relax is no longer throwing a Turnstile/CAPTCHA. I was able to reach it with regular Selenium. That site is used in several tests for verifying UC Mode.
It's unclear if the owner of that site disabled their Cloudflare Turnstile CAPTCHA service, or if a CAPTCHA-serving limit was exceeded.
If the CAPTCHA still hasn't returned in 5 days, I'll update the tests to use a different site with a Cloudflare Turnstile CAPTCHA.
In the meantime, people can test with https://top.gg/ - which has a Cloudflare Turnstile enabled. | closed | 2024-02-16T15:21:37Z | 2024-03-05T22:46:09Z | https://github.com/seleniumbase/SeleniumBase/issues/2501 | [
"external",
"tests",
"UC Mode / CDP Mode"
] | mdmintz | 4 |
robotframework/robotframework | automation | 4,828 | TypeError: WebDriver.__init__() got an unexpected keyword argument 'service_log_path' | I'm also facing the same issue
WebDriver.__init__() got an unexpected keyword argument 'service_log_path'
Python 3.11.4
Selenium Version: 4.9.1
Robot Framework Version: 6.1
| closed | 2023-07-20T08:50:29Z | 2023-07-23T21:29:15Z | https://github.com/robotframework/robotframework/issues/4828 | [] | ZindagiH | 1 |
gevent/gevent | asyncio | 1,589 | Update bundled config.sub/config.guess | closed | 2020-04-27T10:54:00Z | 2020-04-27T11:39:28Z | https://github.com/gevent/gevent/issues/1589 | [] | jamadden | 0 | |
netbox-community/netbox | django | 18,610 | Easy to copy source for system information for bug reporting | Consider adding a section to the `/netbox/core/system/` page and `manage.py` suitable for copying system information to paste into bug reports.
Feel free to close this if the current NetBox version + Python version is sufficient, I guess.
It may be interesting to make this extensible to the extent that netbox-docker can be listed as the deployment style. I have had issues that ended up being specific to something I did with netbox-docker. | closed | 2025-02-09T01:57:02Z | 2025-02-10T13:11:32Z | https://github.com/netbox-community/netbox/issues/18610 | [] | deliciouslytyped | 1 |
gevent/gevent | asyncio | 1,172 | _close_fds on linux takes relatively long time on Python 3 | https://github.com/gevent/gevent/blob/e3e555e2309120a488bbbc8b4f965791a5f15eed/src/gevent/subprocess.py#L1148
This is especially true when the host has a high limit of file descriptors (`MAXFD`), which seems to be the case inside Docker containers. This implementation leads to a lot (~2^20 in my case) of unnecessary calls to the `close` syscall, which slows down any subprocess invocation.
In contrast, the native python implementation uses `_posixsubprocess.fork_exec` [which in turn uses](https://github.com/python/cpython/blob/master/Modules/_posixsubprocess.c#L273) `/dev/fd` to identify fds that are indeed open.
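The `/dev/fd` trick can be illustrated in pure Python (a sketch of the general idea, not gevent's or CPython's actual code): enumerate the descriptors that are actually open instead of blindly probing all of `0..MAXFD`:

```python
import os


def open_fds():
    """Return the process's open file descriptors by listing /proc/self/fd
    (Linux; /dev/fd elsewhere), avoiding ~2**20 blind close() calls."""
    fd_dir = "/proc/self/fd" if os.path.isdir("/proc/self/fd") else "/dev/fd"
    return sorted(int(name) for name in os.listdir(fd_dir))


fds = open_fds()
print(fds[:4])  # typically starts with the standard streams, e.g. [0, 1, 2, ...]
```

Closing only the descriptors this returns (minus the ones to keep) is the strategy `_posixsubprocess.fork_exec` uses.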
| closed | 2018-04-10T15:49:43Z | 2018-04-16T16:38:32Z | https://github.com/gevent/gevent/issues/1172 | [
"Type: Enhancement",
"PyVer: python3",
"Platform: POSIX"
] | koreno | 5 |
sinaptik-ai/pandas-ai | pandas | 930 | TypeError: sequence item 0: expected str instance, int found | ### System Info
pandasai==1.5.19
Darwin ------ 23.3.0 Darwin Kernel Version 23.3.0: Wed Dec 20 21:31:00 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6020 arm64
Python 3.11.7
### 🐛 Describe the bug
If you use a DataFrame that has a column of float64's with caching enabled you will get:
```shell
TypeError: sequence item 0: expected str instance, int found
```
when `SmartDataFrame` tries to hash the DataFrame's column labels:
smart_dataframe/__init__.py:337
```py
columns_str = "".join(self.dataframe.columns)
```
Should be:
```py
columns_str = "".join(str(column) for column in self.dataframe.columns)
```
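The failure and the proposed fix can be exercised directly (a standalone sketch using an integer column label, which is what produces the `int found` message):

```python
import pandas as pd

# A DataFrame whose first column label is an int, not a str.
df = pd.DataFrame({0: [1.5, 2.5], "name": ["a", "b"]})

# "".join(df.columns) raises:
#   TypeError: sequence item 0: expected str instance, int found

# The suggested fix stringifies each label before joining:
columns_str = "".join(str(column) for column in df.columns)
print(columns_str)  # 0name
```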
`self._sdf.enable_cache = False` bypasses the issue for now until we get a fix. | closed | 2024-02-10T06:56:15Z | 2024-06-01T00:02:13Z | https://github.com/sinaptik-ai/pandas-ai/issues/930 | [] | Falven | 1 |
allure-framework/allure-python | pytest | 695 | Duplicated pytest fixtures | Here's an example: the first test's report contains an unexpected fixture step from another test.
```
import allure
import pytest


@pytest.fixture
def fixture(request):
    with allure.step(request.node.name):
        pass


def test_first(fixture):
    pass


def test_second(fixture):
    pass
```
#### I'm submitting a ...
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
#### What is the expected behavior?
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
- Allure version: 2.1.0
- Test framework: pytest@3.0
- Allure adaptor: allure-pytest@2.11.0
#### Other information
| closed | 2022-10-11T15:32:55Z | 2022-10-12T08:14:54Z | https://github.com/allure-framework/allure-python/issues/695 | [
"theme:pytest"
] | skhomuti | 0 |
ageitgey/face_recognition | machine-learning | 875 | GPU or CPU for face comparation | Hi!
The GPU is used for extracting templates, but my question is: for comparing templates, which is used, the GPU or the CPU?
Thanks! | open | 2019-07-05T15:20:59Z | 2019-07-05T15:44:18Z | https://github.com/ageitgey/face_recognition/issues/875 | [] | neumartin | 1 |
huggingface/transformers | deep-learning | 36,410 | Conflicting Keras 3 mitigations | ### System Info
- `transformers` version: 4.49.0
- Platform: Linux-6.13.4-zen1-1-zen-x86_64-with-glibc2.41
- Python version: 3.13.2
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.2
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0 (False)
- Tensorflow version (GPU?): 2.18.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
I was attempting to create a BART pipeline but it failed with the errors:
```
Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/transformers/activations_tf.py", line 22, in <module>
    import tf_keras as keras
ModuleNotFoundError: No module named 'tf_keras'
```
```
  File "/usr/lib/python3.13/site-packages/transformers/models/bart/modeling_tf_bart.py", line 25, in <module>
    from ...activations_tf import get_tf_activation
  File "/usr/lib/python3.13/site-packages/transformers/activations_tf.py", line 27, in <module>
    raise ValueError(
    ...<3 lines>...
    )
ValueError: Your currently installed version of Keras is Keras 3, but this is not yet supported in Transformers. Please install the backwards-compatible tf-keras package with `pip install tf-keras`.
```
```
  File "/usr/lib/python3.13/site-packages/transformers/utils/import_utils.py", line 1865, in _get_module
    raise RuntimeError(
    ...<2 lines>...
    ) from e
RuntimeError: Failed to import transformers.models.bart.modeling_tf_bart because of the following error (look up to see its traceback):
Your currently installed version of Keras is Keras 3, but this is not yet supported in Transformers. Please install the backwards-compatible tf-keras package with `pip install tf-keras`.
```
Now, I did install the relevant package and it fixed it and that's fine. But, the problem I wished to highlight is that there are two different methods of dealing with Keras 3 implemented: in PRs #28588 and #29598.
And theoretically, setting the environment variable `TF_USE_LEGACY_KERAS=1` should force tensorflow to use Keras 2 only and fix the issue without needing the `tf_keras` package. Unless I'm misunderstanding something.
If so, then the `try: import tf_keras` block should be inside the `elif os.environ["TF_USE_LEGACY_KERAS"] != "1":` block, I think, shouldn't it? Or a refactor of the whole thing.
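The ordering the report proposes can be expressed as a tiny predicate (a sketch of the suggested control flow, not the actual `transformers` source):

```python
def must_import_tf_keras(env):
    """Per the proposal: the `import tf_keras` fallback should only be
    required when the Keras-2 escape hatch is NOT enabled, i.e. when
    TF_USE_LEGACY_KERAS is unset or not "1"."""
    return env.get("TF_USE_LEGACY_KERAS") != "1"


print(must_import_tf_keras({"TF_USE_LEGACY_KERAS": "1"}))  # False
print(must_import_tf_keras({}))  # True
```

With this ordering, users who set the environment variable would never hit the `tf_keras` import at all.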
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behaviour (hopefully):
1. Set `export TF_USE_LEGACY_KERAS=1`
2. Don't have `tf_keras` installed.
3. Try to set up a BART pipeline like so:
```
from transformers import pipeline
# 1. Set up the Hugging Face summarization pipeline using BART model
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
```
or in any other suitable manner.
I understand that undoing Step 2 (i.e. installing `tf_keras`) fixes the problem, but that is not what I'm reporting; rather, it's the approach to mitigating usage of Keras 3.
### Expected behavior
I expect that just setting the environment variable would force the use of Keras 2, and `tf_keras` not be needed. | closed | 2025-02-26T05:40:32Z | 2025-02-26T14:38:42Z | https://github.com/huggingface/transformers/issues/36410 | [
"bug"
] | mistersmee | 2 |
autokey/autokey | automation | 181 | keyboard.send_keys sends Hotkey too. | ## Classification: Bug?
## Reproducibility: Always
## Summary
Version : Autokey Qt v0.95.3, on Kubuntu 18.04.
I've installed the deb package you've uploaded here.
I've been using AutoKey as a shortcut changer. (I'm a Dvorak keyboard user, and this AutoKey feature really helps me.)
For example, I change the usual "Ctrl + F" to "Ctrl + U".
So,
```
Hotkey : <ctrl>+u
Command : keyboard.send_keys("<ctrl>+f")
```
This worked well before v0.95.3: if I pressed Ctrl + U, only the Ctrl + F signal was sent.
However, on v0.95.3, both Ctrl + U and Ctrl + F are sent simultaneously.
On Google Chrome, pressing Ctrl + U triggers 'Page Source' and 'Find'.
## Expected Results
In the above example, only Ctrl + F should be sent.
| closed | 2018-08-23T04:35:20Z | 2024-08-02T07:55:07Z | https://github.com/autokey/autokey/issues/181 | [
"bug",
"autokey-qt"
] | nemonein | 13 |
freqtrade/freqtrade | python | 11,477 | Broken journald log format since 2025.1 | <!--
Have you searched for similar issues before posting it?
If you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue).
If it hasn't been reported, please create a new issue.
Please do not use bug reports to request new features.
-->
## Describe your environment
* Operating system: `Oracle Linux Server 9.5`
* Python Version: `Python 3.9.21`
* CCXT version: `4.4.50`
* Freqtrade Version: `2025.1`
## Describe the problem:
I run freqtrade using a systemd service and after upgrading from 2024.12 to 2025.1 I noticed that log lines are now being truncated and split into multiple lines.
### Steps to reproduce:
1. Install freqtrade 2025.1
2. Run freqtrade using the following systemd unit
```
[Unit]
Description=Freqtrade daemon
Wants=network-online.target
After=network.target network-online.target
[Service]
Type=notify
User=freqtrade
WorkingDirectory=/srv/freqtrade
ExecStart=/srv/freqtrade/.venv/bin/freqtrade trade --sd-notify --no-color --logfile journald --config config.json
Restart=always
[Install]
WantedBy=default.target
```
3. View logs using `journalctl -u freqtrade`
### Observed Results:

 | closed | 2025-03-08T21:27:32Z | 2025-03-13T08:34:52Z | https://github.com/freqtrade/freqtrade/issues/11477 | [
"Bug"
] | TheoBrigitte | 11 |
yt-dlp/yt-dlp | python | 12,077 | Support for https://content-static.cctvnews.cctv.com/snow-book/video.html | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Worldwide
### Example URLs
https://content-static.cctvnews.cctv.com/snow-book/video.html?item_id=15184105708774284671
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=14085236271167952285
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=12379996551342441886
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=7331185547682467513
### Provide a description that is worded well enough to be understood
I hope support can be added for downloading these types of URLs:
https://content-static.cctvnews.cctv.com/snow-book/video.html?item_id=15184105708774284671
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=14085236271167952285
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=12379996551342441886
https://content-static.cctvnews.cctv.com/snow-book/index.html?item_id=7331185547682467513
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://content-static.cctvnews.cctv.com/snow-book/video.html?item_id=15184105708774284671']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.12 from yt-dlp/yt-dlp [dade5e35c] (pip)
[debug] Python 3.9.19 (CPython x86_64 64bit) - Linux-5.4.119-19.0009.37-x86_64-with-glibc2.28 (OpenSSL 3.0.15 3 Sep 2024, glibc 2.28)
[debug] exe versions: ffmpeg 6.0.1-static (setts), ffprobe N-112747-g67a2571a55
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.06.02, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.2, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.01.12 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.12 from yt-dlp/yt-dlp)
[CCTV] Extracting URL: https://content-static.cctvnews.cctv.com/snow-book/video.html?item_id=15184105708774284671
[CCTV] video: Downloading webpage
ERROR: [CCTV] video: Unable to extract video id; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/root/.conda/envs/faster-whisper-webui/lib/python3.9/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/root/.conda/envs/faster-whisper-webui/lib/python3.9/site-packages/yt_dlp/extractor/cctv.py", line 142, in _real_extract
video_id = self._search_regex(
File "/home/root/.conda/envs/faster-whisper-webui/lib/python3.9/site-packages/yt_dlp/extractor/common.py", line 1346, in _search_regex
raise RegexNotFoundError(f'Unable to extract {_name}')
```
| open | 2025-01-14T08:41:00Z | 2025-01-15T01:47:01Z | https://github.com/yt-dlp/yt-dlp/issues/12077 | [
"site-request",
"triage"
] | ueiyang2 | 0 |
aio-libs/aiopg | sqlalchemy | 321 | How to `metadata.create_all`? | I have this code (more or less):
```python
import sqlalchemy as sa
from aiopg.sa import create_engine
dsn = '...'
metadata = sa.MetaData(schema="test_schema")
tbl = sa.Table("name", metadata, ...)  # column definitions elided
tables = [tbl]
async def main():
    async with create_engine(dsn) as engine:
        metadata.create_all(engine, tables=tables, checkfirst=True)
```
And I get this exception:
```
File "/Users/daenyth/Curata/observer/obs/run/create_schema/__main__.py", line 19, in main
metadata.create_all(engine, tables=tables, checkfirst=True)
File "/Users/daenyth/.pyenv/versions/obs-venv/lib/python3.6/site-packages/sqlalchemy/sql/schema.py", line 3882, in create_all
bind._run_visitor(ddl.SchemaGenerator,
AttributeError: 'Engine' object has no attribute '_run_visitor'
```
Should I use sqlalchemy's built-in engine for this instead of aiopg? Is this a bug, or something that's not supposed to work? | closed | 2017-05-12T19:21:15Z | 2018-09-20T22:25:34Z | https://github.com/aio-libs/aiopg/issues/321 | [] | Daenyth | 3 |
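Not from the issue itself: a sketch of one workaround, under the assumption that aiopg's async `Engine` simply lacks the visitor API that `create_all` relies on. Compile the DDL to a plain SQL string with SQLAlchemy, then execute that string on an aiopg connection (e.g. `await conn.execute(ddl)`):

```python
import sqlalchemy as sa
from sqlalchemy.schema import CreateTable

metadata = sa.MetaData()
tbl = sa.Table("tbl", metadata, sa.Column("id", sa.Integer, primary_key=True))

# Compile the CREATE TABLE statement to a plain SQL string; that string can
# then be run on an aiopg connection with `await conn.execute(ddl)`.
ddl = str(CreateTable(tbl))
print(ddl)
```

The table and column here are illustrative stand-ins for the ones in the question.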
darrenburns/posting | rest-api | 62 | Some colors don't change when using a custom theme | Hey, I've tried my own custom theme, I can see some colors are breaking the palette, probably they aren't linked to the custom theme colors provided by the file.
`../themes/custom-theme.yml`
```yml
name: neofusion
primary: "#fd5e3a"
secondary: "#35b5ff"
accent: "#66def9"
background: "#06101e"
surface: "#052839"
error: "#fd5e3a"
success: "#35b5ff"
warning: "#e8e5b5"
```
**Screenshot:**
<img width="1440" alt="Screenshot 4" src="https://github.com/user-attachments/assets/6df3074b-36c3-4281-a20d-05c5a3aa6030">
| closed | 2024-07-25T10:31:50Z | 2024-08-01T22:02:07Z | https://github.com/darrenburns/posting/issues/62 | [] | diegoulloao | 13 |
tqdm/tqdm | pandas | 680 | tqdm.write under multiprocessing.Pool make bars overlapped | - [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
---
### Env
4.26.0 3.7.2 (default, Dec 29 2018, 00:00:04)
[Clang 4.0.1 (tags/RELEASE_401/final)] darwin . (MacOS . 10.13.4)
### Reproduce
```python
from time import sleep
from tqdm import tqdm
from multiprocessing import Pool
def foo(a):
tqdm.write("Working on %d" % a)
sleep(0.5)
return a
if __name__ == '__main__':
with Pool(2) as p:
list(tqdm(p.imap(foo, range(20)), total=20))
```
<img width="1298" alt="screen shot 2019-02-21 at 5 43 07 pm" src="https://user-images.githubusercontent.com/13440386/53214100-2dee4280-3600-11e9-9167-460b0ae2d804.png">
This might be relevant to https://github.com/tqdm/tqdm/issues/407 | open | 2019-02-22T01:44:24Z | 2019-02-25T19:39:13Z | https://github.com/tqdm/tqdm/issues/680 | [
"p2-bug-warning ⚠",
"synchronisation ⇶"
] | yihengli | 0 |
django-import-export/django-import-export | django | 1,386 | Branch release-3-x error when upgrading | **Describe the bug**
Got this when trying to import after upgrading to the `release-3-x` branch
```
...venv/src/django-import-export/import_export/mixins.py", line 20, in check_resource_classes
raise Exception("The resource_classes field type must be subscriptable (list, tuple, ...)")
```
**To Reproduce**
Steps to reproduce the behavior:
1. Upgrade to release-3-x from 2.5.0
| closed | 2022-01-31T06:21:05Z | 2022-02-02T10:15:39Z | https://github.com/django-import-export/django-import-export/issues/1386 | [
"bug"
] | manelclos | 6 |
Kav-K/GPTDiscord | asyncio | 17 | Semantic Search in conversation history using pinecone db | Alongside summarizations, we want to embed summarizations and save them inside pinecone. Then, when users send prompts within a conversation to the bot, we want to search pinecone's vectors for the most similar embeddings closest to the user prompt. We then append this found context to the prompt before sending to GPT3. This, effectively simulates long and permanent term memory.
Of course, there are tons of things to think about, such as the "forget" conditions (the conditions upon which embeddings should be removed because they are deemed irrelevant, just like in human brains) and the "save" conditions (when, and under what policy, we store embeddings as permanent data; also like the human brain, we need to choose a time to consolidate information and to filter and apply policy to it). | closed | 2022-12-27T18:46:14Z | 2023-01-09T04:30:21Z | https://github.com/Kav-K/GPTDiscord/issues/17 | [
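A toy sketch (not from the issue) of the retrieval step described above: ranking stored summary embeddings by cosine similarity to the prompt embedding. Pinecone performs this server-side; all names and vectors here are illustrative.

```python
import numpy as np

def top_k(query_vec, stored_vecs, k=2):
    # Cosine-similarity ranking of stored summary embeddings against the
    # prompt embedding; returns indices of the k closest summaries.
    q = query_vec / np.linalg.norm(query_vec)
    s = stored_vecs / np.linalg.norm(stored_vecs, axis=1, keepdims=True)
    return np.argsort(s @ q)[::-1][:k]

stored = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
best = top_k(np.array([1.0, 0.1]), stored, k=2)
print(best.tolist())
```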
"enhancement",
"help wanted",
"high-prio"
] | Kav-K | 5 |
jupyter/nbviewer | jupyter | 1,025 | Post #1015 task. | Once #1015 is done, there are many remaining tasks.
1) LXML parsing fails on newer nbconvert when running notebooks that have SVG included.
2) slides do not hide the menubar.
3) nbconvert should have finer grained templates without headers.
4) security: the GitHub API v3 has changed; one test is skipped.
5) Mock GitHub requests when testing, for speed and consistency. This could also let us check that the replies don't change.
| open | 2022-11-01T10:03:33Z | 2022-11-01T16:41:16Z | https://github.com/jupyter/nbviewer/issues/1025 | [] | Carreau | 1 |
graphql-python/graphene | graphql | 1,043 | Schema generation in alphabetical order | Is there anyway I can get the queries and mutations be sorted in alphabetical order so that they show up sorted in GraphiQL browser? | closed | 2019-07-23T08:31:28Z | 2019-09-22T19:17:13Z | https://github.com/graphql-python/graphene/issues/1043 | [
"wontfix"
] | dan-klasson | 2 |
tortoise/tortoise-orm | asyncio | 1,549 | Decimal field represented in scientific notation when using pydantic_model_creator | **Describe the bug**
I use pydantic_model_creator for creating schema from model which I use for serialization in the response. I have a decimal field, which is shown in scientific notation in the response instead of being Decimal.
**To Reproduce**
- Create a model containing a decimal field
- Add a schema to with `pydantic_model_creator`
- Add a view to create a record of this model and return this model using the schema you created
- You will see the decimal field in scientific notation
**Expected behavior**
I expect to get the defined value as-is: for example, if I created this record's decimal field as 200.00, I expect 200.00 in the response, not 2E+2.
**Additional context**
<img width="1010" alt="image" src="https://github.com/tortoise/tortoise-orm/assets/54992849/778f76d1-1715-4b26-b43d-989129f96bd9">
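Not part of the report: a small standard-library illustration of where the `2E+2` form can come from and one way to render it plainly. Whether tortoise/pydantic actually calls `normalize()` internally is an assumption here.

```python
from decimal import Decimal

value = Decimal("200.00")
normalized = value.normalize()   # strips trailing zeros, may switch to exponent form
plain = format(normalized, "f")  # fixed-point rendering with no exponent

print(str(normalized), plain)
```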
| open | 2024-01-23T13:24:55Z | 2024-12-09T10:45:41Z | https://github.com/tortoise/tortoise-orm/issues/1549 | [] | Yalchin403 | 2 |
pytest-dev/pytest-cov | pytest | 612 | Issue with coverage[toml] when installing with require-hashes. | # Summary
There is an issue when installing pytest-cov with require hashes mode.
We install via pip with all hashes provided on the target machine. The requirements.txt is generated via pipenv.
## Expected vs actual result
It should install coverage and pytest-cov without an error. Instead, the failure occurs every time there is a new coverage release and we are behind the latest version: after every release, before we can update all our projects, deployments break.
Instead of working we get:
```
ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==. These do not:
coverage[toml]>=5.2.1 from
```
# Reproducer
```
python3.11 -m venv .venv
pip install pipenv --index-url https://pypi.python.org/simple
```
Create standard pipenv file:
```
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[requires]
python_version = "3.11"
[packages]
coverage = "==7.3.1"
pytest-cov = "==4.1.0"
```
```
PIPENV_IGNORE_VIRTUALENVS=1 pipenv lock
PIPENV_IGNORE_VIRTUALENVS=1 pipenv requirements --hash > requirements.txt
```
Now that I have the requirements:
```
-i https://pypi.org/simple
coverage==7.3.1; python_version >= '3.8' --hash=sha256:025ded371f1ca280c035d91b43252adbb04d2aea4c7105252d3cbc227f03b375 --hash=sha256:04312b036580ec505f2b77cbbdfb15137d5efdfade09156961f5277149f5e344 --hash=sha256:0575c37e207bb9b98b6cf72fdaaa18ac909fb3d153083400c2d48e2e6d28bd8e --hash=sha256:07d156269718670d00a3b06db2288b48527fc5f36859425ff7cec07c6b367745 --hash=sha256:1f111a7d85658ea52ffad7084088277135ec5f368457275fc57f11cebb15607f --hash=sha256:220eb51f5fb38dfdb7e5d54284ca4d0cd70ddac047d750111a68ab1798945194 --hash=sha256:229c0dd2ccf956bf5aeede7e3131ca48b65beacde2029f0361b54bf93d36f45a --hash=sha256:245c5a99254e83875c7fed8b8b2536f040997a9b76ac4c1da5bff398c06e860f --hash=sha256:2829c65c8faaf55b868ed7af3c7477b76b1c6ebeee99a28f59a2cb5907a45760 --hash=sha256:4aba512a15a3e1e4fdbfed2f5392ec221434a614cc68100ca99dcad7af29f3f8 --hash=sha256:4c96dd7798d83b960afc6c1feb9e5af537fc4908852ef025600374ff1a017392 --hash=sha256:50dd1e2dd13dbbd856ffef69196781edff26c800a74f070d3b3e3389cab2600d --hash=sha256:5289490dd1c3bb86de4730a92261ae66ea8d44b79ed3cc26464f4c2cde581fbc --hash=sha256:53669b79f3d599da95a0afbef039ac0fadbb236532feb042c534fbb81b1a4e40 --hash=sha256:553d7094cb27db58ea91332e8b5681bac107e7242c23f7629ab1316ee73c4981 --hash=sha256:586649ada7cf139445da386ab6f8ef00e6172f11a939fc3b2b7e7c9082052fa0 --hash=sha256:5ae4c6da8b3d123500f9525b50bf0168023313963e0e2e814badf9000dd6ef92 --hash=sha256:5b4ee7080878077af0afa7238df1b967f00dc10763f6e1b66f5cced4abebb0a3 --hash=sha256:5d991e13ad2ed3aced177f524e4d670f304c8233edad3210e02c465351f785a0 --hash=sha256:614f1f98b84eb256e4f35e726bfe5ca82349f8dfa576faabf8a49ca09e630086 --hash=sha256:636a8ac0b044cfeccae76a36f3b18264edcc810a76a49884b96dd744613ec0b7 --hash=sha256:6407424621f40205bbe6325686417e5e552f6b2dba3535dd1f90afc88a61d465 --hash=sha256:6bc6f3f4692d806831c136c5acad5ccedd0262aa44c087c46b7101c77e139140 --hash=sha256:6cb7fe1581deb67b782c153136541e20901aa312ceedaf1467dcb35255787952 
--hash=sha256:74bb470399dc1989b535cb41f5ca7ab2af561e40def22d7e188e0a445e7639e3 --hash=sha256:75c8f0df9dfd8ff745bccff75867d63ef336e57cc22b2908ee725cc552689ec8 --hash=sha256:770f143980cc16eb601ccfd571846e89a5fe4c03b4193f2e485268f224ab602f --hash=sha256:7eb0b188f30e41ddd659a529e385470aa6782f3b412f860ce22b2491c89b8593 --hash=sha256:7eb3cd48d54b9bd0e73026dedce44773214064be93611deab0b6a43158c3d5a0 --hash=sha256:87d38444efffd5b056fcc026c1e8d862191881143c3aa80bb11fcf9dca9ae204 --hash=sha256:8a07b692129b8a14ad7a37941a3029c291254feb7a4237f245cfae2de78de037 --hash=sha256:966f10df9b2b2115da87f50f6a248e313c72a668248be1b9060ce935c871f276 --hash=sha256:a6191b3a6ad3e09b6cfd75b45c6aeeffe7e3b0ad46b268345d159b8df8d835f9 --hash=sha256:aab8e9464c00da5cb9c536150b7fbcd8850d376d1151741dd0d16dfe1ba4fd26 --hash=sha256:ac3c5b7e75acac31e490b7851595212ed951889918d398b7afa12736c85e13ce --hash=sha256:ac9ad38204887349853d7c313f53a7b1c210ce138c73859e925bc4e5d8fc18e7 --hash=sha256:b9c0c19f70d30219113b18fe07e372b244fb2a773d4afde29d5a2f7930765136 --hash=sha256:c397c70cd20f6df7d2a52283857af622d5f23300c4ca8e5bd8c7a543825baa5a --hash=sha256:c6601a60318f9c3945be6ea0f2a80571f4299b6801716f8a6e4846892737ebe4 --hash=sha256:c6f55d38818ca9596dc9019eae19a47410d5322408140d9a0076001a3dcb938c --hash=sha256:ca70466ca3a17460e8fc9cea7123c8cbef5ada4be3140a1ef8f7b63f2f37108f --hash=sha256:ca833941ec701fda15414be400c3259479bfde7ae6d806b69e63b3dc423b1832 --hash=sha256:cd0f7429ecfd1ff597389907045ff209c8fdb5b013d38cfa7c60728cb484b6e3 --hash=sha256:cd694e19c031733e446c8024dedd12a00cda87e1c10bd7b8539a87963685e969 --hash=sha256:cdd088c00c39a27cfa5329349cc763a48761fdc785879220d54eb785c8a38520 --hash=sha256:de30c1aa80f30af0f6b2058a91505ea6e36d6535d437520067f525f7df123887 --hash=sha256:defbbb51121189722420a208957e26e49809feafca6afeef325df66c39c4fdb3 --hash=sha256:f09195dda68d94a53123883de75bb97b0e35f5f6f9f3aa5bf6e496da718f0cb6 --hash=sha256:f12d8b11a54f32688b165fd1a788c408f927b0960984b899be7e4c190ae758f1 
--hash=sha256:f1a317fdf5c122ad642db8a97964733ab7c3cf6009e1a8ae8821089993f175ff --hash=sha256:f2781fd3cabc28278dc982a352f50c81c09a1a500cc2086dc4249853ea96b981 --hash=sha256:f4f456590eefb6e1b3c9ea6328c1e9fa0f1006e7481179d749b3376fc793478e
iniconfig==2.0.0; python_version >= '3.7' --hash=sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3 --hash=sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374
packaging==23.2; python_version >= '3.7' --hash=sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5 --hash=sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7
pluggy==1.3.0; python_version >= '3.8' --hash=sha256:cf61ae8f126ac6f7c451172cf30e3e43d3ca77615509771b3a984a0730651e12 --hash=sha256:d89c696a773f8bd377d18e5ecda92b7a3793cbe66c87060a6fb58c7b6e1061f7
pytest==7.4.2; python_version >= '3.7' --hash=sha256:1d881c6124e08ff0a1bb75ba3ec0bfd8b5354a01c194ddd5a0a870a48d99b002 --hash=sha256:a766259cfab564a2ad52cb1aae1b881a75c3eb7e34ca3779697c23ed47c47069
pytest-cov==4.1.0; python_version >= '3.7' --hash=sha256:3904b13dfbfec47f003b8e77fd5b589cd11904a21ddf1ab38a64f204d6a10ef6 --hash=sha256:6ba70b9e97e69fcc3fb45bfeab2d0a138fb65c4d0d6a41ef33983ad114be8c3a
```
```
pip install -r requirements.txt --require-hashes
```
```
(.venv) ➜ testbug pip install -r requirements.txt --require-hashes
Collecting coverage==7.3.1 (from -r requirements.txt (line 2))
Using cached coverage-7.3.1-cp311-cp311-macosx_10_9_x86_64.whl (201 kB)
Collecting iniconfig==2.0.0 (from -r requirements.txt (line 3))
Using cached iniconfig-2.0.0-py3-none-any.whl (5.9 kB)
Collecting packaging==23.2 (from -r requirements.txt (line 4))
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Collecting pluggy==1.3.0 (from -r requirements.txt (line 5))
Using cached pluggy-1.3.0-py3-none-any.whl (18 kB)
Collecting pytest==7.4.2 (from -r requirements.txt (line 6))
Using cached pytest-7.4.2-py3-none-any.whl (324 kB)
Collecting pytest-cov==4.1.0 (from -r requirements.txt (line 7))
Using cached pytest_cov-4.1.0-py3-none-any.whl (21 kB)
Collecting coverage[toml]>=5.2.1 (from pytest-cov==4.1.0->-r requirements.txt (line 7))
ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==. These do not:
coverage[toml]>=5.2.1 from https://files.pythonhosted.org/packages/a9/6b/4d3b9ce8b79378f960e3b74bea4569daf6bd3e1d562a15c9ce4d40be182c/coverage-7.3.2-cp311-cp311-macosx_10_9_x86_64.whl (from pytest-cov==4.1.0->-r requirements.txt (line 7))
(.venv) ➜ testbug
```
## Versions
(.venv) ➜ testbug pip --version
pip 23.2.1 from /Users/myuser/temp/testbug/.venv/lib/python3.11/site-packages/pip (python 3.11)
(.venv) ➜ testbug python --version
Python 3.11.5
| open | 2023-10-13T07:12:27Z | 2024-07-13T12:23:04Z | https://github.com/pytest-dev/pytest-cov/issues/612 | [] | matejsp | 6 |
flaskbb/flaskbb | flask | 540 | makeconfig flask-allows dist not found? | I'm on MacOS
$ flaskbb makeconfig
Traceback (most recent call last):
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 583, in _build_master
ws.require(__requires__)
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 900, in require
needed = self.resolve(parse_requirements(requirements))
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 791, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (flask-allows 0.4 (/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages), Requirement.parse('flask-allows>=0.6.0'), {'FlaskBB'})
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/anthche/Projects/flaskbb/.venv/bin/flaskbb", line 6, in <module>
from pkg_resources import load_entry_point
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3250, in <module>
@_call_aside
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3234, in _call_aside
f(*args, **kwargs)
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3263, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 585, in _build_master
return cls._build_from_requirements(__requires__)
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 598, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/Users/anthche/Projects/flaskbb/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 786, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'flask-allows>=0.6.0' distribution was not found and is required by FlaskBB
| closed | 2019-10-22T20:04:31Z | 2019-10-25T15:38:49Z | https://github.com/flaskbb/flaskbb/issues/540 | [] | toekneechin777 | 4 |
mouredev/Hello-Python | fastapi | 182 | What should I do if my online gambling withdrawals keep being rejected | What should I do if my online gambling withdrawals keep being rejected? What to do when online gambling withdrawal orders are delayed and payouts are withheld; professional resolution WeChat: mu20009 say99877 QQ: 3841101686
Stay calm and collect evidence: First, stay calm and do not make irrational decisions out of emotion. Quickly gather all relevant evidence, including transaction records, withdrawal-request records, and chat logs with the platform's customer service. This evidence will be crucial for any later attempt to assert your rights.

Communicate and negotiate with the platform: Try to contact the platform's customer service, politely ask for the specific reason the withdrawal failed, and request a solution. Record all communication; if the platform cannot give a reasonable explanation or solution, ask it to respond formally in writing (e.g., by email).
Review the platform's service agreement and policies: Carefully read the withdrawal terms and conditions in the platform's service agreement and policies to understand your rights and responsibilities. Sometimes a platform may refuse a withdrawal citing account-verification problems, withdrawal time limits, and so on. Knowing the agreement helps you judge whether the platform has acted improperly and decide how to respond.

Complain and report: If the platform refuses to resolve the issue or gives no clear answer, you can complain to the relevant regulators or consumer-protection organizations. For example, you can submit complaints to financial regulators such as the National Internet Finance Association of China or the banking and insurance regulator and ask them to investigate. You can also report this kind of behavior to online-fraud reporting platforms or bodies such as the national internet emergency response center.
Seek legal assistance: If the amount involved is large or the platform's behavior may constitute fraud or illegal activity, consider consulting a lawyer to understand your legal rights and to protect your interests through legal channels. You can contact a lawyer who specializes in financial fraud and consumer protection and ask whether the losses can be recovered through legal means.
Strengthen account security: Besides seeking outside help, you also need to harden your own account. Measures such as changing passwords regularly and enabling two-factor verification are essential for keeping the account safe.
With the measures above, you can respond effectively to delayed or withheld withdrawals in online gambling and protect your own rights.
| closed | 2024-11-07T12:09:31Z | 2024-11-28T13:34:27Z | https://github.com/mouredev/Hello-Python/issues/182 | [] | mu20009 | 0 |
gradio-app/gradio | data-visualization | 10,557 | Add an option to remove line numbers in gr.Code | - [X ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
`gr.Code()` always displays line numbers.
**Describe the solution you'd like**
I propose to add an option `show_line_numbers = True | False` to display or hide the line numbers. The default should be `True` for compatibility with the current behaviour.
| closed | 2025-02-10T11:38:07Z | 2025-02-21T22:11:43Z | https://github.com/gradio-app/gradio/issues/10557 | [
"enhancement",
"good first issue"
] | altomani | 1 |
mwouts/itables | jupyter | 246 | Transition ITables to the `src` layout | In prevision of #245 I would like to transition ITables to the [`src` layout](https://packaging.python.org/en/latest/discussions/src-layout-vs-flat-layout/).
@mahendrapaipuri do you think you could give me a hand on this?
This is what I am wondering:
- Where should the `dt_for_itables` node package go? Currently it is at `itables/dt_for_itables`, should I move it at `src/dt_for_itables`? Or at the root of the project? Note that the Python package uses `dt_bundle.js` and `dt_bundle.css` that are created by that package
- Can I create a `src/itables_for_dash` folder with more or less [this content](https://github.com/plotly/dash-component-boilerplate/tree/master/%7B%7Bcookiecutter.project_shortname%7D%7D) (e.g. a node package, a Python package, and optionally a R and Julia package)? Or should I move that somewhere else? I would much prefer **not to** create another repo for that.
- Should I move the `html` folder currently within `itables` (with the html table template) at some other location, or is the current location fine? | closed | 2024-03-20T23:12:24Z | 2024-05-26T15:48:19Z | https://github.com/mwouts/itables/issues/246 | [] | mwouts | 6 |
microsoft/MMdnn | tensorflow | 765 | Mxnet to IR error | Platform (like ubuntu 16.04/win10):
win10
Python version:
3.5
Source framework with version (like Tensorflow 1.4.1 with GPU):
mxnet1.5.0 without GPU
Destination framework with version (like CNTK 2.3 with GPU):
Tensorflow 1.4.1 without GPU
Pre-trained model path (webpath or webdisk path):
json:https://github.com/deepinsight/insightface/blob/master/deploy/mtcnn-model/det4-symbol.json
param:https://github.com/deepinsight/insightface/blob/master/deploy/mtcnn-model/det4-0001.params
Running scripts:
python -m mmdnn.conversion._script.convertToIR -f mxnet -n det4-symbol.json -w det4-0001.params -d Lnet --inputShape 15,24,24
I got this error:
Warning: MXNet Parser has not supported operator null with name data.
Warning: convert the null operator with name [data] into input layer.
Warning: MXNet Parser has not supported operator SliceChannel with name slice.
Traceback (most recent call last):
File "E:\Anaconda3\envs\python3.5\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "E:\Anaconda3\envs\python3.5\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\Anaconda3\envs\python3.5\lib\site-packages\mmdnn\conversion\_script\convertToIR.py", line 197, in <module>
_main()
File "E:\Anaconda3\envs\python3.5\lib\site-packages\mmdnn\conversion\_script\convertToIR.py", line 192, in _main
ret = _convert(args)
File "E:\Anaconda3\envs\python3.5\lib\site-packages\mmdnn\conversion\_script\convertToIR.py", line 115, in _convert
parser.run(args.dstPath)
File "E:\Anaconda3\envs\python3.5\lib\site-packages\mmdnn\conversion\common\DataStructure\parser.py", line 22, in run
self.gen_IR()
File "E:\Anaconda3\envs\python3.5\lib\site-packages\mmdnn\conversion\mxnet\mxnet_parser.py", line 265, in gen_IR
self.rename_UNKNOWN(current_node)
File "E:\Anaconda3\envs\python3.5\lib\site-packages\mmdnn\conversion\mxnet\mxnet_parser.py", line 376, in rename_UNKNOWN
raise NotImplementedError()
NotImplementedError
my mmdnn version is 0.2.5.
Thank you for your help.
scikit-image/scikit-image | computer-vision | 7,206 | Proposal: sort draw.ellipse coordinates by default to make sure they are all contiguous | ### Description:
Someone just upvoted this old SO answer of mine:
https://stackoverflow.com/questions/62339802/skimage-draw-ellipse-generates-two-undesired-lines
Basically, using plt.plot or any other line drawing software to draw an ellipse from our ellipse coordinates fails because of how the coordinates are sorted (or rather, not sorted):

The solution is to sort the coordinates based on the angle around the centroid. It's easy/fast enough and would make downstream work simpler, plus I think it's what most users would expect. | open | 2023-10-16T00:45:58Z | 2024-03-19T06:50:51Z | https://github.com/scikit-image/scikit-image/issues/7206 | [
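Not in the issue: a minimal NumPy sketch of the sort described above; the function name and the example points are mine.

```python
import numpy as np

def sort_by_angle(rr, cc):
    # Order perimeter points by their angle around the centroid so that
    # consecutive points are spatial neighbours.
    angles = np.arctan2(rr - rr.mean(), cc - cc.mean())
    order = np.argsort(angles)
    return rr[order], cc[order]

# Four points around the centre (1, 1), given in a scrambled order.
rr = np.array([0, 2, 1, 1])
cc = np.array([1, 1, 0, 2])
rr_s, cc_s = sort_by_angle(rr, cc)
print(rr_s.tolist(), cc_s.tolist())
```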
":beginner: Good first issue",
":pray: Feature request"
] | jni | 7 |
keras-rl/keras-rl | tensorflow | 383 | which versions of TensorFlow and NumPy are used in DQN_CartPole? | Hello, I want to know which versions of TensorFlow and NumPy are used | closed | 2021-12-20T01:54:31Z | 2022-04-27T22:30:38Z | https://github.com/keras-rl/keras-rl/issues/383 | [
"wontfix"
] | Saber-xxf | 1 |
CTFd/CTFd | flask | 2,489 | Exports should happen in background and be stored by CTFd as an upload | Exports should not be a process that requires the worker should live. Isntead it should be kicked off as a kind of async job and we should have a list of exports that got generated by CTFd that can be downloaded by admins.
This may require support for files that can only be downloaded by admins.
django-cms/django-cms | django | 7,138 | [feat] Open external toolbar links in a new tab/window | #7101 introduced a new menu in the toolbar for link to external resources. I believe this is the first time there have been external links in the toolbar.
In #7137 we're looking to identify external links. The remaining thing would be to have these external links open in a new tab or window.
Adding `target"_blank"` to the links doesn't work here. Likely due to the way the toolbar renders. Therefore this may require some javascript to get links to open in a new tab or window. | closed | 2021-10-15T22:19:50Z | 2022-10-31T01:55:09Z | https://github.com/django-cms/django-cms/issues/7138 | [
"component: frontend",
"component: menus",
"stale"
] | marksweb | 4 |
huggingface/transformers | deep-learning | 36,194 | AutoProcessor loading error | ### System Info
Related Issues and PR: #34307 https://github.com/huggingface/transformers/pull/36184
- `transformers` version: 4.49.0.dev0
- Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
- Python version: 3.10.16
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@Rocketknight1
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here are the reproduction steps:
1. Choose an MLLM like Qwen2.5-VL and download its config file
2. Derive its image processor, processor, and model
3. Modify the config file and try to use `AutoProcessor.from_pretrained` to load it
4. The error occurs as in #34307
```python
from transformers import Qwen2_5_VLProcessor, Qwen2_5_VLImageProcessor, Qwen2_5_VLForConditionalGeneration, Qwen2_5_VLConfig


class NewProcessor(Qwen2_5_VLProcessor):
    image_processor_class = "NewImageProcessor"

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)


class NewImageProcessor(Qwen2_5_VLImageProcessor):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)


class NewConfig(Qwen2_5_VLConfig):
    model_type = "new_model"


class NewModel(Qwen2_5_VLForConditionalGeneration):
    config_class = NewConfig

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)


from transformers import AutoModel, AutoImageProcessor, AutoProcessor

AutoImageProcessor.register(NewModel.config_class, NewImageProcessor)
AutoProcessor.register(NewModel.config_class, NewProcessor)
AutoModel.register(NewModel.config_class, NewModel)


if __name__ == "__main__":
    processor = NewProcessor.from_pretrained("path/to/NewModel_config/")
```
modified config
```
config.json:
    "architectures": [
        "NewModel"
    ],
    "model_type": "new_model",

preprocessor_config.json:
    "image_processor_type": "NewImageProcessor",
    "processor_class": "NewProcessor"
```
I also checked PR https://github.com/huggingface/transformers/pull/36184, but it didn't work, because the function `_get_class_from_class_name` uses a mapping whose keys are strings rather than Config classes.
### Expected behavior
None | closed | 2025-02-14T14:40:51Z | 2025-02-17T16:48:04Z | https://github.com/huggingface/transformers/issues/36194 | [
"bug"
] | JJJYmmm | 1 |
explosion/spaCy | deep-learning | 12,761 | AssertionError: [E923] It looks like there is no proper sample data to initialize the Model of component 'tok2vec'. | ## How to reproduce the behaviour
Run `spacy train config.cfg`
## Your Environment
* Operating System: Windows 10
* Python Version Used: 3.11.0
* spaCy Version Used: 3.5.3
The error message from running `spacy train config.cfg`:
```
ℹ No output directory provided
ℹ Using CPU
=========================== Initializing pipeline ===========================
[2023-06-27 08:49:53,939] [INFO] Set up nlp object from config
[2023-06-27 08:49:53,958] [INFO] Pipeline: ['tok2vec', 'tagger', 'parser', 'ner']
[2023-06-27 08:49:53,962] [INFO] Created vocabulary
[2023-06-27 08:49:53,963] [INFO] Finished initializing nlp object
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "c:\users\salt lick\appdata\roaming\python\python311\scripts\spacy.exe\__main__.py", line 7, in <module>
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\cli\_util.py", line 74, in setup_cli
command(prog_name=COMMAND)
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\typer\core.py", line 778, in main
return _main(
^^^^^^
File "C:\Python311\Lib\site-packages\typer\core.py", line 216, in _main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\typer\main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\cli\train.py", line 45, in train_cli
train(config_path, output_path, use_gpu=use_gpu, overrides=overrides)
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\cli\train.py", line 72, in train
nlp = init_nlp(config, use_gpu=use_gpu)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\training\initialize.py", line 85, in init_nlp
nlp.initialize(lambda: train_corpus(nlp), sgd=optimizer)
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\language.py", line 1308, in initialize
proc.initialize(get_examples, nlp=self, **p_settings)
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\pipeline\tok2vec.py", line 216, in initialize
assert doc_sample, Errors.E923.format(name=self.name)
AssertionError: [E923] It looks like there is no proper sample data to initialize the Model of component 'tok2vec'. To check your input data paths and annotation, run: python -m spacy debug data config.cfg and include the same config override values you would specify for the 'spacy train' command.
```
The output from `spacy debug data config.cfg`:
```
============================ Data file validation ============================
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "c:\users\salt lick\appdata\roaming\python\python311\scripts\spacy.exe\__main__.py", line 7, in <module>
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\cli\_util.py", line 74, in setup_cli
command(prog_name=COMMAND)
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\typer\core.py", line 778, in main
return _main(
^^^^^^
File "C:\Python311\Lib\site-packages\typer\core.py", line 216, in _main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\typer\main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\cli\debug_data.py", line 77, in debug_data_cli
debug_data(
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\cli\debug_data.py", line 117, in debug_data
nlp.initialize(lambda: train_corpus(nlp))
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\language.py", line 1308, in initialize
proc.initialize(get_examples, nlp=self, **p_settings)
File "C:\Users\Salt lick\AppData\Roaming\Python\Python311\site-packages\spacy\pipeline\tok2vec.py", line 216, in initialize
assert doc_sample, Errors.E923.format(name=self.name)
AssertionError: [E923] It looks like there is no proper sample data to initialize the Model of component 'tok2vec'. To check your input data paths and annotation, run: python -m spacy debug data config.cfg and include the same config override values you would specify for the 'spacy train' command.
```
The format for the JSON file I converted (using `spacy convert`):
```
[
    [
        "Blah blah blah\r",
        {
            "entities": [
                [25, 38, "label"],
                [46, 61, "label"]
            ]
        }
    ], ...
```
I'm not sure if this is a bug or user error, but any help is very much appreciated. | closed | 2023-06-27T13:54:59Z | 2023-06-28T07:22:59Z | https://github.com/explosion/spaCy/issues/12761 | [] | butterflyhigh | 0 |
MaartenGr/BERTopic | nlp | 1,241 | BERTopic with cuML UMAP: ValueError: Must specify dtype when data is passed as a <class 'list'> exception | Hello! I am using guided topic modeling with cuML UMAP and HDBSCAN, and running into this error. Please find the code and stack trace below.
The texts are a list of strings.
```python
umap_model = UMAP(n_components=5, n_neighbors=15, min_dist=0.0)
hdbscan_model = HDBSCAN(min_samples=10, gen_min_span_tree=True)
arabert_model = TransformerDocumentEmbeddings(arabert_model_path)

topic_model = BERTopic(
    embedding_model=arabert_model,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
    nr_topics=64,
    n_gram_range=(1, 2),
    seed_topic_list=seed_topic_list,
)

sample = texts[:1000]
topics, probs = topic_model.fit_transform(sample)
```
```bash
File "main.py", line 77, in <module>
topics, probs = topic_model.fit_transform(sample)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/bertopic/_bertopic.py", line 356, in fit_transform
umap_embeddings = self._reduce_dimensionality(embeddings, y)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/bertopic/_bertopic.py", line 2837, in _reduce_dimensionality
self.umap_model.fit(embeddings, y=y)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/cuml/internals/api_decorators.py", line 188, in wrapper
ret = func(*args, **kwargs)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/cuml/internals/api_decorators.py", line 393, in dispatch
return self.dispatch_func(func_name, gpu_func, *args, **kwargs)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/cuml/internals/api_decorators.py", line 190, in wrapper
return func(*args, **kwargs)
File "base.pyx", line 665, in cuml.internals.base.UniversalBase.dispatch_func
File "umap.pyx", line 592, in cuml.manifold.umap.UMAP.fit
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/nvtx/nvtx.py", line 101, in inner
result = func(*args, **kwargs)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/cuml/internals/input_utils.py", line 369, in input_to_cuml_array
arr = CumlArray.from_input(
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/cuml/internals/memory_utils.py", line 87, in cupy_rmm_wrapper
return func(*args, **kwargs)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/nvtx/nvtx.py", line 101, in inner
result = func(*args, **kwargs)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/cuml/internals/array.py", line 1075, in from_input
arr = cls(X, index=index, order=requested_order, validate=False)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/cuml/internals/memory_utils.py", line 87, in cupy_rmm_wrapper
return func(*args, **kwargs)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/nvtx/nvtx.py", line 101, in inner
result = func(*args, **kwargs)
File "/home/hadi/topic_modeling/venv/lib/python3.8/site-packages/cuml/internals/array.py", line 239, in __init__
raise ValueError(
ValueError: Must specify dtype when data is passed as a <class 'list'>
```
cuml_cu11==23.4.1
numpy==1.23.0
numba==0.56.4
Any help is much appreciated. Thank you!
| closed | 2023-05-08T19:45:42Z | 2023-07-11T08:01:45Z | https://github.com/MaartenGr/BERTopic/issues/1241 | [] | hadikhamoud | 2 |
huggingface/pytorch-image-models | pytorch | 1,366 | [Optimizer] Can you implement SAM Optimizer? | In the field of fine-grained visual classification, Sharpness-Aware Minimization (SAM) has just become a very powerful optimizer, so I hope you can integrate this.
Btw, many thanks for your work! | closed | 2022-07-25T06:41:38Z | 2022-07-25T16:58:41Z | https://github.com/huggingface/pytorch-image-models/issues/1366 | [
"enhancement"
] | khiemledev | 2 |
huggingface/diffusers | deep-learning | 11,006 | Broken video output with Wan 2.1 I2V pipeline + quantized transformer | ### Describe the bug
Since there is no proper documentation yet, I'm not sure if there is a difference from other video pipelines that I'm unaware of – but with the code below, the video results are reproducibly broken.
There is a warning:
`Expected types for image_encoder: (<class 'transformers.models.clip.modeling_clip.CLIPVisionModel'>,), got <class 'transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection'>.`
which I assume I'm expected to ignore.
Init image:

Result:
https://github.com/user-attachments/assets/c2e591e7-4cd5-4849-bec4-5938058c0775
Result with different seed:
https://github.com/user-attachments/assets/7006e400-3018-4891-9c4f-06d44ebc704f
Result with different prompt:
https://github.com/user-attachments/assets/42f15f68-bd2b-4b22-b6da-6d5182bc6b22
### Reproduction
```
# Tested on Google Colab with an A100 (40GB).
# Uses ~21 GB VRAM, takes ~150 sec per step, ~75 min in total.
!pip install git+https://github.com/huggingface/diffusers.git
!pip install -U bitsandbytes
!pip install ftfy

import os
import torch
from diffusers import (
    BitsAndBytesConfig,
    WanImageToVideoPipeline,
    WanTransformer3DModel
)
from diffusers.utils import export_to_video
from PIL import Image

model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

transformer = WanTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config
)

pipe = WanImageToVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer
)
pipe.enable_model_cpu_offload()


def render(
    filename,
    image,
    prompt,
    seed=0,
    width=832,
    height=480,
    num_frames=81,
    num_inference_steps=30,
    guidance_scale=5.0,
    fps=16
):
    video = pipe(
        image=image,
        prompt=prompt,
        generator=torch.Generator(device=pipe.device).manual_seed(seed),
        width=width,
        height=height,
        num_frames=num_frames,
        num_inference_steps=num_inference_steps,
        guidance_scale=guidance_scale
    ).frames[0]
    os.makedirs(os.path.dirname(filename), exist_ok=True)
    export_to_video(video, filename, fps=fps)


render(
    filename="/content/test.mp4",
    image=Image.open("/content/test.png"),
    prompt="a woman in a yellow coat is dancing in the desert",
    seed=42
)
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Running on Google Colab?: Yes
- Python version: 3.11.11
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.4 (gpu)
- Jax version: 0.4.33
- JaxLib version: 0.4.33
- Huggingface_hub version: 0.28.1
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- PEFT version: 0.14.0
- Bitsandbytes version: 0.45.3
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-40GB, 40960 MiB
### Who can help?
_No response_ | open | 2025-03-07T17:25:50Z | 2025-03-23T17:37:13Z | https://github.com/huggingface/diffusers/issues/11006 | [
"bug"
] | rolux | 6 |
PaddlePaddle/ERNIE | nlp | 410 | Question about extracting ERNIE 2.0 embeddings | I'd like to ask how to get token-level embeddings from ERNIE 2.0. The [FAQ1: How to get sentence/tokens embedding of ERNIE?](https://github.com/PaddlePaddle/ERNIE#faq1-how-to-get-sentencetokens-embedding-of-ernie) given in the README is for ERNIE 1.0.
Could you provide a demo? | closed | 2020-02-20T02:40:17Z | 2020-02-20T05:10:29Z | https://github.com/PaddlePaddle/ERNIE/issues/410 | [] | Akeepers | 3
Farama-Foundation/PettingZoo | api | 576 | All agents get the same reward in simple_spread_v2 | In this example, I found that all agents got the same reward even though the `local_ratio` was set to 0.5:
```python
env = simple_spread_v2.env(N=3, local_ratio=0.5, max_cycles=30, continuous_actions=False)

for epoch in range(1):
    env.reset()
    for agent in env.agent_iter():
        obs, reward, done, _ = env.last()
        print(f'{agent} reward {reward}')
        action = random.randint(0, 4) if not done else None
        env.step(action)

env.close()
```
Below is the output of the code.
```
agent_0 reward 0.0
agent_1 reward 0.0
agent_2 reward 0.0
agent_0 reward -1.9431845824115361
agent_1 reward -1.9431845824115361
agent_2 reward -1.9431845824115361
agent_0 reward -2.0186656215712047
agent_1 reward -2.0186656215712047
agent_2 reward -2.0186656215712047
agent_0 reward -2.02316213239735
agent_1 reward -2.02316213239735
agent_2 reward -2.02316213239735
agent_0 reward -1.9784184746432458
agent_1 reward -2.478418474643246
agent_2 reward -2.478418474643246
agent_0 reward -1.898511321718054
agent_1 reward -1.898511321718054
agent_2 reward -1.898511321718054
agent_0 reward -1.8691525388907912
agent_1 reward -1.8691525388907912
agent_2 reward -1.8691525388907912
agent_0 reward -1.8909981202978936
agent_1 reward -1.8909981202978936
agent_2 reward -1.8909981202978936
agent_0 reward -1.910515441784764
agent_1 reward -1.910515441784764
agent_2 reward -1.910515441784764
agent_0 reward -1.9249452577291088
agent_1 reward -1.9249452577291088
agent_2 reward -1.9249452577291088
agent_0 reward -1.9365356738465527
agent_1 reward -1.9365356738465527
agent_2 reward -1.9365356738465527
agent_0 reward -1.9456694142339164
agent_1 reward -1.9456694142339164
agent_2 reward -1.9456694142339164
agent_0 reward -1.9907358846441714
agent_1 reward -1.9907358846441714
agent_2 reward -1.9907358846441714
agent_0 reward -2.0239905919667582
agent_1 reward -2.0239905919667582
agent_2 reward -2.0239905919667582
agent_0 reward -2.05079101437944
agent_1 reward -2.05079101437944
agent_2 reward -2.05079101437944
agent_0 reward -2.069194106064267
agent_1 reward -2.069194106064267
agent_2 reward -2.069194106064267
agent_0 reward -2.059820269770431
agent_1 reward -2.059820269770431
agent_2 reward -2.059820269770431
agent_0 reward -2.03554268829292
agent_1 reward -2.03554268829292
agent_2 reward -2.03554268829292
agent_0 reward -2.066117729473234
agent_1 reward -2.066117729473234
agent_2 reward -2.066117729473234
agent_0 reward -2.0417353527270317
agent_1 reward -2.0417353527270317
agent_2 reward -2.0417353527270317
agent_0 reward -1.9580246588542702
agent_1 reward -1.9580246588542702
agent_2 reward -1.9580246588542702
agent_0 reward -1.9042194085358735
agent_1 reward -1.9042194085358735
agent_2 reward -1.9042194085358735
agent_0 reward -1.834690151159382
agent_1 reward -1.834690151159382
agent_2 reward -1.834690151159382
agent_0 reward -1.764785825707044
agent_1 reward -1.764785825707044
agent_2 reward -1.764785825707044
agent_0 reward -1.7440609141758305
agent_1 reward -1.7440609141758305
agent_2 reward -1.7440609141758305
agent_0 reward -1.7000633194660981
agent_1 reward -1.7000633194660981
agent_2 reward -1.7000633194660981
agent_0 reward -1.6965411220402724
agent_1 reward -1.6965411220402724
agent_2 reward -1.6965411220402724
agent_0 reward -1.646414579712677
agent_1 reward -1.646414579712677
agent_2 reward -1.646414579712677
agent_0 reward -1.5417935406618044
agent_1 reward -1.5417935406618044
agent_2 reward -1.5417935406618044
agent_0 reward -1.4359175751395865
agent_1 reward -1.4359175751395865
agent_2 reward -1.4359175751395865
agent_0 reward -1.3918633502134519
agent_1 reward -1.3918633502134519
agent_2 reward -1.3918633502134519
``` | closed | 2021-12-10T06:11:55Z | 2021-12-22T05:10:48Z | https://github.com/Farama-Foundation/PettingZoo/issues/576 | [] | Weissle | 1 |
2noise/ChatTTS | python | 132 | No output at all after execution | WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
INFO:ChatTTS.core:All initialized.
0%| | 0/384 [00:00<?, ?it/s]DEBUG CHECK FAILED: /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/dynamo/cpython_defs.c:124
Process finished with exit code 134 (interrupted by signal 6:SIGABRT)
That's all the output after running it. Has anyone else encountered this situation?
| closed | 2024-05-31T06:54:47Z | 2024-07-18T04:01:50Z | https://github.com/2noise/ChatTTS/issues/132 | [
"stale"
] | youzeliang | 4 |
httpie/cli | api | 565 | BlockingIOError: [Errno 35] write could not complete without blocking | httpie version 0.9.8 installed on macos using `brew install httpie` reportts this error
Version of pythin used 3.6
```
http https://start.spring.io/
HTTP/1.1 200 OK
CF-RAY: 33934ec27b514463-BRU
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/plain
Date: Thu, 02 Mar 2017 09:25:15 GMT
Etag: W/"fbb270434e42fe999924581cd52aab20"
Server: cloudflare-nginx
Set-Cookie: __cfduid=db25749d6d8d8fa5487fa77a8e609442a1488446715; expires=Fri, 02-Mar-18 09:25:15 GMT; path=/; domain=.spring.io; HttpOnly
Transfer-Encoding
ttp: error: BlockingIOError: [Errno 35] write could not complete without blocking
m: chunked
X-Application-Context: start:cloud:1
X-Vcap-Request-Id: 4a3f150b-a469-4efb-6d64-7312dd2a238fTraceback (most recent call last):
File "/usr/local/Cellar/httpie/0.9.8_2/libexec/lib/python3.6/site-packages/httpie/core.py", line 227, in main
log_error=log_error,
File "/usr/local/Cellar/httpie/0.9.8_2/libexec/lib/python3.6/site-packages/httpie/core.py", line 138, in program
write_stream(**write_stream_kwargs)
File "/usr/local/Cellar/httpie/0.9.8_2/libexec/lib/python3.6/site-packages/httpie/output/streams.py", line 38, in write_stream
outfile.flush()
BlockingIOError: [Errno 35] write could not complete without blocking
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/http", line 11, in <module>
load_entry_point('httpie==0.9.8', 'console_scripts', 'http')()
File "/usr/local/Cellar/httpie/0.9.8_2/libexec/lib/python3.6/site-packages/httpie/__main__.py", line 11, in main
sys.exit(main())
File "/usr/local/Cellar/httpie/0.9.8_2/libexec/lib/python3.6/site-packages/httpie/core.py", line 255, in main
log_error('%s: %s', type(e).__name__, msg)
File "/usr/local/Cellar/httpie/0.9.8_2/libexec/lib/python3.6/site-packages/httpie/core.py", line 189, in log_error
env.stderr.write('\nhttp: %s: %s\n' % (level, msg))
BlockingIOError: [Errno 35] write could not complete without blocking
``` | closed | 2017-03-02T09:27:36Z | 2020-06-18T23:02:12Z | https://github.com/httpie/cli/issues/565 | [] | cmoulliard | 3 |
keras-team/keras | data-science | 20,036 | Inputs has to be named after the first entry of the dict for models with multiple inputs | Keras version: "3.4.1"
For model with multiple inputs, inputs has to be named after the first entry of the dictionary which I think is a bug.
Lets say I have a Keras model with two inputs named `a_name` and `other_name`. Error will be raise if I try to train such model with those named inputs
```python
model.fit({"a_name": x_train_1, "other_name": x_train_2}, y_train)
```
but would be fine if their names are `a_name` and `a_name_whatever` or `other_name` and `other_name_whatever`
Here is a minimal example on Google Colab to demostrate the issue (https://colab.research.google.com/drive/1oJ5QGMYV7GlHimkquu1SJWd4vyn9-ok0?usp=sharing).
All cells work without raising error if I use Keras 3.3.3 | closed | 2024-07-23T19:51:29Z | 2024-08-08T13:39:45Z | https://github.com/keras-team/keras/issues/20036 | [
"type:Bug"
] | henrysky | 6 |
whitphx/streamlit-webrtc | streamlit | 1,160 | streamlit-webrtc does not work on Chrome (tested on Android and macOS) | `streamlit-webrtc` seems not to work in the Chrome browser; I tested it on both Android and macOS with multiple apps.
On both devices it works with Firefox though.
Any idea why? | open | 2023-01-02T07:07:29Z | 2023-03-17T17:16:44Z | https://github.com/whitphx/streamlit-webrtc/issues/1160 | [] | gustavz | 3 |
pallets-eco/flask-sqlalchemy | flask | 475 | Don't store extension data in g | closed | 2017-02-27T21:37:23Z | 2020-12-05T21:18:04Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/475 | [] | davidism | 1 | |
wkentaro/labelme | computer-vision | 414 | Grouping segmentations | Hi there,
sorry, I am new to the repo and wondering whether I can group segmentations, e.g. draw polygons for
* person
* shoes
but then also connect (or group) the shoes with the corresponding person so that the output structure contains the information of who is wearing which shoes.
Best regards | closed | 2019-05-22T18:17:51Z | 2019-05-29T15:06:58Z | https://github.com/wkentaro/labelme/issues/414 | [] | ghost | 2 |
modAL-python/modAL | scikit-learn | 98 | SVR regression | Hello, when I use SVR for a regression task, it prompts:
'SVR' object has no attribute 'predict_proba'
I think the cause is that there is no suitable query_strategy.
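For context, query strategies for regression usually rank candidates by predictive uncertainty, such as the spread of a committee's predictions, instead of `predict_proba`. A dependency-free sketch of that idea (not modAL's API):

```python
def committee_std(predictions):
    """Standard deviation of one candidate's committee predictions."""
    mean = sum(predictions) / len(predictions)
    return (sum((p - mean) ** 2 for p in predictions) / len(predictions)) ** 0.5

def select_query(candidate_predictions):
    """Return the index of the candidate the committee disagrees on most."""
    scores = [committee_std(p) for p in candidate_predictions]
    return max(range(len(scores)), key=scores.__getitem__)

# Candidate 1 has the widest spread, so it would be queried next.
preds = [[1.0, 1.1, 0.9], [0.0, 2.0, -2.0], [5.0, 5.0, 5.0]]
print(select_query(preds))
```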
So how can I design a query that matches regression? | open | 2020-08-19T09:36:03Z | 2022-01-04T09:18:38Z | https://github.com/modAL-python/modAL/issues/98 | [] | zt823793279 | 6 |
django-import-export/django-import-export | django | 2,017 | Warn for unused fields declared on class only | **Describe the bug**
The warning added in #1930 has lots of false positives in my project. It turns out there are many resource base classes with specialized subclasses that use a subset of fields.
**To Reproduce**
Something like this:
```python
from import_export.fields import Field
from import_export.resources import ModelResource


class BaseBookResource(ModelResource):
    isbn = Field(attribute="isbn")
    title = Field(attribute="title")
    catalogue_number = Field(attribute="catalogue_number")

    class Meta:
        model = Book
        fields = (
            "isbn",
            "title",
            "catalogue_number",
        )


class Export1BookResource(BaseBookResource):
    class Meta:
        fields = (
            "isbn",
            "title",
        )


class Export2BookResource(BaseBookResource):
    class Meta:
        fields = (
            "isbn",
            "catalogue_number",
        )
```
**Versions (please complete the following information):**
- Django Import Export: [e.g. 1.2, 2.0] 2.3.3
- Python [e.g. 3.6, 3.7] 3.12
- Django [e.g. 1.2, 2.0] 5.0
**Expected behavior**
Warn only for fields declared within the class but missed within `fields`. That would mean no warnings in the above example.
Given the current setup I think that would mean either moving the warning up into the metaclass or tracking which resource a field is attached to (like Django model fields track which model class they are attached to).
**Screenshots**
n/a
**Additional context**
n/a
| open | 2024-12-02T22:51:38Z | 2024-12-10T10:27:22Z | https://github.com/django-import-export/django-import-export/issues/2017 | [
"bug"
] | adamchainz | 1 |
Textualize/rich | python | 3,107 | [BUG] Console.clear not handled for CMD and Powershell | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
`Console.clear` has not been handled (properly) for cmd and powershell since Rich 12.0.0.
Based on #2055 and #2066, rich 12.0.0 introduced using native windows console API for legacy terminals.
I'm posting this under [BUG] since rich 11.2 is able to clear console without problems. Apologies if this is a duplicate. Here's an external link from Microsoft for [Clearing the Screen](https://learn.microsoft.com/en-us/windows/console/clearing-the-screen) but something like this will work for me personally:
```python
class LegacyWindowsTerm:
...
def clear(self) -> None:
"""Clear screen."""
FillConsoleOutputCharacter(
self._handle,
" ",
self.screen_size.row * self.screen_size.col,
WindowsCoordinates(0, 0)
)
# or
# os.system("cls")
```
Provide a minimal code example that demonstrates the issue if you can. If the issue is visual in nature, consider posting a screenshot.
```python
import this
from rich.console import Console
Console().clear()
```
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
Windows Powershell and CMD
I may ask you to copy and paste the output of the following commands. It may save some time if you do it now.
If you're using Rich in a terminal:
```bash
┌───────────────────────── <class 'rich.console.Console'> ─────────────────────────┐
│ A high level console interface. │
│ │
│ ┌──────────────────────────────────────────────────────────────────────────────┐ │
│ │ <console width=119 ColorSystem.WINDOWS> │ │
│ └──────────────────────────────────────────────────────────────────────────────┘ │
│ │
│ color_system = 'windows' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 43 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = True │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=119, height=43), │
│ legacy_windows=True, │
│ min_width=1, │
│ max_width=119, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=43, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=119, height=43) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 119 │
└──────────────────────────────────────────────────────────────────────────────────┘
┌─── <class 'rich._windows.WindowsConsoleFeatures'> ────┐
│ Windows features available. │
│ │
│ ┌───────────────────────────────────────────────────┐ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ └───────────────────────────────────────────────────┘ │
│ │
│ truecolor = False │
│ vt = False │
└───────────────────────────────────────────────────────┘
┌────── Environment Variables ───────┐
│ { │
│ 'TERM': None, │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
└────────────────────────────────────┘
platform="Windows"
```
```bash
rich==13.5.2
```
</details>
| open | 2023-08-26T17:38:03Z | 2024-02-23T13:25:13Z | https://github.com/Textualize/rich/issues/3107 | [
"Needs triage"
] | marvintensuan | 1 |
erdewit/ib_insync | asyncio | 246 | Expected functionality of ib.tickers()? | According to the [documentation](https://ib-insync.readthedocs.io/api.html?highlight=tickers#ib_insync.ib.IB.tickers), the `tickers()` function is supposed to return a list of all tickers on IB, but for me this always returns an empty list. Is this a bug, or how is this function intended to be used?
I was looking for a method to obtain all forex pairs from IB, if there is one, but other methods returning all tradable stocks, etc, on IB would be convenient to know as well if they exist. Thanks. | closed | 2020-05-01T12:39:01Z | 2020-05-01T13:06:22Z | https://github.com/erdewit/ib_insync/issues/246 | [] | Shellcat-Zero | 1 |
AirtestProject/Airtest | automation | 1,194 | After executing double_click(), the report does not show the red circle marking the click | After executing double_click(), the report does not show the red circle marking the click location

| open | 2024-02-02T07:46:49Z | 2024-02-02T07:49:20Z | https://github.com/AirtestProject/Airtest/issues/1194 | [
"bug"
] | fishfish-yu | 0 |
recommenders-team/recommenders | machine-learning | 1,333 | [FEATURE] make tqdm optional | ### Description
currently tqdm is required to download datasets, given this is just for a visual progress indicator it would be preferable if there was no dependency on tqdm, or at least a graceful fallback to continuing to download data without that library
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
this should run without tqdm installed
```
from reco_utils.dataset.movielens import load_pandas_df
df = load_pandas_df()
```
### Other Comments
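A graceful-fallback pattern along these lines would cover both cases (a sketch only — the wrapper and function names are illustrative, not part of reco_utils):

```python
try:
    from tqdm import tqdm
except ImportError:
    # tqdm is unavailable: fall back to a no-op pass-through so the
    # download loop still runs, just without a progress bar.
    def tqdm(iterable=None, **kwargs):
        return iterable if iterable is not None else []

def download_with_progress(chunks):
    """Consume download chunks, reporting progress only if tqdm exists."""
    total = 0
    for chunk in tqdm(chunks, desc="Downloading"):
        total += len(chunk)
    return total
```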
| closed | 2021-03-02T02:03:13Z | 2021-12-17T13:23:10Z | https://github.com/recommenders-team/recommenders/issues/1333 | [
"enhancement"
] | gramhagen | 2 |
dmlc/gluon-cv | computer-vision | 955 | GPU inference too slow | gluoncv's CPU inference speed is relatively fast, but GPU inference seems too slow: a mobilenet1.0_yolo3 test on a GTX 1080 Ti (CUDA 9.0) only achieves 16 FPS at 416*416. The code follows the tutorials. | closed | 2019-09-23T20:49:43Z | 2021-06-07T07:04:16Z | https://github.com/dmlc/gluon-cv/issues/955 | [
"Stale"
] | TomMao23 | 10 |
jina-ai/serve | fastapi | 6,142 | Fix code scanning alert - Information exposure through an exception | <!-- Warning: The suggested title contains the alert rule name. This can expose security information. -->
Tracking issue for:
- [ ] https://github.com/jina-ai/jina/security/code-scanning/6
| closed | 2024-02-22T13:24:03Z | 2024-06-06T00:18:51Z | https://github.com/jina-ai/serve/issues/6142 | [
"Stale"
] | JoanFM | 1 |
litestar-org/litestar | api | 3,730 | Enhancement: after_startup and after_shutdown hook | ### Summary
Issue #2375 mentions that the after_startup and after_shutdown hooks have been removed. However, this is a feature that I need.
Right now, I'm developing a service for receiving webhooks. After application start, I need to inform a remote service that my server has started accepting requests.
### Basic Example
```python
app = Litestar(
[index],
after_startup=[lambda: print("Application started up!")]
)
```
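For context, the asyncio-level workaround described under "Unresolved questions" below can be sketched roughly like this (the server and notification callables are hypothetical stand-ins):

```python
import asyncio

async def run_server():
    # stand-in for serving the ASGI app (e.g. an uvicorn Server task)
    await asyncio.sleep(0.05)

async def notify_after_startup():
    # crude "after_startup": wait until the server is presumably up,
    # then inform the remote service
    await asyncio.sleep(0.01)
    return "notified"

async def main():
    # run both concurrently; gather preserves argument order in results
    _, result = await asyncio.gather(run_server(), notify_after_startup())
    return result
```

This works, but it guesses at readiness with a sleep — which is exactly why a real `after_startup` hook would be nicer.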
### Drawbacks and Impact
_No response_
### Unresolved questions
My current workaround is running desired function after `asyncio.sleep` together with ASGI server in `asyncio.gather`. Is there a better alternative? | closed | 2024-09-13T21:31:43Z | 2025-03-20T15:54:55Z | https://github.com/litestar-org/litestar/issues/3730 | [
"Enhancement"
] | m3nowak | 2 |
ludwig-ai/ludwig | data-science | 3,979 | Token-level Probability Always 0.0 When Fine-tuning Llama2-7b Model on Single GPU | **Describe the bug**
The token-level probabilities consistently appear as 0.0 when fine-tuning the Llama2-7b model using "Ludwig + DeepLearning.ai: Efficient Fine-Tuning for Llama2-7b on a Single GPU.ipynb".
https://colab.research.google.com/drive/1Ly01S--kUwkKQalE-75skalp-ftwl0fE?usp=sharing
below thing is my code that has a problem...
https://colab.research.google.com/drive/1OmbCKlPzlxm4__iThYqB9PSLUWZZVptz?usp=sharing
**To Reproduce**
Steps to reproduce the behavior:
1. Fine-tune the Llama2-7b model using the provided notebook.
2. Execute the model's predictions using the `predict` function with modified parameters, including setting `skip_save_unprocessed_output` to `False` and providing a specific `output_directory`.
3. Despite modifications, the token-level probabilities remain 0.0.
```python
ludwig.predict(
dataset=None,
data_format=None,
split='full',
batch_size=128,
skip_save_unprocessed_output=True,
skip_save_predictions=True,
output_directory='results',
return_type=<class 'pandas.core.frame.DataFrame'>,
debug=False
)
```
**Expected behavior**
Token-level probabilities should reflect the model's confidence in predicting each token's output.
**Screenshots**
N/A
**Environment:**
- OS: Ubuntu 20.04
- Python version: 3.8.10
- Ludwig version: 0.3.3
**Additional context**
The logger within the predict function does not seem to function as expected.
<img width="933" alt="스크린샷 2024-04-02 오후 4 45 28" src="https://github.com/ludwig-ai/ludwig/assets/87891501/1947e429-f1dc-4149-ba64-0c98b094ec53">
| closed | 2024-04-02T07:46:11Z | 2024-10-21T11:30:57Z | https://github.com/ludwig-ai/ludwig/issues/3979 | [
"llm"
] | MoOo2mini | 1 |
d2l-ai/d2l-en | deep-learning | 1,893 | Chapter 13.11 - Fully Convolutional Networks: FileNotFoundError in Colab (PyTorch) | <code> img = torchvision.transforms.ToTensor()(d2l.Image.open('../img/catdog.jpg')) </code>
I've already installed <code> !pip install d2l==0.17.0</code>
While executing this line, I'm getting
FileNotFoundError: [Errno 2] No such file or directory: '../img/catdog.jpg' | closed | 2021-08-24T03:47:16Z | 2021-08-28T10:37:59Z | https://github.com/d2l-ai/d2l-en/issues/1893 | [] | AbhinandanRoul | 2 |
mwaskom/seaborn | matplotlib | 3,265 | seaborn.matrix.clustermap Runtime error in Jupyter Notebook | When running the following sample code in a Jupyter Notebook, there is a RuntimeError.
However, it is highly dependent on the cell order within the Jupyter Notebook.
**Environment:**
Jupyter notebook server: 6.5.2
Python 3.10.8
IPython 8.8.0
Seaborn: 0.11.2
**Sample Code:**
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from seaborn.matrix import clustermap
data = pd.DataFrame(np.random.randn(300, 300))
grid = clustermap(data)
fig = plt.gcf()
fig.show()
```
**Error Stack**
> ---------------------------------------------------------------------------
> RuntimeError Traceback (most recent call last)
> Cell In[23], line 8
> 4 from seaborn.matrix import clustermap
> 6 data = pd.DataFrame(np.random.randn(300, 300))
> ----> 8 grid = clustermap(data)
> 9 fig = plt.gcf()
> 10 fig.show()
>
> File ~\anaconda3\envs\dev\lib\site-packages\seaborn\_decorators.py:46, in _deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
> 36 warnings.warn(
> 37 "Pass the following variable{} as {}keyword arg{}: {}. "
> 38 "From version 0.12, the only valid positional argument "
> (...)
> 43 FutureWarning
> 44 )
> 45 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
> ---> 46 return f(**kwargs)
>
> File ~\anaconda3\envs\dev\lib\site-packages\seaborn\matrix.py:1406, in clustermap(data, pivot_kws, method, metric, z_score, standard_scale, figsize, cbar_kws, row_cluster, col_cluster, row_linkage, col_linkage, row_colors, col_colors, mask, dendrogram_ratio, colors_ratio, cbar_pos, tree_kws, **kwargs)
> 1248 """
> 1249 Plot a matrix dataset as a hierarchically-clustered heatmap.
> 1250
> (...)
> 1398 >>> g = sns.clustermap(iris, z_score=0, cmap="vlag")
> 1399 """
> 1400 plotter = ClusterGrid(data, pivot_kws=pivot_kws, figsize=figsize,
> 1401 row_colors=row_colors, col_colors=col_colors,
> 1402 z_score=z_score, standard_scale=standard_scale,
> 1403 mask=mask, dendrogram_ratio=dendrogram_ratio,
> 1404 colors_ratio=colors_ratio, cbar_pos=cbar_pos)
> -> 1406 return plotter.plot(metric=metric, method=method,
> 1407 colorbar_kws=cbar_kws,
> 1408 row_cluster=row_cluster, col_cluster=col_cluster,
> 1409 row_linkage=row_linkage, col_linkage=col_linkage,
> 1410 tree_kws=tree_kws, **kwargs)
>
> File ~\anaconda3\envs\dev\lib\site-packages\seaborn\matrix.py:1232, in ClusterGrid.plot(self, metric, method, colorbar_kws, row_cluster, col_cluster, row_linkage, col_linkage, tree_kws, **kws)
> 1229 yind = np.arange(self.data2d.shape[0])
> 1231 self.plot_colors(xind, yind, **kws)
> -> 1232 self.plot_matrix(colorbar_kws, xind, yind, **kws)
> 1233 return self
>
> File ~\anaconda3\envs\dev\lib\site-packages\seaborn\matrix.py:1203, in ClusterGrid.plot_matrix(self, colorbar_kws, xind, yind, **kws)
> 1198 else:
> 1199 # Turn the colorbar axes off for tight layout so that its
> 1200 # ticks don't interfere with the rest of the plot layout.
> 1201 # Then move it.
> 1202 self.ax_cbar.set_axis_off()
> -> 1203 self._figure.tight_layout(**tight_params)
> 1204 self.ax_cbar.set_axis_on()
> 1205 self.ax_cbar.set_position(self.cbar_pos)
>
> File ~\anaconda3\envs\dev\lib\site-packages\matplotlib\figure.py:3444, in Figure.tight_layout(self, pad, h_pad, w_pad, rect)
> 3441 engine = TightLayoutEngine(pad=pad, h_pad=h_pad, w_pad=w_pad,
> 3442 rect=rect)
> 3443 try:
> -> 3444 self.set_layout_engine(engine)
> 3445 engine.execute(self)
> 3446 finally:
>
> File ~\anaconda3\envs\dev\lib\site-packages\matplotlib\figure.py:2586, in Figure.set_layout_engine(self, layout, **kwargs)
> 2584 self._layout_engine = new_layout_engine
> 2585 else:
> -> 2586 raise RuntimeError('Colorbar layout of new layout engine not '
> 2587 'compatible with old engine, and a colorbar '
> 2588 'has been created. Engine not changed.')
>
> RuntimeError: Colorbar layout of new layout engine not compatible with old engine, and a colorbar has been created. Engine not changed.
| open | 2023-02-17T10:56:35Z | 2024-08-12T04:14:37Z | https://github.com/mwaskom/seaborn/issues/3265 | [
"ux"
] | mucmch | 4 |
influxdata/influxdb-client-python | jupyter | 656 | string field gets inserted as integer :( | ### Specifications
* Client Version: 1.41.0
* InfluxDB Version: 2.7-alpine container
* Platform: Ubuntu 22.04
### Code sample to reproduce problem
```python
test1 = Point(measurement_name="pullrequest")
test1.tag("key1","value1")
**test1.field("status", "value2")**
test2 = Point(measurement_name="pullrequest")
test2.tag("key3","value3")
**test2.field("status", "value4")**
self._client = InfluxDBClient(
url=self._full_url, ssl=self._ssl, token=self._token, org=self._org, *args, **kwargs
)
write_api = self._client.write_api(write_options=write_options)
write_api.write(bucket=self._bucket, **record=test1**, write_precision=time_precision, **kwargs)
write_api.write(bucket=self._bucket, **record=test2**, write_precision=time_precision, **kwargs)
```
### Expected behavior
<img width="841" alt="image" src="https://github.com/influxdata/influxdb-client-python/assets/138435075/9f6722f8-0348-453c-8bc2-c4841fd35ac1">
### Actual behavior
## Instead of "value2" string I get 0, and instead of "value4" string I get 1, and so on.
<img width="803" alt="image" src="https://github.com/influxdata/influxdb-client-python/assets/138435075/f50dadd2-50cc-4e02-a257-64f9f23ccbc7">
### Additional info
_No response_ | closed | 2024-05-17T07:23:40Z | 2024-05-17T12:03:16Z | https://github.com/influxdata/influxdb-client-python/issues/656 | [
"bug"
] | rwader-swi | 1 |
widgetti/solara | fastapi | 465 | pyright complains "component" is not exported from module "solara" | pyright complains `"component" is not exported from module "solara"` for the following code, line 3
```python
import solara
@solara.component
class Page:
pass
```
turns out it is not a false alarm, see https://github.com/microsoft/pyright/issues/5929
> By default, all imports in a py.typed library are considered private unless they are explicitly re-exported. To indicate that an imported symbol is intended to be re-exported, the maintainers of this library would need to use one of the techniques documented [here](https://microsoft.github.io/pyright/#/typed-libraries?id=library-interface)
so I guess instead of
```python
from reacton import component
```
it could be updated to re-export using the redundant form to make pyright happy
```python
from reacton import component as component
```
| open | 2024-01-13T03:59:33Z | 2024-08-13T04:37:25Z | https://github.com/widgetti/solara/issues/465 | [] | zhuoqiang | 3 |
thtrieu/darkflow | tensorflow | 884 | Layer [yolo] not implemented (yolov3-tiny.cfg) | Python:
```
from darkflow.net.build import TFNet
import cv2
import Z_folder_scann
import time
options = {"model": "cfg/yolov3-tiny.cfg", "load": "tiny-yolov3_51200.weights", "threshold": 0.1}
tfnet = TFNet(options)
```
Log:
```
C:\ProgramData\Anaconda3.5.2.0\python.exe C:/Users/leekw/PycharmProjects/cpos_darkflow_application/T_Car_positioning.py
C:\ProgramData\Anaconda3.5.2.0\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
C:\ProgramData\Anaconda3.5.2.0\lib\site-packages\darkflow\dark\darknet.py:54: UserWarning: ./cfg/yolov3-tiny.cfg not found, use D:\MachineLearning\darknet\alexey_darknet\darknet-master\build\darknet\x64\cfg\yolov3-tiny.cfg instead
D:\MachineLearning\darknet\alexey_darknet\darknet-master\build\darknet\x64\cpos\cfg\yolov3-tiny.cfg
cfg_path, FLAGS.model))
D:\MachineLearning\darknet\alexey_darknet\darknet-master\build\darknet\x64\tiny-yolov3_51200.weights
Layer [yolo] not implemented
Parsing D:\MachineLearning\darknet\alexey_darknet\darknet-master\build\darknet\x64\cfg\yolov3-tiny.cfg
Process finished with exit code 1
```
My cfg file does include the [yolo] layer:
```
[net]
# Testing
# batch=1
# subdivisions=1
# Training
batch=64
subdivisions=8
width=224
height=224
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
# burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=1
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
###########
[convolutional]
batch_normalize=1
filters=21
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=21
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=21
activation=linear
[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=2
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=21
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 8
[convolutional]
batch_normalize=1
filters=21
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=21
activation=linear
[yolo]
mask = 0,1,2
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=2
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
``` | open | 2018-08-30T07:28:02Z | 2019-04-19T19:00:18Z | https://github.com/thtrieu/darkflow/issues/884 | [] | leekwunfung817 | 9 |
rthalley/dnspython | asyncio | 1,114 | win32api DLL load error after upgrading to 2.6.1 from 2.4.2 | **Describe the bug**
I have a miniconda env where I install both pywin32(==306) and dnspython.
Previously I was using dnspython 2.4.2 and everything was working fine.
Few days ago, I tried to upgrade to 2.6.1 (to avoid CVE-2023-29483) and I ran into the following error while launching my application:
"ImportError: DLL load failed while importing win32api: The specified module could not be found."
Please note this issue occurs only on Windows (using 10).
If I revert back to 2.4.2, everything works fine.
Also, if I add `<conda_env>\Lib\site-packages\pywin32_system32` to PATH, then 2.6.1 works fine.
**To Reproduce**
Create a miniconda env with pywin32(==306) and dnspython(2.6.1) and try to run some code with "import win32api"
**Context (please complete the following information):**
- dnspython version [2.6.1]
- Python version [3.9.13]
- OS: [Windows 10]
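For what it's worth, the PATH workaround can also be expressed in code via `os.add_dll_directory` (Windows, Python 3.8+); this is a sketch under the assumption that the conda env uses the standard `Lib\site-packages\pywin32_system32` layout:

```python
import os
import sys

def pywin32_dll_dir(prefix: str) -> str:
    """Locate the env's pywin32 DLL folder (assumed standard layout)."""
    return os.path.join(prefix, "Lib", "site-packages", "pywin32_system32")

# On Windows, registering this directory lets the loader find
# pywintypes*.dll / pythoncom*.dll before `import win32api`.
if sys.platform == "win32":
    dll_dir = pywin32_dll_dir(sys.prefix)
    if os.path.isdir(dll_dir):
        os.add_dll_directory(dll_dir)
```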
| closed | 2024-08-01T18:58:39Z | 2024-08-18T13:35:36Z | https://github.com/rthalley/dnspython/issues/1114 | [
"Cannot Reproduce"
] | tanmoypalit | 6 |
MagicStack/asyncpg | asyncio | 689 | Using async iterator in copy_records_to_table | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.22.0
* **PostgreSQL version**: 13
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: no
* **Python version**: 3.8, 3.9
* **Platform**: Windows, Linux
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: git
* **If you built asyncpg locally, which version of Cython did you use?**: 0.29.21
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: yes
<!-- Enter your issue details below this comment. -->
I want to stream data to base with COPY, but i have async code, that produce data. I can pass regular iterator to `records`, but it ends then Queue empty. So reconnect needed.
```
queue = asyncio.Queue(2<<10)
loop.create_task(feed(queue))
async def get_records(queue):
while True:
r = await queue.get()
yield r
        queue.task_done()  # was: self.q.task_done(), which is undefined here
async with pg.acquire() as connection:
await connection.copy_records_to_table(
'radarlog2',
records = get_records(queue)
)
``` | closed | 2021-01-14T09:14:50Z | 2021-08-10T00:16:28Z | https://github.com/MagicStack/asyncpg/issues/689 | [] | alex-eri | 3 |
open-mmlab/mmdetection | pytorch | 12,000 | Does Co-DETR supports automatic-mixed-precision training? | When training using Co-DETR (using the co_dino_5scale_swin_l_16xb1_16e_o365tococo.py config) in an environment with 6 RTX4090 GPUs, MMCV==2.2.0, and MMdetection==3.3.0, the following error occurs:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.78 GiB. GPU 5 has a total capacity of 23.65 GiB of which 38.25 MiB is free. Process 4916 has 390.70 MiB memory in use. Including non-PyTorch memory, this process has 22.62 GiB memory in use. Of the allocated memory 16.82 GiB is allocated by PyTorch, and 5.21 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
While looking for solutions to this problem, I found the following section in tools/train.py:
```python
parser.add_argument(
    '--amp',
    action='store_true',
    default=False,
    help='enable automatic-mixed-precision training')
```
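Separately from the CLI flag, MMEngine-based configs can usually enable AMP in the config itself via the optimizer wrapper; a sketch (assuming MMEngine's standard `AmpOptimWrapper` — the optimizer settings below are placeholders, not the ones from the Co-DETR config):

```python
# Config-fragment sketch: swap the default OptimWrapper for AMP.
optim_wrapper = dict(
    type='AmpOptimWrapper',   # mixed-precision wrapper from MMEngine
    loss_scale='dynamic',     # dynamic gradient scaling
    optimizer=dict(type='AdamW', lr=1e-4, weight_decay=1e-4),
)
```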
I want to try training by activating this option. Is this possible with Co-DETR? | open | 2024-10-16T02:53:41Z | 2024-10-16T02:54:00Z | https://github.com/open-mmlab/mmdetection/issues/12000 | [] | taemmini | 0 |
seleniumbase/SeleniumBase | web-scraping | 2,170 | Mobile Mode modernization | ## Mobile Mode modernization
The default values for user_agent and device metrics are getting out-of-date. Need to switch those to something more recent.
Based on tests, using "Android WebView 110" (In-App Browser) appears to get the best results. The user_agent should match this device. (Note that this will only be used for setting **default values** when the user doesn't specify a user_agent with device metrics.)
--------
Info about "Android WebView":
"Android System WebView lets applications display browser windows in an app instead of transporting the user to another browser. Android developers use WebView when they want to display webpages in a Google app or other application."
| closed | 2023-10-09T19:22:39Z | 2023-10-10T19:56:14Z | https://github.com/seleniumbase/SeleniumBase/issues/2170 | [
"enhancement"
] | mdmintz | 1 |
sczhou/CodeFormer | pytorch | 396 | Missing or damaged file from central directory -- any fix? | It started loading, and after about 25% it stopped, reporting this error:
File "C:\Users\Zammorta\.conda\envs\codeformer\lib\site-packages\torch\serialization.py", line 480, in __init__
super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
Any remedy?
| open | 2024-08-23T20:15:07Z | 2024-08-26T12:33:57Z | https://github.com/sczhou/CodeFormer/issues/396 | [] | Ziggozaur | 1 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 629 | Support for OpenAI Assistants API | The Assistants API has support for uploading files into a server-side vector store (https://platform.openai.com/docs/assistants/tools/file-search). This would eliminate the need for chunking files while scraping.
On the other hand, I don't know if other systems support server-side RAG, so this might be OpenAI-specific functionality. | closed | 2024-09-04T08:44:40Z | 2025-01-20T16:00:11Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/629 | [] | matthewgertner | 3 |
blockchain-etl/bitcoin-etl | dash | 8 | pip install bitcoin-etl error | This is the error:
$ pip install bitcoin-etl
Collecting bitcoin-etl
Could not find a version that satisfies the requirement bitcoin-etl (from versions: )
No matching distribution found for bitcoin-etl | closed | 2019-01-03T09:01:27Z | 2019-01-12T17:53:42Z | https://github.com/blockchain-etl/bitcoin-etl/issues/8 | [] | aitianxiang | 8 |
InstaPy/InstaPy | automation | 6,013 | Attempting to find user ID: Track: post, Username | I'm using this template https://github.com/InstaPy/instapy-quickstart/blob/master/quickstart_templates/good_commenting_strategy_and_new_qs_system.py
but I get the error: `Attempting to find user ID: Track: post, Username` | open | 2021-01-07T22:48:45Z | 2021-07-21T02:19:16Z | https://github.com/InstaPy/InstaPy/issues/6013 | [
"wontfix"
] | metapodcod | 4 |
flasgger/flasgger | api | 215 | Change UI for swagger docs | Hi, I'm new to this. How do I change the look of the Swagger UI docs page? Right now the Swagger UI doesn't look very good to me. It appears different from the example screenshots in the README. Am I doing something wrong? This is my Swagger UI:

I would like my UI to look more like this:

| open | 2018-07-20T16:07:48Z | 2018-10-01T17:31:20Z | https://github.com/flasgger/flasgger/issues/215 | [
"hacktoberfest"
] | babunoel | 3 |
ray-project/ray | tensorflow | 50,723 | slow torch.distributed with non-default CUDA_VISIBLE_DEVICES | ### What happened + What you expected to happen
Hi, I am working on deploying a distributed torch model with Ray. I found that the performance of the first distributed op (`all_reduce` in my case) changes after I set CUDA_VISIBLE_DEVICES: `dist.all_reduce` can take 30+ seconds.
### Versions / Dependencies
- ray: 2.42.1
- python: 3.9.16 / 3.10.0
- os: CentOS7
- pytorch: 2.4.0+cuda12.1 / 2.5.1+cuda12.1
### Reproduction script
`ray_dist.py`
```python
import os
import time
import ray
import torch
import torch.distributed as dist
from contextlib import contextmanager
@ray.remote(num_gpus=1)
class DistActor:
def __init__(self, rank, world_size):
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group(backend='nccl', rank=rank, world_size=world_size)
def all_reduce(self):
a = torch.rand([1], device='cuda')
dist.all_reduce(a)
@contextmanager
def timeit(msg):
print(msg)
start = time.time()
yield
end = time.time()
duration = (end - start)
print(f'Take time: {duration:.3f} s')
if __name__ == '__main__':
ray.init()
world_size = 2
actors = [DistActor.remote(rank, world_size) for rank in range(world_size)]
with timeit('start first all_reduce'):
ray.get([actor.all_reduce.remote() for actor in actors])
with timeit('start second all_reduce'):
ray.get([actor.all_reduce.remote() for rank, actor in enumerate(actors)])
```
good without `CUDA_VISIBLE_DEVICES` or with `CUDA_VISIBLE_DEVICES=0,1` or with `CUDA_VISIBLE_DEVICES=1,0`
```bash
python ray/ray_dist.py
# start first all_reduce
# Take time: 4.486 s
# start second all_reduce
# Take time: 0.002 s
```
bad with `CUDA_VISIBLE_DEVICES=6,7`
```bash
CUDA_VISIBLE_DEVICES=6,7 python ray/ray_dist.py
# start first all_reduce
# Take time: 63.014 s
# start second all_reduce
# Take time: 0.002 s
```
good with `docker run -it --gpus '"device=6,7"' ..`
```bash
python ray/ray_dist.py
# start first all_reduce
# Take time: 3.183 s
# start second all_reduce
# Take time: 0.001 s
```
### Issue Severity
Medium: It is a significant difficulty but I can work around it. | open | 2025-02-19T13:00:23Z | 2025-02-20T06:34:36Z | https://github.com/ray-project/ray/issues/50723 | [
"question",
"P1",
"core"
] | grimoire | 2 |
explosion/spaCy | machine-learning | 12,806 | PEX documentation and Makefile is unclear on supported Python versions | I am unable to build a PEX file as described in the documentation.
I am attempting to build the PEX file in a docker container for my version of Python (3.6). According to the [Makefile](https://github.com/explosion/spaCy/blob/ddffd096024004f27a0dee3701dc248c4647b3a7/Makefile#L8), that version of Python is the default `$PYVER`. I come across multiple errors from the default `$SPACY_EXTRAS` and final packaging that point to Python 3.6 being incompatible.
My docker-compose.yml
```yaml
version: '3.3'
services:
build:
image: python:3.6-bullseye
volumes:
- ./spaCy:/spacy
working_dir: /spacy
command: tail -f /dev/null
```
The compose file lives in a directory with the spaCy repo cloned into a folder `spaCy`, e.g.:
```
- docker-compose.yml
- spaCy
| --- ... SpaCy repo root
```
After launching via `docker compose up -d`, I have to install the Rust compiler following the instructions [here](https://www.rust-lang.org/tools/install).
Once the rust compiler is installed, run `make` in the working directory. This will result in the `$SPACY_EXTRAS` failing to build properly. For example, when attempting to build sudachipy, it fails with `error: the configured Python interpreter version (3.6) is lower than PyO3's minimum supported version (3.7)`
Even dropping the extras that I don't need still results in a failed build at the 'packaging' stage saying that Python3.6 is not supported.
If there is a different version of SpaCy that supports python3.6, or a different method of packaging, I would greatly appreciate being pointed in the correct direction.
## Which page or section is this issue related to?
[documentation source](https://spacy.io/usage#executable)
| closed | 2023-07-07T19:00:00Z | 2023-07-10T10:43:57Z | https://github.com/explosion/spaCy/issues/12806 | [
"scaling"
] | Cagrosso | 2 |
plotly/dash-core-components | dash | 250 | Support for rendering links inside `dcc.Markdown` as `dcc.Link` for single page dash apps | I use `dcc.Markdown` really extensively in `dash-docs`. It's great! However, a few things would make my life a lot easier:
1. (Done!) GitHub style language tags, that is:
````
```python
def dash():
    pass
```
````
**Edit - This has been done!**
2. Ability for the hyper links to use `dcc.Link` instead of the HTML link
Currently, I have to break out my `dcc.Link` from `dcc.Markdown`, which is pretty tedious:
https://github.com/plotly/dash-docs/blob/58b6f84f2d8012d1ae686f1379f326a292370ee3/tutorial/getting_started_part_2.py#L260-L269
3. (Done!) Automatic dedenting. Right now, I use `textwrap.dedent` everwhere in my text. If I don't use `dedent`, then the markdown is formatted as code (4 indents in markdown is code). It would be nice if I could just pass in `dedent=True` or something
**Edit - This has been done!** | closed | 2018-08-01T19:55:50Z | 2020-01-09T13:55:17Z | https://github.com/plotly/dash-core-components/issues/250 | [
"dash-type-enhancement",
"Status: Discussion Needed",
"size: 3",
"dash-meta-prioritized"
] | chriddyp | 8 |
scikit-image/scikit-image | computer-vision | 7,543 | Back-references for sphinx gallery examples are missing in dev docs | I noticed that the back-references to sphinx gallery examples seem to be missing from our docs. E.g. compare the [docs dev version](https://scikit-image.org/docs/dev/api/skimage.data.html#skimage.data.coins) with the [stable one](https://scikit-image.org/docs/stable/api/skimage.data.html#skimage.data.coins).
| closed | 2024-09-16T07:35:16Z | 2024-10-04T15:12:33Z | https://github.com/scikit-image/scikit-image/issues/7543 | [
":page_facing_up: type: Documentation",
":question: Needs info",
":bug: Bug"
] | lagru | 1 |
pytest-dev/pytest-html | pytest | 752 | Adding a search in the column header that allows filtering by test name. | Is there any example or method to filter tests based on the test name or any user-defined column on the HTML report? This will help in looking at tests from different files or hierarchies separately. | closed | 2023-10-27T18:10:06Z | 2023-11-14T16:36:27Z | https://github.com/pytest-dev/pytest-html/issues/752 | [] | kkunal1408 | 2 |
LibrePhotos/librephotos | django | 557 | New user "deleted" is in the database | An unknown user "deleted" started to appear after #553. This could also be related to the scanning no longer working. | closed | 2022-07-21T12:06:32Z | 2022-07-21T14:26:24Z | https://github.com/LibrePhotos/librephotos/issues/557 | [
"bug"
] | derneuere | 3 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 504 | Extended vocab successfully but pre-training posed: piece id is out of range. | ### Describe the issue in detail
I have extended vocab of 7B model successfully. But once I started training, the error was
**IndexError: piece id is out of range.**
#### Dependencies (code-related issues)
*Please provide transformers, peft, torch, etc. versions.*
Everything was installed using requirements.txt
#### Log or Screenshot
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /data/oobabooga/Chinese-LLaMA-Alpaca/scripts/run_clm_pt_with_peft.py:632 in <module> │
│ │
│ 629 │
│ 630 │
│ 631 if __name__ == "__main__": │
│ ❱ 632 │ main() │
│ 633 │
│ │
│ /data/oobabooga/Chinese-LLaMA-Alpaca/scripts/run_clm_pt_with_peft.py:504 in main │
│ │
│ 501 │ │ │ train_dataset = train_dataset.select(range(max_train_samples)) │
│ 502 │ │ logger.info(f"Num train_samples {len(train_dataset)}") │
│ 503 │ │ logger.info("training example:") │
│ ❱ 504 │ │ logger.info(tokenizer.decode(train_dataset[0]['input_ids'])) │
│ 505 │ if training_args.do_eval: │
│ 506 │ │ eval_dataset = lm_datasets["test"] │
│ 507 │ │ if data_args.max_eval_samples is not None: │
│ │
│ /data/oobabooga/Chinese-Vicuna/env/lib/python3.10/site-packages/transformers/tokenization_utils_ │
│ base.py:3486 in decode │
│ │
│ 3483 │ │ # Convert inputs to python lists │
│ 3484 │ │ token_ids = to_py_obj(token_ids) │
│ 3485 │ │ │
│ ❱ 3486 │ │ return self._decode( │
│ 3487 │ │ │ token_ids=token_ids, │
│ 3488 │ │ │ skip_special_tokens=skip_special_tokens, │
│ 3489 │ │ │ clean_up_tokenization_spaces=clean_up_tokenization_spaces, │
│ │
│ /data/oobabooga/Chinese-Vicuna/env/lib/python3.10/site-packages/transformers/tokenization_utils. │
│ py:931 in _decode │
│ │
│ 928 │ ) -> str: │
│ 929 │ │ self._decode_use_source_tokenizer = kwargs.pop("use_source_tokenizer", False) │
│ 930 │ │ │
│ ❱ 931 │ │ filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip │
│ 932 │ │ │
│ 933 │ │ # To avoid mixing byte-level and unicode for byte-level BPT │
│ 934 │ │ # we need to build string separately for added tokens and byte-level tokens │
│ │
│ /data/oobabooga/Chinese-Vicuna/env/lib/python3.10/site-packages/transformers/tokenization_utils. │
│ py:912 in convert_ids_to_tokens │
│ │
│ 909 │ │ │ if index in self.added_tokens_decoder: │
│ 910 │ │ │ │ tokens.append(self.added_tokens_decoder[index]) │
│ 911 │ │ │ else: │
│ ❱ 912 │ │ │ │ tokens.append(self._convert_id_to_token(index)) │
│ 913 │ │ return tokens │
│ 914 │ │
│ 915 │ def _convert_id_to_token(self, index: int) -> str: │
│ │
│ /data/oobabooga/Chinese-Vicuna/env/lib/python3.10/site-packages/transformers/models/llama/tokeni │
│ zation_llama.py:129 in _convert_id_to_token │
│ │
│ 126 │ │
│ 127 │ def _convert_id_to_token(self, index): │
│ 128 │ │ """Converts an index (integer) in a token (str) using the vocab.""" │
│ ❱ 129 │ │ token = self.sp_model.IdToPiece(index) │
│ 130 │ │ return token │
│ 131 │ │
│ 132 │ def convert_tokens_to_string(self, tokens): │
│ │
│ /data/oobabooga/Chinese-Vicuna/env/lib/python3.10/site-packages/sentencepiece/__init__.py:501 in │
│ _batched_func │
│ │
│ 498 │ if type(arg) is list: │
│ 499 │ return [_func(self, n) for n in arg] │
│ 500 │ else: │
│ ❱ 501 │ return _func(self, arg) │
│ 502 │
│ 503 setattr(classname, name, _batched_func) │
│ 504 │
│ │
│ /data/oobabooga/Chinese-Vicuna/env/lib/python3.10/site-packages/sentencepiece/__init__.py:494 in │
│ _func │
│ │
│ 491 func = getattr(classname, name, None) │
│ 492 def _func(v, n): │
│ 493 │ if type(n) is int and (n < 0 or n >= v.piece_size()): │
│ ❱ 494 │ raise IndexError('piece id is out of range.') │
│ 495 │ return func(v, n) │
│ 496 │
│ 497 def _batched_func(self, arg): │
╰─────────────────────────────────────────────────────────────────────────────────
IndexError: piece id is out of range.
### Checklist
*Fill in the [ ] with an x to mark it as checked. Delete any option that is not related to this issue.*
- [x] **Base model**: LLaMA 7B (hf)
- [x] **Operating System**: Linux WSL
- [x] **Issue type**: Pretraining
- [x] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/FAQ) AND searched for similar issues and did not find a similar problem or solution | closed | 2023-06-04T18:48:58Z | 2023-06-04T19:17:15Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/504 | [] | thusinh1969 | 1 |
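The `piece id is out of range` error above means `sp_model.IdToPiece` received a token id at or above the tokenizer's vocabulary size — a typical symptom of a model checkpoint and a (merged) tokenizer disagreeing on vocab size. A minimal stdlib sketch of the bounds check sentencepiece performs internally, useful for debugging which ids are out of range (the vocab list and helper name are stand-ins, not part of sentencepiece):

```python
# Defensive id-to-piece lookup; sentencepiece raises IndexError on the
# same condition this guard catches.
def safe_id_to_piece(vocab, index, fallback="<unk>"):
    """Return the piece for `index`, or `fallback` if out of range."""
    if not isinstance(index, int) or index < 0 or index >= len(vocab):
        # This is the condition that raises "piece id is out of range."
        return fallback
    return vocab[index]

vocab = ["<unk>", "<s>", "</s>", "▁hello", "▁world"]
print(safe_id_to_piece(vocab, 3))      # ▁hello
print(safe_id_to_piece(vocab, 99999))  # <unk> — would raise in sentencepiece
```

If out-of-range ids appear in `train_dataset[0]['input_ids']`, compare `len(tokenizer)` against the model's embedding size before decoding.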
Ehco1996/django-sspanel | django | 130 | Many elements are missing, and a problem appears in the final configuration step | When configuring uwsgi.ini, I could not find the file described in the installation instructions, and for some reason the admin backend now looks like the screenshot below: [https://s1.ax1x.com/2018/06/02/Co2zp4.png](url) | closed | 2018-06-02T14:47:51Z | 2018-07-30T03:19:32Z | https://github.com/Ehco1996/django-sspanel/issues/130 | [] | 497131664 | 1
mwaskom/seaborn | pandas | 2,973 | Rename layout(algo=) to layout(engine=) | Matplotlib has settled on this term with the new `set_layout_engine` method in 3.6 so might as well be consistent with them.
The new API also has some implications for how the parameter should be documented / typed. | closed | 2022-08-23T22:47:53Z | 2022-09-05T00:36:45Z | https://github.com/mwaskom/seaborn/issues/2973 | [
"api",
"objects-plot"
] | mwaskom | 0 |
davidsandberg/facenet | tensorflow | 387 | training issues: lfw_classifier.plk does not exist | When I use classifier.py to train a classifier on my own dataset, following the wiki, with the command:
python src/classifier.py TRAIN /home/david/datasets/lfw/lfw_mtcnnalign_160 /home/david/models/model-20170216-091149.pb ~/models/lfw_classifier.pkl --batch_size 1000 --min_nrof_images_per_class 40 --nrof_train_images_per_class 35 --use_split_dataset
the program exits with "the file or directory does not exist: /model/lfw_classifier.plk". But that file is the output I expect the program to produce, so how can it be required to exist beforehand? Thanks to everyone who can offer suggestions! | closed | 2017-07-21T01:07:45Z | 2017-07-21T02:10:14Z | https://github.com/davidsandberg/facenet/issues/387 | [] | MasterofPLM | 0
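In TRAIN mode the `.pkl` path is where classifier.py *writes* the trained classifier, so the file itself should not need to exist. A plausible cause (an assumption, not confirmed by the issue) is that the parent directory `~/models` is missing, so opening the file for writing fails. A stdlib sketch of the fix — create the directory before pickling the classifier:

```python
# Hedged sketch: ensure the output directory exists before training
# writes the classifier. The dict stands in for a trained SVC model.
import os
import pickle
import tempfile

def save_classifier(obj, path):
    os.makedirs(os.path.dirname(path), exist_ok=True)  # e.g. ~/models
    with open(path, "wb") as outfile:
        pickle.dump(obj, outfile)

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "models", "lfw_classifier.pkl")
    save_classifier({"model": "SVC", "class_names": ["a", "b"]}, target)
    with open(target, "rb") as f:
        restored = pickle.load(f)

print(restored["model"])  # SVC
```

Equivalently, running `mkdir -p ~/models` before the training command should avoid the error.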
AutoViML/AutoViz | scikit-learn | 71 | Does not work with DataFrame | . | closed | 2022-06-08T13:05:05Z | 2022-06-08T13:10:35Z | https://github.com/AutoViML/AutoViz/issues/71 | [] | emsi | 0 |
2noise/ChatTTS | python | 751 | What configuration does ChatTTS need to support 16 concurrent real-time voice streams? Is there a performance guide? | My GPU is a 3050. In testing, ChatTTS is very slow at converting Chinese text to speech — about 10 seconds per request. Is there any documentation that clearly describes the performance requirements? | closed | 2024-09-12T01:18:05Z | 2024-10-28T04:01:20Z | https://github.com/2noise/ChatTTS/issues/751 | [
"stale"
] | dyjiangjh | 1 |
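There is no official per-GPU sizing table referenced in the issue, but the ~10-second latency reported above can be quantified before sizing hardware. A sketch of measuring per-request synthesis latency — `synthesize` is a placeholder, not a real ChatTTS API call:

```python
# Hedged sketch: time individual synthesis calls to estimate capacity.
import time

def synthesize(text):
    time.sleep(0.01)  # stand-in for model inference
    return b"\x00" * 16000

def measure_latency(texts):
    latencies = []
    for t in texts:
        start = time.perf_counter()
        synthesize(t)
        latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)

avg = measure_latency(["hello"] * 3)
print(f"avg latency: {avg:.3f}s")
```

As a rough sizing rule (an assumption, not an official ChatTTS figure): to sustain N concurrent real-time streams, the combined real-time factor — seconds of audio generated per second of wall time across all workers — must exceed N.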
dsdanielpark/Bard-API | api | 1 | Problem when executing Bard().get_answer(...) | Bard-API/bardapi/core.py", line 32, in _get_snim0e
return re.search(r"SNlM0e\":\"(.*?)\"", resp.text).group(1)
AttributeError: 'NoneType' object has no attribute 'group' | closed | 2023-05-14T20:47:01Z | 2024-03-05T08:22:29Z | https://github.com/dsdanielpark/Bard-API/issues/1 | [] | vipin211 | 44 |
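The traceback means `re.search` returned `None` — the `SNlM0e` token was not present in the response, usually a sign of a failed login or invalid cookie rather than a regex problem. A defensive version of the lookup (a sketch, not the library's actual code; the function name and error message are illustrative):

```python
# Guard the re.search result instead of calling .group(1) on None.
import re

def extract_snim0e(text):
    match = re.search(r'SNlM0e":"(.*?)"', text)
    if match is None:
        raise ValueError(
            "SNlM0e value not found in response; "
            "check that the session cookie is valid."
        )
    return match.group(1)

print(extract_snim0e('... "SNlM0e":"abc123" ...'))  # abc123
```

With this guard, an invalid session produces an actionable error message instead of `AttributeError: 'NoneType' object has no attribute 'group'`.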
vimalloc/flask-jwt-extended | flask | 403 | Flask_jwt_extended is not recognized | I'm trying to import Flask-JWT-Extended, but I keep getting this error in VS Code:
Import "flask_jwt_extended" could not be resolved (PylancereportMissingImports)
I'm running a virtual env and there are these packages installed:
click 7.1.2
Flask 1.1.2
Flask-Cors 3.0.10
Flask-JWT-Extended 4.1.0
Flask-SQLAlchemy 2.4.4
itsdangerous 1.1.0
Jinja2 2.11.3
MarkupSafe 1.1.1
pip 21.0.1
PyJWT 2.0.1
setuptools 47.1.0
six 1.15.0
SQLAlchemy 1.3.23
Werkzeug 1.0.1
`from flask_jwt_extended import create_access_token`
Every time I try to use it, it raises an error. I don't know whether this is a VS Code extension issue or a compatibility problem.
Thanks
Pedro
| closed | 2021-03-11T16:24:04Z | 2024-05-27T19:34:03Z | https://github.com/vimalloc/flask-jwt-extended/issues/403 | [] | pmopedro | 5 |
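A common cause reported for Pylance's "could not be resolved" is that VS Code has a different interpreter selected than the virtual env the packages were installed into. A quick stdlib check — run it with the interpreter VS Code is using to confirm both the interpreter path and whether the package is importable:

```python
# Print the active interpreter and probe for the package without importing it.
import importlib.util
import sys

print(sys.executable)  # compare against the virtual env's python path
spec = importlib.util.find_spec("flask_jwt_extended")
print("found" if spec is not None else "missing")
```

If this prints "found" but Pylance still complains, the fix is usually pointing VS Code's Python interpreter setting at the virtual env rather than reinstalling anything.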
PaddlePaddle/PaddleHub | nlp | 2249 | Error running `hub install fastspeech2_baker` for fastspeech2_baker | Thank you for reporting a PaddleHub usage issue, and thank you very much for contributing to PaddleHub!
When filing your issue, please also provide the following information:
- Versions
1) paddlepaddle-gpu==2.3.2, paddlehub==2.1.0
2) Windows 10, conda 22.9.0, Python 3.8
Install command:
conda install paddlepaddle-gpu==2.3.2 -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/Paddle/ -c conda-forge
Then run, in that environment:
hub install fastspeech2_baker
Error message:
`[nltk_data] Error loading averaged_perceptron_tagger: <urlopen error
[nltk_data] [Errno 11004] getaddrinfo failed>
[nltk_data] Error loading cmudict: <urlopen error [Errno 11004]
[nltk_data] getaddrinfo failed>
Traceback (most recent call last):
File "e:\anaconda3\envs\paddle_env\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "e:\anaconda3\envs\paddle_env\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "e:\anaconda3\envs\paddle_env\Scripts\hub.exe\__main__.py", line 7, in <module>
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\commands\utils.py", line 78, in execute
status = 0 if com['_entry']().execute(sys.argv[idx:]) else 1
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\commands\install.py", line 55, in execute
manager.install(name=name, version=version, ignore_env_mismatch=args.ignore_env_mismatch)
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\module\manager.py", line 190, in install
return self._install_from_name(name, version, ignore_env_mismatch)
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\module\manager.py", line 265, in _install_from_name
return self._install_from_url(item['url'])
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\module\manager.py", line 258, in _install_from_url
return self._install_from_archive(file)
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\module\manager.py", line 380, in _install_from_archive
return self._install_from_directory(directory)
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\module\manager.py", line 364, in _install_from_directory
hub_module_cls = HubModule.load(self._get_normalized_path(module_info.name))
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\module\module.py", line 418, in load
py_module = utils.load_py_module(dirname, '{}.module'.format(basename))
File "e:\anaconda3\envs\paddle_env\lib\site-packages\paddlehub\utils\utils.py", line 248, in load_py_module
py_module = importlib.import_module(py_module_name)
File "e:\anaconda3\envs\paddle_env\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\cjl84\.paddlehub\modules\fastspeech2_baker\module.py", line 24, in <module>
from parakeet.frontend.zh_frontend import Frontend
File "e:\anaconda3\envs\paddle_env\lib\site-packages\parakeet\__init__.py", line 20, in <module>
from . import models
File "e:\anaconda3\envs\paddle_env\lib\site-packages\parakeet\models\__init__.py", line 15, in <module>
from .fastspeech2 import *
File "e:\anaconda3\envs\paddle_env\lib\site-packages\parakeet\models\fastspeech2\__init__.py", line 15, in <module>
from .fastspeech2 import *
File "e:\anaconda3\envs\paddle_env\lib\site-packages\parakeet\models\fastspeech2\fastspeech2.py", line 22, in <module> from typeguard import check_argument_types
ImportError: cannot import name 'check_argument_types' from 'typeguard' (e:\anaconda3\envs\paddle_env\lib\site-packages\typeguard\__init__.py)`
| open | 2023-04-22T00:26:09Z | 2024-02-26T04:59:39Z | https://github.com/PaddlePaddle/PaddleHub/issues/2249 | [] | cjl84914 | 0 |
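The `ImportError` above is the classic symptom of a typeguard major-version break: `check_argument_types` exists in typeguard 2.x but was removed in 3.x, while parakeet imports it unconditionally. The commonly reported workaround (an assumption, not an official fix) is pinning `pip install "typeguard<3"`. A small stdlib helper for probing whether an installed module still exposes a name before depending on it:

```python
# Probe a module for an attribute without letting the import crash the caller.
import importlib

def has_attribute(module_name, attr):
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

print(has_attribute("math", "sqrt"))         # True
print(has_attribute("math", "check_types"))  # False
```

Running `has_attribute("typeguard", "check_argument_types")` in the affected env would confirm whether the installed typeguard is a 3.x release missing the symbol.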
nolar/kopf | asyncio | 898 | Validating Admission Webhook Fails with Example Files | ### Long story short
I'm unable to successfully run a Validation Webhook using code heavily based on that provided in the examples directory.
### Kopf version
1.35.3
### Kubernetes version
1.23.1
### Python version
3.9.10
### Code
[crd.yaml](https://github.com/nolar/kopf/blob/main/examples/crd.yaml)
rbac.yaml
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: default
name: kopfexample-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kopfexample-role-cluster
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: kopfexample-role-namespaced
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kopfexample-rolebinding-cluster
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kopfexample-role-cluster
subjects:
- kind: ServiceAccount
name: kopfexample-account
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: default
name: kopfexample-rolebinding-namespaced
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kopfexample-role-namespaced
subjects:
- kind: ServiceAccount
name: kopfexample-account
```
example.py
```python
import kopf


@kopf.on.startup()
def config(settings: kopf.OperatorSettings, **_):
    settings.admission.server = kopf.WebhookServer(
        certfile="local.crt", pkeyfile="local.key", port=1234
    )
    settings.admission.managed = "auto.kopf.dev"


@kopf.on.validate(kopf.EVERYTHING)
def validate(sslpeer, **_):
    raise kopf.AdmissionError("I'm too lazy anyway. Go away!", code=555)
```
obj.json
```json
{
"apiVersion": "kopf.dev/v1",
"kind": "KopfExample",
"metadata": {
"name": "kopf-example-2",
"labels": {
"somelabel": "somevalue"
},
"annotations": {
"someannotation": "somevalue"
}
},
"spec": {
"duration": "1m",
"field": "value",
"items": [
"item1",
"item2"
]
}
}
```
### Logs
```none
$ kopf run example.py -A
[2022-03-02 10:38:31,728] kopf.activities.star [INFO ] Activity 'config' succeeded.
[2022-03-02 10:38:31,728] kopf._core.engines.a [INFO ] Initial authentication has been initiated.
[2022-03-02 10:38:31,738] kopf.activities.auth [INFO ] Activity 'login_with_kubeconfig' succeeded.
[2022-03-02 10:38:31,738] kopf._core.engines.a [INFO ] Initial authentication has finished.
[2022-03-02 10:38:31,834] kopf._core.engines.a [INFO ] Reconfiguring the validating webhook auto.kopf.dev.
[2022-03-02 10:38:31,840] kopf._core.engines.a [INFO ] Reconfiguring the mutating webhook auto.kopf.dev.
[2022-03-02 10:38:33,657] aiohttp.server [ERROR ] Error handling request
Traceback (most recent call last):
File "/home/andy/.local/lib/python3.9/site-packages/aiohttp/web_protocol.py", line 435, in _handle_request
resp = await request_handler(request)
File "/home/andy/.local/lib/python3.9/site-packages/aiohttp/web_app.py", line 504, in _handle
resp = await handler(request)
File "/home/andy/.local/lib/python3.9/site-packages/kopf/_kits/webhooks.py", line 154, in _serve_fn
return await self._serve(fn, request)
File "/home/andy/.local/lib/python3.9/site-packages/kopf/_kits/webhooks.py", line 213, in _serve
response = await fn(data, webhook=webhook, sslpeer=sslpeer, headers=headers)
File "/home/andy/.local/lib/python3.9/site-packages/kopf/_core/engines/admission.py", line 115, in serve_admission_request
resource = find_resource(request=request, insights=insights)
File "/home/andy/.local/lib/python3.9/site-packages/kopf/_core/engines/admission.py", line 186, in find_resource
request_payload: reviews.RequestPayload = request['request']
KeyError: 'request'
[2022-03-02 10:38:33,658] aiohttp.access [INFO ] 127.0.0.1 [02/Mar/2022:15:38:33 +0000] "POST / HTTP/1.1" 500 244 "-" "curl/7.81.0"
```
### Additional information
Essentially, I'm trying to get [Example 17](https://github.com/nolar/kopf/tree/main/examples/17-admission) running to no avail. I'm running locally in Minikube. I've got past all the startup, TLS, and RBAC issues I previously experienced. However, in trying to prove that a basic "Deny all" validation controller works, I get the error shown above.
Steps to recreate:
```sh
minikube start
kubectl apply -f kopf/examples/crd.yaml
kubectl apply -f rbac.yaml
kopf run example.py -A
curl -k -X POST https://localhost:1234/ -d @obj.json
```
| open | 2022-03-02T15:57:42Z | 2022-03-02T15:59:28Z | https://github.com/nolar/kopf/issues/898 | [
"bug"
] | agnias-stratagem | 0 |
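The `KeyError: 'request'` arises because the curl command posts the raw `KopfExample` object, while an admission webhook expects the Kubernetes `AdmissionReview` envelope with the object nested under `request.object` — that is what the API server sends; curl must mimic it. A sketch of building such an envelope (the `uid` and resource fields are illustrative values, not from the issue):

```python
# Wrap the raw object the way the API server would before POSTing it.
import json

raw_object = {
    "apiVersion": "kopf.dev/v1",
    "kind": "KopfExample",
    "metadata": {"name": "kopf-example-2"},
    "spec": {"field": "value"},
}

review = {
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "request": {
        "uid": "00000000-0000-0000-0000-000000000000",  # illustrative
        "operation": "CREATE",
        "resource": {"group": "kopf.dev", "version": "v1",
                     "resource": "kopfexamples"},
        "object": raw_object,
    },
}

with open("review.json", "w") as f:
    json.dump(review, f)
```

Then something like `curl -k -X POST https://localhost:1234/ -H "Content-Type: application/json" -d @review.json` should reach `find_resource` with the `request` key present. (In normal operation the API server constructs this payload itself; curl is only for manual testing.)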
mwaskom/seaborn | pandas | 2,821 | Calling `sns.heatmap()` changes matplotlib rcParams | See the following example
```python
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
mpl.rcParams["figure.dpi"] = 120
mpl.rcParams["figure.facecolor"] = "white"
mpl.rcParams["figure.figsize"] = (9, 6)
data = sns.load_dataset("iris")
print(mpl.rcParams["figure.dpi"])
print(mpl.rcParams["figure.facecolor"])
print(mpl.rcParams["figure.figsize"])
#120.0
#white
#[9.0, 6.0]
fig, ax = plt.subplots()
sns.heatmap(data.corr(), vmin=-1, vmax=1, center=0, annot=True, linewidths=4, ax=ax);
print(mpl.rcParams["figure.dpi"])
print(mpl.rcParams["figure.facecolor"])
print(mpl.rcParams["figure.figsize"])
#72.0
#(1, 1, 1, 0)
#[6.0, 4.0]
```
If I call again
```python
mpl.rcParams["figure.dpi"] = 120
mpl.rcParams["figure.facecolor"] = "white"
mpl.rcParams["figure.figsize"] = (9, 6)
```
then it works fine, but I don't know why it changes the rcParams.
**Edit** These are the versions being used
```
Last updated: Wed May 25 2022
Python implementation: CPython
Python version : 3.9.12
IPython version : 8.3.0
matplotlib: 3.5.2
seaborn : 0.11.2
sys : 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:25:59)
[GCC 10.3.0]
Watermark: 2.3.0
``` | closed | 2022-05-25T19:16:45Z | 2022-05-27T11:13:29Z | https://github.com/mwaskom/seaborn/issues/2821 | [] | tomicapretto | 2 |
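Whatever resets the rcParams, wrapping plotting calls in `mpl.rc_context()` protects the caller's settings. The stdlib sketch below shows the save/restore pattern that context manager implements — a simplified analogue on a plain dict, not matplotlib's actual code:

```python
# Save a snapshot of the params on entry, restore it on exit —
# the pattern behind mpl.rc_context.
from contextlib import contextmanager

@contextmanager
def param_context(params):
    saved = dict(params)
    try:
        yield params
    finally:
        params.clear()
        params.update(saved)

rc = {"figure.dpi": 120.0, "figure.facecolor": "white"}
with param_context(rc):
    rc["figure.dpi"] = 72.0  # something mutates the params inside...
print(rc["figure.dpi"])      # 120.0 — restored on exit
```

With matplotlib itself, the equivalent would be `with mpl.rc_context(): sns.heatmap(...)`, so any rcParams changes made during the call are undone afterwards.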
iperov/DeepFaceLab | deep-learning | 867 | AttributeError: function 'cuCtxCreate_v2' not found | Traceback (most recent call last):
File "main.py", line 7, in <module>
nn.initialize_main_env()
File "C:\Users\Татьяна\Desktop\DeepFake\DeepFaceLab-master\core\leras\nn.py",
line 122, in initialize_main_env
Devices.initialize_main_env()
File "C:\Users\Татьяна\Desktop\DeepFake\DeepFaceLab-master\core\leras\device.p
y", line 126, in initialize_main_env
if cuda.cuCtxCreate_v2(ctypes.byref(context), 0, device) == 0:
File "C:\Users\Татьяна\AppData\Local\Programs\Python\Python37\lib\ctypes\__ini
t__.py", line 377, in __getattr__
func = self.__getitem__(name)
File "C:\Users\Татьяна\AppData\Local\Programs\Python\Python37\lib\ctypes\__ini
t__.py", line 382, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'cuCtxCreate_v2' not found
How to fix?
Windows 7, Nvidia GeForce GT 335M. | open | 2020-08-20T08:11:04Z | 2023-06-08T21:21:30Z | https://github.com/iperov/DeepFaceLab/issues/867 | [] | Tim4ik77 | 1 |
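`AttributeError: function 'cuCtxCreate_v2' not found` means the loaded CUDA driver library does not export that symbol — plausibly because a GeForce GT 335M's last supported driver predates the CUDA versions DeepFaceLab targets (an assumption based on the card's age, not confirmed in the issue). A sketch of probing a symbol before use; a stand-in class replaces the real `ctypes` DLL so the example runs without a GPU driver:

```python
# With ctypes this would be getattr(cdll_object, name) in a
# try/except AttributeError — the same exception seen in the traceback.
def probe_symbol(lib, name):
    """Return the symbol if the library exports it, else None."""
    try:
        return getattr(lib, name)
    except AttributeError:
        return None

class FakeCuda:
    def cuInit(self, flags):  # an old driver might export this...
        return 0
    # ...but not cuCtxCreate_v2

cuda = FakeCuda()
print(probe_symbol(cuda, "cuInit") is not None)          # True
print(probe_symbol(cuda, "cuCtxCreate_v2") is not None)  # False
```

Probing like this turns the hard crash into a diagnosable "driver too old / library mismatch" condition, though it does not make an unsupported GPU work.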
autokey/autokey | automation | 200 | Window Filter Not Working | ## Classification:
Bug
## Reproducibility:
My window filter previously worked fine, but as of two days ago it is not working at all.
## Summary
I set the window filter for Chrome to google-chrome.Google-chrome, but now the Chrome-specific combination is also detected in other programs, e.g. VS Code.
My OS: Ubuntu 18.04
Autokey Version: 0.90.4
| open | 2018-10-25T15:37:32Z | 2024-01-19T12:21:33Z | https://github.com/autokey/autokey/issues/200 | [
"help-wanted",
"autokey triggers"
] | praenubilus | 16 |
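AutoKey window filters are regular expressions matched against the window title / WM_CLASS string, so a quick way to debug a filter like `google-chrome.Google-chrome` is to test the pattern directly (an assumption: that AutoKey applies the pattern with Python's `re`; the class strings below are examples):

```python
# Note the escaped dot — an unescaped '.' matches any character,
# which can make a filter broader than intended.
import re

window_filter = r"google-chrome\.Google-chrome"

def matches(filter_pattern, wm_class):
    return re.search(filter_pattern, wm_class) is not None

print(matches(window_filter, "google-chrome.Google-chrome"))  # True
print(matches(window_filter, "code.Code"))                    # False (VS Code)
```

If the pattern matches as expected here but the hotkey still fires everywhere, the problem is more likely in how the filter is attached to the phrase/script than in the regex itself.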