| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pallets-eco/flask-sqlalchemy | flask | 1,182 | PDF docs on readthedocs? | Hi,
On Read the Docs there is no option to download a PDF?

Could you enable this?
It should look something like this:

Thanks! | closed | 2023-03-25T20:55:07Z | 2023-04-09T01:04:45Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1182 | [] | falloutphil | 0 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,849 | [Feature Request]: CMP 50HX support | ### Is there an existing issue for this?
- [x] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Nvidia CMP 50HX support. It is not detected by webui.sh
### Proposed workflow
Nvidia CMP 50HX works perfectly but it is not detected automatically because the lspci output does not include VGA or Display:
`04:00.0 3D controller: NVIDIA Corporation TU102 [CMP 50HX] (rev a1)`
I changed the following line in webui.sh to make it work:
` gpu_info=$(lspci 2>/dev/null | grep -E "VGA|Display|CMP")`
### Additional information
_No response_ | closed | 2025-02-17T20:52:21Z | 2025-02-18T14:05:37Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16849 | [
"enhancement"
] | carbofos | 3 |
ultralytics/yolov5 | pytorch | 12,747 | OnnxSlim support | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
### Description
Hi, we have developed a tool called [onnxslim](https://github.com/WeLoveAI/OnnxSlim), which can help slim exported ONNX models. It's pure Python and works well on YOLO models. Should we integrate it into the awesome YOLOv5?
### Use case
When exporting to ONNX, we can use onnxslim to slim the exported model.
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-02-21T06:00:07Z | 2024-10-20T19:39:54Z | https://github.com/ultralytics/yolov5/issues/12747 | [
"enhancement",
"Stale"
] | inisis | 4 |
Teemu/pytest-sugar | pytest | 46 | Gibberish stdout in win 8 32 bit py34 | Terminal color codes are not properly interpreted when using pytest-sugar on Windows.
Sample output:
Test session starts (platform: win32, Python 3.4.0, pytest 2.6.4, pytest-sugar 0.3.4)
plugins: sugar, instafail
test_sugar.py \r \x1b[1;30m\x1b[0mtest_sugar.py \x1b[92m\u2713\x1b[0m 7% \x1b[92m\x1b[100m\u
test_sugar.py \r \x1b[1;30m\x1b[0mtest_sugar.py \x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m 13%
test_sugar.py \r \x1b[1;30m\x1b[0mtest_sugar.py \x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m
test_sugar.py \r \x1b[1;30m\x1b[0mtest_sugar.py \x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m
27% \x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1
b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1
b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m\x1b[92m\u2713\x1b[0m 100% \x1b[92m\x1b[100m\u
2589\u2589\u2589\u2589\u2589\u2589\u2589\u2589\u2589\u2589\u2589\x1b[0m\x1b[1;30m\x1b[100m\x1b[0m
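For context, the `\x1b[...m` sequences above are ANSI color escapes; a console that does not interpret them (as was the case for older Windows terminals) prints them literally, which produces the gibberish. Stripping them recovers the intended text, e.g.:

```python
import re

# ANSI SGR color escapes look like "\x1b[92m" (green) or "\x1b[0m" (reset).
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove color codes that a non-ANSI console would print literally."""
    return ANSI_ESCAPE.sub("", text)

# A shortened version of one of the lines above:
line = "test_sugar.py \x1b[1;30m\x1b[0mtest_sugar.py \x1b[92m\u2713\x1b[0m 7%"
print(strip_ansi(line))  # → test_sugar.py test_sugar.py ✓ 7%
```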
| closed | 2014-11-24T04:07:09Z | 2014-12-04T19:04:50Z | https://github.com/Teemu/pytest-sugar/issues/46 | [] | kman0 | 5 |
deepspeedai/DeepSpeed | machine-learning | 7,077 | [BUG] DeepSpeed does not update the model when using "Qwen/Qwen2.5-3B" and is fine with "Qwen/Qwen2.5-1.5B" | **Describe the bug**
I know this sounds very weird. However, when I use DeepSpeed to optimize a "Qwen/Qwen2.5-3B" model, the model does not update at all. The exact same training code works with "Qwen/Qwen2.5-1.5B". I also checked that optimizing "meta-llama/Llama-3.2-3B" does not work; the parameters remain exactly the same. However, by just setting "torch_adam" to true, the issue goes away.
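For reference, the workaround mentioned corresponds to a DeepSpeed config fragment like the following (a sketch; the optimizer type and learning rate are assumptions, `torch_adam` is the relevant switch):

```json
{
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 1e-5,
      "torch_adam": true
    }
  }
}
```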
| closed | 2025-02-25T04:11:40Z | 2025-03-21T15:10:51Z | https://github.com/deepspeedai/DeepSpeed/issues/7077 | [
"bug",
"training"
] | MiladInk | 4 |
RobertCraigie/prisma-client-py | asyncio | 349 | Add missing most common errors | ## Problem
There are certain [query engine errors](https://www.prisma.io/docs/reference/api-reference/error-reference#prisma-client-query-engine) that are not mapped to the Python client errors. A [foreign key violation error](https://www.prisma.io/docs/reference/api-reference/error-reference#p2003) would be a notable example of a very common error that might require dedicated handling in the code. Currently we are getting a `DataError` with the following message:
```python
prisma.errors.DataError: Foreign key constraint failed on the field: `some_field_fkey` (index)
```
## Suggested solution
Add proper query engine error mapping to https://github.com/RobertCraigie/prisma-client-py/blob/main/src/prisma/engine/utils.py and corresponding tests:
- [x] #351
- [x] #368
## Alternatives
The only alternative would be to parse the actual error message `Foreign key constraint failed on the field`, which to me feels like a fragile solution.
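An illustrative sketch of the kind of mapping being proposed (the class names are assumptions; the `P2002`/`P2003` codes follow Prisma's documented query-engine errors):

```python
class DataError(Exception):
    """Fallback for query-engine errors with no dedicated class."""

class UniqueViolationError(DataError):
    """P2002: unique constraint failed."""

class ForeignKeyViolationError(DataError):
    """P2003: foreign key constraint failed."""

# Map documented engine error codes to dedicated exception classes.
ERROR_CODE_MAPPING = {
    "P2002": UniqueViolationError,
    "P2003": ForeignKeyViolationError,
}

def error_from_engine(code: str, message: str) -> DataError:
    """Build the most specific exception known for a given engine code."""
    return ERROR_CODE_MAPPING.get(code, DataError)(message)

err = error_from_engine(
    "P2003", "Foreign key constraint failed on the field: `some_field_fkey` (index)"
)
print(type(err).__name__)  # → ForeignKeyViolationError
```

With a mapping like this, callers can `except ForeignKeyViolationError:` instead of string-matching the message.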
| open | 2022-04-01T13:28:36Z | 2023-04-02T17:29:48Z | https://github.com/RobertCraigie/prisma-client-py/issues/349 | [
"kind/epic",
"topic: client",
"level/beginner",
"priority/medium"
] | OMotornyi | 3 |
hootnot/oanda-api-v20 | rest-api | 176 | How to use oanda-api-v20 via proxy | Hello, colleagues,
I need to connect to OANDA through a proxy server with authorization, to comply with corporate security policy. Is it possible to do this when making API requests? | closed | 2020-12-07T04:51:13Z | 2021-01-12T09:06:16Z | https://github.com/hootnot/oanda-api-v20/issues/176 | [] | Warlib1975 | 6 |
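Regarding the proxy question above: oanda-api-v20 issues its HTTP calls through the `requests` library, which honors the standard proxy environment variables, so one hedged approach (the proxy host, port, and credentials below are placeholders) is:

```shell
# requests (used by oanda-api-v20 under the hood) picks these up automatically.
export HTTPS_PROXY="http://user:password@proxy.example.corp:3128"
export HTTP_PROXY="http://user:password@proxy.example.corp:3128"
# then run your OANDA client script as usual, e.g.:
#   python my_oanda_script.py
```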
cvat-ai/cvat | pytorch | 8,993 | Clickhouse suddenly starts using about 25% of the CPU even when there is no cvat activity | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
Go to the shell of the host
Type top
See high CPU utilisation for the clickhouse process even when there is no activity in CVAT: approx. 25 percent of a 4-core server
### Expected Behavior
ClickHouse should be using hardly any CPU, especially as it's not a core part of the capability, just analytics / monitoring.
My small config had about 3 GB of raw data and 14 GB of ClickHouse data; that's just wrong.
If I backed up cvat_db, cvat_server & cvat_clickhouse and built a new instance, then restoring just cvat_db & cvat_server appeared to fix the issue. But after a few hours the ClickHouse server was sitting at about 13% CPU of the server, up from under 1%, with no activity in CVAT, and the CPU was climbing.
### Possible Solution
Well, a workaround: changed the autostart for the clickhouse container in Docker so it does not start, rebooted, and the issue went away. I get some pop-ups about contacting the logging service, but CVAT works with low CPU.
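The autostart workaround described above can also be expressed as a compose override so it survives recreation of the stack; a sketch, where the service name is an assumption (check `docker compose ps` for the actual name in your deployment):

```yaml
# docker-compose.override.yml
services:
  cvat_clickhouse:
    restart: "no"
```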
### Context
High CPU on a component like this is not reasonable unless you are actively monitoring CVAT activity.
### Environment
```Markdown
Ubuntu server running in KVM (4 cpre 16gb) with a docker container built as per cvat docs.
``` | closed | 2025-01-25T10:55:47Z | 2025-01-26T17:28:09Z | https://github.com/cvat-ai/cvat/issues/8993 | [
"bug"
] | AndrewW-GIT | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 58 | set the parametres with densenet121 | Thanks for you code.I have a question.
The script for 'xent + htri ' of densenet121:
python3.5 train_imgreid_xent_htri.py -d market1501 -a densenet121 --lr 0.0003 --max-epoch 180 --stepsize 60 --train-batch 32 --test-batch 32 --save-dir logs/densenet121-xent-htri-market1501 --gpu-devices 0,1,2,3
But mAP is 60.8% and Rank-1 is 80.0%, a little different from the numbers you provided. Can you tell me where this script is wrong?
| closed | 2018-09-18T01:15:35Z | 2018-09-25T01:11:13Z | https://github.com/KaiyangZhou/deep-person-reid/issues/58 | [] | Adorablepet | 3 |
xlwings/xlwings | automation | 2,025 | UDFs: Rename `@xw.ret(expand="...")` into `@xw.ret(legacy_expand="...")` | I think there's quite a few users out there who have the native dynamic arrays but are still using the hacky return decorator. | open | 2022-09-21T15:12:57Z | 2022-09-21T15:13:38Z | https://github.com/xlwings/xlwings/issues/2025 | [
"enhancement"
] | fzumstein | 0 |
Lightning-AI/pytorch-lightning | deep-learning | 19,780 | Does `fabric.save()` save on rank 0? | ### Bug description
I'm trying to save a simple object using `fabric.save()` but always get the same error, and I don't know if I'm missing something about the way checkpoints are saved and loaded or if it's a bug. The save itself works (the state.pkl file is written correctly), but the `fabric.barrier()` that follows fails: I always get the same `RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [10.1.103.33]:62095` error.
I've already read the [documentation](https://lightning.ai/docs/fabric/stable/guide/checkpoint/distributed_checkpoint.html) but I still don't understand why it is happening.
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
import lightning as L


def setup():
    fabric = L.Fabric(accelerator="cpu", devices=2)
    fabric.launch(main)


def main(fabric):
    state = {"a": 1, "b": 2}
    if fabric.global_rank == 0:
        fabric.save("state.pkl", state)
    fabric.barrier()


if __name__ == "__main__":
    setup()
```
### Error messages and logs
```
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
----------------------------------------------------------------------------------------------------
distributed_backend=gloo
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "/mnt/DATOS-KOALA/Documents/Doctorado/test_fabric/main.py", line 20, in <module>
setup()
File "/mnt/DATOS-KOALA/Documents/Doctorado/test_fabric/main.py", line 7, in setup
fabric.launch(main)
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/fabric.py", line 859, in launch
return self._wrap_and_launch(function, self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/fabric.py", line 944, in _wrap_and_launch
return launcher.launch(to_run, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/strategies/launchers/subprocess_script.py", line 107, in launch
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/fabric.py", line 950, in _wrap_with_setup
return to_run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/Documents/Doctorado/test_fabric/main.py", line 16, in main
fabric.barrier()
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/fabric.py", line 545, in barrier
self._strategy.barrier(name=name)
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/strategies/ddp.py", line 162, in barrier
torch.distributed.barrier()
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 3446, in barrier
work.wait()
RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [10.1.103.33]:62095
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: 12.1
* Lightning:
- lightning: 2.2.2
- lightning-utilities: 0.11.2
- pytorch-lightning: 2.2.2
- torch: 2.2.2
- torchmetrics: 1.3.2
* Packages:
- aiohttp: 3.9.4
- aiosignal: 1.3.1
- attrs: 23.2.0
- filelock: 3.13.4
- frozenlist: 1.4.1
- fsspec: 2024.3.1
- idna: 3.7
- jinja2: 3.1.3
- lightning: 2.2.2
- lightning-utilities: 0.11.2
- markupsafe: 2.1.5
- mpmath: 1.3.0
- multidict: 6.0.5
- networkx: 3.3
- numpy: 1.26.4
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.4.127
- nvidia-nvtx-cu12: 12.1.105
- packaging: 24.0
- pip: 23.3.1
- pytorch-lightning: 2.2.2
- pyyaml: 6.0.1
- setuptools: 68.2.2
- sympy: 1.12
- torch: 2.2.2
- torchmetrics: 1.3.2
- tqdm: 4.66.2
- typing-extensions: 4.11.0
- wheel: 0.41.2
- yarl: 1.9.4
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.12.2
- release: 6.5.0-26-generic
- version: #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Mar 12 10:22:43 UTC 2
</details>
### More info
_No response_
cc @carmocca @justusschock @awaelchli | closed | 2024-04-15T20:05:40Z | 2024-04-16T11:45:38Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19780 | [
"question",
"fabric"
] | LautaroEst | 3 |
roboflow/supervision | computer-vision | 1,025 | Can i track unique faces in video ? | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Can I track unique faces in the video?
### Additional
_No response_ | closed | 2024-03-20T07:07:38Z | 2024-03-20T11:15:55Z | https://github.com/roboflow/supervision/issues/1025 | [
"question"
] | anas140 | 1 |
onnx/onnx | pytorch | 6,365 | codeformatter / linter for yaml files? | # Ask a Question
### Question
Do we have a codeformatter / linter for yaml files? | open | 2024-09-14T16:20:23Z | 2024-09-16T16:29:41Z | https://github.com/onnx/onnx/issues/6365 | [
"question"
] | andife | 4 |
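For the YAML linting question above: one common setup (an illustration, not an ONNX project decision) is to run yamllint, for example as a pre-commit hook; the `rev` below is a placeholder to pin to an actual release:

```yaml
# .pre-commit-config.yaml fragment
repos:
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1
    hooks:
      - id: yamllint
```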
open-mmlab/mmdetection | pytorch | 11,630 | Need help pls "AttributeError: module 'mmcv' has no attribute 'jit'" | I ran this command previously and it worked then i try to ran some other models. when i run this command again i got error. i tried to uninstall and install back the mmcv but no changes pls help
(openmmlab) PS C:\Users\praba\PycharmProjects\mmdetection> python demo/image_demo.py demo/demo.jpg demo/rtmdet_tiny_8xb32-300e_coco.py --weights demo/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --device cuda --show
Loads checkpoint by local backend from path: demo/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth
c:\users\praba\pycharmprojects\mmdetection\mmdet\mmcv\__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
warnings.warn(
Traceback (most recent call last):
File "demo/image_demo.py", line 192, in <module>
main()
File "demo/image_demo.py", line 179, in main
inferencer = DetInferencer(**init_args)
File "c:\users\praba\pycharmprojects\mmdetection\mmdet\apis\det_inferencer.py", line 99, in __init__
super().__init__(
File "C:\Users\praba\anaconda3\envs\openmmlab\lib\site-packages\mmengine\infer\infer.py", line 180, in __init__
self.model = self._init_model(cfg, weights, device) # type: ignore
File "C:\Users\praba\anaconda3\envs\openmmlab\lib\site-packages\mmengine\infer\infer.py", line 483, in _init_model
model = MODELS.build(cfg.model)
File "C:\Users\praba\anaconda3\envs\openmmlab\lib\site-packages\mmengine\registry\registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "C:\Users\praba\anaconda3\envs\openmmlab\lib\site-packages\mmengine\registry\build_functions.py", line 232, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "C:\Users\praba\anaconda3\envs\openmmlab\lib\site-packages\mmengine\registry\build_functions.py", line 98, in build_from_cfg
obj_cls = registry.get(obj_type)
File "C:\Users\praba\anaconda3\envs\openmmlab\lib\site-packages\mmengine\registry\registry.py", line 451, in get
self.import_from_location()
File "C:\Users\praba\anaconda3\envs\openmmlab\lib\site-packages\mmengine\registry\registry.py", line 376, in import_from_location
import_module(loc)
File "C:\Users\praba\anaconda3\envs\openmmlab\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "c:\users\praba\pycharmprojects\mmdetection\mmdet\models\__init__.py", line 4, in <module>
from .dense_heads import * # noqa: F401,F403
File "c:\users\praba\pycharmprojects\mmdetection\mmdet\models\dense_heads\__init__.py", line 55, in <module>
from .reppoints_v2_head import RepPointsV2Head
File "c:\users\praba\pycharmprojects\mmdetection\mmdet\models\dense_heads\reppoints_v2_head.py", line 8, in <module>
from mmdet.core import (PointGenerator, build_assigner, build_sampler,
File "c:\users\praba\pycharmprojects\mmdetection\mmdet\core\__init__.py", line 2, in <module>
from .bbox import * # noqa: F401, F403
File "c:\users\praba\pycharmprojects\mmdetection\mmdet\core\bbox\__init__.py", line 4, in <module>
from .coder import (BaseBBoxCoder, DeltaXYWHBBoxCoder, PseudoBBoxCoder,
File "c:\users\praba\pycharmprojects\mmdetection\mmdet\core\bbox\coder\__init__.py", line 2, in <module>
from .bucketing_bbox_coder import BucketingBBoxCoder
File "c:\users\praba\pycharmprojects\mmdetection\mmdet\core\bbox\coder\bucketing_bbox_coder.py", line 94, in <module>
@mmcv.jit(coderize=True)
AttributeError: module 'mmcv' has no attribute 'jit'
| open | 2024-04-11T11:57:41Z | 2024-09-13T11:14:43Z | https://github.com/open-mmlab/mmdetection/issues/11630 | [] | PRABS25 | 3 |
falconry/falcon | api | 2,184 | Drop `--no-build-isolation` in testing | We should be able to do without `--no-build-isolation` in our CI gates for Cython (see `tox.ini`).
We should be able to leverage [PEP 517](https://peps.python.org/pep-0517/) and/or [PEP 660](https://peps.python.org/pep-0660/) instead. | closed | 2023-11-05T18:39:55Z | 2024-04-17T14:22:57Z | https://github.com/falconry/falcon/issues/2184 | [
"cleanup",
"needs contributor",
"maintenance"
] | vytas7 | 0 |
jina-ai/serve | fastapi | 6,225 | The read operation timed out | **Describe the bug**
I am using Dify AI with Jina as the rerank model in Dify. Earlier it was working fine and I changed nothing. Suddenly it stopped working and gives me this error:
"message": "[jina] Bad Request Error, The read operation timed out",
I have added tokens as well, but it's still crashing.
**Environment**
**Screenshots**
| closed | 2025-01-15T10:52:29Z | 2025-01-15T11:08:09Z | https://github.com/jina-ai/serve/issues/6225 | [] | qadeerikram-art | 8 |
QingdaoU/OnlineJudge | django | 54 | Request: add a discussion feature | As the title says.
Demo:
[vijos](https://vijos.org/discuss)
It would be even better if a feature for problem solutions (editorials) could be added too~
| closed | 2016-08-07T23:36:38Z | 2019-08-30T15:08:04Z | https://github.com/QingdaoU/OnlineJudge/issues/54 | [] | Ruanxingzhi | 4 |
ansible/ansible | python | 84,771 | deb822_repository: Writing a literal PGP pubkey into sources file as Signed-By field results in a failure on Ubuntu 20.04 | ### Summary
deb822_repository module writes a literal PGP pubkey into sources file even though it is not supported on older Ubuntu versions - support for GPG literals came in later (22.04 works). See the man page entries for sources.list on 20.04 & 24.04 respectively:
(screenshots of the respective man pages omitted)
I realise Ubuntu 20.04 is not long for this world, but I'd assume this has similar issues on older debian releases too (and any other dpkg-based distros) which I think have longer support windows.
I imagine this has the same issue with a literal `signed_by` in the ansible task (as per the [example](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/deb822_repository_module.html#id5)) as well.
### Issue Type
Bug Report
### Component Name
deb822_repository
### Ansible Version
```console
ansible [core 2.17.7]
config file = /home/cpigott/ansible/ansible.cfg
configured module search path = ['/home/cpigott/ansible/library', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/cpigott/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/cpigott/ansible/vendor/collections:/etc/ansible/collections
executable location = /home/cpigott/.local/bin/ansible
python version = 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
Run a task such as the following on Ubuntu 20.04
```yaml
- ansible.builtin.deb822_repository:
state: present
name: "ondrej-php-focal"
types: [deb]
uris: ["https://packages.sury.org/php"]
suites: ["focal"]
components: [main]
signed_by: "https://packages.sury.org/php/apt.gpg"
```
Observe subsequent `apt update` failure
### Expected Results
apt should not be broken.
### Actual Results
The following sources file gets written out:
```
Components: main
X-Repolib-Name: ondrej-php-focal
Signed-By:
-----BEGIN PGP PUBLIC KEY BLOCK-----
<snip>
-----END PGP PUBLIC KEY BLOCK-----
Suites: focal
Types: deb
URIs: https://ppa.launchpadcontent.net/ondrej/php/ubuntu
```
which then causes an apt failure:
```console
$ sudo apt update
E: Invalid value set for option Signed-By regarding source https://ppa.launchpadcontent.net/ondrej/php/ubuntu/ focal (not a fingerprint)
E: The list of sources could not be read.
```
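One workaround sketch (my assumption, not part of the report): fetch the key to a file first and pass `signed_by` a path to a keyring file, which apt on 20.04 accepts; the destination filename is illustrative:

```yaml
- name: Download the repository signing key
  ansible.builtin.get_url:
    url: https://packages.sury.org/php/apt.gpg
    dest: /usr/share/keyrings/sury-php.gpg

- name: Add the repository, referencing the key by path
  ansible.builtin.deb822_repository:
    state: present
    name: ondrej-php-focal
    types: [deb]
    uris: ["https://packages.sury.org/php"]
    suites: [focal]
    components: [main]
    signed_by: /usr/share/keyrings/sury-php.gpg
```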
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | closed | 2025-03-04T16:32:51Z | 2025-03-11T14:59:21Z | https://github.com/ansible/ansible/issues/84771 | [
"module",
"bug",
"affects_2.17"
] | LordAro | 3 |
unytics/bigfunctions | data-visualization | 114 | [new]: `export_to_pubsub(project, topic, data, attributes)` | ### Check the idea has not already been suggested
- [X] I could not find my idea in [existing issues](https://github.com/unytics/bigfunctions/issues?q=is%3Aissue+is%3Aopen+label%3Anew-bigfunction)
### Edit the title above with self-explanatory function name and argument names
- [X] The function name and the argument names I entered in the title above seems self explanatory to me.
### BigFunction Description as it would appear in the documentation
-
### Examples of (arguments, expected output) as they would appear in the documentation
- | closed | 2023-06-05T15:07:43Z | 2023-06-09T12:00:22Z | https://github.com/unytics/bigfunctions/issues/114 | [
"new-bigfunction"
] | unytics | 1 |
albumentations-team/albumentations | machine-learning | 1,730 | How It's works "Normalize" function | Hi, everyone.
I am doing a university project, and I need to understand how the **_"Normalize"_** function works in more detail than how it is explained in the documentation.
Can anyone help me?
How exactly do the "mean" and "std" parameters work? What type of normalization are they involved in, and if I specify the type of normalization, do the values of "mean" and "std" matter?
For example: if I use mean=(0.0, 0.0, 0.0) and std=(1.0, 1.0, 1.0), what type of normalization am I doing?
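For reference, Albumentations' documentation describes Normalize as applying `img = (img - mean * max_pixel_value) / (std * max_pixel_value)` (worth double-checking against the docs of your installed version). A small sketch of that formula per pixel:

```python
def normalize(pixel: float, mean: float, std: float,
              max_pixel_value: float = 255.0) -> float:
    # Albumentations' documented formula:
    #   img = (img - mean * max_pixel_value) / (std * max_pixel_value)
    return (pixel - mean * max_pixel_value) / (std * max_pixel_value)

# With mean=0.0 and std=1.0 this reduces to plain division by 255,
# i.e. rescaling pixel values from [0, 255] to [0.0, 1.0]:
print(normalize(255.0, 0.0, 1.0))  # → 1.0
print(normalize(127.5, 0.0, 1.0))  # → 0.5
```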
Thanks | closed | 2024-05-17T18:42:50Z | 2024-05-18T09:48:24Z | https://github.com/albumentations-team/albumentations/issues/1730 | [
"question"
] | Jes46 | 2 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 65 | error on pip install | ```
$ pip install -r requirements.txt
Collecting git+https://github.com/feder-cr/lib_resume_builder_AIHawk.git (from -r requirements.txt (line 14))
Cloning https://github.com/feder-cr/lib_resume_builder_AIHawk.git to /tmp/pip-req-build-hg_e4zhr
Running command git clone --filter=blob:none --quiet https://github.com/feder-cr/lib_resume_builder_AIHawk.git /tmp/pip-req-build-hg_e4zhr
Resolved https://github.com/feder-cr/lib_resume_builder_AIHawk.git to commit 85084f925ae9043fb14dfa8d7c7ee8e21399afb1
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
Traceback (most recent call last):
File "/home/ettinger/src/tmp/LinkedIn_AIHawk_automatic_job_application/myenv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/ettinger/src/tmp/LinkedIn_AIHawk_automatic_job_application/myenv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ettinger/src/tmp/LinkedIn_AIHawk_automatic_job_application/myenv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-adk4iqso/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-adk4iqso/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-adk4iqso/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 502, in run_setup
super().run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-adk4iqso/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
File "<string>", line 7, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'README.md'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
``` | closed | 2024-08-24T13:23:20Z | 2024-08-25T07:08:07Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/65 | [] | ralyodio | 6 |
zappa/Zappa | django | 752 | [Migrated] SQLite ImproperlyConfigured exception | Originally from: https://github.com/Miserlou/Zappa/issues/1880 by [ebridges](https://github.com/ebridges)
## Exception:
```
raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
```
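That check reads the version of the SQLite C library that Python is linked against, which can differ between your workstation and the Lambda runtime; a quick way to inspect it:

```python
import sqlite3

# Django's check reads sqlite3.sqlite_version, the version of the SQLite C
# library the runtime is linked against (not the sqlite3 module version).
parts = tuple(int(p) for p in sqlite3.sqlite_version.split("."))
print(sqlite3.sqlite_version)  # e.g. "3.39.4"; Lambda's runtime may be older
print(parts >= (3, 8, 3))      # Django 2.2's minimum requirement
```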
## Expected Behavior
An HTTP GET to the API Gateway URL should show the default Django "welcome" page, as is shown when invoking `open http://127.0.0.1:8080` on a new Django project.
## Actual Behavior
```
$ zappa deploy
...
Calling deploy for stage dev..
Creating testing-dev-ZappaLambdaExecutionRole IAM Role..
Creating zappa-permissions policy on testing-dev-ZappaLambdaExecutionRole IAM Role.
Downloading and installing dependencies..
- sqlite==python3: Using precompiled lambda package
'python3.7'
Packaging project as zip.
Uploading testing-dev-1559325136.zip (14.8MiB)..
100%|██████████████████████████████████████████████| 15.6M/15.6M [00:07<00:00, 1.93MB/s]
Scheduling..
Scheduled testing-dev-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Uploading testing-dev-template-1559325160.json (1.6KiB)..
100%|██████████████████████████████████████████████| 1.60K/1.60K [00:00<00:00, 16.5KB/s]
Waiting for stack testing-dev to create (this can take a bit)..
75%|███████████████████████████████████████ | 3/4 [00:13<00:05, 5.59s/res]
Deploying API Gateway..
Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code.
```
```
$ zappa tail
...
[1559325174623] [ERROR] ImproperlyConfigured: SQLite 3.8.3 or later is required (found 3 raise ImproperlyConfigured('SQLite 3.8.3 or later is required (found %s).' % Database.sqlite_version)
...
```
```
$ curl https://qzgbmw28gk.execute-api.us-east-1.amazonaws.com/dev
"{'message': 'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.', 'traceback': ['Traceback (most recent call last):\\n', ' File \"/var/task/handler.py\", line 531, in handler\\n with Response.from_app(self.wsgi_app, environ) as response:\\n', ' File \"/var/task/werkzeug/wrappers/base_response.py\", line 287, in from_app\\n return cls(*_run_wsgi_app(app, environ, buffered))\\n', ' File \"/var/task/werkzeug/test.py\", line 1119, in run_wsgi_app\\n app_rv = app(environ, start_response)\\n', \"TypeError: 'NoneType' object is not callable\\n\"]}"
```
## Steps to Reproduce
```
mkdir testing
cd testing
poetry init --name=testing --author='foo@example.com'
poetry add django zappa
source $(dirname $(poetry run which python))/activate # activate venv
django-admin startproject myproject .
python manage.py runserver
open http://127.0.0.1:8000/ # confirm all works
zappa init
zappa deploy dev
zappa tail
zappa status
```
## Your Environment
```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.4
BuildVersion: 18E226
$ poetry --version
Poetry 0.12.11
$ python --version
Python 3.7.0
$ django-admin --version
2.2.1
$ zappa --version
0.48.2
```
<details>
<summary>Complete list of packages installed in venv</summary>
```
$ poetry show
argcomplete 1.9.3 Bash tab completion for argparse
boto3 1.9.159 The AWS SDK for Python
botocore 1.12.159 Low-level, data-driven core of boto 3.
certifi 2019.3.9 Python package for providing Mozilla's CA Bundle.
cfn-flip 1.2.0 Convert AWS CloudFormation templates between JSON and YAML formats
chardet 3.0.4 Universal encoding detector for Python 2 and 3
click 7.0 Composable command line interface toolkit
django 2.2.1 A high-level Python Web framework that encourages rapid development and clean, pragmatic design.
docutils 0.14 Docutils -- Python Documentation Utilities
durationpy 0.5 Module for converting between datetime.timedelta and Go's Duration strings.
future 0.16.0 Clean single-source support for Python 3 and 2
hjson 3.0.1 Hjson, a user interface for JSON.
idna 2.8 Internationalized Domain Names in Applications (IDNA)
jmespath 0.9.3 JSON Matching Expressions
kappa 0.6.0 A CLI tool for AWS Lambda developers
lambda-packages 0.20.0 AWS Lambda Packages
placebo 0.9.0 Make boto3 calls that look real but have no effect
python-dateutil 2.6.1 Extensions to the standard Python datetime module
python-slugify 1.2.4 A Python Slugify application that handles Unicode
pytz 2019.1 World timezone definitions, modern and historical
pyyaml 5.1 YAML parser and emitter for Python
requests 2.22.0 Python HTTP for Humans.
s3transfer 0.2.0 An Amazon S3 Transfer Manager
six 1.12.0 Python 2 and 3 compatibility utilities
sqlparse 0.3.0 Non-validating SQL parser
toml 0.10.0 Python Library for Tom's Obvious, Minimal Language
tqdm 4.19.1 Fast, Extensible Progress Meter
troposphere 2.4.7 AWS CloudFormation creation library
unidecode 1.0.23 ASCII transliterations of Unicode text
urllib3 1.25.3 HTTP library with thread-safe connection pooling, file post, and more.
werkzeug 0.15.4 The comprehensive WSGI web application library.
wheel 0.33.4 A built-package format for Python.
wsgi-request-logger 0.4.6 Apache-like combined logging for WSGI Web Applications
zappa 0.48.2 Server-less Python Web Services for AWS Lambda and API Gateway
```
</details>
Zappa settings:
```
{
"dev": {
"aws_region": "us-east-1",
"django_settings": "myproject.settings",
"profile_name": "ebridges@roja",
"project_name": "testing",
"runtime": "python3.7",
"s3_bucket": "mybucket"
}
}
```
| closed | 2021-02-20T12:41:45Z | 2024-04-13T18:36:49Z | https://github.com/zappa/Zappa/issues/752 | [
"no-activity",
"auto-closed"
] | jneves | 4 |
viewflow/viewflow | django | 10 | generic view for task form | started_time as invisible field
| closed | 2014-02-28T03:02:27Z | 2014-05-01T09:58:11Z | https://github.com/viewflow/viewflow/issues/10 | [
"request/enhancement"
] | kmmbvnr | 1 |
ivy-llc/ivy | tensorflow | 27,895 | Wrong keyword arguments `return_index` and `return_inverse` in `ivy.unique_all()` function call | In the following function call, the arguments `return_index` and `return_inverse` are passed:
https://github.com/unifyai/ivy/blob/06508027180ea29977b4cafd316d536247cb5664/ivy/functional/frontends/sklearn/model_selection/_split.py#L80
In the actual `def` of `unique_all()`, there are no arguments named `return_index` or `return_inverse`:
https://github.com/unifyai/ivy/blob/06508027180ea29977b4cafd316d536247cb5664/ivy/functional/ivy/set.py#L29-L35 | closed | 2024-01-11T07:30:03Z | 2024-01-17T22:02:30Z | https://github.com/ivy-llc/ivy/issues/27895 | [] | Sai-Suraj-27 | 2 |
sinaptik-ai/pandas-ai | data-visualization | 1,240 | Docker compose platform errors at startup in the browser | ### System Info
PAI 2.2.3 docker compose
### 🐛 Describe the bug
I get the below with `docker compose up`
I double checked the creds in the .env file on the server and client
```
Something went wrong fetching credentials, please refresh the page
```

| closed | 2024-06-19T01:58:26Z | 2024-10-30T07:09:35Z | https://github.com/sinaptik-ai/pandas-ai/issues/1240 | [
"bug"
] | metalshanked | 3 |
dynaconf/dynaconf | flask | 861 | Vault auth login with Dynaconf | Hi, I would like to use Dynaconf to save my vault secrets. I enabled the "vault_enabled" env and wanted to use VAULT_AUTH_WITH_IAM_FOR_DYNACONF for IAM authentication.
There is a problem when Dynaconf runs:

```python
client.auth.aws.iam_login(
    credentials.access_key,
    credentials.secret_key,
    credentials.token,
    role=obj.VAULT_AUTH_ROLE_FOR_DYNACONF,
)
```

In the `vault_loader` class there is no option to pass `header_value` (for `X-Vault-AWS-IAM-Server-ID`) or `mount_point`.
Is there something I'm missing? | closed | 2023-02-08T11:09:33Z | 2023-08-21T19:47:45Z | https://github.com/dynaconf/dynaconf/issues/861 | [
"question"
] | eladhaz05 | 1 |
marshmallow-code/flask-smorest | rest-api | 144 | Accessing arguments in response schema | I'd like to conditionally return some data in a response based on the arguments the user passes in. E.g.
```
GET /foo?include=optional_field_1,optional_field_2
Response:
{
"optional_field_1": "baz",
"optional_field_2": "qux"
}
```
Prior to adopting flask-smorest, I would use [schema.context](https://marshmallow.readthedocs.io/en/stable/why.html#context-aware-serialization) for this and conditionally manipulate the response in a pre_dump handler. I don't think this is possible now with flask-smorest. I could simply do it in the view function, but as I want this logic on the create, read, update and list endpoints for this collection, it would be nice to keep it in a single location - the pre_dump handler.
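For concreteness, here is a framework-free sketch of the conditional-field behavior being described; the optional field names come from the example above, while the `id` field and the function name are hypothetical. In marshmallow terms, this filtering is what a `@pre_dump` method reading `schema.context` would do:

```python
OPTIONAL_FIELDS = {"optional_field_1", "optional_field_2"}

def dump_with_includes(obj, include):
    """Keep regular fields always; optional fields only when requested."""
    requested = set(include)
    return {
        key: value
        for key, value in obj.items()
        if key not in OPTIONAL_FIELDS or key in requested
    }
```

If the response schema could see the parsed `?include=` argument, the `include` set would come straight from it.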
Any thoughts about this? | closed | 2020-04-15T17:43:58Z | 2021-09-29T12:51:55Z | https://github.com/marshmallow-code/flask-smorest/issues/144 | [
"enhancement"
] | pmdarrow | 3 |
lexiforest/curl_cffi | web-scraping | 193 | Limit `max_redirects` by default | curl-cffi uses infinity redirects, in `requests` the default `max_redirects` is 30:
> ### max_redirects
> Maximum number of redirects allowed. If the request exceeds this limit, a [TooManyRedirects](https://requests.readthedocs.io/en/latest/api/#requests.TooManyRedirects) exception is raised. This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is 30.
_Originally posted by @T-256 in https://github.com/yifeikong/curl_cffi/pull/174#discussion_r1426544708_
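For illustration, the limit-then-raise behavior quoted above can be sketched framework-free (the `fetch` interface here is hypothetical, not curl_cffi's API):

```python
class TooManyRedirects(Exception):
    """Raised when a redirect chain exceeds the configured limit."""

def follow_redirects(fetch, url, max_redirects=30):
    """Follow up to max_redirects redirects, then raise.

    fetch(url) must return (status_code, location_or_body).
    """
    for _ in range(max_redirects + 1):
        status, payload = fetch(url)
        if status not in (301, 302, 303, 307, 308):
            return payload  # final response body
        url = payload  # Location header of the redirect
    raise TooManyRedirects(f"Exceeded {max_redirects} redirects")
```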
| closed | 2023-12-26T16:00:39Z | 2024-05-30T09:32:18Z | https://github.com/lexiforest/curl_cffi/issues/193 | [
"good first issue"
] | T-256 | 0 |
ray-project/ray | python | 51,211 | [Ray Core] For the same python test, the results of pytest and bazel are inconsistent | ### What happened + What you expected to happen
The results of using `pytest` and `bazel` to test the same Python code are different. `pytest` always succeeds, while `bazel test` always throws the following exception. What might be the cause?
### Versions / Dependencies
Ray v2.38.0
### Reproduction script
The two test statements are:
`python -m pytest -v -s python/ray/tests/test_ray_debugger.py`
`bazel test --build_tests_only $(./ci/run/bazel_export_options) --config=ci --test_env=CI="1" --test_output=streamed -- //python/ray/tests:test_ray_debugger`
The error message of bazel test is:
```
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //python/ray/tests:test_ray_debugger
-----------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.3.0 -- /opt/conda/envs/original-env/bin/python3
cachedir: .pytest_cache
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /root/.cache/bazel/_bazel_root/7b4611e5f7d910d529cf99d9ecdcc56a/execroot/com_github_ray_project_ray
configfile: pytest.ini
plugins: asyncio-0.17.0, forked-1.4.0, shutil-1.7.0, sugar-0.9.5, rerunfailures-11.1.2, timeout-2.1.0, httpserver-1.0.6, sphinx-0.5.1.dev0, docker-tools-3.1.3, anyio-3.7.1, virtualenv-1.7.0, lazy-fixture-0.6.3, benchmark-4.0.0
timeout: 180.0s
timeout method: signal
timeout func_only: False
collecting ... collected 10 items
python/ray/tests/test_ray_debugger.py::test_ray_debugger_breakpoint 2025-03-07 02:42:55,881 INFO worker.py:1807 -- Started a local Ray instance. View the dashboard at [1m[32m127.0.0.1:8265 [39m[22m
[36m(f pid=26195)[0m RemotePdb session open at localhost:44791, use 'ray debug' to connect...
[36m(f pid=26195)[0m RemotePdb accepted connection from ('127.0.0.1', 48272).
[36m(f pid=26195)[0m *** SIGSEGV received at time=1741315376 on cpu 3 ***
[36m(f pid=26195)[0m PC: @ 0x7f4ab74057fd (unknown) (unknown)
[36m(f pid=26195)[0m @ 0x7f4ab72aa520 (unknown) (unknown)
[36m(f pid=26195)[0m @ 0x7f4ab04d3061 16544 (unknown)
[36m(f pid=26195)[0m @ 0x7f4ab04c9d20 (unknown) _rl_set_mark_at_pos
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: *** SIGSEGV received at time=1741315376 on cpu 3 ***
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: PC: @ 0x7f4ab74057fd (unknown) (unknown)
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: @ 0x7f4ab72aa520 (unknown) (unknown)
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: @ 0x7f4ab04d3061 16544 (unknown)
[36m(f pid=26195)[0m [2025-03-07 02:42:56,386 E 26195 26195] logging.cc:440: @ 0x7f4ab04c9d20 (unknown) _rl_set_mark_at_pos
[36m(f pid=26195)[0m Fatal Python error: Segmentation fault
[36m(f pid=26195)[0m
[36m(f pid=26195)[0m Stack (most recent call first):
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 241 in _call_with_frames_removed
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap_external>", line 1176 in create_module
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 571 in module_from_spec
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 674 in _load_unlocked
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 1006 in _find_and_load_unlocked
[36m(f pid=26195)[0m File "<frozen importlib._bootstrap>", line 1027 in _find_and_load
[36m(f pid=26195)[0m File "/opt/conda/envs/original-env/lib/python3.10/pdb.py", line 148 in __init__
[36m(f pid=26195)[0m File "/data/ray/python/ray/util/rpdb.py", line 122 in listen
[36m(f pid=26195)[0m File "/data/ray/python/ray/util/rpdb.py", line 269 in _connect_ray_pdb
[36m(f pid=26195)[0m File "/data/ray/python/ray/util/rpdb.py", line 290 in set_trace
[36m(f pid=26195)[0m File "/root/.cache/bazel/_bazel_root/7b4611e5f7d910d529cf99d9ecdcc56a/execroot/com_github_ray_project_ray/bazel-out/k8-opt/bin/python/ray/tests/test_ray_debugger.runfiles/com_github_ray_project_ray/python/ray/tests/test_ray_debugger.py", line 23 in f
[36m(f pid=26195)[0m File "/data/ray/python/ray/_private/worker.py", line 917 in main_loop
[36m(f pid=26195)[0m File "/data/ray/python/ray/_private/workers/default_worker.py", line 289 in <module>
[36m(f pid=26195)[0m
[36m(f pid=26195)[0m Extension modules: psutil._psutil_linux, psutil._psutil_posix, msgpack._cmsgpack, google.protobuf.pyext._message, setproctitle, yaml._yaml, charset_normalizer.md, ray._raylet, pvectorc (total: 9)
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~ Stack of ray_print_logs (139687217845824) ~~~~~~~~~~~~~~~~~~~
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/data/ray/python/ray/_private/worker.py", line 939, in print_logs
data = subscriber.poll()
~~~~~~~~~~~~~ Stack of ray_listen_error_messages (139687226238528) ~~~~~~~~~~~~~
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/opt/conda/envs/original-env/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/data/ray/python/ray/_private/worker.py", line 2198, in listen_error_messages
_, error_data = worker.gcs_error_subscriber.poll()
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
Traceback (most recent call last):
File "/opt/conda/envs/original-env/lib/python3.10/site-packages/pytest_timeout.py", line 241, in handler
timeout_sigalrm(item, settings.timeout)
File "/opt/conda/envs/original-env/lib/python3.10/site-packages/pytest_timeout.py", line 409, in timeout_sigalrm
pytest.fail("Timeout >%ss" % timeout)
File "/opt/conda/envs/original-env/lib/python3.10/site-packages/_pytest/outcomes.py", line 198, in fail
raise Failed(msg=reason, pytrace=pytrace)
Failed: Timeout >180.0s
```
### Issue Severity
None | open | 2025-03-10T08:43:52Z | 2025-03-10T22:18:22Z | https://github.com/ray-project/ray/issues/51211 | [
"bug",
"P2",
"core"
] | Moonquakes | 0 |
tableau/server-client-python | rest-api | 1,299 | Server Response Errror (Bad Request) when overwriting large hyperfile since v0.26 | **Describe the bug**
When updating from version 0.25 to 0.26, we found that one of our scripts which overwrites a datasource started failing.
We have narrowed down the source of the bug to the following change:
https://github.com/tableau/server-client-python/commit/307d8a20a30f32c1ce615cca7c6a78b9b9bff081#r130310838
This error occurs with our servers running version 2022.1.13.
**Versions**
Details of your environment, including:
- Tableau Server version: bug present when using 2022.1.13, but not with 2022.1.16
- Python version 3.9
- TSC library version 0.26+
**To Reproduce**
Upload a large hyperfile, ours causing the error is 150+ MB.
**Results**
```
raise ServerResponseError.from_response(server_response.content, self.parent_srv.namespace, url)
tableauserverclient.server.endpoint.exceptions.ServerResponseError:
400011: Bad Request
There was a problem publishing the file
```
| closed | 2023-10-18T14:58:50Z | 2024-02-27T08:48:20Z | https://github.com/tableau/server-client-python/issues/1299 | [
"bug",
"fixed",
"0.29"
] | kykrueger | 8 |
ydataai/ydata-profiling | data-science | 1,026 | How can we add a user-defined statistical chart | ### Missing functionality
I think it would be better if we added some customization options covering not only the appearance or color theme but also the **content of the report**, especially the ability to add **user-defined statistical charts**. With this feature, pandas-profiling could become a powerful offline **BI** tool.
### Proposed feature
We could configure the content of the report, for example adding statistical charts or hiding the variables section.
### Alternatives considered
_No response_
### Additional context
_No response_ | open | 2022-08-29T09:54:32Z | 2022-09-14T17:37:48Z | https://github.com/ydataai/ydata-profiling/issues/1026 | [
"feature request 💬"
] | Alpha-su | 0 |
dynaconf/dynaconf | fastapi | 288 | [bug] Dynaconf does not support null values in yaml | **Describe the bug**
If you specify a null value for a variable in a yaml config file and when you try to use this variable you will get "variable does not exists" error.
The same issue should appear when using nulls in vault (since it's a JSON).
**To Reproduce**
The following python snippet can be used to reproduce the issue:
```python
from dynaconf.base import Settings
settings = Settings()
settings['name'] = 'Hello'
settings['nulled_name'] = None
print(settings['name'])
print(settings['nulled_name'])
```
The output is:
```
Hello
Traceback (most recent call last):
File ".../dynaconf/base.py", line 222, in __getitem__
raise KeyError("{0} does not exists".format(item))
KeyError: 'nulled_name does not exists'
```
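The distinction the loader needs is between a key that is absent and a key that is present with a null value; a sentinel object keeps those apart. A minimal sketch of the expected semantics (this is not dynaconf's actual internals):

```python
_MISSING = object()  # sentinel distinct from any real value, including None

class NullSafeSettings(dict):
    def __getitem__(self, key):
        value = dict.get(self, key, _MISSING)
        if value is _MISSING:
            raise KeyError(f"{key} does not exists")
        return value  # a stored None is a legitimate value
```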
| closed | 2020-01-21T08:09:55Z | 2020-03-08T04:40:02Z | https://github.com/dynaconf/dynaconf/issues/288 | [
"bug"
] | Bahus | 2 |
jmcnamara/XlsxWriter | pandas | 788 | excel | closed | 2021-02-10T10:31:07Z | 2021-02-10T10:48:45Z | https://github.com/jmcnamara/XlsxWriter/issues/788 | [] | Mandoospacial | 0 | |
holoviz/panel | jupyter | 7,597 | Top-left links to Open this Notebook in Jupyterlite not working due to path issue. |
There are 2 links to Jupyterlite on the documentation pages.
The top left one below the title does not work. The top left link says: Open this Notebook in Jupyterlite.
URL: https://panelite.holoviz.org/?path=/reference/widgets/Button.ipynb
The other one, top right, is a button that says: launch Jupyterlite. I would expect that to just launch Jupyterlite without a Notebook. That's why I didn't try it before.
However, it actually opens the Notebook successfully in Jupyterlite.
URL: https://panelite.holoviz.org/lab?path=reference/widgets/Button.ipynb
So the top left URL falls back to the index because the /lab subdir is missing from the path I guess? Hopefully that's an easy fix.
Happy to address it, but I don't know where the path is defined in the source for the docs.
| closed | 2025-01-06T23:01:07Z | 2025-01-24T09:44:28Z | https://github.com/holoviz/panel/issues/7597 | [
"type: docs"
] | Coderambling | 0 |
browser-use/browser-use | python | 646 | Why does this error occur after I run it? TypeError: LaminarDecorator.observe() got an unexpected keyword argument 'ignore_output' | ### Bug Description
(base) E:\zidong>python 1.py
INFO [browser_use] BrowserUse logging setup complete with level info
INFO [root] Anonymized telemetry enabled. See https://docs.browser-use.com/development/telemetry for more information.
Traceback (most recent call last):
File "E:\zidong\1.py", line 2, in <module>
from browser_use import Agent
File "D:\andconda\Lib\site-packages\browser_use\__init__.py", line 6, in <module>
from browser_use.agent.service import Agent as Agent
File "D:\andconda\Lib\site-packages\browser_use\agent\service.py", line 64, in <module>
class Agent:
File "D:\andconda\Lib\site-packages\browser_use\agent\service.py", line 266, in Agent
@observe(name='agent.step', ignore_output=True, ignore_input=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: LaminarDecorator.observe() got an unexpected keyword argument 'ignore_output'
### Reproduction Steps
lmnr-0.3.5 requests-2.31.0 browser-use 0.1.36
### Code Sample
```python
from langchain_openai import ChatOpenAI
from browser_use import Agent
from pydantic import SecretStr
# Initialize the model
llm=ChatOpenAI(base_url='https://api.deepseek.com/v1', model='deepseek-chat', api_key='')
# Create agent with the model
agent = Agent(
task="Search for latest news about AI",
llm=llm,
use_vision=False
)
```
### Version
browser-use 0.1.36
### LLM Model
DeepSeek Coder
### Operating System
windows11
### Relevant Log Output
```shell
``` | closed | 2025-02-10T06:30:16Z | 2025-02-22T02:38:40Z | https://github.com/browser-use/browser-use/issues/646 | [
"bug"
] | yxl23 | 0 |
liangliangyy/DjangoBlog | django | 252 | Runtime WSGI error | 
At startup, the terminal shows this error; it's unclear where the problem is. | closed | 2019-04-25T11:57:08Z | 2019-04-28T09:14:59Z | https://github.com/liangliangyy/DjangoBlog/issues/252 | [] | wangli1 | 1 |
pydata/bottleneck | numpy | 204 | pandas import errors with current bottleneck pip wheel | ```
conda create --name bottleneck python=3.7
conda activate bottleneck
pip install pandas bottleneck
Successfully installed bottleneck-1.2.1 numpy-1.15.4 pandas-0.23.4 python-dateutil-2.7.5 pytz-2018.9 six-1.12.0
```
then:
```
python -c "import pandas"
```
gives:
```
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
```
In the example the error is not fatal under normal settings, but in some settings it leads to a fatal error.
Different workarounds:
1. `conda install bottleneck numpy` (instead of pip)
2. pip install bottleneck from source
```
pip install git+https://github.com/kwgoodman/bottleneck/commit/104778a8dea49d0ca230288b5011c17979c4ac99
```
3. `pip install numpy==1.16.0rc1`
There are quite a few of these reports on [google](https://www.google.com/search?q=ModuleNotFoundError%3A+No+module+named+%22numpy.core._multiarray_umath%22)
| closed | 2019-01-12T05:03:10Z | 2019-11-13T05:14:15Z | https://github.com/pydata/bottleneck/issues/204 | [] | stas00 | 3 |
Zeyi-Lin/HivisionIDPhotos | fastapi | 239 | api调用水印接口不能调整参数 | api调用水印接口,参数设置无效,只能修改text,其他参数均为默认值。 | open | 2025-03-20T08:16:39Z | 2025-03-23T04:08:44Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/239 | [] | cirbinus | 1 |
plotly/plotly.py | plotly | 5,003 | Option to compress numpy array in `hovertemplate` `customdata` for `px.imshow` | Our application enables the generation of multiplexed images, where image regions show multiple "markers" in a tissue sample to demonstrate spatial patterns of marker co-occurrence.
We enable users to toggle a custom hovertemplate that shows the underlying raw array values for each marker before they are converted to RGB:

However, this becomes noticeably slower as the number of markers grows linearly; we pass the `customdata` template as a stacked numpy array paired with custom hovertext.
An option to compress this template array would help in increasing the speed and effectiveness of using `customdata` for our `hovertemplate`
| open | 2025-01-31T12:41:10Z | 2025-02-03T15:49:38Z | https://github.com/plotly/plotly.py/issues/5003 | [
"feature",
"P3"
] | matt-sd-watson | 1 |
ionelmc/pytest-benchmark | pytest | 194 | Comparison table not right? | When I run with `--benchmark-compare`, it compares all tests together instead of separately comparing each of the test functions. How can I get it to compare each test separately?
$ pytest -x --benchmark-storage=.cache/benchmarks --benchmark-autosave --benchmark-compare
Comparing against benchmarks from: Linux-CPython-3.8-64bit/0017_ae7ddd9a45ce97d104c6beebeee5aaa2b2d7bd55_20210221_231948_uncommited-changes.json
============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
benchmark: 3.2.3 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/<snip>
plugins: hypothesis-6.3.0, benchmark-3.2.3
collected 5 items
tests/test_example.py .. [100%]
Saved benchmark data in: /home/<snip>/.cache/benchmarks/Linux-CPython-3.8-64bit/0018_ae7ddd9a45ce97d104c6beebeee5aaa2b2d7bd55_20210221_232123_uncommited-changes.json
---------------------------------------------------------------------------------------------- benchmark: 6 tests ---------------------------------------------------------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_1 (NOW) 25.9614 (1.0) 26.7989 (1.0) 26.2850 (1.0) 0.2422 (1.86) 26.2250 (1.0) 0.3782 (3.87) 12;0 38.0445 (1.0) 38 1
test_1 (0017_ae7ddd9) 26.4183 (1.02) 27.0430 (1.01) 26.5549 (1.01) 0.1301 (1.0) 26.5203 (1.01) 0.0978 (1.0) 5;3 37.6579 (0.99) 37 1
test_2 (NOW) 502.0464 (19.34) 564.8274 (21.08) 534.6141 (20.34) 29.1967 (224.40) 541.7351 (20.66) 55.0144 (562.75) 2;0 1.8705 (0.05) 5 1
test_2 (0017_ae7ddd9) 505.7677 (19.48) 513.2472 (19.15) 508.5874 (19.35) 2.9334 (22.55) 508.1235 (19.38) 3.8804 (39.69) 1;0 1.9662 (0.05) 5 1
test_3 (NOW) 2,930.4952 (112.88) 3,144.5371 (117.34) 2,988.5274 (113.70) 88.5852 (680.85) 2,952.7849 (112.59) 76.4680 (782.20) 1;1 0.3346 (0.01) 5 1
test_3 (0017_ae7ddd9) 2,959.1996 (113.98) 2,978.7907 (111.15) 2,966.4838 (112.86) 7.8562 (60.38) 2,964.2083 (113.03) 11.0282 (112.81) 1;0 0.3371 (0.01) 5 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Legend:
Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
OPS: Operations Per Second, computed as 1 / Mean
=============================================================================================== 5 passed in 44.46s ===============================================================================================
I expected to see something more like below, where of course the ratios for the comparison (in the parentheticals) would be adjusted to compare only the two tests together.
---------------------------------------------------------------------------------------------- benchmark 'test_1': 2 tests ------------------------------------------------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_1 (NOW) 25.9614 (1.0) 26.7989 (1.0) 26.2850 (1.0) 0.2422 (1.86) 26.2250 (1.0) 0.3782 (3.87) 12;0 38.0445 (1.0) 38 1
test_1 (0017_ae7ddd9) 26.4183 (1.02) 27.0430 (1.01) 26.5549 (1.01) 0.1301 (1.0) 26.5203 (1.01) 0.0978 (1.0) 5;3 37.6579 (0.99) 37 1
---------------------------------------------------------------------------------------------- benchmark 'test_2': 2 tests ------------------------------------------------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_2 (NOW) 502.0464 (19.34) 564.8274 (21.08) 534.6141 (20.34) 29.1967 (224.40) 541.7351 (20.66) 55.0144 (562.75) 2;0 1.8705 (0.05) 5 1
test_2 (0017_ae7ddd9) 505.7677 (19.48) 513.2472 (19.15) 508.5874 (19.35) 2.9334 (22.55) 508.1235 (19.38) 3.8804 (39.69) 1;0 1.9662 (0.05) 5 1
---------------------------------------------------------------------------------------------- benchmark 'test_3': 2 tests ------------------------------------------------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_3 (NOW) 2,930.4952 (112.88) 3,144.5371 (117.34) 2,988.5274 (113.70) 88.5852 (680.85) 2,952.7849 (112.59) 76.4680 (782.20) 1;1 0.3346 (0.01) 5 1
test_3 (0017_ae7ddd9) 2,959.1996 (113.98) 2,978.7907 (111.15) 2,966.4838 (112.86) 7.8562 (60.38) 2,964.2083 (113.03) 11.0282 (112.81) 1;0 0.3371 (0.01) 5 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| closed | 2021-02-21T23:28:44Z | 2021-02-22T01:22:30Z | https://github.com/ionelmc/pytest-benchmark/issues/194 | [] | Spectre5 | 2 |
davidteather/TikTok-Api | api | 267 | get_Video_No_Watermark_ID return None | **version**
Python
3.6.9
TikTokApi
3.5.2
**code**
```
from TikTokApi import TikTokApi
api = TikTokApi()
print("get_Video_No_Watermark_ID", api.get_Video_No_Watermark_ID("6865390105981390086"))
```
**print content**
```
get_Video_No_Watermark_ID None
```
I tried about 100 IPs, but it always returns None.
Please!
| closed | 2020-09-17T11:11:41Z | 2020-09-17T14:23:25Z | https://github.com/davidteather/TikTok-Api/issues/267 | [
"bug"
] | saitama2020 | 3 |
vimalloc/flask-jwt-extended | flask | 313 | Setting JWT_DECODE_AUDIENCE to None triggers invalid audience | Hi, I am finding an issue where setting JWT_DECODE_AUDIENCE to None still triggers the audience check in jwt.decode, because the PyJWT options still set 'verify_aud' to True by default.
Inside the code 4.0.0-dev/flask_jwt_extended/tokens.py I found this:
```
options = {}
if allow_expired:
options["verify_exp"] = False
```
I think if we set it to:
```
options = {}
if allow_expired:
options["verify_exp"] = False
if audience is None:
options["verify_aud"] = False
```
This error would not trigger.
But is this intentional? I mean, the 'aud' claim is supposed to be optional. | closed | 2020-01-27T05:55:22Z | 2020-01-27T06:17:58Z | https://github.com/vimalloc/flask-jwt-extended/issues/313 | [] | lunarray | 1 |
flasgger/flasgger | api | 443 | Compatibility Proposal for OpenAPI 3 | This issue to discuss compatibility of OpenAPI3 in flasgger. Currently, the code differentiates them in runtime, and mixes up the processing of both specifications. In long term, I believe that this would lower code quality, and make the code harder to maintain. Please raise any suggestions or plans to make Flasgger work better with OpenAPI 3 and 2 at the same time. | open | 2020-11-21T18:15:27Z | 2021-11-14T08:53:02Z | https://github.com/flasgger/flasgger/issues/443 | [] | billyrrr | 3 |
ipython/ipython | jupyter | 14,615 | IPython does not print characters to console | IPython's output is not consistent with that of the Python interpreter:

Python version:
```ps
PS C:\Users\iftak\Desktop\jamk\2024 Autumn\CTF> python --version
Python 3.12.0
```
IPython version:
```ps
PS C:\Users\iftak\Desktop\jamk\2024 Autumn\CTF> ipython --version
8.20.0
``` | open | 2024-12-11T17:13:47Z | 2024-12-13T10:19:51Z | https://github.com/ipython/ipython/issues/14615 | [] | Iftakharpy | 1 |
taverntesting/tavern | pytest | 473 | Support for providing custom content type for files | When sending multipart HTTP requests with `requests` you can specify a custom content type for each part of the multipart request.
```
>>> files = {'file': ('report.xls', open('report.xls', 'rb'), 'application/vnd.ms-excel')}
>>> r = requests.post(url, files=files)
```
This is sometimes needed when a custom content type is required by an API, e.g.
> 'application/vnd+vendorspecific+xml'.
Tavern only supports the content type header for files as guessed by the `mimetypes.guess_type` function.
```
files:
  file_name: /path/to/file
```
A possibility for this could be to replace the path to the file with a two-element list, where the second element is the header; the default case should still be supported and work as normal.
```
files:
  file_name: [/path/to/file, application/custom]
  file_name_2: /path/to/file2
```
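A sketch of how the proposed two-element form could be normalized into the tuples `requests` expects, falling back to `mimetypes.guess_type` for the default case (the function name and dict shapes are hypothetical, not Tavern internals):

```python
import mimetypes
import os

def normalize_files(files_spec):
    """Map {name: path} or {name: [path, content_type]} to
    requests-style (filename, path, content_type) triples.

    Real code would pass open(path, "rb") instead of the path string.
    """
    normalized = {}
    for name, spec in files_spec.items():
        if isinstance(spec, (list, tuple)):
            path, content_type = spec
        else:
            path = spec
            content_type, _ = mimetypes.guess_type(path)
        normalized[name] = (os.path.basename(path), path, content_type)
    return normalized
```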
I am happy to give this a go myself if it is a desired feature. | closed | 2019-11-04T11:40:36Z | 2019-12-05T08:16:43Z | https://github.com/taverntesting/tavern/issues/473 | [] | justin-fay | 2 |
skypilot-org/skypilot | data-science | 4,026 | Catalog missing H100s | nvm, resolved | closed | 2024-10-02T16:28:53Z | 2024-12-19T09:31:43Z | https://github.com/skypilot-org/skypilot/issues/4026 | [] | nikhilmishra000 | 0 |
SYSTRAN/faster-whisper | deep-learning | 382 | Strange performance behaviour | I'm testing a 1m35s audio on the cpu with int8, model large-v2
model = WhisperModel(model_size, device="cpu", compute_type="int8")
The input file is 16khz 16bit mono. With the string
segments, _ = model.transcribe("test.wav", beam_size=1, best_of=1, vad_filter=True)
the transcription time is 1m40s. However, if I pass the language argument, which should skip language detection, like this:
segments, _ = model.transcribe("test.wav", language='it', beam_size=1, best_of=1, vad_filter=True)
the transcription time goes up to 1m59s instead of going down. | closed | 2023-07-27T14:27:29Z | 2023-07-28T13:22:23Z | https://github.com/SYSTRAN/faster-whisper/issues/382 | [] | x86Gr | 9 |
ultralytics/ultralytics | deep-learning | 18,885 | train yolo with random weighted sampler | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have many data sources. Currently I write all the data paths in the data.yaml file, and when I start training, the different dataset sources are merged into a single dataset.
Can I use or create a method like a random weighted sampler to take an equal number of samples from each data source in every epoch?
For example:
data1 (20,000 images)
data2 (1000 images)
I want every epoch to contain only 1000 images from each data source instead of all images in every epoch.
How can I do this, and which methods should I edit to do it?
thank you
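One common approach, outside YOLO's built-in options, is inverse-frequency weighting: give every sample a weight of 1 divided by the size of its source, so each source contributes equal probability mass per epoch. A framework-free sketch using the sizes from the question (the helper name is made up):

```python
import random
from collections import Counter

def inverse_size_weights(source_sizes):
    """One (source, weight) pair per sample; weight = 1 / source size."""
    pairs = []
    for name, size in source_sizes.items():
        pairs.extend((name, 1.0 / size) for _ in range(size))
    return pairs

pairs = inverse_size_weights({"data1": 20_000, "data2": 1_000})
names = [name for name, _ in pairs]
weights = [w for _, w in pairs]

random.seed(0)
epoch = random.choices(names, weights=weights, k=2_000)
print(Counter(epoch))  # roughly 1000 draws from each source
```

In PyTorch, the same per-sample weight list is what `torch.utils.data.WeightedRandomSampler(weights, num_samples=2000, replacement=True)` consumes; wiring such a sampler into Ultralytics training would mean customizing the trainer's dataloader construction, since there is no official switch for it.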
### Additional
_No response_ | open | 2025-01-25T18:40:01Z | 2025-01-26T16:52:01Z | https://github.com/ultralytics/ultralytics/issues/18885 | [
"enhancement",
"question"
] | Dreahim | 7 |
widgetti/solara | fastapi | 938 | dense option for SelectMultiple has no effect | The dense option in SelectMultiple is always passed to the downstream reacton component as False. Is that because the dense style doesn't make sense for SelectMultiple, or was setting "dense=dense" simply forgotten? In the first case we should remove dense from SelectMultiple; otherwise, pass dense through to reacton.
https://github.com/widgetti/solara/blob/071589d0923f1323f09c84cd941b3598e75677f5/solara/components/select.py#L177
| closed | 2024-12-19T10:32:33Z | 2024-12-20T11:41:18Z | https://github.com/widgetti/solara/issues/938 | [] | hkayabilisim | 1 |
cvat-ai/cvat | tensorflow | 8,887 | "docker-compose up" got error on Orangepi5 | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
There are two ways to reproduce the bug, both ending with the following output:
```
root@orangepi5:/home/orangepi/dev/external/cvat# docker-compose up
ERROR: In file './docker-compose.yml', service 'name' must be a mapping not a string.
```
Way 1:
1. git clone https://github.com/cvat-ai/cvat
2. cd cvat
3. docker-compose up
Way 2:
1. wget https://github.com/cvat-ai/cvat/archive/refs/tags/v2.24.0.tar.gz
2. tar xvf v2.24.0.tar.gz
3. cd cvat-2.24.0
4. docker-compose up
### Expected Behavior
Successful launch of docker-compose
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
root@orangepi5:/home/orangepi/dev/external/cvat# git log -1
commit 9a25291e676845f4863e5c5330d2e3c876dc001b (HEAD -> develop, origin/develop, origin/HEAD)
Author: Maria Khrustaleva <maria@cvat.ai>
Date: Fri Dec 27 10:44:57 2024 +0100
Fix link to the authentication with Amazon Cognito (#8877)
root@orangepi5:/home/orangepi/dev/external/cvat# docker version
Client:
Version: 20.10.5+dfsg1
API version: 1.41
Go version: go1.15.15
Git commit: 55c4c88
Built: Sun Oct 13 16:05:55 2024
OS/Arch: linux/arm64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.5+dfsg1
API version: 1.41 (minimum version 1.12)
Go version: go1.15.15
Git commit: 363e9a8
Built: Sun Oct 13 16:05:55 2024
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.4.13~ds1
GitCommit: 1.4.13~ds1-1~deb11u4
runc:
Version: 1.0.0~rc93+ds1
GitCommit: 1.0.0~rc93+ds1-5+deb11u5
docker-init:
Version: 0.19.0
GitCommit:
-
root@orangepi5:/home/orangepi/dev/external/cvat# uname -a
Linux orangepi5 5.10.110-rockchip-rk3588 #1.1.4 SMP Wed Mar 8 14:50:47 CST 2023 aarch64 GNU/Linux
root@orangepi5:/home/orangepi/dev/external/cvat# docker-compose --version
docker-compose version 1.25.0, build unknown
```
| closed | 2024-12-28T21:06:11Z | 2024-12-28T21:50:34Z | https://github.com/cvat-ai/cvat/issues/8887 | [
"bug"
] | PospelovDaniil | 1 |
graphql-python/graphene-sqlalchemy | graphql | 42 | filter how to use? | My UI has a search box; how can I use Relay for it?
How do I implement this example with ConnectionField or Relay?
```
query combineMovies {
allMovies(filter: {
OR: [{
AND: [{
releaseDate_gte: "2009"
}, {
title_starts_with: "The Dark Knight"
}]
}, {
title: "Inception"
}]
}) {
title
releaseDate
}
}
result:
{
"data": {
"allMovies": [
{
"title": "Inception",
"releaseDate": "2010-08-28T20:00:00.000Z"
},
{
"title": "The Dark Knight Rises",
"releaseDate": "2012-07-20T00:00:00.000Z"
}
]
}
}
```
| closed | 2017-04-27T09:56:30Z | 2023-02-25T00:48:40Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/42 | [] | fangaofeng | 5 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,366 | Check point tick for "whistleblower has already read the latest update" still not working properly | ### What version of GlobaLeaks are you using?
5.0.41
### What browser(s) are you seeing the problem on?
Chrome
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
It often happens that, despite there being a new comment from recipients, the tick remains and does not convert to an x.
I'm sending an example that I managed to recreate from older test reports. The example shows just a small time difference, but the client has shown me similar cases with differences in dates.

| closed | 2024-12-23T13:30:00Z | 2024-12-30T15:52:03Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4366 | [] | elbill | 7 |
voila-dashboards/voila | jupyter | 1,426 | Team Compass for Voila | Hello Voila team. Recently the Jupyter Executive Council needed to make a list of all the Subproject Council members. In the process of compiling that list we discovered that Voila didn’t have a public list of Council members that we could find. The EC asks that you create a list of your Council members in your team compass (and create a team compass repo if you don’t have one). Let me know if you have any questions. Thanks!
https://jupyter.org/governance/software_subprojects.html | open | 2023-12-05T17:33:39Z | 2023-12-05T18:54:13Z | https://github.com/voila-dashboards/voila/issues/1426 | [
"documentation"
] | Ruv7 | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,021 | Output is only repeated noises | I keep getting only strange-sounding outputs rather than actual words. This happens for every entry no matter the length; I'll send a sample here. Does anyone know how to fix this?
https://user-images.githubusercontent.com/77423202/155207338-4f463543-e785-4434-b7a8-12eaef259559.mp4
| open | 2022-02-22T19:46:33Z | 2022-10-05T09:19:57Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1021 | [] | LinkleZe | 4 |
custom-components/pyscript | jupyter | 448 | failing to get debug in log file | I have a script named roundrobin.py in the pyscript directory.
I cannot get logs at debug or info level just for that script.
From the documentation and the forum I understand that the two following configs should work, but neither does.
Is the documentation in line with the latest release? Where am I getting it wrong?
Is a full reboot the only way to change the log level when changing config.yaml?

From the documentation:
```yaml
logger:
  default: critical
  logs:
    custom_components.pyscript.file.roundrobin: debug
```

From the forum:
```yaml
logger:
  default: critical
  logs:
    custom_components.pyscript: debug
```
| closed | 2023-03-11T14:13:43Z | 2023-09-22T10:24:44Z | https://github.com/custom-components/pyscript/issues/448 | [] | dominig | 4 |
huggingface/transformers | python | 36,822 | Gemma 3 is broken with fp16 | ### System Info
- `transformers` version: 4.50.0.dev0
- Platform: Linux-6.8.0-39-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- Huggingface_hub version: 0.29.3
- Safetensors version: 0.5.3
- Accelerate version: 1.5.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
Gemma 3 works fine with bfloat16 but the output is empty with float16.
@amyeroberts, @qubvel @ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```Python
import torch
device = 'cuda:0'
compute_dtype = torch.float16 #bfloat16 works fine
cache_dir = None
model_id = 'google/gemma-3-4b-it'
from transformers import Gemma3ForConditionalGeneration, AutoProcessor
processor = AutoProcessor.from_pretrained(model_id, cache_dir=cache_dir)
model = Gemma3ForConditionalGeneration.from_pretrained(model_id, torch_dtype=compute_dtype, attn_implementation="sdpa", cache_dir=cache_dir, device_map='cuda')
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
{"type": "text", "text": "Describe this image in detail."}
]
}
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=compute_dtype)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**inputs, max_new_tokens=128, do_sample=False)[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Expected behavior
Gemma 3 should work with float16 weights too. | open | 2025-03-19T13:20:56Z | 2025-03-19T16:47:04Z | https://github.com/huggingface/transformers/issues/36822 | [
"bug"
] | mobicham | 2 |
davidsandberg/facenet | computer-vision | 962 | Extract features from a specific layer | Hi All
Could you please advice how to extract features from a specific layer in facenet?
At the moment I use extracted features from FC layer (512), but I want to extract them from middle layers.
I am very appreciate
Neamah
| open | 2019-01-29T02:06:42Z | 2019-01-29T02:06:42Z | https://github.com/davidsandberg/facenet/issues/962 | [] | NeamahAlskeini | 0 |
encode/httpx | asyncio | 2,906 | Support brotlicffi in tests | While the package itself can work with either `brotli` or `brotlicffi`, `tests/test_decoders.py` explicitly requires `brotli`. We're currently working on having all packages support `brotlicffi` in Gentoo, since `brotli` doesn't work reliably on PyPy3. Could you please consider making the test accept `brotlicffi` as well?
I suppose the simplest way would be to reuse the existing logic, i.e.:
from httpx._compat import brotli
I can submit a PR for that.
---
- [x] Initially raised as discussion #2903
| closed | 2023-10-29T18:34:17Z | 2023-11-10T15:07:07Z | https://github.com/encode/httpx/issues/2906 | [] | mgorny | 4 |
strawberry-graphql/strawberry-django | graphql | 28 | update_m2m_fields Problem. | Thanks for awesome project.
> I find a problem, when update m2m.
At code "strawberry-graphql-django/tests/mutations/test_relations.py" test,
result = mutation('{ updateGroups(data: { tagsSet: [12] }) { id } }')
=> will set "id==1" and "id==2", NOT "12"
> FIX MAY BE
@ strawberry_django.mutations.resolvers.update_m2m_fields
```
def update_m2m_fields(model, objects, data):
data = utils.get_input_data_m2m(model, data)
if not data:
return
# iterate through objects and update m2m fields
for obj in objects:
for key, actions in data.items():
relation_field = getattr(obj, key)
for key, values in actions.items():
# action is add, set or remove function of relation field
action = getattr(relation_field, key)
# action(*values) #<======= MAY BE BUG
# FIX ------------------------
action(values)
```
| closed | 2021-05-10T15:44:12Z | 2021-05-10T20:07:19Z | https://github.com/strawberry-graphql/strawberry-django/issues/28 | [] | fingul | 2 |
httpie/cli | rest-api | 1,035 | HTTP response code 425 should return the correct RFC message | ```
$ http -h https://server.tld/health
HTTP/1.1 425 Unordered Collection
(snip)
```
According to https://tools.ietf.org/html/rfc8470#section-5.2 we should return `HTTP/1.1 425 Too Early` instead.
Here's an example of node.js fixing it a little while ago https://github.com/nodejs/node/commit/458a38c904c78b072f4b49c45dda7c63987bb60b | closed | 2021-02-17T12:24:59Z | 2021-02-17T15:36:25Z | https://github.com/httpie/cli/issues/1035 | [
"invalid"
] | anavarre | 2 |
alpacahq/alpaca-trade-api-python | rest-api | 6 | Add CI | closed | 2018-05-08T23:18:20Z | 2018-05-27T01:14:13Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/6 | [] | umitanuki | 0 | |
ansible/awx | automation | 15,560 | Import git repo for AWX error | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
ssh: connect to host github.com port 22: Connection timed out
### AWX version
22.3.0
### Select the relevant components
- [X] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [X] Collection
- [ ] CLI
- [X] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
Ubuntu 22.04.2 LTS
### Web browser
Firefox
### Steps to reproduce
1. ssh-keygen -t rsa
2. Import a public key to Github SSH and GPG keys
3. Create a Credential by private key
4. Create project from git@github.com:... with source branch
### Expected results
Successfully import project
### Actual results
ssh: connect to host github.com port 22: Connection timed out
fatal: Could not read from remote repository.
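If port 22 to github.com is blocked in the cluster's network (a common cause of this timeout), one documented workaround is GitHub's SSH-over-HTTPS endpoint. A hedged sketch of the `~/.ssh/config` entry; whether and where you can apply this inside the AWX execution environment depends on your setup:

```
Host github.com
    Hostname ssh.github.com
    Port 443
    User git
```

With this in place, `git@github.com:` URLs connect over port 443 instead of 22.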
### Additional information
_No response_ | closed | 2024-09-30T08:30:01Z | 2024-10-10T05:30:20Z | https://github.com/ansible/awx/issues/15560 | [
"type:bug",
"component:ui",
"component:awx_collection",
"needs_triage",
"community"
] | NobeliY | 1 |
nerfstudio-project/nerfstudio | computer-vision | 3,002 | Pynerf - TypeError: __init__() takes 2 positional arguments but 3 were given - Error | **Describe the bug**
When attempting to train a data set with Pynerf. I persistently get this issue.
[https://docs.nerf.studio/nerfology/methods/pynerf.html]
ns-train pynerf nerfstudio-data --data data/nerfstudio/Egypt
-----
Traceback (most recent call last):
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\James\anaconda3\envs\nerfstudio\Scripts\ns-train.exe\__main__.py", line 7, in <module>
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\scripts\train.py", line 262, in entrypoint
main(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\scripts\train.py", line 247, in main
launch(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\scripts\train.py", line 189, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\scripts\train.py", line 99, in train_loop
trainer.setup()
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\engine\trainer.py", line 149, in setup
self.pipeline = self.config.pipeline.setup(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\configs\base_config.py", line 54, in setup
return self._target(self, **kwargs)
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\pipelines\base_pipeline.py", line 254, in __init__
self.datamanager: DataManager = config.datamanager.setup(
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\configs\base_config.py", line 54, in setup
return self._target(self, **kwargs)
File "C:\Users\James\anaconda3\envs\nerfstudio\lib\site-packages\pynerf\data\datamanagers\random_subset_datamanager.py", line 96, in __init__
self.train_ray_generator = RayGenerator(self.train_dataparser_outputs.cameras.to(self.device),
TypeError: __init__() takes 2 positional arguments but 3 were given
| open | 2024-03-15T12:04:01Z | 2024-03-15T14:08:39Z | https://github.com/nerfstudio-project/nerfstudio/issues/3002 | [] | JamesAscroft | 3 |
mwouts/itables | jupyter | 186 | Databricks support? | I understand that it's not among the supported editors, but would be a very cool display option. The standard df view of pandas leaves much to be desired.
Currently I am getting this when displaying a table:
`Uncaught ReferenceError: $ is not defined` | closed | 2023-06-20T11:38:02Z | 2024-03-05T21:07:19Z | https://github.com/mwouts/itables/issues/186 | [] | Ljupch0 | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,207 | Controlling page load time in the "uc_open_with_reconnect" method | Hello, I'd like to inquire about how to enforce a specific page loading time in the following code scenario.
The two lines of code that are commented out in the script represent methods I've tried before, but they seem to be less effective.
```
browser = Driver(headless=True, uc=True)
# browser.set_page_load_timeout(60)
browser.uc_open_with_reconnect('https://mjai.ekyu.moe', reconnect_time=7)
# browser.implicitly_wait(30)
``` | closed | 2023-10-24T09:20:33Z | 2023-10-25T02:05:59Z | https://github.com/seleniumbase/SeleniumBase/issues/2207 | [
"question",
"UC Mode / CDP Mode"
] | CatalinaCharlotte | 3 |
browser-use/browser-use | python | 1,062 | Sensitive Data not working | ### Bug Description
Even with sensitive_data filled in, when working with OpenAI's API (gpt-4o-mini to be exact, but I tried gpt-4 as well with the same result), the agent won't populate the username and password fields with the actual credentials. It only places the placeholder username and password in the login field. I've looked at the documentation and I seem to be doing everything right, but if not, it would be much appreciated if someone could let me know.

### Reproduction Steps
Run File
Navigates to login
Inserts placeholder login information (The Error)
### Code Sample
```python
import asyncio
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from browser_use import Agent
load_dotenv()
# Initialize the model
llm = ChatOpenAI(
model='gpt-4o-mini',
temperature=0.0,
)
sensitive_data={'x_name': 'fakeusername', 'x_password': 'placeholderPassword'}
task='''
Your purpose is to download orders from store onto sellercloud. This can be done by
1. Navigate to sellercloud.com
2. Login with x_name and x_password... etc '''
agent = Agent(task=task, llm=llm)
async def main():
await agent.run()
if __name__ == '__main__':
asyncio.run(main())
```
### Version
main branch
### LLM Model
GPT-4, Other (specify in description)
### Operating System
Windows 11
### Relevant Log Output
```shell
``` | open | 2025-03-18T18:11:26Z | 2025-03-20T07:15:59Z | https://github.com/browser-use/browser-use/issues/1062 | [
"bug"
] | MaxoOwen | 4 |
huggingface/diffusers | pytorch | 10,616 | Accelerate.__init__() got an unexpected keyword argument 'logging_dir' | ### Describe the bug
I'm trying to **train** an unconditional diffusion model on a greyscale image dataset. I am using [diffusers_training_example.ipynb](https://huggingface.co/docs/diffusers/v0.32.2/training/unconditional_training) on Google Colab connected to my local GPU. When running the ‘Let's train!’ cell I am getting this **Accelerate** error. Initially, I tried downgrading my Accelerate from 1.3.0 to 0.3.0 and 0.27.0 as some forums suggested but this made no difference. Any advice would be great! Thank you.
### Reproduction
Run through the google colab notebook up until the training cell. Ensure you are running on a local GPU and using greyscale images.
### Logs
```shell
```
### System Info

Python 3.12.8
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Oct_30_01:18:48_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.6, V12.6.85
Build cuda_12.6.r12.6/compiler.35059454_0
Package Version
------------------------- --------------
absl-py 2.1.0
accelerate 0.27.2
aiohappyeyeballs 2.4.4
aiohttp 3.11.11
aiosignal 1.3.2
anyio 4.8.0
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 3.0.0
async-lru 2.0.4
attrs 24.3.0
babel 2.16.0
beautifulsoup4 4.12.3
bleach 6.2.0
certifi 2024.12.14
cffi 1.17.1
charset-normalizer 3.4.1
colorama 0.4.6
comm 0.2.2
contourpy 1.3.1
cycler 0.12.1
datasets 3.2.0
debugpy 1.8.12
decorator 5.1.1
defusedxml 0.7.1
diffusers 0.11.1
dill 0.3.8
executing 2.1.0
fastjsonschema 2.21.1
filelock 3.16.1
fonttools 4.55.3
fqdn 1.5.1
frozenlist 1.5.0
fsspec 2024.9.0
ftfy 6.3.1
grpcio 1.69.0
h11 0.14.0
httpcore 1.0.7
httpx 0.28.1
huggingface-hub 0.25.0
idna 3.10
importlib_metadata 8.5.0
ipykernel 6.29.5
ipython 8.31.0
ipywidgets 8.1.5
isoduration 20.11.0
jax 0.5.0
jaxlib 0.5.0
jedi 0.19.2
Jinja2 3.1.5
json5 0.10.0
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jupyter 1.1.1
jupyter_client 8.6.3
jupyter-console 6.6.3
jupyter_core 5.7.2
jupyter-events 0.11.0
jupyter-http-over-ws 0.0.8
jupyter-lsp 2.2.5
jupyter_server 2.15.0
jupyter_server_terminals 0.5.3
jupyterlab 4.3.4
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.13
kiwisolver 1.4.8
Markdown 3.7
MarkupSafe 3.0.2
matplotlib 3.10.0
matplotlib-inline 0.1.7
mistune 3.1.0
ml_dtypes 0.5.1
modelcards 0.1.6
mpmath 1.3.0
multidict 6.1.0
multiprocess 0.70.16
nbclient 0.10.2
nbconvert 7.16.5
nbformat 5.10.4
nest-asyncio 1.6.0
networkx 3.4.2
notebook 7.3.2
notebook_shim 0.2.4
numpy 2.2.2
opt_einsum 3.4.0
overrides 7.7.0
packaging 24.2
pandas 2.2.3
pandocfilters 1.5.1
parso 0.8.4
pillow 11.1.0
pip 24.3.1
platformdirs 4.3.6
prometheus_client 0.21.1
prompt_toolkit 3.0.48
propcache 0.2.1
protobuf 5.29.3
psutil 6.1.1
pure_eval 0.2.3
pyarrow 19.0.0
pycparser 2.22
Pygments 2.19.1
pyparsing 3.2.1
python-dateutil 2.9.0.post0
python-json-logger 3.2.1
pytz 2024.2
pywin32 308
pywinpty 2.0.14
PyYAML 6.0.2
pyzmq 26.2.0
referencing 0.36.1
regex 2024.11.6
requests 2.32.3
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.22.3
safetensors 0.5.2
scipy 1.15.1
Send2Trash 1.8.3
setuptools 75.8.0
six 1.17.0
sniffio 1.3.1
soupsieve 2.6
stack-data 0.6.3
sympy 1.13.1
tensorboard 2.18.0
tensorboard-data-server 0.7.2
terminado 0.18.1
tinycss2 1.4.0
tokenizers 0.21.0
torch 2.5.1+cu124
torchaudio 2.5.1+cu124
torchvision 0.20.1+cu124
tornado 6.4.2
tqdm 4.67.1
traitlets 5.14.3
transformers 4.48.0
types-python-dateutil 2.9.0.20241206
typing_extensions 4.12.2
tzdata 2024.2
uri-template 1.3.0
urllib3 2.3.0
wcwidth 0.2.13
webcolors 24.11.1
webencodings 0.5.1
websocket-client 1.8.0
Werkzeug 3.1.3
widgetsnbextension 4.0.13
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0
### Who can help?
_No response_ | closed | 2025-01-21T03:31:01Z | 2025-02-20T20:19:19Z | https://github.com/huggingface/diffusers/issues/10616 | [
"bug",
"stale"
] | DavidGill159 | 5 |
google-research/bert | tensorflow | 410 | can't use the trained check points to retrain on different data set | Hi,
I trained Bert_base model on squad1.0 and got some check points. I have another dataset which is in squad format and I want to retrain the model using this data set as train_file, but use the latest check point that I got from training on squad1.0. Is it possible to do like this? because when I do this, the model is directly restoring the provided check points and it is straight away giving me the results. I am not seeing any updated weights here as I am not getting any new check point files.
Are there any configurations I am missing to do this transfer learning?
| open | 2019-02-01T09:03:02Z | 2019-02-13T20:25:02Z | https://github.com/google-research/bert/issues/410 | [] | sravand93 | 1 |
sqlalchemy/alembic | sqlalchemy | 572 | Pull some variables to mako | How can I pass my own variables to the Mako template with `commands.revision`? | closed | 2019-06-04T07:48:42Z | 2019-06-04T13:13:41Z | https://github.com/sqlalchemy/alembic/issues/572 | [
"question"
] | mrquokka | 1 |
agronholm/anyio | asyncio | 55 | Add Hypothesis support to pytest plugin | Hypothesis requires some explicit support to work properly with async tests. This would also make @Zac-HD happy :) | closed | 2019-05-06T19:23:27Z | 2019-05-07T16:43:26Z | https://github.com/agronholm/anyio/issues/55 | [
"enhancement"
] | agronholm | 1 |
pallets-eco/flask-sqlalchemy | flask | 973 | Support for context manager style | In sql-alchemy there are two styles of working with sessions:
- Context manager
- Commit as you go
See https://docs.sqlalchemy.org/en/14/orm/session_transaction.html#managing-transactions
I would like to use the "context manager"-style with flask-sqlalchemy but it doesn't seem to be supported.
Running:
```
with db.session.begin():
db.session.add(some_object())
db.session.add(some_other_object())
```
Gives:
```
sqlalchemy.exc.InvalidRequestError: a transaction is already begun on this session
``` | closed | 2021-05-28T19:40:15Z | 2021-06-12T00:05:18Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/973 | [] | lverweijen | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,079 | Email destined to wrongly configured email addresses gets a 550 error and keeps retrying to be sent indefinitely | If a wrong email address is provided while changing the account email address,
the system keeps sending the confirmation email to that wrong address again and again, even if the correct email address is set a little later.
Example:
1. the receiver accesses their account
2. the receiver changes their email to a wrong (non-existing) address -> the system starts trying to send the confirmation email to that wrong address
3. the receiver then changes their email to the correct one -> the email is sent correctly and the email change is completed
4. the system keeps trying indefinitely to send the email from step 2
I would suggest:
- showing a warning/error message to the user in case of an SMTP failure while the user changes their email
- showing somewhere in the user account preferences the new email address the user wants to switch to.
| open | 2021-10-26T12:50:04Z | 2021-10-26T18:15:49Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3079 | [
"T: Bug",
"C: Backend"
] | larrykind | 5 |
mwaskom/seaborn | pandas | 3,248 | common_norm for kdeplot with multiple="stack" | I just noticed that in `kdeplot` when `multiple=stack` the setting of `common_norm` is ignored, and always considered True.
when setting `multiple=layer` everything works as expected and `common_norm=False` results in independently normalised densities.
I see that this might be because stacking distributions without transparency can mask some of the lower ones, so proper visualization may only work with common normalization, but if this behaviour is intentional, a warning/error and a mention in the documentation would be very helpful | closed | 2023-02-08T12:40:14Z | 2023-02-08T14:13:40Z | https://github.com/mwaskom/seaborn/issues/3248 | [] | perinom | 6 |
liangliangyy/DjangoBlog | django | 351 | User registration causes a 502 | <!--
If you don't check the items below carefully, I may close your issue directly.
Before asking a question, it is recommended to first read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**I confirm that I have checked** (mark `[ ]` as `[x]`)
- [x] [the DjangoBlog readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [the configuration guide](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [x] [other issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am filing** (mark `[ ]` as `[x]`)
- [x] a bug report
- [ ] a request to add a new feature or functionality
- [ ] a request for technical support
| closed | 2020-02-04T13:29:05Z | 2020-02-04T13:49:15Z | https://github.com/liangliangyy/DjangoBlog/issues/351 | [] | hackzhu | 3 |
PaddlePaddle/models | computer-vision | 5,309 | video_tag Out of memory error | 1. When running this code

2. it reported the following error
[2021-05-18 13:49:17,554] [ WARNING] - The _initialize method in HubModule will soon be deprecated, you can use the __init__() to handle the initialization of the object
2021-05-18 13:49:24,454 - INFO - load extractor weights from C:\Users\Administrator\.paddlehub\modules\videotag_tsn_lstm\weights\tsn
[INFO 2021-05-18 13:49:24,454 module.py:87] load extractor weights from C:\Users\Administrator\.paddlehub\modules\videotag_tsn_lstm\weights\tsn
2021-05-18 13:49:25,316 - INFO - load lstm weights from C:\Users\Administrator\.paddlehub\modules\videotag_tsn_lstm\weights\attention_lstm
[INFO 2021-05-18 13:49:25,316 module.py:117] load lstm weights from C:\Users\Administrator\.paddlehub\modules\videotag_tsn_lstm\weights\attention_lstm
Traceback (most recent call last):
File "C:/Users/Administrator/Desktop/douyin/1.py", line 9, in <module>
top_k=10) # 返回预测结果的前k个,默认为10
File "D:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\paddlehub\compat\paddle_utils.py", line 220, in runner
return func(*args, **kwargs)
File "C:\Users\Administrator\.paddlehub\modules\videotag_tsn_lstm\module.py", line 188, in classify
scope=self.extractor_scope)
File "D:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\paddle\fluid\executor.py", line 1110, in run
six.reraise(*sys.exc_info())
File "D:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\six.py", line 719, in reraise
raise value
File "D:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\paddle\fluid\executor.py", line 1108, in run
return_merged=return_merged)
File "D:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\paddle\fluid\executor.py", line 1238, in _run_impl
use_program_cache=use_program_cache)
File "D:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\paddle\fluid\executor.py", line 1328, in _run_program
[fetch_var_name])
RuntimeError: ResourceExhaustedError:
Out of memory error on GPU 0. Cannot allocate 918.750244MB memory on GPU 0, available memory is only 594.612109MB.
Please check whether there is any other process using GPU 0.
1. If yes, please stop them, or start PaddlePaddle on another GPU.
2. If no, please decrease the batch size of your model.
(at D:\v2.0.2\paddle\paddle\fluid\memory\allocation\cuda_allocator.cc:69)
W0518 13:49:20.264742 11740 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 6.1, Driver API Version: 11.0, Runtime API Version: 10.0
W0518 13:49:20.280762 11740 device_context.cc:372] device: 0, cuDNN Version: 7.4.
W0518 13:49:39.900574 11740 operator.cc:206] batch_norm raises an exception struct paddle::memory::allocation::BadAlloc, ResourceExhaustedError:
Out of memory error on GPU 0. Cannot allocate 918.750244MB memory on GPU 0, available memory is only 594.612109MB.
Please check whether there is any other process using GPU 0.
1. If yes, please stop them, or start PaddlePaddle on another GPU.
2. If no, please decrease the batch size of your model.
(at D:\v2.0.2\paddle\paddle\fluid\memory\allocation\cuda_allocator.cc:69) | open | 2021-05-18T06:00:00Z | 2024-02-26T05:08:59Z | https://github.com/PaddlePaddle/models/issues/5309 | [] | Jasonxgw | 3 |
pydantic/pydantic | pydantic | 11,070 | Unexpected validation of annotated enum in strict mode | ### Discussed in https://github.com/pydantic/pydantic/discussions/11068
<div type='discussions-op-text'>
<sup>Originally posted by **namezys** December 9, 2024</sup>
I've tried to add wrap validators for the enum field.
Using strict mode is important (without strict everything works).
Let's start with simple code:
```python
import enum
from typing import Annotated
from pydantic import BaseModel, WrapValidator
class E(enum.StrEnum):
a = 'A'
x = 'X'
class M(BaseModel, strict=True):
a: E
# a: Annotated[E, WrapValidator(lambda v, h: h(v))]
M.model_validate_json('{"a": "X"}')
```
everything works as expected.
Let's add simples validator (identical)
```python
import enum
from typing import Annotated
from pydantic import BaseModel, WrapValidator
class E(enum.StrEnum):
a = 'A'
x = 'X'
class M(BaseModel, strict=True):
# a: E
a: Annotated[E, WrapValidator(lambda v, h: h(v))]
M.model_validate_json('{"a": "X"}')
```
And this results in an error
```
Traceback (most recent call last):
File "/Users/namezys/job/pydantic-experements/safe.py", line 17, in <module>
m = M.model_validate_json('{"a": "X"}')
File "/Users/namezys/job/pydantic-experements/.venv/lib/python3.13/site-packages/pydantic/main.py", line 656, in model_validate_json
return cls.__pydantic_validator__.validate_json(json_data, strict=strict, context=context)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for M
a
Input should be an instance of E [type=is_instance_of, input_value='X', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/is_instance_of
```
Maybe it is described in the documentation but it looks very strange.
</div> | closed | 2024-12-09T15:30:20Z | 2025-02-12T20:25:14Z | https://github.com/pydantic/pydantic/issues/11070 | [] | Viicos | 3 |
seleniumbase/SeleniumBase | pytest | 3,567 | Simplify CDP Mode imports when using the pure CDP formats | ## Simplify CDP Mode imports when using the pure CDP formats
Currently, some examples are using this:
```python
from seleniumbase.core import sb_cdp
from seleniumbase.undetected import cdp_driver
```
By editing an `__init__.py` file, that can be simplified to this:
```python
from seleniumbase import sb_cdp
from seleniumbase import cdp_driver
```
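The mechanics of that `__init__.py` edit can be sketched in miniature with a throwaway package (the names `pkg`, `core`, and `sb_cdp` here are stand-ins, not SeleniumBase's real layout):

```python
import os
import sys
import tempfile

# Build a tiny package on disk: pkg/core/sb_cdp.py plus an __init__.py
# that re-exports the nested module at the top level.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pkg", "core"))
open(os.path.join(root, "pkg", "core", "__init__.py"), "w").close()
with open(os.path.join(root, "pkg", "core", "sb_cdp.py"), "w") as f:
    f.write("NAME = 'sb_cdp'\n")
with open(os.path.join(root, "pkg", "__init__.py"), "w") as f:
    # The one-line edit: alias the submodule so the short import works
    f.write("from pkg.core import sb_cdp\n")

sys.path.insert(0, root)
from pkg import sb_cdp  # simplified form, instead of `from pkg.core import sb_cdp`
print(sb_cdp.NAME)
```

The single re-export line in the parent package's `__init__.py` is what makes the shorter import resolve.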
That's easier to remember, and looks cleaner too. | closed | 2025-02-26T01:03:19Z | 2025-02-26T22:43:17Z | https://github.com/seleniumbase/SeleniumBase/issues/3567 | [
"enhancement",
"UC Mode / CDP Mode"
] | mdmintz | 3 |
gradio-app/gradio | deep-learning | 10,428 | A way to use MultimodalTextbox stop_btn with Chatbot when running events consecutively | - [Yes ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
I know this functionality is implemented in ChatInterface, but it would be nice to have it available with Chatbot as well.
**Describe the solution you'd like**
I am streaming output to the Chatbot and there is no way for me to connect the stop button to the Chatbot to stop the streaming chat (generator).
**Additional context**
| closed | 2025-01-24T08:41:10Z | 2025-01-24T10:45:27Z | https://github.com/gradio-app/gradio/issues/10428 | [] | git-hamza | 2 |
Urinx/WeixinBot | api | 286 | 此项目已不支持新微信号接入,做机器人,营销系统,客服系统,监管系统的可以看下这个API :https://wkteam.gitbook.io/api/ | 17年前登陆过web网页版的微信可以登录并使用此框架,17年后的新注册微信号包括以前没有登陆过web网页版微信的号无法使用此框架,想搞着自己的机器人搞着玩的,可以去购买支持web登录微信号,如果是公司开发需要,那么唯一选择就是找正规企业合作API,(因为大家github搜索出来的基本都是网页版 wxpy wechaty itchat等等都是基于网页微信开发的)。所以以寻找API提供商,不过著名的提供商入门条件较高5W起步,QQ 微信提供的一堆二手骗子, 容易封号,无法维护, 赚一波钱就跑(微信一升级,API就废了,但是价格便宜 和割韭菜一样),所以推荐大家 寻找:有官网、API、系统、有能力提供协议升级稳定的企业(二手骗子一般没有) | closed | 2020-02-16T04:12:46Z | 2020-04-08T09:25:26Z | https://github.com/Urinx/WeixinBot/issues/286 | [] | 2905683882 | 2 |
Textualize/rich | python | 3,263 | [BUG] Text inside Live with vertical_overflow="visible" duplicating when above console.height instead of scrolling | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
I am trying to create a function that streams incoming text using both rich.markdown and rich.live. In this example, I use a simple number loop with a delay to simulate the streaming. Here's a simple example:
```
import time
from rich.markdown import Markdown
from rich.live import Live
def stream_numbers(chunk_size=1):
for i in range(1, 11, chunk_size):
yield f"\n\n{i}"
time.sleep(0.01)
render_this = ""
with Live(render_this, auto_refresh=False, vertical_overflow="visible") as live:
print(f"Console height: {live.console.height}")
for entry in stream_numbers():
render_this += entry
live.update(Markdown(render_this), refresh=True)
```
Result:
```
1
1
(newline)
[...]
(newline)
10
```
Expected result:
Output scrolling after exceeding console.height:
```
1
(newline)
[...]
(newline)
10
```
The output only breaks (duplicates) when the printed output exceeds console.height. In this example, my console height is 18, and I try to print beyond it (19 lines).
**Platform**
<details>
<summary>Click to expand</summary>
Windows 11 with Python 3.10; tried in: VS Code Terminal, Windows Terminal, and Cmder
```
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=100 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 24 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=100, height=24), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=100, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=24, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=100, height=24) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 100 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭── <class 'rich._windows.WindowsConsoleFeatures'> ───╮
│ Windows features available. │
│ │
│ ╭─────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=True, truecolor=True) │ │
│ ╰─────────────────────────────────────────────────╯ │
│ │
│ truecolor = True │
│ vt = True │
╰─────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': None, │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Windows"
```
</details>
| open | 2024-01-23T08:24:48Z | 2024-01-23T09:50:47Z | https://github.com/Textualize/rich/issues/3263 | [
"Needs triage"
] | RasyiidWho | 2 |
slackapi/python-slack-sdk | asyncio | 1,601 | Export Django InstallationStore and OAuthStateStore in slack-sdk | The custom `InstallationStore` and `OAuthStateStore` classes suitable for Django are currently provided as [example code in bolt-python](https://github.com/slackapi/bolt-python/blob/main/examples/django/oauth_app/slack_datastores.py).
However, given Django's popularity, it would be quite useful to make those classes "official" by including them in the library so that they're directly importable (e.g. `from slack_sdk.oauth.installation_store import DjangoInstallationStore`).
Django does have a convention of bundling things together into apps, so I could imagine that providing a Django app for Slack SDK might be necessary, such that the `SlackBot`, `SlackInstallationState`, and `SlackOAuthState` models have associated migrations. In any case, that would be much nicer than copy/pasting example code. Thanks!
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [ ] **slack_sdk.models** (UI component builders)
- [x] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules. | closed | 2024-11-24T15:37:50Z | 2024-11-25T18:14:03Z | https://github.com/slackapi/python-slack-sdk/issues/1601 | [
"question",
"discussion",
"oauth",
"auto-triage-skip"
] | siddhantgoel | 2 |
python-arq/arq | asyncio | 118 | Few question from beginners. | Hello.
First, thanks for the great work.
But I can't find answers to my questions in the docs.
So maybe someone can help me.
1. If I use a Redis pool for my own data access in the startup hook, like
```
from arq.connections import RedisSettings, create_pool as arq_create_pool

async def startup(ctx):
    redis_cache = await arq_create_pool(RedisSettings(host='localhost', port=6379, database=1))
    ctx['redis_cache'] = redis_cache
```
And use it in my function later
```
async def get_messages(ctx):
    redis_cache = ctx['redis_cache']
    print(f"LAST_MAIL_ID: {await redis_cache.get('last_id')}")

class WorkerSettings:
    functions = [get_messages]
    on_startup = startup
    on_shutdown = shutdown
```
Do I need to close this pool in the on_shutdown hook?
2. How can I run workers from a Python file, rather than from the system terminal like
`# arq my_file.WorkerSettings`
3. How can I work with the output log? Can I disable it or redirect it to a file?
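For what it's worth, the usual asyncio convention behind question 1 — whatever a startup hook opens and stores on `ctx`, the matching shutdown hook closes — can be sketched with the stdlib only (`FakePool` below is a stand-in, not arq's real Redis pool):

```python
import asyncio

class FakePool:
    """Stand-in for a Redis pool with an async close()."""
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

async def startup(ctx):
    # open the resource once and share it via ctx
    ctx['redis_cache'] = FakePool()

async def shutdown(ctx):
    # the hook that opened the resource is responsible for closing it
    await ctx['redis_cache'].close()

async def main():
    ctx = {}
    await startup(ctx)
    # ... the worker would run jobs here, reading ctx['redis_cache'] ...
    await shutdown(ctx)
    return ctx['redis_cache'].closed

pool_closed = asyncio.run(main())
print(pool_closed)  # → True
```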
Thanks | closed | 2019-04-01T13:59:57Z | 2019-04-04T17:12:01Z | https://github.com/python-arq/arq/issues/118 | [
"question"
] | kobzar | 6 |
supabase/supabase-py | flask | 850 | User session not always present | # Bug report
## Describe the bug
This is a regression from 2.4.3: the user's session token is sometimes present and sometimes not, because the client does not trigger an `on_auth_state_change`.
This regression happened here https://github.com/supabase-community/supabase-py/pull/766
## System information
- Version of supabase-py: 2.4.3+
| closed | 2024-07-07T10:26:36Z | 2024-07-16T11:53:54Z | https://github.com/supabase/supabase-py/issues/850 | [
"bug"
] | silentworks | 0 |
JaidedAI/EasyOCR | deep-learning | 572 | readtextlang use error | What does this mean? Where should the characters be obtained from?
File "/mnt/c/Users/rs/Downloads/Projects/subtitles_extract/scripts/easyocr_test.py", line 85, in extract_subs
result = reader.readtextlang(mypath + onlyfiles[i])
File "/home/rs/.local/lib/python3.8/site-packages/easyocr/easyocr.py", line 450, in readtextlang
for filename in os.listdir(directory):
FileNotFoundError: [Errno 2] No such file or directory: 'characters/' | closed | 2021-10-19T19:49:49Z | 2022-08-07T05:00:31Z | https://github.com/JaidedAI/EasyOCR/issues/572 | [] | krviolent | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 583 | M1 vr architecture model = 5_HP_karaoke-UVR 100% crash | 
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Process: UVR [49950]
Path: /Applications/Ultimate Vocal Remover.app/Contents/MacOS/UVR
Identifier: UVR
Version: 0.0.0 (???)
Code Type: X86-64 (Translated)
Parent Process: launchd [1]
User ID: 501
Date/Time: 2023-05-29 18:23:51.0457 +0800
OS Version: macOS 13.2 (22D49)
Report Version: 12
Anonymous UUID: 0D18E8CE-7279-417B-9390-8FB5EE074745
Sleep/Wake UUID: 0751FCEC-D0C6-4E88-9074-8792A2CF64FA
Time Awake Since Boot: 100000 seconds
Time Since Wake: 4729 seconds
System Integrity Protection: enabled
Notes:
dyld_process_snapshot_create_for_process failed with 5
Crashed Thread: 19
Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes: 0x0000000000000001, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 4 Illegal instruction: 4
Terminating Process: exc handler [49950]
Error Formulating Crash Report:
dyld_process_snapshot_create_for_process failed with 5
Thread 0:: Dispatch queue: com.apple.main-thread
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e25c2 mach_msg2_trap + 10
2 libsystem_kernel.dylib 0x7ff80c0f0604 mach_msg2_internal + 82
3 libsystem_kernel.dylib 0x7ff80c0e9635 mach_msg_overwrite + 723
4 libsystem_kernel.dylib 0x7ff80c0e28a8 mach_msg + 19
5 CoreFoundation 0x7ff80c1fc00b __CFRunLoopServiceMachPort + 145
6 CoreFoundation 0x7ff80c1faa64 __CFRunLoopRun + 1387
7 CoreFoundation 0x7ff80c1f9e7f CFRunLoopRunSpecific + 560
8 libtcl8.6.dylib 0x156216fad Tcl_WaitForEvent + 278
9 libtcl8.6.dylib 0x1561cf74d Tcl_DoOneEvent + 268
10 _tkinter.cpython-310-darwin.so 0x15423aa94 _tkinter_tkapp_mainloop_impl + 228
11 Python 0x10b89739b method_vectorcall_FASTCALL + 107
12 Python 0x10b9c961f call_function + 175
13 Python 0x10b9bf90d _PyEval_EvalFrameDefault + 23981
14 Python 0x10b9b81df _PyEval_Vector + 383
15 Python 0x10b88beaf method_vectorcall + 159
16 Python 0x10b9c961f call_function + 175
17 Python 0x10b9bf99a _PyEval_EvalFrameDefault + 24122
18 Python 0x10b9b81df _PyEval_Vector + 383
19 Python 0x10b9b8042 PyEval_EvalCode + 114
20 UVR 0x10098ea39 0x10098a000 + 19001
21 UVR 0x10098f1e7 0x10098a000 + 20967
22 dyld 0x201f3e310 start + 2432
Thread 1:: com.apple.rosetta.exceptionserver
0 runtime 0x7ff7ffe18614 0x7ff7ffe14000 + 17940
1 runtime 0x7ff7ffe24530 0x7ff7ffe14000 + 66864
2 runtime 0x7ff7ffe25f30 0x7ff7ffe14000 + 73520
Thread 2:: com.apple.NSEventThread
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e25c2 mach_msg2_trap + 10
2 libsystem_kernel.dylib 0x7ff80c0f0604 mach_msg2_internal + 82
3 libsystem_kernel.dylib 0x7ff80c0e9635 mach_msg_overwrite + 723
4 libsystem_kernel.dylib 0x7ff80c0e28a8 mach_msg + 19
5 CoreFoundation 0x7ff80c1fc00b __CFRunLoopServiceMachPort + 145
6 CoreFoundation 0x7ff80c1faa64 __CFRunLoopRun + 1387
7 CoreFoundation 0x7ff80c1f9e7f CFRunLoopRunSpecific + 560
8 AppKit 0x7ff80f3e6129 _NSEventThread + 132
9 libsystem_pthread.dylib 0x7ff80c121259 _pthread_start + 125
10 libsystem_pthread.dylib 0x7ff80c11cc7b thread_start + 15
Thread 3:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0eb2da __select + 10
2 libtcl8.6.dylib 0x1562178b8 NotifierThreadProc + 880
3 libsystem_pthread.dylib 0x7ff80c121259 _pthread_start + 125
4 libsystem_pthread.dylib 0x7ff80c11cc7b thread_start + 15
Thread 4:: caulk.messenger.shared:high
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e253e semaphore_wait_trap + 10
2 caulk 0x7ff815dd88f8 caulk::mach::semaphore::wait_or_error() + 16
3 caulk 0x7ff815dbe664 caulk::concurrent::details::worker_thread::run() + 36
4 caulk 0x7ff815dbe328 void* caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::*)(), std::__1::tuple<caulk::concurrent::details::worker_thread*> > >(void*) + 41
5 libsystem_pthread.dylib 0x7ff80c121259 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff80c11cc7b thread_start + 15
Thread 5:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 6:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 7:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 8:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 9:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 10:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 11:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 12:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 13:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 libiomp5.dylib 0x156c06bb6 void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*) + 358
4 ??? 0x1 ???
5 libiomp5.dylib 0x156bfb810 kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check() + 64
6 ??? 0x1b9127400 ???
Thread 14:
0 runtime 0x7ff7ffe3687c 0x7ff7ffe14000 + 141436
Thread 15:
0 runtime 0x7ff7ffe3687c 0x7ff7ffe14000 + 141436
Thread 16:
0 runtime 0x7ff7ffe3687c 0x7ff7ffe14000 + 141436
Thread 17:
0 runtime 0x7ff7ffe3687c 0x7ff7ffe14000 + 141436
Thread 18:
0 ??? 0x7ff89b4869a8 ???
1 libsystem_kernel.dylib 0x7ff80c0e511a __psynch_cvwait + 10
2 libsystem_pthread.dylib 0x7ff80c1217e1 _pthread_cond_wait + 1243
3 Foundation 0x7ff80d58d268 -[_NSThreadPerformInfo wait] + 63
4 Foundation 0x7ff80cf807ae -[NSObject(NSThreadPerformAdditions) performSelector:onThread:withObject:waitUntilDone:modes:] + 450
5 Foundation 0x7ff80d001787 -[NSObject(NSThreadPerformAdditions) performSelectorOnMainThread:withObject:waitUntilDone:modes:] + 87
6 libtk8.6.dylib 0x1565565ad -[TKBackgroundLoop main] + 138
7 Foundation 0x7ff80cf8a3bc __NSThread__start__ + 1009
8 libsystem_pthread.dylib 0x7ff80c121259 _pthread_start + 125
9 libsystem_pthread.dylib 0x7ff80c11cc7b thread_start + 15
Thread 19 Crashed:
0 libsamplerate.dylib 0x1a3bd8f6a sinc_set_converter + 52
1 libsamplerate.dylib 0x1a3bd8772 src_new + 86
2 libsamplerate.dylib 0x1a3bd8d33 src_simple + 30
3 libffi.dylib 0x7ff81bc1c912 ffi_call_unix64 + 82
4 ??? 0x16fe69988 ???
Thread 20:
0 runtime 0x7ff7ffe3687c 0x7ff7ffe14000 + 141436
Thread 19 crashed with X86 Thread State (64-bit):
rax: 0x0000600000046940 rbx: 0x0000600000046940 rcx: 0x0000600000046960 rdx: 0xffffffffffffffe0
rdi: 0x0000000000000000 rsi: 0x0000000000000002 rbp: 0x0000000307df5ad0 rsp: 0x0000000307df5220
r8: 0x0000000000000008 r9: 0x0000000000000060 r10: 0x0000600000046940 r11: 0x00000000d301d89f
r12: 0x0000000000000002 r13: 0x000000016fe69970 r14: 0x0000000000000002 r15: 0x0000600000046940
rip: <unavailable> rfl: 0x0000000000000243
tmp0: 0x00000001a3bd8f6a tmp1: 0x8429fdc5c057fdc5 tmp2: 0x00f9c50000084024
Binary Images:
0x0 - 0xffffffffffffffff ??? (*) <00000000-0000-0000-0000-000000000000> ???
0x7ff80c0e1000 - 0x7ff80c11aff7 libsystem_kernel.dylib (*) <ca136b67-0559-3f19-8b7e-9b80438090b6> /usr/lib/system/libsystem_kernel.dylib
0x7ff80c17d000 - 0x7ff80c614fff com.apple.CoreFoundation (6.9) <be859dcd-e5ee-3aab-97e4-13231468695f> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x156114000 - 0x156237fff libtcl8.6.dylib (*) <486d78ca-5d7d-3146-a58e-1b7a54cb8c35> /Applications/Ultimate Vocal Remover.app/Contents/MacOS/libtcl8.6.dylib
0x154238000 - 0x15423ffff _tkinter.cpython-310-darwin.so (*) <f21dac2a-1022-354f-b21e-c2abdc084dd5> /Applications/Ultimate Vocal Remover.app/Contents/MacOS/lib-dynload/_tkinter.cpython-310-darwin.so
0x10b800000 - 0x10bb7bfff Python (*) <76e71a09-4224-360b-856b-5860f9c7f47a> /Applications/Ultimate Vocal Remover.app/Contents/MacOS/Python
0x10098a000 - 0x100995fff UVR (0.0.0) <3af20944-8338-3107-8418-34271ded1152> /Applications/Ultimate Vocal Remover.app/Contents/MacOS/UVR
0x201f38000 - 0x201fcffff dyld (*) <270c4224-a38f-3a22-9ba9-95968f487738> /usr/lib/dyld
0x7ff7ffe14000 - 0x7ff7ffe43fff runtime (*) <f066db2c-ed38-3f37-8d21-81d15fa908fe> /usr/libexec/rosetta/runtime
0x7ff80f247000 - 0x7ff81024fff2 com.apple.AppKit (6.9) <480a5693-f3e3-3b50-a1f3-169d12a12a0e> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
0x7ff80c11b000 - 0x7ff80c126ff7 libsystem_pthread.dylib (*) <3bd433d4-15bd-3add-a612-95e4d3b20719> /usr/lib/system/libsystem_pthread.dylib
0x7ff815dbc000 - 0x7ff815de1fff com.apple.audio.caulk (1.0) <bf7582bd-4de0-3ca2-8b69-f1944725f182> /System/Library/PrivateFrameworks/caulk.framework/Versions/A/caulk
0x156adc000 - 0x156c23fff libiomp5.dylib (*) <6934e91e-bb7d-3812-a269-50db7d644483> /Applications/Ultimate Vocal Remover.app/Contents/Resources/torch/.dylibs/libiomp5.dylib
0x7ff80cf32000 - 0x7ff80d943ff6 com.apple.Foundation (6.9) <a58576df-7109-3a13-a338-617f135ce8a8> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
0x156492000 - 0x156599fff libtk8.6.dylib (*) <32c3f790-c241-3483-ad54-fd3467d44802> /Applications/Ultimate Vocal Remover.app/Contents/MacOS/libtk8.6.dylib
0x1a3bd7000 - 0x1a3d40fff libsamplerate.dylib (*) <d20a4ccc-ef66-3575-903e-b4f3dfbc80fc> /Applications/Ultimate Vocal Remover.app/Contents/Resources/samplerate/_samplerate_data/libsamplerate.dylib
0x7ff81bc1a000 - 0x7ff81bc1ffdf libffi.dylib (*) <bb553223-8d2b-3662-a6db-739b0772fd89> /usr/lib/libffi.dylib
External Modification Summary:
Calls made by other processes targeting this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by this process:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
Calls made by all processes on this machine:
task_for_pid: 0
thread_create: 0
thread_set_state: 0
-----------
Full Report
-----------
{"app_name":"UVR","timestamp":"2023-05-29 18:23:56.00 +0800","app_version":"0.0.0","slice_uuid":"3af20944-8338-3107-8418-34271ded1152","build_version":"","platform":1,"bundleID":"UVR","share_with_app_devs":1,"is_first_party":0,"bug_type":"309","os_version":"macOS 13.2 (22D49)","roots_installed":0,"name":"UVR","incident_id":"876E5706-CFC7-4CF7-9C04-38481EB0843E"}
{
"uptime" : 100000,
"procRole" : "Foreground",
"version" : 2,
"userID" : 501,
"deployVersion" : 210,
"modelCode" : "Mac14,9",
"coalitionID" : 23728,
"osVersion" : {
"train" : "macOS 13.2",
"build" : "22D49",
"releaseType" : "User"
},
"captureTime" : "2023-05-29 18:23:51.0457 +0800",
"incident" : "876E5706-CFC7-4CF7-9C04-38481EB0843E",
"pid" : 49950,
"translated" : true,
"cpuType" : "X86-64",
"roots_installed" : 0,
"bug_type" : "309",
"procLaunch" : "2023-05-29 14:16:51.8917 +0800",
"procStartAbsTime" : 2157757278515,
"procExitAbsTime" : 2510853361993,
"procName" : "UVR",
"procPath" : "\/Applications\/Ultimate Vocal Remover.app\/Contents\/MacOS\/UVR",
"bundleInfo" : {"CFBundleShortVersionString":"0.0.0","CFBundleIdentifier":"UVR"},
"storeInfo" : {"deviceIdentifierForVendor":"14E77089-3564-5EA5-ADA2-E1348235F820","thirdParty":true},
"parentProc" : "launchd",
"parentPid" : 1,
"coalitionName" : "UVR",
"crashReporterKey" : "0D18E8CE-7279-417B-9390-8FB5EE074745",
"throttleTimeout" : 2147483647,
"wakeTime" : 4729,
"sleepWakeUUID" : "0751FCEC-D0C6-4E88-9074-8792A2CF64FA",
"sip" : "enabled",
"exception" : {"codes":"0x0000000000000001, 0x0000000000000000","rawCodes":[1,0],"type":"EXC_BAD_INSTRUCTION","signal":"SIGILL"},
"termination" : {"flags":0,"code":4,"namespace":"SIGNAL","indicator":"Illegal instruction: 4","byProc":"exc handler","byPid":49950},
"extMods" : {"caller":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"system":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"targeted":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"warnings":0},
"faultingThread" : 19,
"threads" : [{"id":2009337,"queue":"com.apple.main-thread","frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":5570,"symbol":"mach_msg2_trap","symbolLocation":10,"imageIndex":1},{"imageOffset":62980,"symbol":"mach_msg2_internal","symbolLocation":82,"imageIndex":1},{"imageOffset":34357,"symbol":"mach_msg_overwrite","symbolLocation":723,"imageIndex":1},{"imageOffset":6312,"symbol":"mach_msg","symbolLocation":19,"imageIndex":1},{"imageOffset":520203,"symbol":"__CFRunLoopServiceMachPort","symbolLocation":145,"imageIndex":2},{"imageOffset":514660,"symbol":"__CFRunLoopRun","symbolLocation":1387,"imageIndex":2},{"imageOffset":511615,"symbol":"CFRunLoopRunSpecific","symbolLocation":560,"imageIndex":2},{"imageOffset":1060781,"symbol":"Tcl_WaitForEvent","symbolLocation":278,"imageIndex":3},{"imageOffset":767821,"symbol":"Tcl_DoOneEvent","symbolLocation":268,"imageIndex":3},{"imageOffset":10900,"symbol":"_tkinter_tkapp_mainloop_impl","symbolLocation":228,"imageIndex":4},{"imageOffset":619419,"symbol":"method_vectorcall_FASTCALL","symbolLocation":107,"imageIndex":5},{"imageOffset":1873439,"symbol":"call_function","symbolLocation":175,"imageIndex":5},{"imageOffset":1833229,"symbol":"_PyEval_EvalFrameDefault","symbolLocation":23981,"imageIndex":5},{"imageOffset":1802719,"symbol":"_PyEval_Vector","symbolLocation":383,"imageIndex":5},{"imageOffset":573103,"symbol":"method_vectorcall","symbolLocation":159,"imageIndex":5},{"imageOffset":1873439,"symbol":"call_function","symbolLocation":175,"imageIndex":5},{"imageOffset":1833370,"symbol":"_PyEval_EvalFrameDefault","symbolLocation":24122,"imageIndex":5},{"imageOffset":1802719,"symbol":"_PyEval_Vector","symbolLocation":383,"imageIndex":5},{"imageOffset":1802306,"symbol":"PyEval_EvalCode","symbolLocation":114,"imageIndex":5},{"imageOffset":19001,"imageIndex":6},{"imageOffset":20967,"imageIndex":6},{"imageOffset":25360,"symbol":"start","symbolLocation":2432,"imageIndex":7}]},{"id":2009349,"name":"com.apple.rosetta.exce
ptionserver","frames":[{"imageOffset":17940,"imageIndex":8},{"imageOffset":66864,"imageIndex":8},{"imageOffset":73520,"imageIndex":8}]},{"id":2009531,"name":"com.apple.NSEventThread","frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":5570,"symbol":"mach_msg2_trap","symbolLocation":10,"imageIndex":1},{"imageOffset":62980,"symbol":"mach_msg2_internal","symbolLocation":82,"imageIndex":1},{"imageOffset":34357,"symbol":"mach_msg_overwrite","symbolLocation":723,"imageIndex":1},{"imageOffset":6312,"symbol":"mach_msg","symbolLocation":19,"imageIndex":1},{"imageOffset":520203,"symbol":"__CFRunLoopServiceMachPort","symbolLocation":145,"imageIndex":2},{"imageOffset":514660,"symbol":"__CFRunLoopRun","symbolLocation":1387,"imageIndex":2},{"imageOffset":511615,"symbol":"CFRunLoopRunSpecific","symbolLocation":560,"imageIndex":2},{"imageOffset":1700137,"symbol":"_NSEventThread","symbolLocation":132,"imageIndex":9},{"imageOffset":25177,"symbol":"_pthread_start","symbolLocation":125,"imageIndex":10},{"imageOffset":7291,"symbol":"thread_start","symbolLocation":15,"imageIndex":10}]},{"id":2009532,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":41690,"symbol":"__select","symbolLocation":10,"imageIndex":1},{"imageOffset":1063096,"symbol":"NotifierThreadProc","symbolLocation":880,"imageIndex":3},{"imageOffset":25177,"symbol":"_pthread_start","symbolLocation":125,"imageIndex":10},{"imageOffset":7291,"symbol":"thread_start","symbolLocation":15,"imageIndex":10}]},{"id":2060719,"name":"caulk.messenger.shared:high","frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":5438,"symbol":"semaphore_wait_trap","symbolLocation":10,"imageIndex":1},{"imageOffset":116984,"symbol":"caulk::mach::semaphore::wait_or_error()","symbolLocation":16,"imageIndex":11},{"imageOffset":9828,"symbol":"caulk::concurrent::details::worker_thread::run()","symbolLocation":36,"imageIndex":11},{"imageOffset":9000,"symbol":"void* 
caulk::thread_proxy<std::__1::tuple<caulk::thread::attributes, void (caulk::concurrent::details::worker_thread::*)(), std::__1::tuple<caulk::concurrent::details::worker_thread*> > >(void*)","symbolLocation":41,"imageIndex":11},{"imageOffset":25177,"symbol":"_pthread_start","symbolLocation":125,"imageIndex":10},{"imageOffset":7291,"symbol":"thread_start","symbolLocation":15,"imageIndex":10}]},{"id":2060722,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2060723,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2060724,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, 
true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2060725,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2060726,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2060727,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, 
true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2060728,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2060729,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2060730,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":1223606,"symbol":"void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)","symbolLocation":358,"imageIndex":12},{"imageOffset":1,"imageIndex":0},{"imageOffset":1177616,"symbol":"kmp_flag_native<unsigned long long, (flag_type)1, 
true>::done_check()","symbolLocation":64,"imageIndex":12},{"imageOffset":7399961600,"imageIndex":0}]},{"id":2448441,"frames":[{"imageOffset":141436,"imageIndex":8}]},{"id":2458736,"frames":[{"imageOffset":141436,"imageIndex":8}]},{"id":2459539,"frames":[{"imageOffset":141436,"imageIndex":8}]},{"id":2459626,"frames":[{"imageOffset":141436,"imageIndex":8}]},{"id":2459708,"frames":[{"imageOffset":140705733831080,"imageIndex":0},{"imageOffset":16666,"symbol":"__psynch_cvwait","symbolLocation":10,"imageIndex":1},{"imageOffset":26593,"symbol":"_pthread_cond_wait","symbolLocation":1243,"imageIndex":10},{"imageOffset":6664808,"symbol":"-[_NSThreadPerformInfo wait]","symbolLocation":63,"imageIndex":13},{"imageOffset":321454,"symbol":"-[NSObject(NSThreadPerformAdditions) performSelector:onThread:withObject:waitUntilDone:modes:]","symbolLocation":450,"imageIndex":13},{"imageOffset":849799,"symbol":"-[NSObject(NSThreadPerformAdditions) performSelectorOnMainThread:withObject:waitUntilDone:modes:]","symbolLocation":87,"imageIndex":13},{"imageOffset":804269,"symbol":"-[TKBackgroundLoop 
main]","symbolLocation":138,"imageIndex":14},{"imageOffset":361404,"symbol":"__NSThread__start__","symbolLocation":1009,"imageIndex":13},{"imageOffset":25177,"symbol":"_pthread_start","symbolLocation":125,"imageIndex":10},{"imageOffset":7291,"symbol":"thread_start","symbolLocation":15,"imageIndex":10}]},{"triggered":true,"id":2459756,"threadState":{"flavor":"x86_THREAD_STATE","rbp":{"value":13016980176},"r12":{"value":2},"rosetta":{"tmp2":{"value":70303872992165924},"tmp1":{"value":9523421912829001157},"tmp0":{"value":7042076522}},"rbx":{"value":105553116555584},"r8":{"value":8},"r15":{"value":105553116555584},"r10":{"value":105553116555584},"rdx":{"value":18446744073709551584},"rdi":{"value":0},"r9":{"value":96},"r13":{"value":6172350832},"rflags":{"value":579},"rax":{"value":105553116555584},"rsp":{"value":13016977952},"r11":{"value":3540113567},"rcx":{"value":105553116555616},"r14":{"value":2},"rsi":{"value":2}},"frames":[{"imageOffset":8042,"symbol":"sinc_set_converter","symbolLocation":52,"imageIndex":15},{"imageOffset":6002,"symbol":"src_new","symbolLocation":86,"imageIndex":15},{"imageOffset":7475,"symbol":"src_simple","symbolLocation":30,"imageIndex":15},{"imageOffset":10514,"symbol":"ffi_call_unix64","symbolLocation":82,"imageIndex":16},{"imageOffset":6172350856,"imageIndex":0}]},{"id":2459757,"frames":[{"imageOffset":141436,"imageIndex":8}]}],
"usedImages" : [
{
"size" : 0,
"source" : "A",
"base" : 0,
"uuid" : "00000000-0000-0000-0000-000000000000"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703330865152,
"size" : 237560,
"uuid" : "ca136b67-0559-3f19-8b7e-9b80438090b6",
"path" : "\/usr\/lib\/system\/libsystem_kernel.dylib",
"name" : "libsystem_kernel.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703331504128,
"CFBundleShortVersionString" : "6.9",
"CFBundleIdentifier" : "com.apple.CoreFoundation",
"size" : 4816896,
"uuid" : "be859dcd-e5ee-3aab-97e4-13231468695f",
"path" : "\/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation",
"name" : "CoreFoundation",
"CFBundleVersion" : "1953.300"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 5738938368,
"size" : 1196032,
"uuid" : "486d78ca-5d7d-3146-a58e-1b7a54cb8c35",
"path" : "\/Applications\/Ultimate Vocal Remover.app\/Contents\/MacOS\/libtcl8.6.dylib",
"name" : "libtcl8.6.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 5706579968,
"size" : 32768,
"uuid" : "f21dac2a-1022-354f-b21e-c2abdc084dd5",
"path" : "\/Applications\/Ultimate Vocal Remover.app\/Contents\/MacOS\/lib-dynload\/_tkinter.cpython-310-darwin.so",
"name" : "_tkinter.cpython-310-darwin.so"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 4487905280,
"size" : 3653632,
"uuid" : "76e71a09-4224-360b-856b-5860f9c7f47a",
"path" : "\/Applications\/Ultimate Vocal Remover.app\/Contents\/MacOS\/Python",
"name" : "Python"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 4304969728,
"CFBundleShortVersionString" : "0.0.0",
"CFBundleIdentifier" : "UVR",
"size" : 49152,
"uuid" : "3af20944-8338-3107-8418-34271ded1152",
"path" : "\/Applications\/Ultimate Vocal Remover.app\/Contents\/MacOS\/UVR",
"name" : "UVR"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 8622669824,
"size" : 622592,
"uuid" : "270c4224-a38f-3a22-9ba9-95968f487738",
"path" : "\/usr\/lib\/dyld",
"name" : "dyld"
},
{
"source" : "P",
"arch" : "arm64",
"base" : 140703126601728,
"size" : 196608,
"uuid" : "f066db2c-ed38-3f37-8d21-81d15fa908fe",
"path" : "\/usr\/libexec\/rosetta\/runtime",
"name" : "runtime"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703382663168,
"CFBundleShortVersionString" : "6.9",
"CFBundleIdentifier" : "com.apple.AppKit",
"size" : 16814067,
"uuid" : "480a5693-f3e3-3b50-a1f3-169d12a12a0e",
"path" : "\/System\/Library\/Frameworks\/AppKit.framework\/Versions\/C\/AppKit",
"name" : "AppKit",
"CFBundleVersion" : "2299.40.116"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703331102720,
"size" : 49144,
"uuid" : "3bd433d4-15bd-3add-a612-95e4d3b20719",
"path" : "\/usr\/lib\/system\/libsystem_pthread.dylib",
"name" : "libsystem_pthread.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703495340032,
"CFBundleShortVersionString" : "1.0",
"CFBundleIdentifier" : "com.apple.audio.caulk",
"size" : 155648,
"uuid" : "bf7582bd-4de0-3ca2-8b69-f1944725f182",
"path" : "\/System\/Library\/PrivateFrameworks\/caulk.framework\/Versions\/A\/caulk",
"name" : "caulk"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 5749194752,
"size" : 1343488,
"uuid" : "6934e91e-bb7d-3812-a269-50db7d644483",
"path" : "\/Applications\/Ultimate Vocal Remover.app\/Contents\/Resources\/torch\/.dylibs\/libiomp5.dylib",
"name" : "libiomp5.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703345876992,
"CFBundleShortVersionString" : "6.9",
"CFBundleIdentifier" : "com.apple.Foundation",
"size" : 10559479,
"uuid" : "a58576df-7109-3a13-a338-617f135ce8a8",
"path" : "\/System\/Library\/Frameworks\/Foundation.framework\/Versions\/C\/Foundation",
"name" : "Foundation",
"CFBundleVersion" : "1953.300"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 5742600192,
"size" : 1081344,
"uuid" : "32c3f790-c241-3483-ad54-fd3467d44802",
"path" : "\/Applications\/Ultimate Vocal Remover.app\/Contents\/MacOS\/libtk8.6.dylib",
"name" : "libtk8.6.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 7042068480,
"size" : 1482752,
"uuid" : "d20a4ccc-ef66-3575-903e-b4f3dfbc80fc",
"path" : "\/Applications\/Ultimate Vocal Remover.app\/Contents\/Resources\/samplerate\/_samplerate_data\/libsamplerate.dylib",
"name" : "libsamplerate.dylib"
},
{
"source" : "P",
"arch" : "x86_64",
"base" : 140703594291200,
"size" : 24544,
"uuid" : "bb553223-8d2b-3662-a6db-739b0772fd89",
"path" : "\/usr\/lib\/libffi.dylib",
"name" : "libffi.dylib"
}
],
"legacyInfo" : {
"threadTriggered" : {
}
},
"trialInfo" : {
"rollouts" : [
{
"rolloutId" : "62b4513af75dc926494899c6",
"factorPackIds" : {
"COREOS_ICD" : "62fbe3cfa9a700130f60b3ea"
},
"deploymentId" : 240000019
},
{
"rolloutId" : "60da5e84ab0ca017dace9abf",
"factorPackIds" : {
},
"deploymentId" : 240000008
}
],
"experiments" : [
{
"treatmentId" : "79516245-b830-464e-bb6c-a2998d9e2191",
"experimentId" : "63b8ec83fd1d345f491884ba",
"deploymentId" : 400000024
},
{
"treatmentId" : "6dd670af-0633-45e4-ae5f-122ae4df02be",
"experimentId" : "64406ba83deb637ac8a04419",
"deploymentId" : 900000005
}
]
},
"reportNotes" : [
"dyld_process_snapshot_create_for_process failed with 5"
]
}
Model: Mac14,9, BootROM 8419.80.7, proc 10:6:4 processors, 16 GB, SMC
Graphics: Apple M2 Pro, Apple M2 Pro, Built-In
Display: Color LCD, 3024 x 1964 Retina, Main, MirrorOff, Online
Display: 2779, 1920 x 1080 (1080p FHD - Full High Definition), MirrorOff, Online
Memory Module: LPDDR5, Hynix
AirPort: spairport_wireless_card_type_wifi (0x14E4, 0x4388), wl0: Dec 8 2022 04:59:41 version 23.20.22.47.40.50.80 FWID 01-0c9425e4
Bluetooth: Version (null), 0 services, 0 devices, 0 incoming serial ports
Network Service: iPhone 2, Ethernet, en8
Network Service: Wi-Fi, AirPort, en0
USB Device: USB31Bus
USB Device: iPhone
USB Device: USB31Bus
USB Device: USB3.0 Hub
USB Device: 4-Port USB 3.0 Hub
USB Device: USB3.0 Card Reader
USB Device: AX88179
USB Device: USB2.0 Hub
USB Device: 4-Port USB 2.0 Hub
USB Device: USB31Bus
Thunderbolt Bus: MacBook Pro, Apple Inc.
Thunderbolt Bus: MacBook Pro, Apple Inc.
Thunderbolt Bus: MacBook Pro, Apple Inc. | open | 2023-05-29T10:28:15Z | 2023-07-01T10:04:40Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/583 | [] | songzhiming | 2 |
mage-ai/mage-ai | data-science | 5,483 | To be able to modify pipeline runtime variable without losing runtime variables created before | Hi. We have some pipelines which are using runtime variables (like the screenshot below)

However, it seems like we are not able to modify the runtime variables, or add new runtime variables without losing the variables that we have created before. In the below screenshot I have edited the trigger trying to add a new one.

Can you please fix this? It would be very convenient to have.
Thank you !
| open | 2024-10-09T11:40:28Z | 2024-10-10T07:53:56Z | https://github.com/mage-ai/mage-ai/issues/5483 | [
"bug"
] | B88BB | 2 |
BeastByteAI/scikit-llm | scikit-learn | 85 | Feature request: setting seed parameter of OpenAI's chat completions API | Thank you for creating and maintaining this awesome project!
OpenAI recently introduced the `seed` parameter to make their models' text generation and chat completion behavior (more) reproducible (see https://cookbook.openai.com/examples/reproducible_outputs_with_the_seed_parameter).
I think it would be great if you could enable users of your package to control this parameter when using OpenAI models as a backend (i.e., in the files here: https://github.com/iryna-kondr/scikit-llm/tree/main/skllm/models/gpt)
The `seed` parameter could be hard-coded https://github.com/iryna-kondr/scikit-llm/blob/0bdea940fd369cdd5c5a0e625d3eea8f2b512208/skllm/llm/gpt/clients/openai/completion.py#L50 similar to setting `temperature=0.0`.
Alternatively, users could pass `seed=<SEED>` via `**kwargs`.
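A minimal sketch of the `**kwargs` variant (hypothetical code — `build_completion_kwargs` is an illustrative helper, not an existing scikit-llm function; the real change would live in the linked `completion.py`):

```python
def build_completion_kwargs(model, messages, seed=None, temperature=0.0):
    """Assemble keyword arguments for an OpenAI chat-completions call.

    `seed` is only forwarded when the caller sets it, so behaviour stays
    unchanged for existing users; together with temperature=0.0 it makes
    sampling (more) reproducible.
    """
    kwargs = {"model": model, "messages": messages, "temperature": temperature}
    if seed is not None:
        kwargs["seed"] = seed
    return kwargs
```

The resulting dict could then be splatted into the client call, e.g. `client.chat.completions.create(**kwargs)`.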
| open | 2024-02-14T13:56:35Z | 2024-02-14T15:29:56Z | https://github.com/BeastByteAI/scikit-llm/issues/85 | [] | haukelicht | 1 |
encode/apistar | api | 681 | Doing async requests | Just found out to my surprise that the apistar client does not support async requests.
As this project seems rather dead, maybe someone knows of a similar one that implements async?
I did some overriding to achieve it in a hacky way:
https://gist.github.com/kelvan/49e3efb99c329b4c2476d49458b19c19 | open | 2020-10-29T14:39:13Z | 2020-11-10T18:15:27Z | https://github.com/encode/apistar/issues/681 | [] | kelvan | 3 |
kennethreitz/responder | graphql | 361 | Documentation Error | In the Feature Tour (tour.rst) under the Trusted Hosts heading, shouldn't
```python
api = responder.API(allowed_hosts=[example.com, tenant.example.com])
```

be

```python
api = responder.API(allowed_hosts=['example.com', 'tenant.example.com'])
```
with quotes around the host names. | closed | 2019-06-04T12:17:11Z | 2019-06-04T15:53:35Z | https://github.com/kennethreitz/responder/issues/361 | [] | mtcronin99 | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,391 | how much epoch I need to get the realistic results? | Hi everybody
I trained my network for 30 epochs. Is it normal that I don't have good results?
| open | 2022-03-07T15:03:54Z | 2022-07-19T13:44:27Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1391 | [] | elsobhano | 5 |
d2l-ai/d2l-en | pytorch | 2,570 | MLX support | I plan on contributing support for MLX, Apple's new ML framework for Apple silicon: https://github.com/ml-explore/mlx
I tried setting up jupyter notebook to directly edit markdown using these resources:
1. https://d2l.ai/chapter_appendix-tools-for-deep-learning/contributing.html
2. https://github.com/d2l-ai/d2l-en/blob/master/CONTRIBUTING.md
I still can't run the code in the .md files, as Jupyter Notebook opens md files in plain-text format only.
What is the recommended approach to add new framework support? | open | 2023-12-10T06:52:32Z | 2024-01-17T05:19:37Z | https://github.com/d2l-ai/d2l-en/issues/2570 | [] | rahulchittimalla | 1 |
StratoDem/sd-material-ui | dash | 444 | Add accordion component | <!--- Provide a general summary of your changes in the Title above -->
<!--- MANDATORY -->
<!--- Always fill out a description, even if you are reporting a simple issue. If it is something truly trivial or simple, it is okay to keep it short and sweet. -->
## Description
<!--- A clear and concise description of what the issue is about. Include things like expected/desired behavior, actual behavior, motivation or rational for a new feature, what files it concerns, etc. -->
https://material-ui.com/components/accordion/ | closed | 2020-08-18T13:01:45Z | 2020-08-19T14:00:49Z | https://github.com/StratoDem/sd-material-ui/issues/444 | [] | coralvanda | 0 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 346 | Spherical Embedding Constraint (SEC) | Do you plan to add the Spherical Embedding Constraint (SEC) proposed in the following paper? (https://arxiv.org/pdf/2011.02785.pdf) | open | 2021-06-28T07:21:24Z | 2021-09-06T19:27:23Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/346 | [
"new algorithm request"
] | StefanoSalvatori | 2 |
miguelgrinberg/flasky | flask | 129 | No module named app | I am trying the following command in cmd
python manage.py shell
and it's giving me the error: No module named app!
any help please
| closed | 2016-04-20T18:17:56Z | 2016-06-01T16:23:27Z | https://github.com/miguelgrinberg/flasky/issues/129 | [
"question"
] | Mohamad1994HD | 1 |
python-restx/flask-restx | api | 416 | Can't specified api doc body for a different input | ### ***** **BEFORE LOGGING AN ISSUE** *****
- Is this something you can **debug and fix**? Send a pull request! Bug fixes and documentation fixes are welcome.
- Please check if a similar issue already exists or has been closed before. Seriously, nobody here is getting paid. Help us out and take five minutes to make sure you aren't submitting a duplicate.
- Please review the [guidelines for contributing](https://github.com/python-restx/flask-restx/blob/master/CONTRIBUTING.rst)
### **Code**
```python
address_ns = Namespace(name="address", validate=True)

fields = {
    "street": String(attribute="street"),
    "number": String(attribute="number"),
    "zip_code": String(attribute="zip_code"),
    "user_id": Integer(attribute="user_id"),
    "cep_id": Integer(attribute="cep_id"),
}
fields["cep"] = Nested(cep_model)
fields["user"] = Nested(user_model)

model_with_netsted_fields = Model('Address', fields)


class AddressPostFields(Raw):
    def format(self, value):
        return {
            "street": value.street,
            "number": value.number,
            "zip_code": value.zip_code,
            "user_id": value.user_id,
            "cep_id": value.cep_id,
        }


@address_ns.route("", endpoint="address_create")
class AddressResource(Resource):
    @address_ns.response(HTTPStatus.OK, "Retrieved unit list.")
    @address_ns.doc(model=model_with_netsted_fields)
    def get(self):
        return '{}'

    @address_ns.response(int(HTTPStatus.CREATED), "Added new unit.")
    @address_ns.doc(model=model_with_netsted_fields, body=AddressPostFields)
    def post(self):
        return '{}'
```
### **Expected Behavior**
Specify a 'model' for input methods and another 'model' for output
### **Actual Behavior**
With the code above, I'm not able to attach a request body to `post`.
If I change the `body` param to `model_with_netsted_fields`, Swagger shows all fields including the nested ones, but they should be omitted via `AddressPostFields`.
I'm following the [restx docs](https://flask-restx.readthedocs.io/en/latest/swagger.html#input-and-output-models) but couldn't get it to work...
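For reference, a dependency-free sketch of the split being asked for (an assumption about the fix, not tested against flask-restx: the usual pattern is to wrap two field dicts with `api.model(...)` and use `@ns.expect(input_model)` for the request body plus `@ns.marshal_with(output_model)` for the response, instead of `@ns.doc(body=...)`):

```python
# Output model: every field, including the nested read-only objects.
output_fields = {
    "street": str, "number": str, "zip_code": str,
    "user_id": int, "cep_id": int,
    "cep": dict, "user": dict,  # Nested(...) models in the real code
}

# Input model for POST: derived from the output dict minus the nested
# objects, so the two field sets cannot drift apart.
READ_ONLY = ("cep", "user")
input_fields = {k: v for k, v in output_fields.items() if k not in READ_ONLY}
```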
### **Environment**
- Python 3.8.10
- Flask 2.0.2
- Werkzeug 2.0.3
- Flask-RESTX 0.5.1
| open | 2022-02-25T14:06:22Z | 2022-02-25T14:06:22Z | https://github.com/python-restx/flask-restx/issues/416 | [
"bug"
] | plenzjr | 0 |
home-assistant/core | asyncio | 140,434 | Roborock - No more control via these control buttons | ### The problem
Since Beta Core 2025-03-0bx it is no longer possible to control my Roborock manually with these buttons, nor can I use them in automations.

I saw that PR #139845 was supposed to fix this issue, but these routines still can't be used.
### What version of Home Assistant Core has the issue?
core-2025-3-2
### What was the last working version of Home Assistant Core?
core-2025-2-x
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-12T06:02:51Z | 2025-03-12T13:36:16Z | https://github.com/home-assistant/core/issues/140434 | [
"integration: roborock"
] | Revilo91 | 3 |
jupyter-book/jupyter-book | jupyter | 1,371 | nbconvert pinned at <6 | Is there a reason why setup requires `nbconvert<6` https://github.com/executablebooks/jupyter-book/blob/0ecd3300494959a065ef226356203dfa6ec4927f/setup.cfg#L45 ?
Myst-nb is more generous (`nbconvert>=5.6,<7`); `nbconvert` has been at 6.x for a long time now. | closed | 2021-06-24T08:08:06Z | 2021-06-25T16:34:22Z | https://github.com/jupyter-book/jupyter-book/issues/1371 | [
"bug"
] | psychemedia | 1 |
apache/airflow | automation | 48,083 | xmlsec==1.3.15 update on March 11/2025 breaks apache-airflow-providers-amazon builds in Ubuntu running Python 3.11+ | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
Looks like a return of https://github.com/apache/airflow/issues/39437
```
uname -a
Linux airflow-worker-qg8nn 6.1.123+ #1 SMP PREEMPT_DYNAMIC Sun Jan 12 17:02:52 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
airflow@airflow-worker-qg8nn:~$ cat /etc/issue
Ubuntu 24.04.2 LTS \n \l
```
When installing apache-airflow-providers-amazon
```
********************************************************************************
Please consider removing the following classifiers in favor of a SPDX license expression:
License :: OSI Approved :: MIT License
See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details.
********************************************************************************
!!
self._finalize_license_expression()
running bdist_wheel
running build
running build_py
creating build/lib.linux-x86_64-cpython-311/xmlsec
copying src/xmlsec/__init__.pyi -> build/lib.linux-x86_64-cpython-311/xmlsec
copying src/xmlsec/template.pyi -> build/lib.linux-x86_64-cpython-311/xmlsec
copying src/xmlsec/tree.pyi -> build/lib.linux-x86_64-cpython-311/xmlsec
copying src/xmlsec/constants.pyi -> build/lib.linux-x86_64-cpython-311/xmlsec
copying src/xmlsec/py.typed -> build/lib.linux-x86_64-cpython-311/xmlsec
running build_ext
error: xmlsec1 is not installed or not in path.
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for xmlsec
Building wheel for pyhive (setup.py): started
Building wheel for pyhive (setup.py): finished with status 'done'
Created wheel for pyhive: filename=PyHive-0.7.0-py3-none-any.whl size=53933 sha256=3db46c1d80f77ee8782f517987a0c1fc898576faf2efc3842475b53df6630d2f
Stored in directory: /tmp/pip-ephem-wheel-cache-nnezwghj/wheels/11/32/63/d1d379f01c15d6488b22ed89d257b613494e4595ed9b9c7f1c
Successfully built maxminddb-geolite2 thrift pure-sasl pyhive
Failed to build xmlsec
ERROR: Could not build wheels for xmlsec, which is required to install pyproject.toml-based projects
```
Pinning `xmlsec==1.3.14` (`pip install xmlsec==1.3.14`) resolves the issue.
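One way to make that pin stick across image rebuilds is a pip constraints file (a hedged sketch — the file path is arbitrary, and the alternative system-package fix via `libxmlsec1-dev`/`pkg-config` is an assumption based on the `xmlsec1 is not installed` error, not something stated in this report):

```shell
# Keep pip from resolving to the source-only xmlsec release.
echo 'xmlsec==1.3.14' > /tmp/xmlsec-constraints.txt
cat /tmp/xmlsec-constraints.txt
# Then: pip install -c /tmp/xmlsec-constraints.txt apache-airflow-providers-amazon
# Alternative that lets newer xmlsec build from source (Debian/Ubuntu):
#   apt-get install -y libxmlsec1-dev pkg-config
```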
### Apache Airflow version
2.10.5
### Operating System
Ubuntu 24.04.2
### Deployment
Google Cloud Composer
### Deployment details
_No response_
### What happened
_No response_
### What you think should happen instead
_No response_
### How to reproduce
pip install apache-airflow-providers-amazon
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-21T21:24:51Z | 2025-03-23T20:02:27Z | https://github.com/apache/airflow/issues/48083 | [
"kind:bug",
"area:providers",
"area:dependencies",
"needs-triage"
] | kmarutya | 4 |
PaddlePaddle/models | computer-vision | 4,732 | Changing the image size in the metric learning module | Hello. I changed the image size in the metric learning module from the original (224, 224) to (64, 128) and updated the image preprocessing accordingly, but when execution reaches train_exe.run() the following error is raised:
ValueError: The fed Variable 'image' should have dimensions = 4, shape = (-1, 3, 64, 128), but received fed shape [256, 3, 128, 64] on each device
How should I fix this? Thanks. | open | 2020-07-01T02:48:18Z | 2024-02-26T05:11:07Z | https://github.com/PaddlePaddle/models/issues/4732 | [] | baigang666 | 2 |
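The reported shapes suggest a height/width swap rather than a modeling problem (an assumption, since the preprocessing code isn't shown): the network declares NCHW `(-1, 3, 64, 128)` while the feeder produces `(256, 3, 128, 64)`. Many resize APIs take `(width, height)` where the feed dict expects height first, so the resize call's argument order is the first thing to check:

```python
expected = (-1, 3, 64, 128)   # (N, C, H, W) declared by the network
fed = (256, 3, 128, 64)       # shape the feeder actually produced

# The trailing (H, W) dims are exactly reversed, which points at a resize
# target written as (W, H) where (H, W) was intended (or vice versa).
# Fix the resize/crop call's argument order rather than transposing arrays.
hw_swapped = fed[2:] == expected[2:][::-1]
print("H/W swapped:", hw_swapped)
```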
STVIR/pysot | computer-vision | 133 | EAO=0.415,vot2018 | Can anyone achieve EAO=0.415 on the four new datasets? Can you share your experience? | closed | 2019-07-29T03:08:55Z | 2019-12-19T02:10:34Z | https://github.com/STVIR/pysot/issues/133 | [
"duplicate"
] | mengmeng18 | 9 |