| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | nlp | 7,269 | Memory leak when streaming | ### Describe the bug
I'm trying to use a dataset with `streaming=True`, but RAM usage keeps growing until it is no longer sustainable.
I understand that Hugging Face keeps data in RAM during streaming, and that with more dataloader workers more shards are held in RAM at once. The issue is that RAM usage is not constant: after each new shard is loaded, usage climbs higher and higher.
### Steps to reproduce the bug
Run the code below and watch your RAM usage: after each shard of 255 examples, it will increase.
```py
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("WaveGenAI/dataset", streaming=True)
dataloader = DataLoader(dataset["train"], num_workers=3)
for i, data in enumerate(dataloader):
    print(i, end="\r")
```
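To confirm the growth pattern without the full stack, one can log traced memory once per shard; the sketch below uses only the standard library, with a plain generator standing in for the streaming dataset:

```python
import tracemalloc

def fake_stream(n_shards=3, shard_size=255):
    # stand-in for the streaming dataset: yields examples shard by shard
    for shard in range(n_shards):
        for i in range(shard_size):
            yield {"shard": shard, "example": i}

tracemalloc.start()
usage_per_shard = []
for step, example in enumerate(fake_stream()):
    if step % 255 == 254:  # end of each shard
        current, _peak = tracemalloc.get_traced_memory()
        usage_per_shard.append(current)
tracemalloc.stop()

# with a constant-memory stream these numbers stay flat;
# in the reported bug they climb with every shard
print(usage_per_shard)
```

The same per-shard logging (e.g. with `psutil` process RSS instead of `tracemalloc`) can be dropped into the dataloader loop above to quantify the leak.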
### Expected behavior
RAM usage should stay constant (just three shards loaded in RAM at a time).
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.12.4
- `huggingface_hub` version: 0.26.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | open | 2024-10-31T13:33:52Z | 2024-11-18T11:46:07Z | https://github.com/huggingface/datasets/issues/7269 | [] | Jourdelune | 2 |
TencentARC/GFPGAN | deep-learning | 427 | Segmentation fault (core dumped) | Hi. I get this warning:
/home/khafan/anaconda3/envs/GPU/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Segmentation fault (core dumped)
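The deprecation warning itself comes from `basicsr` importing `torchvision.transforms.functional_tensor`, which was removed in torchvision 0.17. A workaround often used is to alias the old module path to its replacement in `sys.modules` before `basicsr` is imported. The aliasing mechanism is plain Python, sketched here with a stub module so it runs without torchvision (in practice you would alias `torchvision.transforms.functional_tensor` to `torchvision.transforms.functional`):

```python
import importlib
import sys
import types

# stub standing in for torchvision.transforms.functional
replacement = types.ModuleType("replacement")
replacement.rgb_to_grayscale = lambda img: "gray(" + img + ")"

# register the removed path BEFORE anything tries to import it
sys.modules["torchvision_stub.transforms.functional_tensor"] = replacement

# any later import of the old path now resolves to the replacement,
# because the import machinery checks sys.modules first
mod = importlib.import_module("torchvision_stub.transforms.functional_tensor")
print(mod.rgb_to_grayscale("pixels"))  # -> gray(pixels)
```

Note this only silences the deprecation path; the segmentation fault itself is more likely related to the ROCm build of torch and would need separate debugging.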
All the dependencies are installed. This is a `pip freeze` of my environment:
absl-py==1.4.0
addict==2.4.0
basicsr==1.4.2
cachetools==5.3.1
certifi==2022.12.7
charset-normalizer==2.1.1
cmake==3.25.0
contourpy==1.1.0
cycler==0.11.0
facexlib==0.3.0
filelock==3.9.0
filterpy==1.4.5
fonttools==4.42.0
future==0.18.3
-e git+https://github.com/TencentARC/GFPGAN.git@2eac2033893ca7f427f4035d80fe95b92649ac56#egg=gfpgan
google-auth==2.22.0
google-auth-oauthlib==1.0.0
grpcio==1.56.2
idna==3.4
imageio==2.31.1
importlib-metadata==6.8.0
importlib-resources==6.0.1
Jinja2==3.1.2
kiwisolver==1.4.4
lazy_loader==0.3
lit==15.0.7
llvmlite==0.40.1
lmdb==1.4.1
Markdown==3.4.4
MarkupSafe==2.1.2
matplotlib==3.7.2
mpmath==1.2.1
networkx==3.0
numba==0.57.1
numpy==1.24.1
oauthlib==3.2.2
opencv-python==4.8.0.74
packaging==23.1
Pillow==9.3.0
platformdirs==3.10.0
protobuf==4.24.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pyparsing==3.0.9
python-dateutil==2.8.2
pytorch-triton-rocm==2.0.1
PyWavelets==1.4.1
PyYAML==6.0.1
realesrgan==0.3.0
requests==2.28.1
requests-oauthlib==1.3.1
rsa==4.9
scikit-image==0.21.0
scipy==1.10.1
six==1.16.0
sympy==1.11.1
tb-nightly==2.14.0a20230808
tensorboard-data-server==0.7.1
tifffile==2023.7.10
tomli==2.0.1
torch==2.0.1+rocm5.4.2
torchaudio==2.0.2+rocm5.4.2
torchvision==0.15.2+rocm5.4.2
tqdm==4.65.2
typing_extensions==4.4.0
urllib3==1.26.13
Werkzeug==2.3.6
yapf==0.40.1
zipp==3.16.2
| open | 2023-08-09T05:36:31Z | 2024-03-30T01:09:23Z | https://github.com/TencentARC/GFPGAN/issues/427 | [] | meatloaf4u | 2 |
pytorch/vision | machine-learning | 8,713 | `torchvision.ops.boxes.batched_nms` slow on large box numbers | ### 🐛 Describe the bug
## Description
`torchvision.ops.boxes.batched_nms` on a CUDA GPU slows down considerably as the number of bounding boxes increases.
The slowdown is associated with the Device -> Host transfer and is linked to the iterative part of the Non-Maximum Suppression (NMS) algorithm. In a nutshell, the IoU map is computed on the device, then the mask is copied to the CPU to perform the iterative unwrap, and the result is copied back to the device (from [here and below](https://github.com/pytorch/vision/blob/868a3b42f4bffe29e4414ad7e4c7d9d0b4690ecb/torchvision/csrc/ops/cuda/nms_kernel.cu#L136)).
The mask size grows quadratically with the number of input bounding boxes, and we see a large TX rate when running on 30_000+ boxes.
In comparison, the [OpenMMLab mmcv](https://github.com/open-mmlab/mmcv) solution computes the IoU map the same way but runs a custom kernel to do the unwrap directly on the device. The [implemented kernel](https://github.com/open-mmlab/mmcv/blob/71437a361cc8918fc398ae408267cf019f4ca03f/mmcv/ops/csrc/common/cuda/nms_cuda_kernel.cuh#L76) is not very efficient compute-wise but saves the data-transfer cost, which is the main bottleneck.
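For reference, the host-side "unwrap" step amounts to a greedy pass over the IoU mask. A simplified pure-Python rendering of that logic (an illustration of the algorithm, not the actual torchvision code):

```python
def nms_unwrap(iou_mask):
    """iou_mask[i][j] == True means box j overlaps box i above the
    IoU threshold. Boxes are assumed pre-sorted by descending score."""
    n = len(iou_mask)
    suppressed = [False] * n
    keep = []
    for i in range(n):
        if suppressed[i]:
            continue
        keep.append(i)  # highest-scoring surviving box is kept
        for j in range(i + 1, n):
            if iou_mask[i][j]:
                suppressed[j] = True  # overlapping lower-scoring box dropped
    return keep

# three boxes: 0 overlaps 1; 2 overlaps nothing above the threshold
mask = [
    [False, True, False],
    [True, False, False],
    [False, False, False],
]
print(nms_unwrap(mask))  # -> [0, 2]
```

Because each iteration depends on the suppression decisions of earlier ones, the pass is sequential; the question is only whether that sequential pass runs on the host (with the quadratic mask transferred over PCIe) or on the device.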
I benchmarked `torchvision` batched_nms against `mmcv`'s on `V100` and `A100` GPUs.
*(benchmark figures: speed-up factor vs. number of boxes, on V100 and A100)*
Both figures show the speed factor relative to `torchvision.ops.boxes._batched_nms_vanilla` (torchvision has two NMS code paths, selected based on the number of elements; here `_batched_nms_vanilla` is the baseline, and we compare `torchvision.ops.boxes._batched_nms_coordinate_trick` and `mmcv`'s batched_nms against it). From 30k boxes and above, `mmcv`'s NMS is 20x+ faster.
Is there a reason why we keep this GPU -> CPU transfer?
Could we improve scalability by adding a similar on-device kernel?
## Additional informations
* All boxes are from the same class
* Benchmark has been done using `torch.utils.benchmark.Timer` on 100 examples for each NMS.
* I did not know whether this should be filed as a bug report or a feature request.
### Versions
```
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.24.1
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 14 2024, 06:11:20) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.219-208.866.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 5999.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.0
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.5.0
[pip3] torch-optimizer==0.3.0
[pip3] torchaudio==2.5.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] No relevant packages
``` | closed | 2024-11-06T12:58:13Z | 2025-02-20T17:16:10Z | https://github.com/pytorch/vision/issues/8713 | [] | Ghelfi | 1 |
open-mmlab/mmdetection | pytorch | 11,683 | Hydranet - separate prediction heads per class | **Describe the feature**
**Motivation**
I'd love to be able to train an instance segmentation and semantic segmentation model with the same backbone.
Or train a model with the same backbone but a separate prediction head per class, so that I could update the heads' weights independently. I guess I could already achieve this by freezing the backbone, exporting to ONNX, splitting the model appropriately, and then running the same backbone repeatedly with different heads as needed.
Do any models in MMDetection already make it easy to get started with this?
Any guidance at all would be very helpful.
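The shared-backbone, multi-head idea can be sketched framework-agnostically; the names below are hypothetical, and in practice the backbone and heads would be trainable modules (e.g. `nn.Module`s) whose parameters can be frozen or updated independently:

```python
class HydraModel:
    """One shared backbone feeding several independent heads."""
    def __init__(self, backbone, heads):
        self.backbone = backbone  # features = backbone(x)
        self.heads = heads        # name -> head(features)

    def forward(self, x, only=None):
        features = self.backbone(x)  # computed once, shared by all heads
        names = [only] if only else self.heads
        return {name: self.heads[name](features) for name in names}

model = HydraModel(
    backbone=lambda x: x * 2,
    heads={"instance": lambda f: f + 1, "semantic": lambda f: f - 1},
)
print(model.forward(10))                    # -> {'instance': 21, 'semantic': 19}
print(model.forward(10, only="semantic"))   # -> {'semantic': 19}
```

The design point is that each head sees the same feature map, so per-head optimizers (or selectively frozen heads) give independent weight updates while the backbone is trained or frozen once.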
| open | 2024-05-06T12:10:13Z | 2024-05-06T12:59:23Z | https://github.com/open-mmlab/mmdetection/issues/11683 | [] | GeorgePearse | 2 |
tqdm/tqdm | pandas | 1,194 | tqdm.notebook not rendering | I have an issue where tqdm works fine but tqdm.notebook shows unformatted (the standard tqdm) progress bar and does not update at all. I have tried it in a virtual env with pip installing only jupyter and tqdm.
Perhaps there is a clash with some other package. Be pleased if anyone can tell me how to fix.
| closed | 2021-06-25T18:02:06Z | 2021-07-29T15:22:53Z | https://github.com/tqdm/tqdm/issues/1194 | [
"question/docs ‽",
"submodule-notebook 📓"
] | simonm3 | 2 |
python-restx/flask-restx | api | 19 | Marshal not renaming fields with attribute | Either I am misunderstanding how to use the marshal feature and the documentation, or the following is a bug.
### **Code**
```python
from flask import request
from flask_restx import Resource, fields, marshal

# (api and logger are assumed to be set up elsewhere)
m = api.model('mymodel', {'name': fields.String(attribute='MyName')})

@api.route('/test')
class Test(Resource):
    @api.expect(m)
    def post(self, **kwargs):
        logger.debug(api.payload)
        logger.debug(request.get_json())
        logger.debug(marshal(api.payload, m))
        return api.payload
```
Output
```
127.0.0.1 - - [26/Jan/2020 22:57:36] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [26/Jan/2020 22:57:37] "GET /swaggerui/swagger-ui-bundle.js HTTP/1.1" 304 -
127.0.0.1 - - [26/Jan/2020 22:57:37] "GET /swaggerui/swagger-ui.css HTTP/1.1" 304 -
127.0.0.1 - - [26/Jan/2020 22:57:37] "GET /swaggerui/droid-sans.css HTTP/1.1" 304 -
127.0.0.1 - - [26/Jan/2020 22:57:37] "GET /swaggerui/swagger-ui-standalone-preset.js HTTP/1.1" 304 -
127.0.0.1 - - [26/Jan/2020 22:57:37] "GET /swaggerui/favicon-16x16.png HTTP/1.1" 200 -
127.0.0.1 - - [26/Jan/2020 22:57:37] "GET /swagger.json HTTP/1.1" 200 -
[2020-01-26 22:58:09,111] DEBUG in switchvox: {'name': '987'}
[2020-01-26 22:58:09,111] DEBUG in switchvox: {'name': '987'}
[2020-01-26 22:58:09,111] DEBUG in switchvox: {'name': None}
```
### **Expected Behavior**
I expected that using `fields.String(attribute="...")` would rewrite the incoming data, so that instead of receiving `{'name': '987'}` I would have received `{'MyName': '987'}`
Either when calling api.payload or request.get_json()
### **Actual Behavior**
No change in field names.
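For context, `attribute` works in the opposite direction from what the report expects: during marshalling it names the key to read *from* the input object, while the model key names the *output* field — which is why the log shows `{'name': None}` (the input has no `MyName` key). The lookup direction can be sketched without Flask at all:

```python
def marshal_one(data, model):
    """Minimal sketch of marshalling: model maps output_name -> source_attribute."""
    return {out_name: data.get(src_attr)
            for out_name, src_attr in model.items()}

model = {"name": "MyName"}  # output field 'name' is read from input key 'MyName'

print(marshal_one({"name": "987"}, model))    # -> {'name': None}   (no 'MyName' key)
print(marshal_one({"MyName": "987"}, model))  # -> {'name': '987'}
```

So marshalling renames on the way *out* of the API; it does not rewrite the incoming payload.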
### **Error Messages/Stack Trace**
Does not error, just does not return expected results.
### **Environment**
- Python 3.7
- Flask version current
- Flask-RESTX version current
- Other installed Flask flask_restplus
Maybe I am calling it wrong; even using marshal() did not return any data. Unfortunately, the documentation is not really clear to me on this part. | closed | 2020-01-27T07:07:03Z | 2022-11-18T09:02:04Z | https://github.com/python-restx/flask-restx/issues/19 | [] | voice1 | 6
mljar/mercury | data-visualization | 373 | Pinned django version 4.2 has CVE-2023-31047 | Newer versions of django (>= 4.2.2) no longer have this CVE. This is blocking me using mercury in an enterprise environment. | closed | 2023-10-10T15:36:53Z | 2023-10-11T08:01:44Z | https://github.com/mljar/mercury/issues/373 | [] | savagej | 1 |
gradio-app/gradio | machine-learning | 10,752 | Remove scrollers in dataframe | hey all,
trying to remove the scrollers from the dataframe. Is there a way?
*(screenshot: dataframe with scrollbars)*
They seem to display by default. I tried CSS, styling, and `max_height`; none of it worked.
| closed | 2025-03-07T10:38:50Z | 2025-03-13T07:05:09Z | https://github.com/gradio-app/gradio/issues/10752 | [
"💾 Dataframe"
] | angelica-ignateva | 9 |
marcomusy/vedo | numpy | 361 | What are the methods in Vedo to clean a pointcloud data or to remove outlier removal.... |
I have noisy point-cloud data and I want to clean it for surface reconstruction.
What methods are available in vedo for cleaning a point cloud or removing outliers?
If possible, please provide examples.
Thanks
[PointClouds.zip](https://github.com/marcomusy/vedo/files/6268895/PointClouds.zip)
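Independent of vedo's own helpers, a common cleaning approach is statistical outlier removal: drop points whose mean distance to their k nearest neighbours is far above the average over the whole cloud. A brute-force pure-Python sketch of the idea (illustrative only; a real implementation would use a spatial index):

```python
import math
import statistics

def remove_outliers(points, k=2, std_ratio=1.0):
    """Keep points whose mean k-NN distance is within
    mean + std_ratio * stdev of all such distances."""
    mean_knn = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(ds[:k]) / k)

    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    cutoff = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_knn) if d <= cutoff]

cloud = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]  # one obvious outlier
print(remove_outliers(cloud))  # -> the four corner points, outlier dropped
```

The same statistical criterion is what dedicated point-cloud libraries expose as "statistical outlier removal", typically with `k` and the standard-deviation ratio as the tuning knobs.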
| open | 2021-04-07T04:09:24Z | 2021-10-27T04:25:26Z | https://github.com/marcomusy/vedo/issues/361 | [] | sonumathur | 6 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,113 | [Bug]: Using full path to python executable in webui-user.sh cause problems with venv on macOS and Linux | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
If the user, for some reason, sets the full path to the Python executable, the venv will be created as usual. Everything will work and look normal. But on closer inspection, you will see that almost all packages are actually installed globally, not in the venv. This basically makes the venv useless in this case.
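The mechanism behind this is easy to check: pip installs into the site-packages of the interpreter that is actually running, and a venv is only "active" when that interpreter is the one inside the venv. Launching the absolute-path system Python therefore bypasses the venv even though the venv directory exists on disk. This is visible from `sys.prefix` (a small illustration of the check, not webui code):

```python
import sys

def running_inside_venv():
    # inside a venv, sys.prefix points at the venv while sys.base_prefix
    # still points at the base installation; equal means "no venv active"
    return sys.prefix != sys.base_prefix

# pip installs into the running interpreter's site-packages, so if the
# absolute-path system python is launched, packages land globally even
# though a venv directory was created
print(running_inside_venv())
```

Running this from the venv's own `bin/python` prints `True`; running it via the full system-Python path prints `False`, matching the reported behavior.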
### Steps to reproduce the problem
1. Open webui-user.sh in editor
2. Set python_cmd="/opt/homebrew/bin/python3.10" (or any other full path)
3. Run ./webui.sh
4. Wait for the installation to finish
5. Close the web UI
6. Check libraries installed in venv folder and those installed globally
### What should have happened?
Since venv was created, I expected that all packages would be installed in venv and not globally.
### What browsers do you use to access the UI ?
Brave
### Sysinfo
It is not helpful in this case
### Console logs
```Shell
> ./webui.sh
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on viking user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.3.9.4)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Installing torch and torchvision
Collecting torch==2.1.2
Using cached torch-2.1.2-cp310-none-macosx_11_0_arm64.whl.metadata (25 kB)
Collecting torchvision==0.16.2
Using cached torchvision-0.16.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (6.6 kB)
Collecting filelock (from torch==2.1.2)
Using cached filelock-3.15.4-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions (from torch==2.1.2)
Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.1.2)
Using cached sympy-1.12.1-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.1.2)
Using cached networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch==2.1.2)
Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch==2.1.2)
Using cached fsspec-2024.6.1-py3-none-any.whl.metadata (11 kB)
Collecting numpy (from torchvision==0.16.2)
Using cached numpy-2.0.0-cp310-cp310-macosx_14_0_arm64.whl.metadata (60 kB)
Collecting requests (from torchvision==0.16.2)
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision==0.16.2)
Using cached pillow-10.3.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (9.2 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.1.2)
Using cached MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision==0.16.2)
Using cached charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->torchvision==0.16.2)
Using cached idna-3.7-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision==0.16.2)
Using cached urllib3-2.2.2-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision==0.16.2)
Using cached certifi-2024.6.2-py3-none-any.whl.metadata (2.2 kB)
Collecting mpmath<1.4.0,>=1.1.0 (from sympy->torch==2.1.2)
Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Using cached torch-2.1.2-cp310-none-macosx_11_0_arm64.whl (59.6 MB)
Using cached torchvision-0.16.2-cp310-cp310-macosx_11_0_arm64.whl (1.5 MB)
Using cached pillow-10.3.0-cp310-cp310-macosx_11_0_arm64.whl (3.4 MB)
Using cached filelock-3.15.4-py3-none-any.whl (16 kB)
Using cached fsspec-2024.6.1-py3-none-any.whl (177 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached networkx-3.3-py3-none-any.whl (1.7 MB)
Using cached numpy-2.0.0-cp310-cp310-macosx_14_0_arm64.whl (5.2 MB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached sympy-1.12.1-py3-none-any.whl (5.7 MB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached certifi-2024.6.2-py3-none-any.whl (164 kB)
Using cached charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl (120 kB)
Using cached idna-3.7-py3-none-any.whl (66 kB)
Using cached MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl (18 kB)
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached urllib3-2.2.2-py3-none-any.whl (121 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.6.2 charset-normalizer-3.3.2 filelock-3.15.4 fsspec-2024.6.1 idna-3.7 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 numpy-2.0.0 pillow-10.3.0 requests-2.32.3 sympy-1.12.1 torch-2.1.2 torchvision-0.16.2 typing-extensions-4.12.2 urllib3-2.2.2
[notice] A new release of pip is available: 24.0 -> 24.1.1
[notice] To update, run: python3.10 -m pip install --upgrade pip
Installing clip
Installing open_clip
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --opt-sub-quad-attention --upcast-sampling --no-half --lowvram --use-cpu all
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from /Users/viking/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /Users/viking/stable-diffusion-webui/configs/v1-inference.yaml
/opt/homebrew/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 77.3s (prepare environment: 35.7s, import torch: 5.8s, import gradio: 7.7s, setup paths: 12.3s, initialize shared: 0.2s, other imports: 14.5s, load scripts: 0.2s, initialize extra networks: 0.2s, create ui: 0.3s, gradio launch: 0.3s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 17.2s (load weights from disk: 0.2s, create model: 0.8s, apply weights to model: 16.0s, calculate empty prompt: 0.1s).
```
### Additional information
`A1111 works perfectly fine without any issues.`
The problem is where Python libraries are installed in this case. | closed | 2024-06-29T15:44:30Z | 2024-07-06T18:22:12Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16113 | [
"bug-report"
] | viking1304 | 4 |
httpie/http-prompt | api | 13 | Define a python/REPL syntax | Another thing I would find SUPER useful would be the ability to use http-prompt as a normal Python REPL.
I'm imagining that this would either be with back ticks, a python() function, or a python statement.
For instance, if I could do something like:
```
https://api.amazon.com> `import settings.API_KEY as api_secret_key`
https://api.amazon.com> api_key=`api_secret_key`
https://api.amazon.com> nonce=`import random; random.randint(0, 99)`
https://api.amazon.com> post
```
This would be so, so amazing.
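A rough sketch of how such backtick substitution could be implemented (purely illustrative, not http-prompt internals): evaluate each backtick span in a persistent namespace and splice the result back into the command line.

```python
import re

namespace = {}

def expand_backticks(line):
    def run(match):
        code = match.group(1)
        try:
            # expression: substitute its value into the command line
            return str(eval(code, namespace))
        except SyntaxError:
            # statement (imports, assignments): execute, substitute nothing
            exec(code, namespace)
            return ""
    return re.sub(r"`([^`]*)`", run, line)

print(expand_backticks("`import random; nonce = 42`"))  # statements run silently
print(expand_backticks("nonce=`nonce`"))                # -> nonce=42
```

Because `namespace` persists between calls, values defined in one command (like the imported secret key in the example above) remain available in later ones, mimicking a REPL session.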
| open | 2016-05-20T19:33:04Z | 2016-09-18T11:38:40Z | https://github.com/httpie/http-prompt/issues/13 | [
"enhancement"
] | Miserlou | 1 |
jmcnamara/XlsxWriter | pandas | 401 | Feature request: Ability to customize Chartsheet and Worksheet | would be useful in case of the need to extend `Workbook`, allows to easily customize the classes used by `_add_sheet` (`Chartsheet` and `Worksheet`). This will allow to create 'sheet templates' to reuse when needed.
Currently creation of sheet is hardcoded in
```
if is_chartsheet:
worksheet = Chartsheet()
else:
worksheet = Worksheet()
```
I see 3 possible implementations:
- add a parameter to `_add_sheet`
- create two class attributes, `chartsheet_class` and `worksheet_class`, and use them in `_add_sheet`
- mix of both
I'm going to propose a pull request with option 3, which seems the most flexible.
base idea:
```
def _add_sheet(self, name, is_chartsheet, klass=None):
...
...
if klass:
worksheet = klass()
else:
if is_chartsheet:
worksheet = self.chartsheet_class()
else:
worksheet = self.worksheet_class()
```
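Option 2's class-attribute hook would let sheet templates be defined by subclassing alone. A self-contained sketch with stand-in classes (the names mirror the proposal; this is not XlsxWriter's actual code):

```python
class Worksheet:                 # stand-in for xlsxwriter's Worksheet
    kind = "worksheet"

class Chartsheet:                # stand-in for xlsxwriter's Chartsheet
    kind = "chartsheet"

class Workbook:
    worksheet_class = Worksheet
    chartsheet_class = Chartsheet

    def _add_sheet(self, name, is_chartsheet, klass=None):
        # explicit klass wins; otherwise fall back to the class attribute
        if klass is None:
            klass = self.chartsheet_class if is_chartsheet else self.worksheet_class
        return klass()

class TemplateSheet(Worksheet):  # a reusable "sheet template"
    kind = "template"

class MyWorkbook(Workbook):
    worksheet_class = TemplateSheet  # every plain sheet uses the template

print(MyWorkbook()._add_sheet("Sheet1", is_chartsheet=False).kind)  # -> template
```

The `klass` parameter then covers one-off overrides, while the class attributes cover workbook-wide templates — which is why option 3 combines both.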
| closed | 2016-12-16T06:13:11Z | 2018-03-18T14:30:00Z | https://github.com/jmcnamara/XlsxWriter/issues/401 | [
"feature request",
"short term"
] | saxix | 2 |
strawberry-graphql/strawberry | django | 3,289 | Strawberry cannot resolve type by inheriting a generic type with union type applied to it. | Hello! I tried to create a type inheriting a generic type with a union type applied and caught a `TypeError: Response fields cannot be resolved.`
## Describe the Bug
There's a code fragment the bug can be reproduced with:
```python
from typing import Annotated, Generic, TypeVar, Union
import strawberry
T = TypeVar("T")
@strawberry.type
class User:
name: str
age: int
@strawberry.type
class ProUser:
name: str
age: float
@strawberry.type
class GenType(Generic[T]):
data: T
GeneralUser = Annotated[Union[User, ProUser], strawberry.union("GeneralUser")]
@strawberry.type
class Response(GenType[GeneralUser]):
...
@strawberry.type
class Query:
@strawberry.field
def user(self) -> Response:
return Response(data=User(age=1, name="John"))
schema = strawberry.Schema(query=Query)
```
This code raises the following exception: `TypeError: Response fields cannot be resolved. Unexpected type 'typing.Annotated[typing.Union[__main__.User, __main__.ProUser], <strawberry.union.StrawberryUnion object at 0x14f8a90>]'`
But if you replace `GeneralUser = Annotated[Union[User, ProUser], strawberry.union("GeneralUser")]` with `GeneralUser = strawberry.union("GeneralUser", (User, ProUser))` the code works as expected and doesn't raise any exception.
It looks like union types cannot be applied to generic types when they're declared with `Annotated`, since the bug isn't reproducible when the old-style `strawberry.union` approach is used.
## System Information
- Strawberry version (if applicable): 0.216.1
- Python version: 3.9 | open | 2023-12-13T10:08:42Z | 2025-03-20T15:56:31Z | https://github.com/strawberry-graphql/strawberry/issues/3289 | [
"bug"
] | HrMathematiker | 1 |
axnsan12/drf-yasg | rest-api | 359 | How to change input and return Serializer | Most of my APIs' inputs and outputs are different 👍
An example:
Input is `{'A': 1, 'B': 2}` and the output is `{'A': {'B': [1, 2, 3, 4]}}`
The API works correctly, but the documentation page shows both request and response the same as the input (data).
How can I configure that? | closed | 2019-05-01T21:06:25Z | 2019-06-12T23:59:31Z | https://github.com/axnsan12/drf-yasg/issues/359 | [] | oneandonlyonebutyou | 2 |
nschloe/tikzplotlib | matplotlib | 333 | Empty tikz code for plot done with Seaborn package | When I use the package with standard matplotlib plots, it works like a charm. I tried to do a simple scatterplot with seaborn. The plot shows up and is saved with matplotlib, but tikzplotlib is not producing any TikZ code.
| closed | 2019-09-22T18:44:49Z | 2021-04-11T11:56:53Z | https://github.com/nschloe/tikzplotlib/issues/333 | [] | svretina | 1 |
aidlearning/AidLearning-FrameWork | jupyter | 208 | sony xperia 1 ii (root) is not supported... | The screen goes black after opening AidLux. Why doesn't it work on my phone when it's also an Android device? I have already hidden root with Zygisk (Magisk 24+). | closed | 2022-03-14T08:26:58Z | 2022-03-29T02:48:20Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/208 | [] | ubun222 | 3
huggingface/datasets | nlp | 6,591 | The datasets models housed in Dropbox can't support a lot of users downloading them | ### Describe the bug
I'm using the `datasets` library:
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
And it seems that sometimes, when a lot of users are accessing the same resources at once, the Dropbox host fails:
`raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://www.dropbox.com/s/e2us0hcs3ilr20e/MInDS-14.zip?dl=1 (error 429)`
My question is whether we can host these files elsewhere, raise the limit on simultaneous users accessing those resources, or find some other solution.
Also, has anyone had this issue before?
Thanks
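Until the hosting changes, a client-side mitigation is to retry with exponential backoff on HTTP 429. The retry loop itself is simple; here it is sketched with a stubbed download function standing in for the `load_dataset` call (the real code would catch the `ConnectionError` that `datasets` raises):

```python
import time

def with_backoff(fn, retries=5, base_delay=1.0, _sleep=time.sleep):
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            _sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

calls = {"n": 0}
def flaky_download():
    # stub: fails twice with a 429-style error, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("429 Too Many Requests")
    return "MInDS-14.zip"

result = with_backoff(flaky_download, _sleep=lambda s: None)  # no real sleeping in the demo
print(result)  # -> MInDS-14.zip
```

This doesn't raise Dropbox's limit, but it spreads a burst of simultaneous users out in time instead of failing immediately.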
### Steps to reproduce the bug
1: Create a python script like so:
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
2: Execute this script with many users at the same time
### Expected behavior
I would expect this not to happen unless there is a huge number of users, which is not the case here.
### Environment info
This was done in an Ubuntu 22 environment. | closed | 2024-01-15T16:43:38Z | 2024-01-22T23:18:09Z | https://github.com/huggingface/datasets/issues/6591 | [] | RDaneelOlivav | 1 |
tensorpack/tensorpack | tensorflow | 986 | Train Faster-rcnn error by using one GTX1080Ti. | (1) `cd examples/FasterRCNN` and run the following:
```
./train.py --config \
    MODE_MASK=False MODE_FPN=True \
    DATA.BASEDIR=/disk1/DataSet/COCO \
    BACKBONE.WEIGHTS=pretrain_model/ImageNet-R50-AlignPadding.npz
```
(2) The error log is as follows:
2018-11-22 11:23:07.634189: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1305, in _run_fn
self._extend_graph()
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1340, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'MaxBytesInUse' with these attrs. Registered devices: [CPU], Registered kernels:
device='GPU'
[[Node: PeakMemoryTracker/MaxBytesInUse = MaxBytesInUse[_device="/device:GPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./train.py", line 618, in <module>
launch_train_with_config(traincfg, trainer)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/interface.py", line 97, in launch_train_with_config
extra_callbacks=config.extra_callbacks)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/base.py", line 341, in train_with_defaults
steps_per_epoch, starting_epoch, max_epoch)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/base.py", line 312, in train
self.initialize(session_creator, session_init)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/utils/argtools.py", line 176, in wrapper
return func(*args, **kwargs)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/tower.py", line 144, in initialize
super(TowerTrainer, self).initialize(session_creator, session_init)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/utils/argtools.py", line 176, in wrapper
return func(*args, **kwargs)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/base.py", line 229, in initialize
self.sess = session_creator.create_session()
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/tfutils/sesscreate.py", line 43, in create_session
sess.run(tf.global_variables_initializer())
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'MaxBytesInUse' with these attrs. Registered devices: [CPU], Registered kernels:
device='GPU'
[[Node: PeakMemoryTracker/MaxBytesInUse = MaxBytesInUse[_device="/device:GPU:0"]()]]
Caused by op 'PeakMemoryTracker/MaxBytesInUse', defined at:
File "./train.py", line 618, in <module>
launch_train_with_config(traincfg, trainer)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/interface.py", line 97, in launch_train_with_config
extra_callbacks=config.extra_callbacks)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/base.py", line 341, in train_with_defaults
steps_per_epoch, starting_epoch, max_epoch)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/base.py", line 311, in train
self.setup_callbacks(callbacks, monitors)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/utils/argtools.py", line 176, in wrapper
return func(*args, **kwargs)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/train/base.py", line 211, in setup_callbacks
self._callbacks.setup_graph(weakref.proxy(self))
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/callbacks/base.py", line 52, in setup_graph
self._setup_graph()
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/callbacks/group.py", line 70, in _setup_graph
cb.setup_graph(self.trainer)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/callbacks/base.py", line 52, in setup_graph
self._setup_graph()
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorpack/callbacks/prof.py", line 211, in _setup_graph
ops.append(MaxBytesInUse())
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/memory_stats/python/ops/memory_stats_ops.py", line 41, in MaxBytesInUse
return gen_memory_stats_ops.max_bytes_in_use()
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/memory_stats/ops/gen_memory_stats_ops.py", line 152, in max_bytes_in_use
"MaxBytesInUse", name=name)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/disk3/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'MaxBytesInUse' with these attrs. Registered devices: [CPU], Registered kernels:
device='GPU'
[[Node: PeakMemoryTracker/MaxBytesInUse = MaxBytesInUse[_device="/device:GPU:0"]()]]
MultiProcessMapDataZMQ successfully cleaned-up.
(4) Environment: tensorflow-gpu == 1.8.0, CUDA 9.0, cuDNN 7.0.5; the GPU is a single GTX 1080 Ti and CPU memory is 8 GB.
| closed | 2018-11-22T03:28:23Z | 2020-08-08T20:00:14Z | https://github.com/tensorpack/tensorpack/issues/986 | [
"installation/environment"
] | xtanitfy | 8 |
deeppavlov/DeepPavlov | nlp | 1,230 | Data set creation routine for gobot DSTC 2 format | Hi,
I want to create a dataset creation routine for the gobot DSTC 2 format. I know that there is an ongoing refactoring of the codebase for the goal-oriented bot (gobot).
Also, there is a new DSTC 8 challenge and Alexa Prize socialbot which is to be open sourced.
So I want to ask whether this feature is needed, or whether it would duplicate work that is already underway.
Ideally, I want to pull the routine to the deeppavlov repo, so I need some guidance/advice before jumping into the implementation.
Things I want to clarify:
1) Does this routine need to be developed? Or is it already underway, in which case this would be duplicated work?
2) What format would be best (DSTC 2 json, DSTC 8, etc)?
3) I want to build the CLI in Python — is that a good fit?
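Regarding question 3, a minimal `argparse` sketch of what such a CLI could look like (all flag names are hypothetical, not an existing DeepPavlov interface):

```python
import argparse

def build_parser():
    # Hypothetical CLI shape for the proposed dataset-creation routine.
    p = argparse.ArgumentParser(
        description="Convert raw dialog logs into gobot DSTC 2 JSON")
    p.add_argument("--input", required=True, help="path to raw dialog logs")
    p.add_argument("--output", required=True, help="where to write the DSTC 2 JSON")
    p.add_argument("--format", default="dstc2", choices=["dstc2"],
                   help="target format (DSTC 8 could be added later)")
    return p

args = build_parser().parse_args(["--input", "raw.txt", "--output", "out.json"])
print(args.format)  # dstc2
```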
Anything else you think might be appropriate. | closed | 2020-05-25T10:18:05Z | 2020-06-30T12:33:16Z | https://github.com/deeppavlov/DeepPavlov/issues/1230 | [
"feature request"
] | Eugen2525 | 17 |
numba/numba | numpy | 9,673 | Errors not being raised when running code in parallel | When running the following code I'm finding some inconsistencies in error behaviour when running code in prange
```python
import numpy as np
from numba import njit, prange


@njit
def sim_once(x, y):
raise ValueError("Invalid value")
@njit(parallel=True)
def numba_func(x):
n_sims = 20
y = np.zeros(n_sims)
for i in prange(n_sims):
y[i] += 1
sim_once(x=x, y=y)
return y
@njit(parallel=False)
def non_parallel_numba_func(x):
n_sims = 20
y = np.zeros(n_sims)
for i in prange(n_sims):
y[i] += 1
sim_once(x=x, y=y)
return y
numba_func(2.0) # No Error
non_parallel_numba_func(2.0) # Raises ValueError
```
When running `sim_once(x=0, y=1)` directly, a `ValueError` is raised as expected. However, when running `numba_func` with `prange` there is no error raised and the return value is `np.array([1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 0. 1. 1. 1. 1.])`. Using `non_parallel_numba_func`, a `ValueError` is raised as expected.
Interestingly, if I replace the `sim_once` function call with an inline `raise ValueError("Invalid value")`, then the parallel function does raise the error.
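Until this is resolved, one common workaround is to avoid raising inside the `prange` body altogether and instead record failures in a per-iteration error array that is checked after the loop. A minimal sketch (the `try/except ImportError` fallback is only there so the snippet runs even without numba installed):

```python
import numpy as np

try:
    from numba import njit, prange
except ImportError:  # plain-Python fallback so the sketch still runs without numba
    prange = range

    def njit(func=None, **kwargs):
        return func if func is not None else (lambda f: f)

@njit
def sim_once(x, y):
    return 1  # non-zero return code signals "error" instead of raising

@njit(parallel=True)
def numba_func_flags(x):
    n_sims = 20
    y = np.zeros(n_sims)
    errors = np.zeros(n_sims, dtype=np.int64)
    for i in prange(n_sims):
        y[i] += 1
        errors[i] = sim_once(x, y)  # each iteration records its own status
    return y, errors

y, errors = numba_func_flags(2.0)
if errors.any():  # raise (or report) once, outside the parallel region
    print("at least one simulation failed")
```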
Running numba 0.60.0 on python 3.11.8. | open | 2024-07-24T17:13:15Z | 2024-08-02T11:56:51Z | https://github.com/numba/numba/issues/9673 | [
"bug - incorrect behavior"
] | MitchBond | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,247 | encoder error | Preparing the encoder, the synthesizer and the vocoder...
```text
Loaded encoder "encoder.pt" trained to step 1564501
Synthesizer using device: cuda
Building Wave-RNN
Trainable Parameters: 4.481M
Loading model weights at saved_models\default\vocoder.pt
Testing your configuration with small inputs.
Testing the encoder...
Traceback (most recent call last):
  File "C:\voice\demo_cli.py", line 83, in <module>
    embedding = encoder.embed_utterance(audio_waveform)
  File "C:\voice\encoder\inference.py", line 144, in embed_utterance
    frames = audio.wav_to_mel_spectrogram(wav)
  File "C:\voice\encoder\audio.py", line 58, in wav_to_mel_spectrogram
    frames = librosa.feature.melspectrogram(
TypeError: melspectrogram() takes 0 positional arguments but 2 positional arguments (and 2 keyword-only arguments) were given
```
 | open | 2023-08-30T08:49:42Z | 2023-09-30T17:17:37Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1247 | [] | bantikumarsatlokashram | 2
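The `TypeError` above comes from librosa 0.10, which made the arguments of `librosa.feature.melspectrogram` keyword-only, while `encoder/audio.py` still calls it positionally. A stand-in sketch of the break and the fix (pinning `librosa<0.10` is the other option):

```python
# librosa >= 0.10 turned melspectrogram's signature keyword-only, roughly like this:
def melspectrogram(*, y=None, sr=22050, **kwargs):
    return (y, sr)  # stand-in for the real spectrogram computation

# The positional call in encoder/audio.py now fails:
try:
    melspectrogram([0.0, 0.1], 16000)
except TypeError as exc:
    print("positional call fails:", exc)

# Passing everything by keyword works:
y, sr = melspectrogram(y=[0.0, 0.1], sr=16000)
print(sr)  # 16000
```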
keras-team/autokeras | tensorflow | 834 | AutoKeras 1.0 much slower than 0.4 on Google Colab | When I try to run a simple MNIST example on Google Colab with GPU with Autokeras 0.4 it runs very fast (1 epoch of the first model takes < 2 s) but with 1.0 it runs much slower (1 epoch of the first model takes > 80 s). When I disable the GPU 0.4 runs as slow as 1.0 which suggests that 1.0 isn’t using the GPU. How can I make Autokeras 1.0 run as fast as 0.4 with GPU?
To reproduce, go to [colab.research.google.com](https://colab.research.google.com), choose a Python 3 runtime with a GPU accelerator, and execute the following:
0.4 code
```
%tensorflow_version 1.x
!pip install autokeras
import autokeras
import tensorflow
( ( x, y ), validation_data ) = tensorflow.keras.datasets.mnist.load_data( )
model = autokeras.ImageClassifier( verbose = True )
model.fit( x, y )
```
1.0 code
```
%tensorflow_version 2.x
!pip install git+git://github.com/keras-team/keras-tuner@master#egg=keras-tuner
!pip install git+git://github.com/keras-team/autokeras@master#egg=autokeras
import tensorflow
import autokeras
( ( x, y ), validation_data ) = tensorflow.keras.datasets.mnist.load_data( )
model = autokeras.ImageClassifier( )
model.fit( x, y, validation_data = validation_data )
```
The issue is broken down into the following sub-issues.
After solving them, the speed should be improved.
#906, #907, #908, #909, #910. | closed | 2019-11-13T10:29:59Z | 2020-01-19T20:42:26Z | https://github.com/keras-team/autokeras/issues/834 | [
"bug report",
"pinned"
] | m-pescador | 11 |
scrapy/scrapy | web-scraping | 5,735 | S3 backend can't handle uploads larger than 5GB | ### Description
When feeds larger than 5GB are sent using AWS S3 backend, I'm receiving the follow error:
```bash
2022-11-24 18:45:31 [scrapy.extensions.feedexport] ERROR: Error storing csv feed (55 items) in: s3://scrapy-test/large_export.csv
Traceback (most recent call last):
File "/Users/ogabrielsantos/crawler-scrapy/venv/lib/python3.10/site-packages/twisted/python/threadpool.py", line 244, in inContext
result = inContext.theWork() # type: ignore[attr-defined]
File "/Users/ogabrielsantos/crawler-scrapy/venv/lib/python3.10/site-packages/twisted/python/threadpool.py", line 260, in <lambda>
inContext.theWork = lambda: context.call( # type: ignore[attr-defined]
File "/Users/ogabrielsantos/crawler-scrapy/venv/lib/python3.10/site-packages/twisted/python/context.py", line 117, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/Users/ogabrielsantos/crawler-scrapy/venv/lib/python3.10/site-packages/twisted/python/context.py", line 82, in callWithContext
return func(*args, **kw)
File "/Users/ogabrielsantos/crawler-scrapy/venv/lib/python3.10/site-packages/scrapy/extensions/feedexport.py", line 196, in _store_in_thread
self.s3_client.put_object(
File "/Users/ogabrielsantos/crawler-scrapy/venv/lib/python3.10/site-packages/botocore/client.py", line 530, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/ogabrielsantos/crawler-scrapy/venv/lib/python3.10/site-packages/botocore/client.py", line 960, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (EntityTooLarge) when calling the PutObject operation: Your proposed upload exceeds the maximum allowed size
```
### Steps to Reproduce
I've written a minimal example that reproduces this issue:
```python
from scrapy.spiders import Spider
class LargeExportSpider(Spider):
name = "large_export"
start_urls = ["http://news.ycombinator.com/"]
custom_settings = {
"FEEDS": {
"s3://scrapy-test/large_export.csv": {
"format": "csv",
},
},
}
def parse(self, response, **kwargs):
text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque iaculis odio efficitur, ultricies"
for _ in range(0, 55): # creates a 5.3GB csv file
yield {"name": "John Doe", "text": text * 1000000}
```
### Versions
`scrapy version --verbose`:
```bash
Scrapy : 2.7.1
lxml : 4.9.1.0
libxml2 : 2.9.13
cssselect : 1.2.0
parsel : 1.7.0
w3lib : 2.0.1
Twisted : 22.10.0
Python : 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
pyOpenSSL : 22.1.0 (OpenSSL 3.0.5 5 Jul 2022)
cryptography : 38.0.1
Platform : macOS-13.0.1-arm64-arm-64bit
```
`requirements.txt`:
```
botocore==1.29.16
Scrapy==2.7.1
```
### Additional context
Doing some investigation, I've seen that `S3FeedStorage` uses `put_object`, which, [per the documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html), has a limit of 5 GB per uploaded object:
https://github.com/scrapy/scrapy/blob/6ded3cf4cd134b615239babe28bb28c3ff524b05/scrapy/extensions/feedexport.py#L196
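For context, the 5 GB cap applies to a single `PutObject` request; multipart uploads sidestep it by splitting the object into independently uploaded parts, which is what boto3's higher-level transfer helpers (`upload_file`/`upload_fileobj`) do automatically. A pure-stdlib sketch of the part splitting (sizes illustrative; S3's real minimum part size is 5 MiB for all but the last part):

```python
import io

PART_SIZE = 5 * 1024 * 1024  # 5 MiB, S3's minimum multipart part size

def iter_parts(fileobj, part_size=PART_SIZE):
    """Yield (part_number, chunk) pairs, mimicking how a multipart upload splits a feed."""
    part_number = 1
    while True:
        chunk = fileobj.read(part_size)
        if not chunk:
            break
        yield part_number, chunk
        part_number += 1

fake_feed = io.BytesIO(b"x" * (12 * 1024 * 1024))  # 12 MiB stand-in for a large feed
parts = [(n, len(c)) for n, c in iter_parts(fake_feed)]
print(parts)  # [(1, 5242880), (2, 5242880), (3, 2097152)]
```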
Looks like `boto3` [already has an upload method](https://boto3.amazonaws.com/v1/documentation/api/1.16.53/guide/s3-uploading-files.html) that handles multipart uploads, but Scrapy relies on `botocore`. | closed | 2022-11-24T22:14:37Z | 2023-06-13T14:44:10Z | https://github.com/scrapy/scrapy/issues/5735 | [] | ogabrielsantos | 4
stanfordnlp/stanza | nlp | 903 | AttributeError: Can't get attribute 'SentenceBoundary' on <module 'stanza.models.constituency.lstm_model' [QUESTION] | I have downloaded the latest 'en' package and tried to build a pipeline:
```python
nlp = stanza.Pipeline('en', use_gpu=False)
```
but I got this error:
```text
AttributeError: Can't get attribute 'SentenceBoundary' on <module 'stanza.models.constituency.lstm_model' from '/Users/didi/opt/anaconda3/lib/python3.8/site-packages/stanza/models/constituency/lstm_model.py'>
```
I don't know how to solve it (all the models and packages are the latest version). | closed | 2021-12-20T01:30:53Z | 2021-12-20T02:34:02Z | https://github.com/stanfordnlp/stanza/issues/903 | [
"question"
] | pandali1 | 1 |
holoviz/panel | jupyter | 7,551 | Tabulator : tooltips | Hello all,
#### ALL software version info
MacOs with Chrome, Safari or FireFox
bokeh 3.6.1 and panel >= 1.5.2
#### Description of expected behavior and the observed behavior
The issue occurs in Tabulator when using `header_tooltips` with a `FastListTemplate`. The background and font colors of the tooltips are both dark, making the text unreadable.
I couldn't find the CSS responsible for the background color.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import pandas as pd
import panel as pn
import random
import numpy as np
pn.extension('tabulator')
n = 100
data = {
"ID": range(1, n + 1),
"Name": [f"Name_{i}" for i in range(1, n + 1)],
"Age": [random.randint(18, 70) for _ in range(n)],
"Score": [round(random.uniform(50, 100), 2) for _ in range(n)],
"Category": [random.choice(["A", "B", "C"]) for _ in range(n)],
"Active": [random.choice([True, False]) for _ in range(n)],
"Date": pd.date_range("2023-01-01", periods=n),
"Comment": [f"Comment_{i}" for i in range(1, n + 1)],
"Rating": [round(random.uniform(1, 5), 1) for _ in range(n)],
"Value": np.random.randint(100, 500, size=n)}
df = pd.DataFrame(data)
htt = {x: x for x in data.keys()}
tabulator = pn.widgets.Tabulator(df, page_size=10, sizing_mode='stretch_width', header_tooltips=htt)
# app = tabulator # OK
template = pn.template.FastListTemplate(title="Tabulator test", main=[tabulator]) # bug
# template = pn.template.BootstrapTemplate(title="Tabulator test", main=[tabulator]) # OK
# template = pn.template.MaterialTemplate(title="Tabulator test", main=[tabulator]) # OK
app = template
app.servable()
app.show()
```
#### Screenshots or screencasts of the bug in action
<img width="770" alt="Image" src="https://github.com/user-attachments/assets/76b5606e-03cc-4505-85b6-ef379496675a" />
| open | 2024-12-13T10:26:04Z | 2025-03-11T14:36:00Z | https://github.com/holoviz/panel/issues/7551 | [] | symelmu | 0 |
seleniumbase/SeleniumBase | pytest | 2,471 | Signing a signature on a canvas with SeleniumBase | Hi,
I am trying to simulate a mouse movement on a signature module, for example on https://www.signwell.com/online-signature/draw/
Here's my current code (modified from SeleniumBase's mkrec feature, which records steps):
```
from seleniumbase import BaseCase
class RecorderTests(BaseCase):
def test_recording(self):
self.open("https://www.signwell.com/online-signature/draw/")
self.drag_and_drop_with_offset("(//canvas[@id='canvas_signature'])[1]", 200, 0)
self.sleep(10)
self.click("button#signature-save")
```
I have also tried replacing "drag_and_drop_with_offset" with "click_with_offset", but that does not work either.
I also tried to simulate the whole process using mkrec, and here's the code I got from it:
```
from seleniumbase import BaseCase

class RecorderTests(BaseCase):
def test_recording(self):
self.open("https://www.signwell.com/online-signature/draw/")
self.click_with_offset("canvas#canvas_signature", 177.66665649414062, 113.27082824707031)
self.click_with_offset("canvas#canvas_signature", 524.6666564941406, 136.2708282470703)
self.click_with_offset("canvas#canvas_signature", 510.6666564941406, 180.2708282470703)
self.click_with_offset("canvas#canvas_signature", 306.6666564941406, 188.2708282470703)
self.click_with_offset("canvas#canvas_signature", 318.6666564941406, 192.2708282470703)
self.click("button#signature-save")
```
Upon rerunning the recorded script, the movements were not visible either, but the test still passes.
I would greatly appreciate any help from anyone who has done this before.
Thank you so much in advance!
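One approach worth trying (a sketch, not verified against signwell.com): canvas signature pads usually listen for intermediate `mousemove` events, so a single drag-and-drop may not register as a stroke. Chaining many small `ActionChains` moves often works; the path generation is kept as a pure function so it is easy to test:

```python
def signature_path(start=(-150, 0),
                   steps=((30, -20), (30, 40), (30, -40), (30, 20), (30, 0))):
    """Absolute offsets (relative to the canvas center) visited while 'drawing'."""
    x, y = start
    points = [start]
    for dx, dy in steps:
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

# In a SeleniumBase test this could drive the pad (sketch, selector assumed):
#   from selenium.webdriver.common.action_chains import ActionChains
#   canvas = self.find_element("canvas#canvas_signature")
#   chain = ActionChains(self.driver)
#   chain.move_to_element_with_offset(canvas, *signature_path()[0]).click_and_hold()
#   for (x0, y0), (x1, y1) in zip(signature_path(), signature_path()[1:]):
#       chain.move_by_offset(x1 - x0, y1 - y0)  # each step fires a mousemove
#   chain.release().perform()

print(signature_path()[-1])  # (0, 0)
```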

| closed | 2024-02-07T03:35:08Z | 2024-04-03T02:31:59Z | https://github.com/seleniumbase/SeleniumBase/issues/2471 | [
"question"
] | mastafadhil | 1 |
blb-ventures/strawberry-django-plus | graphql | 73 | Merge to strawberry / strawberry-django | Hello @bellini666, I am willing to help with the process of merging this repo to strawberry and i have some suggestions / questions.
1. Could you provide a list of the features, divided into what should be contributed to strawberry, what should go to strawberry-django, and what should stay here?
2. Do you have a plan for how to do this?
3. I want to create a site for the documentation. [I am familiar](https://nrbnlulu.github.io/strawberry-django-auth/index) with mkdocs served by gh-pages; what are your thoughts? Also, should the docs live inside the main strawberry site, or in a standalone strawberry-django site?
4. Should we keep the current API, or prefer the strawberry-django API?
5. Is there anything else I should know before I dive into this?
"enhancement",
"help wanted"
] | nrbnlulu | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,179 | admin password lost, gl-admin resetpass admin bugs | **Describe the bug**
Hello,
I have lost the admin password.
I used this command:
```
# gl-admin resetpass admin
```
but I received this error:
```
Failed! The user 'admin' does not exist or encryption key is set
```
The admin user does exist!
OS version: Ubuntu 20.04.2 LTS (GNU/Linux 5.8.0-63-generic x86_64)
GlobaLeaks version: 4.2.12
please help me,
thanks
| closed | 2022-02-21T16:45:10Z | 2023-12-13T15:58:13Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3179 | [] | fwppe | 14 |
prkumar/uplink | rest-api | 26 | `client` parameter in `Consumer` constructor doesn't work as documented | ## Precondition
Consider the following consumer:
```python
import uplink


class GitHub(uplink.Consumer):
@uplink.get("/users/{username}")
def get_user(self, username):
"""Get a single user."""
```
## Steps to recreate
Instantiate this consumer with a specific client instance:
```python
GitHub(base_url="https://api.github.com/", client=uplink.RequestsClient())
```
## Expected
Consumer instance builds properly and uses the given client instance.
**Note: when the `client` parameter is given a `uplink.clients.interfaces.HttpClientAdapter` subclass, it should instantiate a client instance; otherwise the provided value should be used as given.**
## Actual
Exception raised on instantiation:
```python
Traceback (most recent call last):
File "/Users/prkumar/Library/Preferences/PyCharm2017.2/scratches/scratch_1.py", line 11, in <module>
GitHub(base_url="https://api.github.com/", client=uplink.RequestsClient())
File "/Users/prkumar/Developer/uplink/uplink/builder.py", line 170, in __init__
self._build(builder)
File "/Users/prkumar/Developer/uplink/uplink/builder.py", line 175, in _build
caller = call_builder.build(self, definition_builder)
File "/Users/prkumar/Developer/uplink/uplink/builder.py", line 149, in build
RequestPreparer(self, definition),
File "/Users/prkumar/Developer/uplink/uplink/builder.py", line 42, in __init__
if issubclass(self._client, clients.interfaces.HttpClientAdapter):
TypeError: issubclass() arg 1 must be a class
``` | closed | 2017-11-21T05:29:21Z | 2017-12-06T01:30:03Z | https://github.com/prkumar/uplink/issues/26 | [
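The traceback points at the unconditional `issubclass(self._client, ...)` check in `builder.py`, which raises `TypeError` when an instance (rather than a class) is passed. A self-contained sketch of a guard that handles both cases (the class here is a stand-in for `uplink.clients.interfaces.HttpClientAdapter`, not uplink's actual code):

```python
import inspect

class HttpClientAdapter:  # stand-in for uplink.clients.interfaces.HttpClientAdapter
    pass

def resolve_client(client):
    # Classes get instantiated; instances (or anything else) pass through untouched.
    if inspect.isclass(client) and issubclass(client, HttpClientAdapter):
        return client()
    return client

print(type(resolve_client(HttpClientAdapter)).__name__)  # HttpClientAdapter
instance = HttpClientAdapter()
print(resolve_client(instance) is instance)  # True
```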
"Bug",
"help wanted",
"good first issue"
] | prkumar | 3 |
jina-ai/serve | deep-learning | 5,824 | bug: running flow on windows | On the latest master, jina is hanging when deploying the following flow:
```yml
jtype: Flow
with:
port: 8080
protocol: http
jcloud:
version: 3.14.2.dev18
labels:
creator: microchain
name: gptdeploy
executors:
- name: printhelloexecutor4715887
uses: jinaai+docker://auth0-unified-448f11965ce142b6/PrintHelloExecutor4715887:latest
jcloud:
resources:
instance: C2
capacity: spot
```
error:
```txt
C:\Users\hoenicke\jina\gptdeploy\venv\Scripts\python.exe C:\Users\hoenicke\jina\gptdeploy\gptdeploy.py run --path microservice
Run a jina flow locally
⠋ Fetching auth0-unified-448f11965ce142b6/PrintHelloExecutor4715887 from Jina
⠋ Fetching auth0-unified-448f11965ce142b6/PrintHelloExecutor4715887 from Jina
Hub ...
WARNI… printhelloexecutor4715887/rep-0@18720 [04/24/23 10:36:57]
<jina.orchestrate.pods.container.ContainerPod object
at 0x000001E8A04D1350> timeout after waiting for
600000ms, if your executor takes time to load, you
may increase --timeout-ready
🐳 Process terminated, the container fails to start, check the arguments or
entrypoint
ERROR Flow@18720 Flow is aborted due to [04/24/23 10:36:59]
['printhelloexecutor4715887'] can not be started.
WARNI… gateway/rep-0@18720 Pod was forced to close after 1 [04/24/23 10:37:00]
second. Graceful closing is not available on
Windows.
Traceback (most recent call last):
File "C:\Users\hoenicke\jina\gptdeploy\gptdeploy.py", line 6, in <module>
main()
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hoenicke\jina\gptdeploy\src\cli.py", line 39, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hoenicke\jina\gptdeploy\src\cli.py", line 84, in run
Runner().run(path)
File "C:\Users\hoenicke\jina\gptdeploy\src\options\run\runner.py", line 10, in run
run_locally(executor_name, latest_version_path)
File "C:\Users\hoenicke\jina\gptdeploy\src\apis\jina_cloud.py", line 204, in run_locally
with flow:
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\jina\orchestrate\orchestrator.py", line 14, in __enter__
return self.start()
^^^^^^^^^^^^
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\jina\orchestrate\flow\builder.py", line 33, in arg_wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\jina\orchestrate\flow\base.py", line 1832, in start
self._wait_until_all_ready()
File "C:\Users\hoenicke\jina\gptdeploy\venv\Lib\site-packages\jina\orchestrate\flow\base.py", line 1975, in _wait_until_all_ready
raise RuntimeFailToStart
jina.excepts.RuntimeFailToStart
```
| closed | 2023-04-24T08:53:48Z | 2023-07-10T08:49:45Z | https://github.com/jina-ai/serve/issues/5824 | [] | florian-hoenicke | 4 |
sinaptik-ai/pandas-ai | data-science | 1,349 | docs: add AWS Bedrock tutorial to the example | ### 🚀 The feature
Add example of AWS Bedrock
### Motivation, pitch
I could not find an example of how to get started with Bedrock, so I followed the same pattern as the Azure example to create one for AWS Bedrock.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-09-01T20:51:40Z | 2024-10-16T08:14:27Z | https://github.com/sinaptik-ai/pandas-ai/issues/1349 | [
"documentation"
] | dimwael | 0 |
pyqtgraph/pyqtgraph | numpy | 2,381 | Initial dragging behaviour of InfiniteLine/LinearRegionItem broken | If an `InfiniteLine` or a `LinearRegionItem` is added to a `PlotWidget` without any other initial values (only set movable) the dragging of the items don't work initially.
To reproduce:
1. **Run the mwe** below.
We see an empty plot with x-axis ranging from -0.5 to +0.5 and the item (`InfiniteLine` or `LinearRegionItem`, whatever is enabled) centered at or around x=0.0
3. Now, **drag the item to the left or right**. I.e. left click and hold on the item and move the mouse cursor left or right (still holding the mouse button).
**Instead of the item moving, we observe the axis moving!**
If we click and drag the axis itself one time or spin the mouse wheel, i.e. scroll the x-range, this broken behaviour is not reproducible anymore. From now on, the item will correctly be moved and the axis will stand still.
```python
import pyqtgraph as pg
app = pg.mkQApp()
plot = pg.PlotWidget()
plot.show()
if 1:
line = pg.InfiniteLine()
line.setMovable(True)
plot.addItem(line)
else:
region = pg.LinearRegionItem()
region.setMovable(True)
plot.addItem(region)
if __name__ == '__main__':
pg.exec()
```
Tested with current master on commit `8be2d6a88edc1e26b1f6f85c47a72c400db9a28b`,
python 3.10 and PySide2. | open | 2022-08-01T07:41:28Z | 2022-12-02T11:30:18Z | https://github.com/pyqtgraph/pyqtgraph/issues/2381 | [
"InfiniteLine"
] | bbc131 | 3 |
LAION-AI/Open-Assistant | python | 3,111 | Remove `@next/font` | NextJS warn this
> Your project has `@next/font` installed as a dependency, please use the built-in `next/font` instead. The `@next/font` package will be removed in Next.js 14. You can migrate by running `npx @next/codemod@latest built-in-next-font .`. Read more: https://nextjs.org/docs/messages/built-in-next-font
| closed | 2023-05-09T21:10:20Z | 2023-05-13T15:40:55Z | https://github.com/LAION-AI/Open-Assistant/issues/3111 | [
"website",
"good first issue"
] | notmd | 2 |
Neoteroi/BlackSheep | asyncio | 253 | Static files handling bypasses the router fallback route. | **Describe the bug**
The router fallback route does not work properly when the application is also configured to serve static files.
The same applies to the 404 exception handler.
Kindly reported by @nico-vromans.
| closed | 2022-04-28T16:35:17Z | 2022-04-28T16:56:44Z | https://github.com/Neoteroi/BlackSheep/issues/253 | [] | RobertoPrevato | 0 |
pydantic/pydantic-ai | pydantic | 1,217 | pass custom headers to MCPServerHTTP to support auth use cases | ### Description
Currently `MCPServerHTTP` leverages the MCP SDK client, but only passes the `url` parameter as input.
The MCP SDK client supports passing in custom headers.
Pydantic-ai should support passing headers into `MCPServerHTTP`, which are then forwarded to the MCP client.
This would allow auth setups that rely on setting headers.
From the MCP Python SDK, for the SSE client:
```
@asynccontextmanager
async def sse_client(
url: str,
headers: dict[str, Any] | None = None,
timeout: float = 5,
sse_read_timeout: float = 60 * 5,
):
"""
Client transport for SSE.
```
It also makes sense to expose the timeout and read timeout.
I'll raise a PR for this.
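A sketch of what the extended `MCPServerHTTP` surface could look like (hypothetical: the parameter names simply mirror the SDK's `sse_client`, and the actual PR may differ):

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class MCPServerHTTP:  # hypothetical sketch, not the actual pydantic-ai class
    url: str
    headers: Optional[Dict[str, Any]] = None
    timeout: float = 5
    sse_read_timeout: float = 60 * 5

    def client_kwargs(self) -> Dict[str, Any]:
        # Forwarded verbatim to mcp's sse_client(...)
        return {
            "url": self.url,
            "headers": self.headers,
            "timeout": self.timeout,
            "sse_read_timeout": self.sse_read_timeout,
        }

server = MCPServerHTTP(
    "https://example.com/sse",
    headers={"Authorization": "Bearer <token>"},
)
print(server.client_kwargs()["headers"])  # {'Authorization': 'Bearer <token>'}
```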
### References
_No response_ | open | 2025-03-24T01:28:07Z | 2025-03-24T01:28:07Z | https://github.com/pydantic/pydantic-ai/issues/1217 | [] | JohnUiterwyk | 0 |
ultralytics/yolov5 | deep-learning | 13,515 | code for the yaml file | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, I'm trying to pre-train the yolov5m model downloaded from the Ultralytics site on my custom dataset, which has 3 classes: car, truck, person. Is it necessary to match the class IDs with the class IDs of the COCO dataset in the yaml file? For example:
yaml file:
```yaml
train: /path_to_train_images/
val: /path_to_val_images/
nc: 8  # number of classes from 0 to 7 (you must include all classes up to the largest ID!)
names: ["Person", "Unknown", "Car", "Unknown", "Unknown", "Unknown", "Unknown", "Truck"]
```
or is it just okay like this:
yaml file:
```yaml
train: /path_to_train_images/
val: /path_to_val_images/
nc: 8
names: ["Person", "Unknown", "Car", "Unknown", "Unknown", "Unknown", "Unknown", "Truck"]
```
because in the COCO dataset person has ID 0, car has ID 2, and truck has ID 7.
### Additional
_No response_ | open | 2025-02-19T08:22:56Z | 2025-02-19T09:52:06Z | https://github.com/ultralytics/yolov5/issues/13515 | [
"question"
] | rmarkovic00 | 3 |
python-visualization/folium | data-visualization | 1,262 | HeatMapTime Time Slider broken styling | Link to site: https://aowangdrexel.github.io/ceres/website/map.html
Link to repo: https://github.com/AoWangDrexel/ceres
Link to Python code: https://github.com/AoWangDrexel/ceres/blob/master/map_data/map.py
```python
from folium.plugins import HeatMapWithTime

HeatMapWithTime(formatted[0], formatted[1]).add_to(m)
```
#### Problem description
When using the HeatMapWithTime plugin, the time slider works on my localhost, but once I started hosting with GitHub Pages, the slider started to have graphical issues. The slider becomes vertical and the words for the buttons all scrunch down to the bottom of the screen.

#### Expected Output

#### Output of ``folium.__version__``: 0.10.1
| closed | 2020-02-13T04:16:12Z | 2020-05-23T16:31:50Z | https://github.com/python-visualization/folium/issues/1262 | [] | AoWangPhilly | 3 |
microsoft/MMdnn | tensorflow | 590 | tf2caffe problem "tensorflow.MetaGraphDef" has no field named "Placeholder".. | Platform (like ubuntu 16.04/win10): ubuntu 16.04
Python version: 2.7
Source framework with version (like Tensorflow 1.4.1 with GPU): tensorflow 1.12.0
Destination framework with version (like CNTK 2.3 with GPU): caffe
Pre-trained model path (webpath or webdisk path): https://github.com/Simplesss/Face-attribue
Running scripts:
```
mmconvert -sf tensorflow -in ~/PycharmProjects/Tensorflow-TCDCN/pretrained/frozen_model.pb --inNodeName Placeholder Placeholder_6 --inputShape 1,40,40,3 1 --dstNodeName add_5 add_6 add_7 add_8 add_9 -df caffe -om tf_tensorflow
```
Info: Trying to parse file [/home/cui/PycharmProjects/Tensorflow-TCDCN/pretrained/utf8.pb] with binary format but failed with error [Error parsing message].
```
Traceback (most recent call last):
  File "/usr/local/bin/mmconvert", line 10, in <module>
    sys.exit(_main())
  File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
    ret = convertToIR._convert(ir_args)
  File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 66, in _convert
    parser = TensorflowParser(args.network, args.weights, args.dstNodeName, inputshape[0], args.inNodeName)
  File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py", line 189, in __init__
    model = TensorflowParser._load_meta(meta_file)
  File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py", line 84, in _load_meta
    load_protobuf_from_file(meta_graph, model_network_path)
  File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/common/IR/IR_graph.py", line 31, in load_protobuf_from_file
    raise IOError("Cannot parse file %s: %s." % (filename, str(e)))
IOError: Cannot parse file /home/cui/PycharmProjects/Tensorflow-TCDCN/pretrained/utf8.pb: 2:2 : Message type "tensorflow.MetaGraphDef" has no field named "Placeholder"..
```
I have two inputs (Placeholder and Placeholder_6) and five outputs (add_5, add_6, add_7, add_8, add_9).
There is something wrong with Placeholder; how can I fix it?
Thanks a lot for the help.
| open | 2019-02-21T02:55:04Z | 2020-12-29T08:02:50Z | https://github.com/microsoft/MMdnn/issues/590 | [] | Simplesss | 4 |
matterport/Mask_RCNN | tensorflow | 2,521 | Mistake in calculating mAP? | On the left is my predicted result and on the right is the ground truth. The results are okay (the class prediction is correct and the mask overlap is correct), but still, the AP for this particular image is **0** for some reason. I am using IoU=0.5, like VOC.
This kind of thing is affecting the overall result when it happens to other images also.

Code to compute the mAP (very similar to the nucleus example):
```python
def compute_batch_ap(dataset, image_ids, verbose=1):
    APs = []
    precisions = []
    recalls = []
    overlaps = []
    for image_id in image_ids:
        # Load image
        image, image_meta, gt_class_id, gt_bbox, gt_mask =\
            modellib.load_image_gt(dataset, config,
                                   image_id, use_mini_mask=False)
        # Run object detection
        results = model.detect_molded(image[np.newaxis], image_meta[np.newaxis], verbose=0)
        # Compute AP over 0.5
        r = results[0]
        ap, precisions_out, recalls_out, overlaps_out = utils.compute_ap(
            gt_bbox, gt_class_id, gt_mask,
            r['rois'], r['class_ids'], r['scores'], r['masks'])
        APs.append(ap)
        precisions.append(precisions_out)
        recalls.append(recalls_out)
        overlaps.append(overlaps_out)
        if verbose:
            info = dataset.image_info[image_id]
            meta = modellib.parse_image_meta(image_meta[np.newaxis, ...])
            print("{:3} {} AP: {:.2f}".format(
                meta["image_id"][0], meta["original_image_shape"][0], ap))
    return APs, precisions, recalls, overlaps

limit = 513
APs, precisions, recalls, overlaps = compute_batch_ap(dataset, dataset.image_ids[:limit])
print("Mean AP over {} images: {:.4f}".format(len(APs), np.mean(APs)))
```
 | closed | 2021-03-30T04:16:27Z | 2021-03-30T07:17:32Z | https://github.com/matterport/Mask_RCNN/issues/2521 | [] | UsmanAfzaal | 1
AntonOsika/gpt-engineer | python | 174 | Files are not created. | **Hi All
My main_prompt:**
`# Project Outline: Kodi Subtitle Translation Plugin using Python and OpenAI API
## Overview
The goal of the project is to develop a Kodi plugin that performs the following tasks whenever a film is started from any source:
1. Checks if the film has embedded English subtitles
2. Translates these subtitles using the OpenAI API "gpt-3.5-turbo-16k" into the target language set in the plugin's settings, with Polish or the system language as the default
3. Sets the translated subtitles as activated in the film
The plugin should provide information in the form of a Kodi requester about what it is currently executing, and finally confirm that it has set the subtitles in the chosen language.
## Pre-requisites:
- Python
- Kodi Python API
- OpenAI API
- Knowledge of Kodi add-on structure
## Plugin Directory Structure:
```
kodi-subtitle-translation-plugin
│
├─── resources
│ ├─── language
│ │ ├─── English
│ │ └─── Polish
│ └─── settings.xml
│
├─── lib
│ ├─── openai_translation.py
│ └─── subtitle_management.py
│
├─── LICENSE.txt
├─── addon.xml
├─── default.py
└─── install.md
```
## Instructions:
### 1. Create the Kodi Plugin Base
1. **addon.xml:** This is the main descriptor file, containing metadata about the plugin, including its name, version, and the list of python libraries that your plugin will need.
2. **default.py:** This is the entry point for the plugin, where the Kodi API interacts with your plugin. This will include the main functions to get the movie's file path, extract subtitles, translate them and reinsert them.
3. **LICENSE.txt:** The license file for your plugin. Typically this would be the GPL v2.0 or later, as Kodi is also GPL v2.0 licensed.
4. **resources/settings.xml:** This XML file contains all the configurable settings for the plugin, such as the preferred language for translations.
5. **resources/language:** This directory contains localization strings for the addon. At a minimum, you should include English.
6. **lib/openai_translation.py:** This file will contain the functions that will use OpenAI API to translate the subtitles.
7. **lib/subtitle_management.py:** This file will handle the extraction and reinsertion of subtitles from and into the movie file.
### 2. Plugin Implementation
**default.py:**
- Implement the `run()` method which will be the entry point for the Kodi plugin.
- Fetch the currently playing movie's file path using the Kodi Python API.
- Extract the subtitles using the `subtitle_management.py` module.
- Check if the subtitles exist and if they are in English.
- If they exist, translate the subtitles using the `openai_translation.py` module.
- Reinsert the translated subtitles back into the movie file using the `subtitle_management.py` module.
- Throughout each stage, update the Kodi requester with the current status.
**openai_translation.py:**
- Connect to the OpenAI API using the given key.
- Implement the `translate()` function that takes the subtitles as input and translates them to the target language using OpenAI API.
**subtitle_management.py:**
- Implement the `extract_subtitles()` function to extract the subtitles from the movie file.
- Implement the `insert_subtitles()` function to insert the translated subtitles back into the movie file.
### 3. Plugin Installation Instruction (install.md)
Document the installation process for your Kodi plugin. This should include instructions for compressing the
plugin into a .zip file, transferring it to the Kodi device, and how to use the Kodi interface to install the plugin from the .zip file.
### 4. Makefile
Consider writing a `make.py` script to automate the process of compressing the plugin files into a .zip file.
---
Remember, thorough testing is important to ensure the functionality of your plugin. Test your plugin on multiple platforms and a variety of films to ensure everything works as expected. Good luck!`
########################################
**Output from the script:**
`(.venv) d:\_Python\GitHub\gpt-engineer>python -m gpt_engineer.main projects/kodi_napisy
Program Specification: Kodi Subtitle Translation Plugin using Python and OpenAI API
## Overview
The goal of the program is to develop a Kodi plugin that performs the following tasks whenever a film is started from any source:
1. Checks if the film has embedded English subtitles
2. Translates these subtitles using the OpenAI API "gpt-3.5-turbo-16k" into the target language set in the plugin's settings, with Polish or the system language as the default
3. Sets the translated subtitles as activated in the film
The plugin should provide information in the form of a Kodi requester about what it is currently executing, and finally confirm that it has set the subtitles in the chosen language.
## Features
1. Detect embedded English subtitles in the currently playing movie
2. Extract the English subtitles from the movie file
3. Translate the extracted subtitles into the target language using OpenAI API
4. Reinsert the translated subtitles back into the movie file
5. Activate the translated subtitles in the movie
6. Display progress information using Kodi requester
## Core Classes, Functions, and Methods
1. `run()`: The main entry point for the Kodi plugin, responsible for coordinating the entire subtitle translation process.
2. `get_movie_file_path()`: Fetches the currently playing movie's file path using the Kodi Python API.
3. `extract_subtitles(movie_file_path)`: Extracts the subtitles from the movie file.
4. `translate_subtitles(subtitles, target_language)`: Translates the subtitles into the target language using the OpenAI API.
5. `insert_subtitles(movie_file_path, translated_subtitles)`: Inserts the translated subtitles back into the movie file.
6. `activate_translated_subtitles()`: Activates the translated subtitles in the movie.
7. `update_requester(status)`: Updates the Kodi requester with the current status of the subtitle translation process.
## Non-standard Dependencies
1. Kodi Python API: The Kodi Python API is required to interact with the Kodi media player and perform tasks such as fetching the movie file path and activating subtitles.
2. OpenAI API: The OpenAI API is used to translate the extracted subtitles into the target language.
3. Python libraries for subtitle extraction and insertion: Libraries such as pysubs2 or pysrt can be used to extract and insert subtitles in various formats (e.g., SRT, ASS, SSA).
## Additional Notes
- The plugin should be configurable through a settings menu, allowing users to set their preferred target language for subtitle translation.
- The plugin should handle errors gracefully, such as when the OpenAI API is unavailable or when the movie file does not contain subtitles.
- The plugin should be compatible with various movie file formats and subtitle formats.
- The plugin should be tested on multiple platforms and with a variety of films to ensure proper functionality.
To generate tests based on the above specification, we will use the `pytest` library for Python. We will create a test file named `test_kodi_subtitle_translation_plugin.py` and write test functions for each core function mentioned in the specification.
[FILENAME]
```python
test_kodi_subtitle_translation_plugin.py
```
[CODE]
```python
import pytest
from unittest.mock import MagicMock
from default import run, get_movie_file_path, update_requester
from lib.openai_translation import translate_subtitles
from lib.subtitle_management import extract_subtitles, insert_subtitles, activate_translated_subtitles
def test_get_movie_file_path():
# Test if get_movie_file_path() returns a valid file path
movie_file_path = get_movie_file_path()
assert isinstance(movie_file_path, str)
assert len(movie_file_path) > 0
def test_extract_subtitles():
# Test if extract_subtitles() returns subtitles when given a valid movie file path
movie_file_path = "path/to/movie/file.mkv"
subtitles = extract_subtitles(movie_file_path)
assert isinstance(subtitles, str)
assert len(subtitles) > 0
def test_translate_subtitles():
# Test if translate_subtitles() returns translated subtitles when given valid input
subtitles = "This is a test subtitle."
target_language = "pl"
translated_subtitles = translate_subtitles(subtitles, target_language)
assert isinstance(translated_subtitles, str)
assert len(translated_subtitles) > 0
def test_insert_subtitles():
# Test if insert_subtitles() successfully inserts translated subtitles into the movie file
movie_file_path = "path/to/movie/file.mkv"
translated_subtitles = "To jest testowy napis."
result = insert_subtitles(movie_file_path, translated_subtitles)
assert result is True
def test_activate_translated_subtitles():
# Test if activate_translated_subtitles() successfully activates the translated subtitles
result = activate_translated_subtitles()
assert result is True
def test_update_requester():
# Test if update_requester() successfully updates the Kodi requester with the current status
status = "Extracting subtitles..."
result = update_requester(status)
assert result is True
def test_run():
# Test if run() successfully coordinates the entire subtitle translation process
get_movie_file_path_mock = MagicMock(return_value="path/to/movie/file.mkv")
extract_subtitles_mock = MagicMock(return_value="This is a test subtitle.")
translate_subtitles_mock = MagicMock(return_value="To jest testowy napis.")
insert_subtitles_mock = MagicMock(return_value=True)
activate_translated_subtitles_mock = MagicMock(return_value=True)
update_requester_mock = MagicMock(return_value=True)
with pytest.raises(SystemExit) as pytest_wrapped_e:
run()
assert pytest_wrapped_e.type == SystemExit
assert pytest_wrapped_e.value.code == 0
get_movie_file_path_mock.assert_called_once()
extract_subtitles_mock.assert_called_once()
translate_subtitles_mock.assert_called_once()
insert_subtitles_mock.assert_called_once()
activate_translated_subtitles_mock.assert_called_once()
update_requester_mock.assert_called()
```
These tests cover the core functionality of the Kodi Subtitle Translation Plugin. Make sure to run the tests using `pytest` and adjust the implementation as needed to ensure all tests pass.
[FILENAME]
```python
requirements.txt
```
[CODE]
```ini
kodi-python
openai
pysubs2
pytest
```
[FILENAME]
```python
addon.xml
```
[CODE]
```xml
<?xml version="1.0" encoding="UTF-8"?>
<addon id="plugin.video.subtitle_translation"
name="Kodi Subtitle Translation Plugin"
version="1.0.0"
provider-name="Your Name">
<requires>
<import addon="xbmc.python" version="3.0.0"/>
</requires>
<extension point="xbmc.python.pluginsource"
library="default.py">
<provides>video</provides>
</extension>
<extension point="xbmc.addon.metadata">
<summary lang="en_GB">Kodi plugin to translate movie subtitles using OpenAI API</summary>
<description lang="en_GB">This plugin translates embedded English subtitles into the target language using the OpenAI API.</description>
<platform>all</platform>
<license>GPL-2.0-or-later</license>
<source>https://github.com/yourusername/kodi-subtitle-translation-plugin</source>
</extension>
</addon>
```
[FILENAME]
```python
default.py
```
[CODE]
```python
import sys
import xbmc
import xbmcgui
import xbmcplugin
from lib.openai_translation import translate_subtitles
from lib.subtitle_management import extract_subtitles, insert_subtitles, activate_translated_subtitles
def get_movie_file_path():
player = xbmc.Player()
movie_file_path = player.getPlayingFile()
return movie_file_path
def update_requester(status):
xbmcgui.Dialog().notification("Subtitle Translation", status, xbmcgui.NOTIFICATION_INFO, 5000)
def run():
movie_file_path = get_movie_file_path()
update_requester("Extracting subtitles...")
subtitles = extract_subtitles(movie_file_path)
if subtitles:
update_requester("Translating subtitles...")
target_language = xbmcplugin.getSetting("target_language")
translated_subtitles = translate_subtitles(subtitles, target_language)
if translated_subtitles:
update_requester("Inserting translated subtitles...")
insert_subtitles(movie_file_path, translated_subtitles)
activate_translated_subtitles()
update_requester("Subtitle translation completed.")
else:
update_requester("Subtitle translation failed.")
else:
update_requester("No subtitles found.")
if __name__ == "__main__":
run()
```
[FILENAME]
```python
lib/openai_translation.py
```
[CODE]
```python
import openai
def translate_subtitles(subtitles, target_language):
openai.api_key = "your_openai_api_key"
response = openai.Completion.create(
engine="gpt-3.5-turbo-16k",
prompt=f"Translate the following English subtitles to {target_language}:\n{subtitles}",
max_tokens=1000,
n=1,
stop=None,
temperature=0.5,
)
translated_subtitles = response.choices[0].text.strip()
return translated_subtitles
```
[FILENAME]
```python
lib/subtitle_management.py
```
[CODE]
```python
import pysubs2
def extract_subtitles(movie_file_path):
subs = pysubs2.load(movie_file_path, encoding="utf-8")
subtitles = "\n".join([sub.text for sub in subs])
return subtitles
def insert_subtitles(movie_file_path, translated_subtitles):
subs = pysubs2.load(movie_file_path, encoding="utf-8")
translated_subs = translated_subtitles.split("\n")
for i, sub in enumerate(subs):
sub.text = translated_subs[i]
subs.save(movie_file_path, encoding="utf-8")
return True
def activate_translated_subtitles():
xbmc.executebuiltin("ActivateSubtitle")
return True
```
[FILENAME]
```python
resources/settings.xml
```
[CODE]
```xml
<settings>
<category label="Subtitle Translation Settings">
<setting id="target_language" type="enum" label="Target Language" values="Polish|French|German|Spanish" default="0" />
</category>
</settings>
```
[FILENAME]
```python
resources/language/English/strings.po
```
[CODE]
```ini
msgid ""
msgstr ""
"Project-Id-Version: Kodi Subtitle Translation Plugin\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: YEAR-MO-DA HO:MI+ZONE\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: Your Name <your.email@example.com>\n"
"Language-Team: English\n"
"Language: en_GB\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
msgctxt "#30000"
msgid "Subtitle Translation Settings"
msgstr ""
msgctxt "#30001"
msgid "Target Language"
msgstr ""
```
[FILENAME]
```python
resources/language/Polish/strings.po
```
[CODE]
```ini
msgid ""
msgstr ""
"Project-Id-Version: Kodi Subtitle Translation Plugin\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: YEAR-MO-DA HO:MI+ZONE\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: Your Name <your.email@example.com>\n"
"Language-Team: Polish\n"
"Language: pl_PL\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
msgctxt "#30000"
msgid "Subtitle Translation Settings"
msgstr "Ustawienia tłumaczenia napisów"
msgctxt "#30001"
msgid "Target Language"
msgstr "Język docelowy"
```
[FILENAME]
```python
LICENSE.txt
```
[CODE]
```
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[...]
```
[FILENAME]
```markdown
install.md
```
[CODE]
```markdown
# Kodi Subtitle Translation Plugin Installation Instructions
1. Compress the `kodi-subtitle-translation-plugin` folder into a .zip file.
2. Transfer the .zip file to your Kodi device.
3. Open Kodi and navigate to the Add-ons menu.
4. Select "Install from zip file" and locate the .zip file you transferred.
5. Kodi will install the plugin and notify you when the installation is complete.
After installation, the plugin will automatically translate subtitles whenever a movie is played.
```
These are the complete files for the Kodi Subtitle Translation Plugin. Make sure to replace the placeholder `your_openai_api_key` in `lib/openai_translation.py` with your actual OpenAI API key. Test the plugin on multiple platforms and a variety of films to ensure everything works as expected.
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
```bash
python default.py &
```
Do you want to execute this code?
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python default.py &
If yes, press enter. Otherwise, type "no"`
########################################
but the file structure looks like this:

Something is wrong. | closed | 2023-06-18T23:14:59Z | 2023-06-21T13:10:54Z | https://github.com/AntonOsika/gpt-engineer/issues/174 | [] | BaGRoS | 1 |
ray-project/ray | data-science | 51,442 | [Umbrella] Revisit Ray dashboard API status code | ### Description
Before https://github.com/ray-project/ray/pull/51417, the Ray dashboard APIs only returned 200 for success and 500 for errors; they didn't support status codes such as 404. Take #51417 as an example: it returns 404 when users try to kill a non-existent actor.
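A schematic of the desired mapping (illustrative sketch only, not the dashboard's actual handler code):

```python
def kill_actor_handler(actor_id, live_actors):
    """Status-code selection sketch: 404 for a missing resource,
    200 on success; 500 stays reserved for unexpected server errors."""
    if actor_id not in live_actors:
        return 404, f"actor {actor_id!r} not found"
    live_actors.discard(actor_id)
    return 200, "actor killed"

actors = {"a1", "a2"}
print(kill_actor_handler("a1", actors))  # success -> 200
print(kill_actor_handler("zz", actors))  # unknown id -> 404, not 500
```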
### Use case
_No response_ | open | 2025-03-18T02:46:11Z | 2025-03-18T02:47:03Z | https://github.com/ray-project/ray/issues/51442 | [
"good-first-issue",
"enhancement",
"dashboard",
"core"
] | kevin85421 | 0 |
iperov/DeepFaceLab | machine-learning | 849 | Same Errors on the all training models | My graphics card is opencl_intel_hd_graphics_620.0, on Windows.
It's not a very good graphics card, so I'm trying only 5-second videos for both data_src and data_dst.
Extracting faces was fine, but the training step doesn't work. I tried all 8 models, but nothing worked; they all show the same errors.
There were errors for my 5-second videos, so I tried the initial data_src and data_dst videos from the first download of the program, but I got the same errors.
## Actual behavior
Process Process-1:
Traceback (most recent call last):
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 99, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 112, in process
params = imagelib.gen_warp_params(sample_bgr, sample_process_options.random_flip, rotation_range=sample_process_options.rotation_range, scale_range=sample_process_options.scale_range, tx_range=sample_process_options.tx_range, ty_range=sample_process_options.ty_range, rnd_seed=sample_rnd_seed )
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\imagelib\warp.py", line 8, in gen_warp_params
raise ValueError ('gen_warp_params accepts only square images.')
ValueError: gen_warp_params accepts only square images.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\utils\iter_utils.py", line 49, in process_func
gen_data = next (self.generator_func)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 101, in batch_func
raise Exception ("Exception occured in sample %s. Error: %s" % (sample.filename, traceback.format_exc() ) )
Exception: Exception occured in sample C:\DeepFaceLab_OpenCL\workspace\data_src\aligned\00003_0.jpg. Error: Traceback (most recent call last):
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 99, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 112, in process
params = imagelib.gen_warp_params(sample_bgr, sample_process_options.random_flip, rotation_range=sample_process_options.rotation_range, scale_range=sample_process_options.scale_range, tx_range=sample_process_options.tx_range, ty_range=sample_process_options.ty_range, rnd_seed=sample_rnd_seed )
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\imagelib\warp.py", line 8, in gen_warp_params
raise ValueError ('gen_warp_params accepts only square images.')
ValueError: gen_warp_params accepts only square images.
Process Process-2:
Traceback (most recent call last):
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 99, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 112, in process
params = imagelib.gen_warp_params(sample_bgr, sample_process_options.random_flip, rotation_range=sample_process_options.rotation_range, scale_range=sample_process_options.scale_range, tx_range=sample_process_options.tx_range, ty_range=sample_process_options.ty_range, rnd_seed=sample_rnd_seed )
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\imagelib\warp.py", line 8, in gen_warp_params
raise ValueError ('gen_warp_params accepts only square images.')
ValueError: gen_warp_params accepts only square images.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\utils\iter_utils.py", line 49, in process_func
gen_data = next (self.generator_func)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 101, in batch_func
raise Exception ("Exception occured in sample %s. Error: %s" % (sample.filename, traceback.format_exc() ) )
Exception: Exception occured in sample C:\DeepFaceLab_OpenCL\workspace\data_src\aligned\00018_0.jpg. Error: Traceback (most recent call last):
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 99, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 112, in process
params = imagelib.gen_warp_params(sample_bgr, sample_process_options.random_flip, rotation_range=sample_process_options.rotation_range, scale_range=sample_process_options.scale_range, tx_range=sample_process_options.tx_range, ty_range=sample_process_options.ty_range, rnd_seed=sample_rnd_seed )
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\imagelib\warp.py", line 8, in gen_warp_params
raise ValueError ('gen_warp_params accepts only square images.')
ValueError: gen_warp_params accepts only square images.
Process Process-3:
Traceback (most recent call last):
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 99, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 112, in process
params = imagelib.gen_warp_params(sample_bgr, sample_process_options.random_flip, rotation_range=sample_process_options.rotation_range, scale_range=sample_process_options.scale_range, tx_range=sample_process_options.tx_range, ty_range=sample_process_options.ty_range, rnd_seed=sample_rnd_seed )
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\imagelib\warp.py", line 8, in gen_warp_params
raise ValueError ('gen_warp_params accepts only square images.')
ValueError: gen_warp_params accepts only square images.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\utils\iter_utils.py", line 49, in process_func
gen_data = next (self.generator_func)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 101, in batch_func
raise Exception ("Exception occured in sample %s. Error: %s" % (sample.filename, traceback.format_exc() ) )
Exception: Exception occured in sample C:\DeepFaceLab_OpenCL\workspace\data_src\aligned\00005_0.jpg. Error: Traceback (most recent call last):
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 99, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 112, in process
params = imagelib.gen_warp_params(sample_bgr, sample_process_options.random_flip, rotation_range=sample_process_options.rotation_range, scale_range=sample_process_options.scale_range, tx_range=sample_process_options.tx_range, ty_range=sample_process_options.ty_range, rnd_seed=sample_rnd_seed )
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\imagelib\warp.py", line 8, in gen_warp_params
raise ValueError ('gen_warp_params accepts only square images.')
ValueError: gen_warp_params accepts only square images.
Loading: 0%| | 0/156 [00:00<?, ?it/s]Process Process-4:
Traceback (most recent call last):
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 99, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 112, in process
params = imagelib.gen_warp_params(sample_bgr, sample_process_options.random_flip, rotation_range=sample_process_options.rotation_range, scale_range=sample_process_options.scale_range, tx_range=sample_process_options.tx_range, ty_range=sample_process_options.ty_range, rnd_seed=sample_rnd_seed )
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\imagelib\warp.py", line 8, in gen_warp_params
raise ValueError ('gen_warp_params accepts only square images.')
ValueError: gen_warp_params accepts only square images.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "multiprocessing\process.py", line 258, in _bootstrap
File "multiprocessing\process.py", line 93, in run
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\utils\iter_utils.py", line 49, in process_func
gen_data = next (self.generator_func)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 101, in batch_func
raise Exception ("Exception occured in sample %s. Error: %s" % (sample.filename, traceback.format_exc() ) )
Exception: Exception occured in sample C:\DeepFaceLab_OpenCL\workspace\data_src\aligned\00015_0.jpg. Error: Traceback (most recent call last):
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 99, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 112, in process
params = imagelib.gen_warp_params(sample_bgr, sample_process_options.random_flip, rotation_range=sample_process_options.rotation_range, scale_range=sample_process_options.scale_range, tx_range=sample_process_options.tx_range, ty_range=sample_process_options.ty_range, rnd_seed=sample_rnd_seed )
File "C:\DeepFaceLab_OpenCL\_internal\DeepFaceLab\imagelib\warp.py", line 8, in gen_warp_params
raise ValueError ('gen_warp_params accepts only square images.')
ValueError: gen_warp_params accepts only square images.
Loading: 100%|#######################################################################| 156/156 [00:02<00:00, 57.95it/s]
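The repeated `ValueError` above comes from the guard at `imagelib/warp.py` line 8, which can be reproduced in isolation; any aligned face whose height and width differ will trigger it (sketch mirroring that guard, not the full function):

```python
def gen_warp_params_check(shape):
    """Mirror of the guard in imagelib/warp.py that raises above:
    warp parameters are only generated for square face crops (h == w)."""
    h, w = shape[0], shape[1]
    if w != h:
        raise ValueError('gen_warp_params accepts only square images.')
    return h

print(gen_warp_params_check((256, 256, 3)))  # a square aligned face passes

try:
    gen_warp_params_check((256, 192, 3))     # any non-square crop raises
except ValueError as e:
    print(e)
```

So the failing samples named in each traceback (e.g. `data_src/aligned/00003_0.jpg`) are presumably not square; checking both `aligned` folders for non-square images and re-running extraction should clear this (an assumption based on the guard above).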
## Other relevant information
- **Command line used (if not specified in steps to reproduce)**: main.py ...
- **Operating system and version:** Windows
- **Python version:** 3.7.4 | closed | 2020-08-04T02:53:45Z | 2020-08-04T14:17:15Z | https://github.com/iperov/DeepFaceLab/issues/849 | [] | orangedid | 1 |
sczhou/CodeFormer | pytorch | 324 | ImportError: cannot import name 'brush_stroke_mask' from 'basicsr.data.data_util' | Hello, I am getting this error when I run the command to process an image.
Any solutions would be great!
Thanks in advance. | open | 2023-11-18T21:39:33Z | 2023-11-18T21:39:33Z | https://github.com/sczhou/CodeFormer/issues/324 | [] | MarsEverythingTech | 0 |
zappa/Zappa | flask | 1,262 | Add Python 3.11 support | <!--- Provide a general summary of the issue in the Title above -->
## Context
AWS Lambda now supports Python 3.11. We should add support for that in Zappa.
https://aws.amazon.com/about-aws/whats-new/2023/07/aws-lambda-python-3-11/
## Expected Behavior
<!--- Tell us what should happen -->
Python 3.11 would be supported.
## Actual Behavior
<!--- Tell us what happens instead -->
An error is raised when using Python 3.11:
```
Zappa (and AWS Lambda) support the following versions of Python: ['3.6', '3.7', '3.8', '3.9', '3.10']
```
## Possible Fix
Is it necessary for Zappa to error based on the Python version in the first place? I think Zappa shouldn't need to be updated every time Lambda releases a new version, and instead could not check the Python version at all.
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1.
2.
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used:
* Operating System and Python version:
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.json`:
| closed | 2023-07-28T15:38:18Z | 2023-08-15T09:48:07Z | https://github.com/zappa/Zappa/issues/1262 | [
"next-release-candidate"
] | grantmcconnaughey | 0 |
open-mmlab/mmdetection | pytorch | 11,678 | Many CPU cores are unused | Hello, I have encountered the same problem as https://github.com/open-mmlab/mmdetection/issues/10761.
I am launching the following script:
```
./mmdetection/tools/dist_train.sh ./mmdetection/configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py 4
```
Conda env summary:
- python=3.8.19=h955ad1f_0
- numpy==1.23.5
- opencv-python==4.9.0.80
- pytorch=1.13.1=py3.8_cuda11.7_cudnn8.5.0_0
- pytorch-cuda=11.7=h778d358_5
- mmcv==2.1.0
- mmengine==0.10.4
- mmpretrain==1.2.0
- mmdet: '3.3.0'
Train batch size: 20
Hardware setup:
```
Architecture:           x86_64
CPU op-mode(s):         32-bit, 64-bit
Address sizes:          46 bits physical, 57 bits virtual
Byte Order:             Little Endian
CPU(s):                 48
On-line CPU(s) list:    0-47
Vendor ID:              GenuineIntel
Model name:             Intel(R) Xeon(R) Gold 5317 CPU @ 3.00GHz
CPU family:             6
Model:                  106
Thread(s) per core:     2
Core(s) per socket:     12
Socket(s):              2
Stepping:               6
CPU max MHz:            3600,0000
CPU min MHz:            800,0000
BogoMIPS:               6000.00
Virtualization features:
  Virtualization:       VT-x
Caches (sum of all):
  L1d:                  1,1 MiB (24 instances)
  L1i:                  768 KiB (24 instances)
  L2:                   30 MiB (24 instances)
  L3:                   36 MiB (2 instances)
NUMA:
  NUMA node(s):         2
  NUMA node0 CPU(s):    0-11,24-35
  NUMA node1 CPU(s):    12-23,36-47
Vulnerabilities:
  Gather data sampling: Mitigation; Microcode
  Itlb multihit:        Not affected
  L1tf:                 Not affected
  Mds:                  Not affected
  Meltdown:             Not affected
  Mmio stale data:      Mitigation; Clear CPU buffers; SMT vulnerable
  Retbleed:             Not affected
  Spec rstack overflow: Not affected
  Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:           Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
  Srbds:                Not affected
  Tsx async abort:      Not affected

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

4 GPU NVIDIA RTX 6000 Ada Generation 49140MiB
Driver Version: 535.104.05   CUDA Driver Version: 12.2
```
The fewer workers I use, the faster training goes and the more stable GPU utilization is.
With many workers:

With only 2 workers:

Using NVIDIA Nsight Systems profiler I see that many CPUs are just not utilized.
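(One thing that may be worth ruling out: mmengine pins `OMP_NUM_THREADS=1` by default when launching distributed training — it prints a warning saying so — which on a 48-core machine can leave most cores idle during CPU-side data loading and augmentation. A sketch of overriding that before launch; the thread counts are illustrative, not a recommendation:)

```shell
# Override the default thread pinning before launching (illustrative values).
export OMP_NUM_THREADS=8
export MKL_NUM_THREADS=8
./mmdetection/tools/dist_train.sh ./mmdetection/configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py 4
```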
I have conducted the same experiment on another hardware setup, and there increasing the number of workers also increases the training speed.
Could you give any advice? Shall I update any drivers? | open | 2024-05-03T14:43:06Z | 2024-05-03T14:43:24Z | https://github.com/open-mmlab/mmdetection/issues/11678 | [] | anastasia-spb | 0 |
pallets/flask | flask | 4,956 | ipv6 address not accessible | Hi,
I want to run the server on an IPv6 address. I can see the port is listening on the IPv6 address; however, when called, it does not respond.
1) app.run(host='::', port=5000)
2) python -m flask run -h ::
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (::)
* Running on http://[::1]:5000
* Running on http://[2003:c6:4f1f:cd00:e4b0:a972:fdbb:ac6a]:5000
Output of `netstat`:
tcp6 0 0 :::5000 :::* LISTEN 64170/python
It does not work over IPv6, but works over IPv4.
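One thing that may be worth ruling out (a diagnostic sketch, not a fix): on some systems an IPv6 socket bound to `::` is dual-stack, on others the `IPV6_V6ONLY` option changes what the listener accepts, and a firewall can drop IPv6 traffic entirely. A quick probe of the local default:

```python
import socket

def ipv6_v6only_default():
    """Bind an IPv6 TCP socket to an ephemeral port and report IPV6_V6ONLY.

    Returns True/False for the socket option, or None if IPv6 sockets
    cannot be created (or the option is unavailable) on this host.
    """
    opt = getattr(socket, "IPV6_V6ONLY", None)
    if opt is None:
        return None
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    except OSError:
        return None
    try:
        s.bind(("::", 0))
        return bool(s.getsockopt(socket.IPPROTO_IPV6, opt))
    except OSError:
        return None
    finally:
        s.close()

print(ipv6_v6only_default())
```

If this returns `True`, a `::` listener on this host would not accept IPv4 clients, so which stack responds depends on what the client resolves first.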
Environment:
- Python version:3.8.10
- Flask version:.2.2.2
| closed | 2023-01-30T11:03:53Z | 2023-02-14T00:06:45Z | https://github.com/pallets/flask/issues/4956 | [] | saquibntt | 1 |
ets-labs/python-dependency-injector | flask | 76 | Refactoring of Catalogs using metaclasses | closed | 2015-07-17T10:14:29Z | 2015-07-17T16:50:20Z | https://github.com/ets-labs/python-dependency-injector/issues/76 | [
"enhancement",
"optimization",
"refactoring"
] | rmk135 | 0 | |
opengeos/streamlit-geospatial | streamlit | 37 | Adding Timelapse GIF to the map | Hi There!
I'm working on an app that enables the user to create a timelapse from Sentinel-1 SAR. When I try to add the GIF to the app map, it doesn't show on the existing map; to display it, I have to call map.to_streamlit again, which generates a new map. So I end up with two maps: the app map and another map containing the GIF.
Is there any way to add the GIF to the existing map?
Also, after using the geemap.Map method (to_streamlit), the map instance loses all its interactive functionality. I can't add any ipyleaflet controls or widget controls.
| closed | 2022-03-25T00:22:40Z | 2022-03-25T03:33:01Z | https://github.com/opengeos/streamlit-geospatial/issues/37 | [] | MuhammedM294 | 1 |
sgl-project/sglang | pytorch | 4,045 | logger "Receive: obj=GenerateReqInput()" part with text rather than input_ids. | sglang 0.4.3 logger sample is as follows:
[2025-03-03 17:53:04] INFO: 10.27.1.1:65179 - "POST /v1/chat/completions HTTP/1.1" 200 OK
[2025-03-03 17:53:04] Receive: obj=GenerateReqInput(text=None, input_ids=[151646, 198, 5405, 1614, 25, 5538, 25713, 3795, 16, 25, 18, 17, 65, 198, 5405, 2400, 25, 220, 17, 15, 17, 20, 12, 15, 18, 12, 15, 18, 51, 15, 24, 25, 20, 18, 25, 16, 24, 13, 23, 21, 16, 57, 271, 2610, 525, 264, 10950, 17847, 13, 151644, 100633, 47815, 101562, 107380, 82894, 101437, 100968, 3837, 104719, 101914, 102513, 100371, 11319, 151645], input_embeds=None, image_data=None, sampling_params={'temperature': 0.1, 'max_new_tokens': None, 'min_new_tokens': 0, 'stop': None, 'stop_token_ids': None, 'top_p': 0.9, 'top_k': -1, 'min_p': 0.0, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'repetition_penalty': 1.0, 'regex': None, 'ebnf': None, 'n': 1, 'no_stop_trim': False, 'ignore_eos': False, 'skip_special_tokens': True}, rid='ced00776101841e180bf04c8dbdc4ec2', return_logprob=False, logprob_start_len=-1, top_logprobs_num=0, return_text_in_logprobs=True, stream=True, log_metrics=True, modalities=[], lora_path=None, session_params=None, custom_logit_processor=None)
[2025-03-03 17:53:04 TP0] Prefill batch. #new-seq: 1, #new-token: 63, #cached-token: 1, cache hit rate: 1.41%, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-03-03 17:53:05 TP0] Decode batch. #running-req: 1, #token: 97, token usage: 0.00, gen throughput (token/s): 1.58, #queue-req: 0
[2025-03-03 17:53:06 TP0] Decode batch. #running-req: 1, #token: 137, token usage: 0.00, gen throughput (token/s): 55.91, #queue-req: 0
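(As a stopgap, the `input_ids` in the `Receive:` line above can be decoded back to text with the same tokenizer the server uses — e.g. `AutoTokenizer.from_pretrained(...)` from `transformers`. A minimal sketch; the stub tokenizer below only stands in for a real one so the example is self-contained:)

```python
def ids_to_text(input_ids, tokenizer):
    """Recover the prompt text from a logged input_ids list."""
    return tokenizer.decode(input_ids)

# Stub standing in for a real tokenizer (e.g. transformers.AutoTokenizer).
class StubTokenizer:
    vocab = {1: "Hello", 2: ",", 3: " world"}

    def decode(self, ids):
        return "".join(self.vocab[i] for i in ids)

print(ids_to_text([1, 2, 3], StubTokenizer()))  # → Hello, world
```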
It shows input_ids=[151646, 198, 5405, 1614, 25, 5538, 25713, 3795, 16, 25, 18, 17, 65, 198, 5405, 2400, 25, 220, 17, 15, 17, 20, 12, 15, 18, 12, 15, 18, 51, 15, 24, 25, 20, 18, 25, 16, 24, 13, 23, 21, 16, 57, 271, 2610, 525, 264, 10950, 17847, 13, 151644, 100633, 47815, 101562, 107380, 82894, 101437, 100968, 3837, 104719, 101914, 102513, 100371, 11319, 151645] in the log; could you please add the text as a substitute? | closed | 2025-03-04T01:28:52Z | 2025-03-04T11:47:10Z | https://github.com/sgl-project/sglang/issues/4045 | [] | 9dian | 2 |
PaddlePaddle/ERNIE | nlp | 71 | How do I use LCQMC? | closed | 2019-04-01T02:34:39Z | 2019-04-04T07:23:15Z | https://github.com/PaddlePaddle/ERNIE/issues/71 | [] | jtyoui | 0 | |
huggingface/transformers | pytorch | 36,579 | AutoModel failed with empty tensor error | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.50.0.dev0
- Platform: Linux-4.18.0-553.16.1.el8_10.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.4.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_CPU
- mixed_precision: bf16
- use_cpu: True
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 4
- main_process_ip: 127.0.0.1
- main_process_port: 29500
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- ipex_config: {'ipex': False}
- mpirun_config: {'mpirun_ccl': '1', 'mpirun_hostfile': '/home/jiqingfe/jiqing_hf/HuggingFace/tests/workloads/fine-tune/hostfile'}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@SunMarc @ArthurZucker @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following codes:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("meta-llama/Llama-3.1-8B-Instruct", device_map="auto")
```
Error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jiqingfe/transformers/src/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 271, in _wrapper
return func(*args, **kwargs)
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 4535, in from_pretrained
dispatch_model(model, **device_map_kwargs)
File "/home/jiqingfe/accelerate/src/accelerate/big_modeling.py", line 496, in dispatch_model
model.to(device)
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 3262, in to
return super().to(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1343, in to
return self._apply(convert)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 903, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 930, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1336, in convert
raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
```
### Expected behavior
Expected to get a base model.
"bug"
] | jiqing-feng | 1 |
Lightning-AI/pytorch-lightning | pytorch | 20,235 | Token throughput monitor assumes batch size is fixed but does not raise meaningful error | ### Bug description
If using token throughput monitor with variable batch size the samples counter will be incorrect leading to a possibly non-monotonically increasing sample count. Although the docs do say that batch size should be fixed, there is no explicit check for this, leading to an error message that is hard to understand.
e.g. if batch sizes are 1, 2, 1
then samples passed to throughput in update are 1, 4, 3, and a value error is raised:
ValueError: Expected the value to increase, last: 4, current: 3
Is there any reason not to support variable batch size on throughput monitor?
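A cumulative counter sidesteps the fixed-batch-size assumption entirely. The sketch below is not Lightning's actual implementation; it just reproduces the observed numbers and shows why accumulating per-batch sizes stays monotonic where multiplying by the current batch size does not:

```python
def assumed_fixed(batch_sizes):
    """Sample counts when each update multiplies the batch index by the *current* size."""
    return [(i + 1) * b for i, b in enumerate(batch_sizes)]

def cumulative(batch_sizes):
    """Sample counts when each update adds the current batch size."""
    out, total = [], 0
    for b in batch_sizes:
        total += b
        out.append(total)
    return out

sizes = [1, 2, 1]
print(assumed_fixed(sizes))  # → [1, 4, 3] — not monotonic, triggers the ValueError
print(cumulative(sizes))     # → [1, 3, 4] — monotonic
```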
### What version are you seeing the problem on?
v2.4
| open | 2024-08-29T12:51:28Z | 2024-11-12T23:19:57Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20235 | [
"bug",
"callback: throughput",
"ver: 2.4.x"
] | alex-hh | 0 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 178 | Error when converting the faster_rcnn_res50_fpn.pth model to a .pt file | **System information**
* Have I written custom code:
* OS Platform(Linux Ubuntu 16.04):
* Python version:anaconda3 python3.7.6
* Deep learning framework and version(e.g., Pytorch1.6):
* Use GPU or not:GPU
* CUDA/cuDNN version(if you use GPU):CUDA10.1 Cudnn7.5
* The network you trained(e.g., Resnet34 network):fater_rcnn_resnet50_fpn
**Describe the current behavior**
When I run a script to trace test.pth into test.pt, the following error occurs.
The model conversion code is as follows (the code runs on another computer that I access through Sunlogin remote control, so I can't copy it — screenshots for now, and I'll add the code later; wry smile):

**Error info / logs**

| closed | 2021-03-14T15:44:18Z | 2021-03-17T11:51:12Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/178 | [] | ihg1992 | 2 |
benbusby/whoogle-search | flask | 676 | [BUG] replit wake-up failure | **Describe the bug**
After whoogle hibernation, replit wake-up fails
**To Reproduce**
Steps to reproduce the behavior:
1. Click on 'https://repl.it/github/benbusby/whoogle-search'
2. Wait a while
3. See error
**Deployment Method**
- [x] Replit
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
**Desktop (please complete the following information):**
- OS: [Windows]
- Browser [Edge]
- Version [99]
**Additional context**
```output
~/whoogle-search$ ./run
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 185, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib/python3.8/runpy.py", line 144, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/lib/python3.8/runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "/home/runner/whoogle-search/app/__init__.py", line 1, in <module>
from app.filter import clean_query
File "/home/runner/whoogle-search/app/filter.py", line 9, in <module>
from cryptography.fernet import Fernet
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/cryptography/fernet.py", line 18, in <module>
from cryptography.hazmat.primitives import hashes, padding
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/cryptography/hazmat/primitives/padding.py", line 13, in <module>
from cryptography.hazmat.bindings._padding import lib
ModuleNotFoundError: No module named '_cffi_backend'
``` | closed | 2022-03-11T08:15:41Z | 2022-03-11T09:32:44Z | https://github.com/benbusby/whoogle-search/issues/676 | [
"bug"
] | Lumysia | 5 |
sktime/sktime | scikit-learn | 7,784 | [BUG] EnsembleForecaster( (str, estimator, count) ) is broken | **Describe the bug**
EnsembleForecaster( [(str, estimator, count)] ) is supposed to create an ensemble of count instances of the given estimator. It is failing. (It used to work, I believe.)
**To Reproduce**
```
from sktime.forecasting.compose import EnsembleForecaster
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.datasets import load_airline
y = load_airline()
forecasters = [("trend", PolynomialTrendForecaster(), 2) ]
forecaster = EnsembleForecaster(forecasters=forecasters)
forecaster.fit(y=y)
y_pred = forecaster.predict(fh=[1,2,3])
print(f"y_pred = {y_pred}")
```
**The following works**
If you replace the forecasters= statement with the following, it will work
```
forecasters = [("trend1", PolynomialTrendForecaster()),
("trend2", PolynomialTrendForecaster())
]
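
# Hypothetical workaround (not sktime API): until the bug is fixed,
# expand (name, estimator, count) specs into unique (name, estimator)
# pairs yourself, matching the form above that works.
import copy

def expand_counts(specs):
    out = []
    for spec in specs:
        if len(spec) == 3:
            name, est, count = spec
            out.extend((f"{name}_{i}", copy.deepcopy(est)) for i in range(count))
        else:
            out.append(spec)
    return out

# expand_counts([("trend", PolynomialTrendForecaster(), 2)]) yields
# [("trend_0", ...), ("trend_1", ...)].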
``` | closed | 2025-02-08T08:18:22Z | 2025-02-08T19:45:15Z | https://github.com/sktime/sktime/issues/7784 | [
"bug",
"module:forecasting"
] | ericjb | 5 |
litestar-org/litestar | api | 3,552 | Bug: normal usage of route handler decorators causes deprecation warnings | ### Description
Using any of the route handler decorators get, post, etc. now causes the warning "Semantic HTTP route handler classes are deprecated and will be replaced by functional decorators in Litestar 3.0."
I was told [here](https://github.com/orgs/litestar-org/discussions/3551) that this is not intended behavior and I should create this issue.
### URL to code causing the issue
https://github.com/litestar-org/litestar/blob/84f51c8afc3203cd4914922b2ec3c1e92d5d40ba/litestar/handlers/http_handlers/decorators.py#L49
### MCVE
```python
# Run this file with pytest
from litestar import Litestar, get
@get()
async def root_handler() -> None: ...
app = Litestar(route_handlers=[root_handler])
def test_nothing():
...
```
### Steps to reproduce
```bash
1. Install litestar 2.9.0
2. Put the MCVE in a file
3. Run pytest on that file
4. Deprecation warnings appear in the output.
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
2.9.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-06-07T09:55:32Z | 2025-03-20T15:54:45Z | https://github.com/litestar-org/litestar/issues/3552 | [
"Bug :bug:"
] | bunny-therapist | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,354 | [Bug]: image generation will fail or will succeed with some tags but not with others. I could not find a consistent rule on what tags it does this | ### Checklist
- [X] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Image generation will fail with some tags but succeed with others. I could not find a consistent rule for which tags cause this.
### Steps to reproduce the problem
I have no idea why some tags work and others don't.
### What should have happened?
generate image regardless of tags
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
"traceback": [
[
"E:\\AI\\stable-diffusion-webui\\modules\\call_queue.py, line 74, f",
"res = list(func(*args, **kwargs))"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\call_queue.py, line 53, f",
"res = func(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\call_queue.py, line 37, f",
"res = func(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\processing.py, line 847, process_images",
"res = process_images_inner(p)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\processing.py, line 988, process_images_inner",
"samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\processing.py, line 1346, sample",
"samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_samplers_kdiffusion.py, line 230, sample",
"samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_samplers_common.py, line 272, launch_sampling",
"return func()"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_samplers_kdiffusion.py, line 230, <lambda>",
"samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\utils\\_contextlib.py, line 115, decorate_context",
"return func(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\k-diffusion\\k_diffusion\\sampling.py, line 145, sample_euler_ancestral",
"denoised = model(x, sigmas[i] * s_in, **extra_args)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_samplers_cfg_denoiser.py, line 249, forward",
"x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\k-diffusion\\k_diffusion\\external.py, line 112, forward",
"eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\k-diffusion\\k_diffusion\\external.py, line 138, get_eps",
"return self.inner_model.apply_model(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_models_xl.py, line 43, apply_model",
"return self.model(x, t, cond)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1568, _call_impl",
"result = forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_hijack_utils.py, line 22, <lambda>",
"setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_hijack_utils.py, line 34, __call__",
"return self.__sub_func(self.__orig_func, *args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_hijack_unet.py, line 50, apply_model",
"result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\diffusionmodules\\wrappers.py, line 28, forward",
"return self.diffusion_model("
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_unet.py, line 91, UNetModel_forward",
"return original_forward(self, x, timesteps, context, *args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\diffusionmodules\\openaimodel.py, line 998, forward",
"h = module(h, emb, context)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\diffusionmodules\\openaimodel.py, line 100, forward",
"x = layer(x, context)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 627, forward",
"x = block(x, context=context[i])"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 459, forward",
"return checkpoint("
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\diffusionmodules\\util.py, line 167, checkpoint",
"return func(*inputs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 483, _forward",
"x = self.ff(self.norm3(x)) + x"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 108, forward",
"return self.net(x)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\container.py, line 215, forward",
"input = module(input)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 89, forward",
"return x * F.gelu(gate)"
]
]
}
],
### Console logs
```Shell
"traceback": [
[
"E:\\AI\\stable-diffusion-webui\\modules\\call_queue.py, line 74, f",
"res = list(func(*args, **kwargs))"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\call_queue.py, line 53, f",
"res = func(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\call_queue.py, line 37, f",
"res = func(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\processing.py, line 847, process_images",
"res = process_images_inner(p)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\processing.py, line 988, process_images_inner",
"samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\processing.py, line 1346, sample",
"samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_samplers_kdiffusion.py, line 230, sample",
"samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_samplers_common.py, line 272, launch_sampling",
"return func()"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_samplers_kdiffusion.py, line 230, <lambda>",
"samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\utils\\_contextlib.py, line 115, decorate_context",
"return func(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\k-diffusion\\k_diffusion\\sampling.py, line 145, sample_euler_ancestral",
"denoised = model(x, sigmas[i] * s_in, **extra_args)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_samplers_cfg_denoiser.py, line 249, forward",
"x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\k-diffusion\\k_diffusion\\external.py, line 112, forward",
"eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\k-diffusion\\k_diffusion\\external.py, line 138, get_eps",
"return self.inner_model.apply_model(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_models_xl.py, line 43, apply_model",
"return self.model(x, t, cond)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1568, _call_impl",
"result = forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_hijack_utils.py, line 22, <lambda>",
"setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_hijack_utils.py, line 34, __call__",
"return self.__sub_func(self.__orig_func, *args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_hijack_unet.py, line 50, apply_model",
"result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\diffusionmodules\\wrappers.py, line 28, forward",
"return self.diffusion_model("
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\modules\\sd_unet.py, line 91, UNetModel_forward",
"return original_forward(self, x, timesteps, context, *args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\diffusionmodules\\openaimodel.py, line 998, forward",
"h = module(h, emb, context)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\diffusionmodules\\openaimodel.py, line 100, forward",
"x = layer(x, context)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 627, forward",
"x = block(x, context=context[i])"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 459, forward",
"return checkpoint("
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\diffusionmodules\\util.py, line 167, checkpoint",
"return func(*inputs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 483, _forward",
"x = self.ff(self.norm3(x)) + x"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 108, forward",
"return self.net(x)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\container.py, line 215, forward",
"input = module(input)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1518, _wrapped_call_impl",
"return self._call_impl(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py, line 1527, _call_impl",
"return forward_call(*args, **kwargs)"
],
[
"E:\\AI\\stable-diffusion-webui\\repositories\\generative-models\\sgm\\modules\\attention.py, line 89, forward",
"return x * F.gelu(gate)"
]
]
}
],
```
### Additional information
_No response_ | open | 2024-08-09T00:06:05Z | 2024-08-09T02:51:56Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16354 | [
"bug-report"
] | dilectiogames | 1 |
tiangolo/uvicorn-gunicorn-fastapi-docker | fastapi | 258 | pydantic_settings package not supported | When I include the pydantic_settings package in my requirements.txt with the tiangolo/uvicorn-gunicorn-fastapi:python3.11 image, I am not able to build the project. | closed | 2023-11-29T06:14:43Z | 2024-08-25T04:06:30Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/258 | [] | bhutanict | 0 |
sanic-org/sanic | asyncio | 2,576 | Sentry integration and background tasks | **Describe the bug**
Looking at the Sentry integration, a hub is created on request ('http.lifecycle.request' => `_hub_enter`) and removed at exit ('http.lifecycle.response' => `_hub_exit`).
If I understand this correctly, it means that if an exception occurs outside of the request, no Hub will be defined and the exception won't be caught by Sentry.
This can happen when running a long background_task from a request handler that has already returned a response.
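The lifecycle described above can be sketched with nothing but the stdlib (toy names, not the actual sentry-sdk/Sanic integration code): the hub only exists between the request and response signals, so an exception that fires later sees no hub.

```python
import asyncio

current_hub = None            # stand-in for the Sentry hub
hub_at_failure = []

async def background_job():
    await asyncio.sleep(0.01)         # keeps running after the response
    raise RuntimeError("boom")

async def request_handler():
    global current_hub
    current_hub = "request-hub"       # _hub_enter on 'http.lifecycle.request'
    task = asyncio.create_task(background_job())
    current_hub = None                # _hub_exit on 'http.lifecycle.response'
    return task                       # the "response" was already returned

async def main():
    task = await request_handler()
    try:
        await task
    except RuntimeError:
        hub_at_failure.append(current_hub)

asyncio.run(main())
print(hub_at_failure)                 # [None] -> no hub when the error fires
```

A workaround until this is addressed might be to enter a fresh hub inside the background task itself.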
**Environment (please complete the following information):**
- Sanic Version: 22.9.0
- Sentry SDK: 1.9.9
| closed | 2022-10-18T12:45:41Z | 2022-10-18T14:10:14Z | https://github.com/sanic-org/sanic/issues/2576 | [
"bug"
] | cnicodeme | 5 |
apache/airflow | machine-learning | 47,567 | Issue with bundle lock in dag bundle versioning | ### Body
Hey team :slightly_smiling_face:
Is the GitDagBundle working yet? And if so, what am I missing?
I set up a git_default connection with content read permissions for all my repos and this config:
```
[dag_processor]
dag_bundle_config_list=[{"name": "dags-folder", "classpath": "airflow.dag_processing.bundles.local.LocalDagBundle", "kwargs": {}}, {"name": "tjanif/dynamic-task-mapping-tutorial", "classpath": "airflow.dag_processing.bundles.git.GitDagBundle", "kwargs": {"subdir": "dags", "tracking_ref": "main", "refresh_interval": 30}}]
```
and my dag-processor crashes with:
```
dag-processor   | FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/hm/tgdl9dqs2j18vhrsk7s6c1q80000gn/T/airflow/dag_bundles/_locks/tjanif/dynamic-task-mapping-tutorial.lock'
```
directly after starting it. (Full error in :thread: )
```
dag-processor | [2025-03-10T12:04:34.805+0100] {manager.py:246} INFO - Checking for new files in bundle dags-folder every 300 seconds
dag-processor | [2025-03-10T12:04:34.805+0100] {manager.py:246} INFO - Checking for new files in bundle tjanif/dynamic-task-mapping-tutorial every 30 seconds
dag-processor | [2025-03-10T12:04:34.806+0100] {manager.py:504} INFO - Refreshing bundle dags-folder
dag-processor | [2025-03-10T12:04:34.807+0100] {manager.py:554} INFO - Searching for files in dags-folder at /Users/tamara.fingerlin/airflow/dags_manning
dag-processor | [2025-03-10T12:04:34.815+0100] {manager.py:556} INFO - Found 6 files for bundle dags-folder
dag-processor | [2025-03-10T12:04:34.823+0100] {dag_processor_job_runner.py:63} ERROR - Exception when executing DagProcessorJob
dag-processor | Traceback (most recent call last):
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/jobs/dag_processor_job_runner.py", line 61, in _execute
dag-processor | self.processor.run()
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/manager.py", line 252, in run
dag-processor | return self._run_parsing_loop()
dag-processor | ^^^^^^^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/manager.py", line 323, in _run_parsing_loop
dag-processor | self._refresh_dag_bundles(known_files=known_files)
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/manager.py", line 474, in _refresh_dag_bundles
dag-processor | bundle.initialize()
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/bundles/git.py", line 205, in initialize
dag-processor | self._initialize()
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/bundles/git.py", line 186, in _initialize
dag-processor | with self.lock():
dag-processor | ^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/.pyenv/versions/3.12.8/lib/python3.12/contextlib.py", line 137, in __enter__
dag-processor | return next(self.gen)
dag-processor | ^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/bundles/base.py", line 338, in lock
dag-processor | with open(lock_file_path, "w") as lock_file:
dag-processor | ^^^^^^^^^^^^^^^^^^^^^^^^^
dag-processor | FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/hm/tgdl9dqs2j18vhrsk7s6c1q80000gn/T/airflow/dag_bundles/_locks/tjanif/dynamic-task-mapping-tutorial.lock'
dag-processor | Traceback (most recent call last):
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/bin/airflow", line 12, in <module>
dag-processor | sys.exit(main())
dag-processor | ^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/__main__.py", line 58, in main
dag-processor | args.func(args)
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/cli/cli_config.py", line 49, in command
dag-processor | return func(*args, **kwargs)
dag-processor | ^^^^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/utils/cli.py", line 112, in wrapper
dag-processor | return f(*args, **kwargs)
dag-processor | ^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/utils/providers_configuration_loader.py", line 55, in wrapped_function
dag-processor | return func(*args, **kwargs)
dag-processor | ^^^^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/cli/commands/local_commands/dag_processor_command.py", line 54, in dag_processor
dag-processor | run_command_with_daemon_option(
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/cli/commands/local_commands/daemon_utils.py", line 86, in run_command_with_daemon_option
dag-processor | callback()
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/cli/commands/local_commands/dag_processor_command.py", line 57, in <lambda>
dag-processor | callback=lambda: run_job(job=job_runner.job, execute_callable=job_runner._execute),
dag-processor | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/utils/session.py", line 101, in wrapper
dag-processor | return func(*args, session=session, **kwargs)
dag-processor | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/jobs/job.py", line 342, in run_job
dag-processor | return execute_job(job, execute_callable=execute_callable)
dag-processor | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/jobs/job.py", line 371, in execute_job
dag-processor | ret = execute_callable()
dag-processor | ^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/jobs/dag_processor_job_runner.py", line 61, in _execute
dag-processor | self.processor.run()
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/manager.py", line 252, in run
dag-processor | return self._run_parsing_loop()
dag-processor | ^^^^^^^^^^^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/manager.py", line 323, in _run_parsing_loop
dag-processor | self._refresh_dag_bundles(known_files=known_files)
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/manager.py", line 474, in _refresh_dag_bundles
dag-processor | bundle.initialize()
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/bundles/git.py", line 205, in initialize
dag-processor | self._initialize()
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/bundles/git.py", line 186, in _initialize
dag-processor | with self.lock():
dag-processor | ^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/.pyenv/versions/3.12.8/lib/python3.12/contextlib.py", line 137, in __enter__
dag-processor | return next(self.gen)
dag-processor | ^^^^^^^^^^^^^^
dag-processor | File "/Users/tamara.fingerlin/0_PARA/Projects/Airflow3.0/a wheel/beta2/lib/python3.12/site-packages/airflow/dag_processing/bundles/base.py", line 338, in lock
dag-processor | with open(lock_file_path, "w") as lock_file:
dag-processor | ^^^^^^^^^^^^^^^^^^^^^^^^^
dag-processor | FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/hm/tgdl9dqs2j18vhrsk7s6c1q80000gn/T/airflow/dag_bundles/_locks/tjanif/dynamic-task-mapping-tutorial.lock'
```
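A guess at the root cause (purely my speculation from the path in the traceback): the bundle name `tjanif/dynamic-task-mapping-tutorial` contains a `/`, so the lock file path gains a `tjanif/` subdirectory under `_locks/` that nothing ever creates. A stdlib-only reproduction sketch:

```python
import tempfile
from pathlib import Path

locks_root = Path(tempfile.mkdtemp()) / "_locks"
locks_root.mkdir(parents=True)

bundle_name = "tjanif/dynamic-task-mapping-tutorial"      # note the slash
lock_file_path = locks_root / f"{bundle_name}.lock"

try:
    open(lock_file_path, "w").close()                     # what base.py's lock() does
except FileNotFoundError as exc:
    print("fails like the issue, errno:", exc.errno)      # errno 2

# A guard like this (hypothetical fix) makes the open succeed:
lock_file_path.parent.mkdir(parents=True, exist_ok=True)
open(lock_file_path, "w").close()
print(lock_file_path.exists())                            # True
```

If that's right, either sanitizing the bundle name or creating the parent directory before opening the lock file would fix it.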
@ephraimbuddy @jedcunningham
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | closed | 2025-03-10T14:03:47Z | 2025-03-12T03:20:10Z | https://github.com/apache/airflow/issues/47567 | [
"kind:bug",
"kind:meta",
"area:core",
"area:dynamic-task-mapping",
"affected_version:3.0.0beta"
] | dstandish | 3 |
sebp/scikit-survival | scikit-learn | 29 | Kaplan Meier output consistent regardless of time | Very useful article, thanks guys. I'm having problems getting the Kaplan-Meier estimator to give a meaningful output. I've saved my results in a record array as shown below, with the event of a cancellation happening as a boolean, and the number of days before cancellation (or the number of days so far, if there has not yet been a cancellation) as a float. I don't understand why the Kaplan-Meier estimator always predicts the cancellations to be at a steady rate of one regardless of the time, and I don't have the code for the Kaplan-Meier estimator to check. Has anyone else had this problem, and how can it be solved?

| closed | 2018-03-26T15:54:07Z | 2018-07-02T20:53:51Z | https://github.com/sebp/scikit-survival/issues/29 | [
"awaiting response"
] | thatemmagirl | 6 |
sinaptik-ai/pandas-ai | pandas | 1,471 | The code generated by the agent modifies the original data (dfs) | ### System Info
pandasai 2.4.0
### 🐛 Describe the bug
This is the data that is input to the agent
<img width="806" alt="image" src="https://github.com/user-attachments/assets/e720db07-8e52-4e4b-a2c9-33f28859126d" />
then the agent run this code:
```
# Convert the 'date' column to datetime format and set it as the index
df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
# Sort the DataFrame by date
df.sort_index(inplace=True)
# Drop any rows with missing values
df = df.dropna()
# Extract the 'close' prices for analysis
close_prices = df['close']
```
The original data is modified.
<img width="714" alt="image" src="https://github.com/user-attachments/assets/4795b306-8756-437f-bccc-f531cf07fa6d" />
How to avoid this problem
| closed | 2024-12-12T08:18:11Z | 2024-12-16T09:37:03Z | https://github.com/sinaptik-ai/pandas-ai/issues/1471 | [
"bug"
] | XJTU-JP | 3 |
HumanSignal/labelImg | deep-learning | 628 | QAction::eventFilter: Ambiguous shortcut overload: Ctrl+D | QAction::eventFilter: Ambiguous shortcut overload: Ctrl+D
Cannot duplicate the rect box using Ctrl+D?
yunjey/pytorch-tutorial | deep-learning | 27 | Tutorial 09: Issues converting encoder and decoder models to CPUs pytorch 0.1.11 | Thanks for a fantastic tutorial. Really clear and easy to follow!
I'd like to run the pretrained models on a CPU, and am trying to convert the models as follows:
```python
encoder.load_state_dict(torch.load(args.encoder_path, map_location=lambda storage, loc: storage))
decoder.load_state_dict(torch.load(args.decoder_path, map_location=lambda storage, loc: storage))
```
I also tried:
```python
encoder = torch.load(args.encoder_path, map_location=lambda storage, loc: storage)
decoder = torch.load(args.decoder_path, map_location=lambda storage, loc: storage)
```
as advised by the pytorch developers (https://discuss.pytorch.org/t/on-a-cpu-device-how-to-load-checkpoint-saved-on-gpu-device/349/8)
Based on the discussions, it seems that it might be a PyTorch versioning issue. Could you let me know what version of PyTorch you used to train and save the models? Or, if it's something else, I would really appreciate any guidance in getting it fixed.
Many thanks! | closed | 2017-04-27T20:48:46Z | 2017-04-30T18:09:00Z | https://github.com/yunjey/pytorch-tutorial/issues/27 | [] | lgraesser | 2 |
huggingface/transformers | python | 36,660 | [FEAT] [non-CUDA]: Support alternative implementation for `constraints.positive_definite.check` | ### Feature request
Could there be an alternative implementation for
```
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2470: in _init_added_embeddings_weights_with_mean
is_covariance_psd = constraints.positive_definite.check(epsilon * covariance).all()
```
Calling `torch.linalg.cholesky` on a GPU tensor requires a PyTorch build with MAGMA, which only exists for CUDA in PyTorch.
### Motivation
To support vision-language embedding models (the LLaVA model) on vLLM for ROCm.
I encountered this issue while trying to enable vision-language embedding model support on vLLM for ROCm.
```
tests/models/embedding/vision_language/test_llava_next.py:134:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/models/embedding/vision_language/test_llava_next.py:63: in _run_test
hf_model.model.resize_token_embeddings(
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2109: in resize_token_embeddings
model_embeds = self._resize_token_embeddings(new_num_tokens, pad_to_multiple_of, mean_resizing)
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2134: in _resize_token_embeddings
new_embeddings = self._get_resized_embeddings(
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2291: in _get_resized_embeddings
self._init_added_embeddings_weights_with_mean(
/usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2470: in _init_added_embeddings_weights_with_mean
is_covariance_psd = constraints.positive_definite.check(epsilon * covariance).all()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PositiveDefinite()
value = tensor([[ 8.4661e-14, -9.3146e-17, 5.4274e-16, ..., -1.2541e-16,
8.1008e-16, 2.6355e-16],
[-9.314... [ 2.6355e-16, -5.6042e-16, 5.1984e-16, ..., -1.9993e-16,
-2.7124e-16, 8.5429e-14]], device='cuda:0')
def check(self, value):
sym_check = super().check(value)
if not sym_check.all():
return sym_check
> return torch.linalg.cholesky_ex(value).info.eq(0)
E RuntimeError: Calling torch.linalg.cholesky on a CUDA tensor requires compiling PyTorch with MAGMA. Please use PyTorch built with MAGMA support.
```
Calling `torch.linalg.cholesky` on a GPU tensor requires a PyTorch build with MAGMA, which only exists for CUDA in PyTorch.
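For context, `positive_definite.check` essentially asks whether a Cholesky factorization succeeds. One possible workaround (my assumption, not a vetted fix) would be to run the check on a CPU copy of the tensor, e.g. `constraints.positive_definite.check(epsilon * covariance.cpu())`. A dependency-free sketch of what the check computes:

```python
def cholesky_succeeds(a):
    """Return True iff the small symmetric matrix `a` admits a Cholesky
    factorization, i.e. is positive definite."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0:            # factorization fails: not positive definite
                    return False
                l[i][j] = d ** 0.5
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return True

print(cholesky_succeeds([[4.0, 2.0], [2.0, 3.0]]))   # True  (positive definite)
print(cholesky_succeeds([[1.0, 2.0], [2.0, 1.0]]))   # False (indefinite)
```

An eigenvalue-based check (`torch.linalg.eigvalsh(...) > 0`) would be another MAGMA-free alternative, though I haven't verified its availability on ROCm.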
### Your contribution
By helping to test the fix on AMD GPUs and providing feedback.
"Feature request"
] | tjtanaa | 10 |
ets-labs/python-dependency-injector | asyncio | 556 | [delete] Core container as singletone for entire app | [App Package (Container) Diagramm](https://online.visual-paradigm.com/community/share/example-app-ub4sde1um)
The `CoreContainer` container contains:
```python
class CoreContainer( containers.DeclarativeContainer ):
arguments = providers.Resource( parse_arguments )
config = providers.Resource( parse_config, arguments=arguments )
_logging = providers.Resource( init_logging,
arguments=arguments,
config=config )
logger_factory = providers.Factory( logging.getLogger ).provider
```
The main question is about `CoreContainer`. Is there a way to make it
shared across the entire application? The only way I found is, when
importing top-level containers (`CleanContainer`, `DatabaseInitContainer`,
`ParseContainer`, ...), to specify the `CoreContainer` container as a
dependency and pass it into the child containers.
If I do this:
```python
class DatabaseContainer( containers.DeclarativeContainer ):
core: CoreContainer = providers.Container( CoreContainer )
...
class DownloadContainer( containers.DeclarativeContainer ):
core: CoreContainer = providers.Container( CoreContainer )
...
class ParseContainer( containers.DeclarativeContainer ):
core: CoreContainer = providers.Container( CoreContainer )
```
then all elements in `CoreContainer` are initialized multiple times.
It would be convenient if the container itself behaved like a
`Singleton` for the entire application.
| closed | 2022-02-01T16:14:43Z | 2022-02-02T11:27:48Z | https://github.com/ets-labs/python-dependency-injector/issues/556 | [] | VasyaGaikin | 0 |
aiogram/aiogram | asyncio | 925 | Translate docs | It would be nice to have the docs translated into different languages to make it easy to dive into aiogram and bot development. This is possible with Sphinx's and ReadTheDocs' native i18n support, and these instruments are already used in this project.
Since the 2.x branch of development will soon be finished and only 3.x will be supported (after the public release), we don't need to translate the 2.x docs; only the new docs should be translated.
## Goals
- [ ] Configure Sphinx and ReadTheDocs project to use I18n
- [ ] Describe how to translate the docs in contribution guide
- [ ] Translate texts to different languages:
- [ ] Ukrainian (by @JrooTJunior)
- [ ] Portuguese (by *maybe you*)
- [ ] ... (you can ask to add new language to this list if you can make translation to this language)
## Related docs
- Sphinx I18n: https://www.sphinx-doc.org/en/master/usage/advanced/intl.html
- ReadTheDocs Internationalization: https://docs.readthedocs.io/en/stable/localization.html
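For the Sphinx/ReadTheDocs configuration goal, the standard i18n setup is only a couple of `conf.py` lines (a sketch based on the Sphinx intl docs linked above):

```python
# conf.py -- standard Sphinx gettext/i18n settings
locale_dirs = ["locale/"]   # per-language .po/.mo catalogs live here
gettext_compact = False     # one .pot per source document, easier to review
```

Catalogs are then generated with `make gettext` and updated per language with `sphinx-intl update -p _build/gettext -l uk` (add `-l pt` and others as translators join).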
| closed | 2022-06-18T01:07:36Z | 2022-10-16T02:45:10Z | https://github.com/aiogram/aiogram/issues/925 | [
"help wanted",
"docs",
"3.x",
"docs-i18n"
] | JrooTJunior | 0 |
deezer/spleeter | deep-learning | 750 | Spleeter(C++)_dynamic_library: Solutions and steps to implement the executable file of the algorithm |
Spleeter(C++)_dynamic_library: Solutions and steps to implement the executable file of the algorithm
https://github.com/KangChou/spleeter_dynamic_library
| open | 2022-04-13T07:38:27Z | 2022-04-13T07:38:50Z | https://github.com/deezer/spleeter/issues/750 | [
"question"
] | KangChou | 0 |
moshi4/pyCirclize | data-visualization | 52 | Enable wrapping/bending the text around a circle | It would be nice if we could bend the text.
reference: [r - Wrapping / bending text around a circle in plot - Stack Overflow](https://stackoverflow.com/questions/27638826/wrapping-bending-text-around-a-circle-in-plot)
This would be especially useful when the text is long.
FYI, circlize has `facing="bending"` flag.
```r
circos.text(x = 0.5, y = 0.5, labels = as.character(deg), facing = "bending")
``` | open | 2024-01-24T22:15:40Z | 2024-05-02T06:10:07Z | https://github.com/moshi4/pyCirclize/issues/52 | [
"enhancement"
] | grepinsight | 1 |
keras-rl/keras-rl | tensorflow | 51 | Unable to learn simple catch game | I've made a custom environment where a fruit falls and you control a paddle to catch it: https://github.com/hmate9/gym-catch/blob/master/gym_catch/envs/catch_env.py
I've tried to use keras-rl to reimplement this: https://gist.github.com/EderSantana/c7222daa328f0e885093
It's the same game, catching a fruit, and their implementation finds a good model in a couple of minutes that catches nearly 100% of the time.
Here is the code for learning with keras-rl that I wrote: https://gist.github.com/hmate9/49758ee1117ae55616f45d72186834a5
The keras-rl code does not converge and does not even seem to do better than random, even after running for hours.
Does anyone know why this is? Did I write the environment wrong or am I using keras-rl wrong?
Your answer is greatly appreciated, I have not been able to solve this for a day now. | closed | 2016-12-04T20:25:46Z | 2016-12-05T10:24:12Z | https://github.com/keras-rl/keras-rl/issues/51 | [] | hmate9 | 13 |
huggingface/datasets | nlp | 6,827 | Loading a remote dataset fails in the last release (v2.19.0) | While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`
I am loading the dataset like so, nothing out of the ordinary.
This dataset needs a token to access it.
```
token="hf_myhftoken-sdhbdsjgkhbd"
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token=token)
```
I get the following error

Now you can see that the URL that it is trying to reach has the JSON object of the dataset split appended to the base URL. I think this may be due to a newly introduced issue.
I did not have this issue with the previous version of `datasets`. Everything was fine for me yesterday, and after the release 12 hours ago this seems to have broken. Also, the dataset in question runs custom code, and I checked that there have been no commits to the dataset on Hugging Face in 6 months.
### Steps to reproduce the bug
Since this happened with one particular dataset for me, I am listing steps to use that dataset.
1. Open https://huggingface.co/datasets/speechcolab/gigaspeech and fill the form to get access.
2. Create a token on your huggingface account with read access.
3. Run the following line, substituting `<your_token_here>` with your token.
```
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token="<your_token_here>")
```
### Expected behavior
Be able to load the dataset in question.
### Environment info
datasets == 2.19.0
python == 3.10
kernel == Linux 6.1.58+ | open | 2024-04-19T21:11:58Z | 2024-04-19T21:13:42Z | https://github.com/huggingface/datasets/issues/6827 | [] | zrthxn | 0 |
ipython/ipython | jupyter | 14,303 | Unexpected exception formatting exception in Python 3.13.0a3 | I appreciate that Python 3.13 is still in alpha, but some incompatibility seems to have been introduced with the way that exception data is produced that causes `ipython`'s pretty execution formatting to fail, cause the raising of a separate "Unexpected exception formatting exception".
## Steps to reproduce
1) Build Python 3.13.0a3 from source and install it somewhere.
2) Create a venv using the new Python 3.13 interpreter.
3) Build the latest master branch of [`parso`](https://github.com/davidhalter/parso) from source and install it into the venv.
4) Install `ipython` using `pip`.
5) Run `ipython` in a way that triggers an exception (such as `ipython -c 'print(1/0)'`)
## Expected result
`ipython` should print a nicely formatted exception. For instance, on Python 3.12 the result is:
```
(venv_3.12) nicko@testvm ~ % ipython -c 'print(1/0)'
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
Cell In[1], line 1
----> 1 print(1/0)
ZeroDivisionError: division by zero
```
## Actual result
It appears that `ipython`, or possibly the `executing` library, is choking on the stack data and generates an `Unexpected exception formatting exception` message:
```
(venv_3.13) nicko@testvm ~ % ipython -c 'print(1/0)'
Unexpected exception formatting exception. Falling back to standard exception
Traceback (most recent call last):
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<ipython-input-1-2fc232d1511a>", line 1, in <module>
print(1/0)
~^~
ZeroDivisionError: division by zero
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/IPython/core/interactiveshell.py", line 2144, in showtraceback
stb = self.InteractiveTB.structured_traceback(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
etype, value, tb, tb_offset=tb_offset
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/IPython/core/ultratb.py", line 1435, in structured_traceback
return FormattedTB.structured_traceback(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self, etype, evalue, etb, tb_offset, number_of_lines_of_context
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/IPython/core/ultratb.py", line 1326, in structured_traceback
return VerboseTB.structured_traceback(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self, etype, value, tb, tb_offset, number_of_lines_of_context
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/IPython/core/ultratb.py", line 1173, in structured_traceback
formatted_exception = self.format_exception_as_a_whole(etype, evalue, etb, number_of_lines_of_context,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tb_offset)
^^^^^^^^^^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/IPython/core/ultratb.py", line 1063, in format_exception_as_a_whole
self.get_records(etb, number_of_lines_of_context, tb_offset) if etb else []
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/IPython/core/ultratb.py", line 1160, in get_records
res = list(stack_data.FrameInfo.stack_data(etb, options=options))[tb_offset:]
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/stack_data/core.py", line 597, in stack_data
yield from collapse_repeated(
...<4 lines>...
)
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/stack_data/utils.py", line 83, in collapse_repeated
yield from map(mapper, original_group)
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/stack_data/core.py", line 587, in mapper
return cls(f, options)
~~~^^^^^^^^^^^^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/stack_data/core.py", line 551, in __init__
self.executing = Source.executing(frame_or_tb)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/executing/executing.py", line 283, in executing
assert_(new_stmts <= stmts)
~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/Users/nvansomeren/python_tests/venv_3.13/lib/python3.13/site-packages/executing/executing.py", line 80, in assert_
raise AssertionError(str(message))
AssertionError
```
| open | 2024-01-24T04:52:46Z | 2024-02-03T22:33:33Z | https://github.com/ipython/ipython/issues/14303 | [] | nickovs | 3 |
kizniche/Mycodo | automation | 423 | Sensor support request: Thermocouple | I may have just missed it but I couldn't find any reference to Mycodo supporting a thermocouple input.
If it could support one of the common/cheap K-type interface chips/boards like the MAX31855, then it would be useful for higher-temperature applications such as heat-treat kilns, BBQs, smokers, coffee roasters, reflow ovens, etc.
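For context on how simple these chips are to support: the MAX31855 just streams a 32-bit frame over SPI, and decoding it takes a few lines (field layout per the MAX31855 datasheet; the helper name is mine):

```python
def decode_max31855(raw: int):
    """Decode one 32-bit MAX31855 frame into (thermocouple_C, fault_bits)."""
    fault = raw & 0x7                  # D2..D0: short-to-VCC/GND, open circuit
    t = (raw >> 18) & 0x3FFF           # D31..D18: 14-bit signed, 0.25 C/LSB
    if t & 0x2000:                     # sign-extend negative temperatures
        t -= 0x4000
    return t * 0.25, fault

print(decode_max31855(100 << 18))      # (25.0, 0): 25 C, no faults
print(decode_max31855(0x3FFC << 18))   # (-1.0, 0): below freezing
```

The internal (cold-junction) temperature sits in D15..D4 at 0.0625 °C/LSB, if it's ever needed.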
Most of the breakout boards I have seen use SPI and there is a python library available from adafruit:
https://github.com/adafruit/Adafruit_Python_MAX31855 | closed | 2018-03-09T16:08:32Z | 2018-04-06T00:55:01Z | https://github.com/kizniche/Mycodo/issues/423 | [] | samsixtysix | 5 |
plotly/dash | plotly | 2,679 | test_devtools_error_handling.py fails on Python 3.11 | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.14.1 ~/github.com/plotly/dash
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash-testing-stub 0.0.2
```
**Describe the bug**
I'm following the instructions on https://github.com/plotly/dash/blob/dev/CONTRIBUTING.md and running `pytest tests/integration/devtools/test_devtools_error_handling.py` on Python 3.11, and the test fails due to a missing traceback:
```
> assert "in update_output" in error0
E assert 'in update_output' in '<!doctype html>\n<html lang=en>\n <head>\n <title>Exception: Special 2 clicks exception\n // Werkzeug Debugger</title>\n <link rel="stylesheet" href="?__debugger__=yes&cmd=resource&f=style.css">\n <link rel="shortcut icon"\n href="?__debugger__=yes&cmd=resource&f=console.png">\n <script src="?__debugger__=yes&cmd=resource&f=debugger.js"></script>\n <script>\n var CONSOLE_MODE = false,\n EVALEX = true,\n EVALEX_TRUSTED = true,\n SECRET = "Qcq5G0KKmhgH01zx3BBl";\n </script>\n </head>\n <body style="background-color: #fff">\n <div class="debugger">\n<h1>Exception</h1>\n<div class="detail">\n <p class="errormsg">Exception: Special 2 clicks exception\n</p>\n</div>\n<h2 class="traceback">Traceback <em>(most recent call last)</em></h2>\n<div class="traceback noframe-traceback">\n <h3></h3>\n <ul></ul>\n <blockquote>Exception: Special 2 clicks exception\n</blockquote>\n</div>\n\n<div class="plain">\n <p>\n This is the Copy/Paste friendly version of the traceback.\n </p>\n <textarea cols="50" rows="10" name="code" readonly>Exception: Special 2 clicks exception\n</textarea>\n</div>\n<div class="explanation">\n The debugger caught an exception in your WSGI application. You can now\n look at the traceback which led to the error. 
<span class="nojavascript">\n If you enable JavaScript you can also use additional features such as code\n execution (if the evalex feature is enabled), automatic pasting of the\n exceptions and much more.</span>\n</div>\n <div class="footer">\n Brought to you by <strong class="arthur">DON\'T PANIC</strong>, your\n friendly Werkzeug powered traceback interpreter.\n </div>\n </div>\n\n <div class="pin-prompt">\n <div class="inner">\n <h3>Console Locked</h3>\n <p>\n The console is locked and needs to be unlocked by entering the PIN.\n You can find the PIN printed out on the standard output of your\n shell that runs the server.\n <form>\n <p>PIN:\n <input type=text name=pin size=14>\n <input type=submit name=btn value="Confirm Pin">\n </form>\n </div>\n </div>\n </body>\n</html>\n\n<!--\n\nException: Special 2 clicks exception\n\n\n-->\n'
```
I tested it on Python 3.10 and it worked fine.
**Expected behavior**
The test should pass on Python 3.11 too.
It appears dash works mostly fine on Python 3.11, but support isn't officially declared, and this test fails.
Regarding the failure, maybe this comment is relevant? https://github.com/pallets/werkzeug/pull/2640#issuecomment-1503454090:
> It's actually not sufficient right now for Python 3.11's new traceback features. | open | 2023-10-30T19:09:12Z | 2024-08-13T19:42:00Z | https://github.com/plotly/dash/issues/2679 | [
"bug",
"P3"
] | yilei | 0 |
autogluon/autogluon | computer-vision | 3,853 | Feature Request: Make Deletion of lightning_logs Directory Optional in TimeSeries Models | ## Description
The current implementation of AutoGluon's time series models, specifically those using the GluonTS torch backend, automatically deletes the `lightning_logs` directory after each training run. This directory contains logs that are essential for users who utilize TensorBoard to monitor and compare different training sessions. The automatic deletion of these logs makes it difficult to use TensorBoard effectively, as it relies on historical log data for comparison.
## Proposal
I propose a feature for the `timeseries` module where the deletion of the `lightning_logs` directory is made optional. This could be implemented by adding a parameter to the model training functions and the `fit()` method, allowing users to choose whether to preserve the logs. By default, this parameter could be `True` to maintain the current behavior, but when set to `False`, it would keep the logs intact for further analysis with TensorBoard.
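As a rough sketch, the cleanup step could be gated behind such a flag (all names here are hypothetical, not the actual AutoGluon API):

```python
# Hypothetical sketch of the proposed opt-out; parameter and function
# names are illustrative, not part of the real AutoGluon API.
import shutil
from pathlib import Path


def clean_lightning_logs(lightning_logs_dir: Path, keep_lightning_logs: bool = False) -> bool:
    """Remove the lightning_logs directory unless the user opted to keep it.

    Returns True if the directory was removed.
    """
    if keep_lightning_logs:
        return False  # preserve logs so TensorBoard can compare runs
    if lightning_logs_dir.exists() and lightning_logs_dir.is_dir():
        shutil.rmtree(lightning_logs_dir)
        return True
    return False
```

`fit()` could then simply forward a `keep_lightning_logs`-style argument down to this helper.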
## Code
This code is responsible for the deletion of the logs:

```python
if lightning_logs_dir.exists() and lightning_logs_dir.is_dir():
    logger.debug(f"Removing lightning_logs directory {lightning_logs_dir}")
    shutil.rmtree(lightning_logs_dir)
```
| closed | 2024-01-09T11:37:14Z | 2024-04-05T18:45:59Z | https://github.com/autogluon/autogluon/issues/3853 | [
"enhancement",
"module: timeseries"
] | obwohl | 1 |
lepture/authlib | flask | 305 | ResourceProtector decorator doesn't work with class-based Django views | **Describe the bug**
When using the ResourceProtector decorator (as documented [here](https://docs.authlib.org/en/latest/django/2/resource-server.html)) on a Django REST Framework **class-based view**'s method:
```python
class MyView(APIView):
@require_oauth("order")
def post(self, request, *args, **kwargs):
return super().post(request, *args, **kwargs)
```
I get the following error:
> 'MyView' object has no attribute 'get_raw_uri'
This is because in this case, the first parameter in the [decorator's `decorated` function](https://github.com/lepture/authlib/blob/ffeeaa9fd7b5bc4ea7cae9fcf0c2ad9d7f5cf22a/authlib/integrations/django_oauth2/resource_protector.py#L36), will be the **view object**, rather than the request.
Adding a `view` parameter as the first parameter in the function fixes this.
```python
def __call__(self, scopes=None, optional=False):
def wrapper(f):
@functools.wraps(f)
def decorated(view, request, *args, **kwargs): # <= Change here
try:
token = self.acquire_token(request, scopes)
request.oauth_token = token
```
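As a side note, the same dispatch problem can be solved generically by checking whether the first positional argument looks like a request (illustrative sketch, not actual Authlib code):

```python
# Illustrative sketch only (not Authlib source): support both
# function-based views (request first) and class-based view methods
# (self first, request second) with a single decorator.
import functools


def request_aware(f):
    @functools.wraps(f)
    def decorated(*args, **kwargs):
        if hasattr(args[0], "method"):  # duck-types as an HttpRequest
            request, rest = args[0], args[1:]
        else:  # class-based view: args[0] is the view instance
            request, rest = args[1], args[2:]
        return f(request, *rest, **kwargs)

    return decorated
```

With a dispatch like this, `acquire_token` would always receive the real request object regardless of view style.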
**Error Stacks**
```
File "/.venv/lib/python3.6/site-packages/rest_framework/views.py", line 502, in dispatch
response = handler(request, *args, **kwargs)
File "/.venv/lib/python3.6/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 39, in decorated
token = self.acquire_token(request, scopes)
File "/.venv/lib/python3.6/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 25, in acquire_token
url = request.get_raw_uri()
AttributeError: 'MyView' object has no attribute 'get_raw_uri'
```
**To Reproduce**
See code example in the bug description above.
**Expected behavior**
The decorator to work the same way as it does for function-based views.
**Environment:**
- OS: OSX
- Python Version: 3.6.9
- Authlib Version: 1.0.0.dev0
**Additional context**
I'm available to create a PR to fix this if you tell me the approach you want to take here. | closed | 2020-12-20T16:08:15Z | 2022-11-17T09:36:30Z | https://github.com/lepture/authlib/issues/305 | [
"bug"
] | thatguysimon | 3 |
aio-libs/aiomysql | sqlalchemy | 463 | pool lose closed property | ```
pool = await aiopg.create_pool()
pool.closed      # aiopg exposes a public `closed` property

pool = await aiomysql.create_pool()
pool._closed     # aiomysql only exposes the private attribute
```
Should these be kept the same?
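For what it's worth, mirroring aiopg would just mean exposing the flag through a read-only property (sketch, not the actual aiomysql source):

```python
# Sketch only: a public read-only `closed` property wrapping the
# private flag, mirroring aiopg's API.
class Pool:
    def __init__(self):
        self._closed = False

    @property
    def closed(self):
        return self._closed
```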
| closed | 2020-02-04T20:01:58Z | 2022-02-02T22:35:28Z | https://github.com/aio-libs/aiomysql/issues/463 | [
"enhancement"
] | cole-dda | 1 |
fastapi/sqlmodel | fastapi | 385 | Issue with many-to-many relation with extra fields | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from fastapi import FastAPI, status, Depends, HTTPException, Query
from typing import Optional, List
from sqlmodel import SQLModel, Field, Relationship, Session, select, create_engine
from fastapi import status, Depends, HTTPException, Query
app = FastAPI()
sqlite_file_name = "database_with_extra_fields.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
connect_args = {"check_same_thread": False}
engine = create_engine(sqlite_url, echo=True, connect_args=connect_args)
@app.on_event("startup")
def on_startup():
SQLModel.metadata.drop_all(engine)
SQLModel.metadata.create_all(engine)
create_coffees()
def get_session():
with Session(engine) as session:
yield session
def create_coffees():
with Session(engine) as session:
ingredient_1 = Ingredient(name="Espresso")
ingredient_2 = Ingredient(name="Semi Skimmed Milk")
coffee_1 = Coffee(
name="Coffee 1",
teaser= "Nice cup of coffee",
collection="Foundations",
origin="Summer 2020",
color="#444",
description= "",
price=200,
image= "//coffee_1.png",
)
coffee_1_ingredient_1_link = CoffeeIngredient(coffee=coffee_1, ingredient=ingredient_1, quantity=1)
coffee_1_ingredient_2_link = CoffeeIngredient(coffee=coffee_1, ingredient=ingredient_2, quantity=4)
session.add(coffee_1_ingredient_1_link)
session.add(coffee_1_ingredient_2_link)
session.commit()
session.refresh(coffee_1_ingredient_1_link)
session.refresh(coffee_1_ingredient_2_link)
class CoffeeIngredient(SQLModel, table=True):
coffee_id: Optional[int] = Field(default=None, foreign_key="coffee.id", primary_key=True)
ingredient_id: Optional[int] = Field(default=None, foreign_key="ingredient.id", primary_key=True)
quantity: Optional[int]
coffee: "Coffee" = Relationship(back_populates="ingredient_links")
ingredient: "Ingredient" = Relationship(back_populates="coffee_links")
class IngredientBase(SQLModel):
name: str = Field(index=True)
class Ingredient(IngredientBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
coffee_links: List[CoffeeIngredient] = Relationship(back_populates="ingredient")
class IngredientRead(IngredientBase):
id: int
class CoffeeBase(SQLModel):
name: str = Field(index=True)
teaser: Optional[str] = Field()
collection: Optional[str] = Field()
origin: Optional[str] = Field()
color: Optional[str] = Field()
description: Optional[str] = Field()
price: int = Field()
image: Optional[str] = Field()
class Coffee(CoffeeBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
ingredient_links: List[CoffeeIngredient] = Relationship(back_populates="coffee")
class CoffeeRead(CoffeeBase):
id: int
class CoffeeReadWithIngredients(CoffeeRead):
ingredients: List[IngredientRead] = []
class IngredientsReadWithCoffee(IngredientRead):
coffees: List[CoffeeRead] = []
@app.get("/coffees", response_model=List[Coffee], status_code=status.HTTP_200_OK)
async def get_all_coffees(offset: int = 0, limit: int = Query(default=100, lte=100), session: Session = Depends(get_session)):
statement = select(Coffee).offset(offset).limit(limit)
coffees = session.exec(statement).all()
if not coffees:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="No coffees found")
return coffees
@app.get("/ingredients", response_model=List[Ingredient], status_code=status.HTTP_200_OK)
async def get_all_ingredients(*, session: Session = Depends(get_session), offset: int = 0, limit: int = Query(default=100, lte=100)):
statement = select(Ingredient).offset(offset).limit(limit)
ingredients = session.exec(statement).all()
if not ingredients:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="No ingredients found")
return ingredients
@app.get("/coffees/{coffee_id}", response_model=CoffeeReadWithIngredients, status_code=status.HTTP_200_OK)
async def get_coffee(coffee_id: int, session: Session = Depends(get_session)):
statement = select(Coffee).where(Coffee.id == coffee_id)
coffee = session.exec(statement).first()
if not coffee:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Coffee not found")
return coffee
@app.get("/ingredients/{ingredient_id}", response_model=IngredientsReadWithCoffee, status_code=status.HTTP_200_OK)
async def get_ingredient(*, session: Session = Depends(get_session), ingredient_id: int, ):
ingredient = session.get(Ingredient, ingredient_id)
print(ingredient)
if not ingredient:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Coffee not found")
return ingredient
```
### Description
I have a many-to-many relation between coffees and ingredients, but **_with extra fields_** in the association table. I'm following [this](https://sqlmodel.tiangolo.com/tutorial/many-to-many/link-with-extra-fields/) documentation as a basis.
The example code above always returns an empty list of ingredients when requesting coffee details. The same happens in reverse: I always get back an empty list of coffees when requesting ingredient details.
Note: I'm using SQLAlchemy version 1.4.35 as I know there are issues with later versions.
Two example responses below (can run on the example code above)
Request: `http://localhost:8000/coffees/1`
Response:
```
{
"name": "Coffee 1",
"teaser": "Nice cup of coffee",
"collection": "Foundations",
"origin": "Summer 2020",
"color": "#444",
"description": "",
"price": 200,
"image": "//coffee_1.png",
"id": 1,
"ingredients": []
}
```
Request: `http://localhost:8000/ingredients/1`
Response:
```
{
"name": "Semi Skimmed Milk",
"id": 1,
"coffees": []
}
```
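For what it's worth, here is a dependency-free sketch of one way the association objects could be flattened so a field literally named `ingredients` can be resolved. Plain classes stand in for the SQLModel models above, so treat the names as hypothetical:

```python
# Plain-Python sketch (hypothetical): the read model looks for an
# `ingredients` attribute, but the table model only has
# `ingredient_links`, so a property can bridge the two.
class CoffeeIngredient:
    def __init__(self, ingredient, quantity):
        self.ingredient = ingredient
        self.quantity = quantity


class Coffee:
    def __init__(self, ingredient_links):
        self.ingredient_links = ingredient_links

    @property
    def ingredients(self):
        # Flatten the association objects down to the linked ingredients.
        return [link.ingredient for link in self.ingredient_links]
```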
### Operating System
macOS
### Operating System Details
Apple M1 Pro - macOS Monterey Version 12.4
### SQLModel Version
0.0.6
### Python Version
Python 3.8.9
### Additional Context
The issue described above is only for many-to-many relations **_with extra fields_**. A 'normal' many-to-many relation (=without extra fields in the association table) as explained [here](https://sqlmodel.tiangolo.com/tutorial/many-to-many/create-models-with-link/) is working fine. The example code for a 'normal' many-to-many relation is below. This code works.
```
from fastapi import FastAPI, status, Depends, HTTPException, Query
from typing import Optional, List
from sqlmodel import SQLModel, Field, Relationship, Session, select, create_engine
from fastapi import status, Depends, HTTPException, Query
app = FastAPI()
sqlite_file_name = "database_without_extra_fields.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
connect_args = {"check_same_thread": False}
engine = create_engine(sqlite_url, echo=True, connect_args=connect_args)
@app.on_event("startup")
def on_startup():
SQLModel.metadata.drop_all(engine)
SQLModel.metadata.create_all(engine)
create_coffees()
def get_session():
with Session(engine) as session:
yield session
def create_coffees():
with Session(engine) as session:
ingredient_1 = Ingredient(name="Espresso")
ingredient_2 = Ingredient(name="Semi Skimmed Milk")
coffee_1 = Coffee(
name="Coffee 1",
teaser= "Nice cup of coffee",
collection="Foundations",
origin="Summer 2020",
color="#444",
description= "",
price=200,
image= "//coffee_1.png",
ingredients=[ingredient_1, ingredient_2]
)
session.add(coffee_1)
session.commit()
session.refresh(coffee_1)
class CoffeeIngredient(SQLModel, table=True):
coffee_id: Optional[int] = Field(default=None, foreign_key="coffee.id", primary_key=True)
ingredient_id: Optional[int] = Field(default=None, foreign_key="ingredient.id", primary_key=True)
class IngredientBase(SQLModel):
name: str = Field(index=True)
class Ingredient(IngredientBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
coffees: List["Coffee"] = Relationship(back_populates="ingredients", link_model=CoffeeIngredient)
class IngredientRead(IngredientBase):
id: int
class CoffeeBase(SQLModel):
name: str = Field(index=True)
teaser: Optional[str] = Field()
collection: Optional[str] = Field()
origin: Optional[str] = Field()
color: Optional[str] = Field()
description: Optional[str] = Field()
price: int = Field()
image: Optional[str] = Field()
class Coffee(CoffeeBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
ingredients: List[Ingredient] = Relationship(back_populates="coffees", link_model=CoffeeIngredient)
class CoffeeRead(CoffeeBase):
id: int
class CoffeeReadWithIngredients(CoffeeRead):
ingredients: List[IngredientRead] = []
class IngredientsReadWithCoffee(IngredientRead):
coffees: List[CoffeeRead] = []
@app.get("/coffees", response_model=List[Coffee], status_code=status.HTTP_200_OK)
async def get_all_coffees(offset: int = 0, limit: int = Query(default=100, lte=100), session: Session = Depends(get_session)):
statement = select(Coffee).offset(offset).limit(limit)
coffees = session.exec(statement).all()
if not coffees:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="No coffees found")
return coffees
@app.get("/ingredients", response_model=List[Ingredient], status_code=status.HTTP_200_OK)
async def get_all_ingredients(*, session: Session = Depends(get_session), offset: int = 0, limit: int = Query(default=100, lte=100)):
statement = select(Ingredient).offset(offset).limit(limit)
ingredients = session.exec(statement).all()
if not ingredients:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="No ingredients found")
return ingredients
@app.get("/coffees/{coffee_id}", response_model=CoffeeReadWithIngredients, status_code=status.HTTP_200_OK)
async def get_coffee(coffee_id: int, session: Session = Depends(get_session)):
statement = select(Coffee).where(Coffee.id == coffee_id)
coffee = session.exec(statement).first()
if not coffee:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Coffee not found")
return coffee
@app.get("/ingredients/{ingredient_id}", response_model=IngredientsReadWithCoffee, status_code=status.HTTP_200_OK)
async def get_ingredient(*, session: Session = Depends(get_session), ingredient_id: int, ):
ingredient = session.get(Ingredient, ingredient_id)
print(ingredient)
if not ingredient:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Coffee not found")
return ingredient
``` | closed | 2022-07-27T09:48:17Z | 2022-08-05T17:06:40Z | https://github.com/fastapi/sqlmodel/issues/385 | [
"question"
] | wiwa1978 | 4 |
scrapy/scrapy | web-scraping | 6,558 | Refactor long lines for better readability | ## Summary
Several lines in `scheduler.py` exceed the PEP 8 recommendation of a maximum line length of 79 characters (or 99 for team agreements).
## Motivation
Overly long lines were found in `scheduler.py` at lines [59](https://github.com/scrapy/scrapy/blob/8c23da943c5e892515f4fa2eb57229839802010a/scrapy/core/scheduler.py#L59), [61](https://github.com/scrapy/scrapy/blob/8c23da943c5e892515f4fa2eb57229839802010a/scrapy/core/scheduler.py#L61), [66](https://github.com/scrapy/scrapy/blob/8c23da943c5e892515f4fa2eb57229839802010a/scrapy/core/scheduler.py#L66), [72](https://github.com/scrapy/scrapy/blob/8c23da943c5e892515f4fa2eb57229839802010a/scrapy/core/scheduler.py#L72), [164](https://github.com/scrapy/scrapy/blob/8c23da943c5e892515f4fa2eb57229839802010a/scrapy/core/scheduler.py#L164), and [174](https://github.com/scrapy/scrapy/blob/8c23da943c5e892515f4fa2eb57229839802010a/scrapy/core/scheduler.py#L174). Refactoring these lines would keep the code in line with PEP 8 recommendations and would improve readability and maintainability.
Adhering to PEP 8 guidelines helps maintain consistency across the project and keeps the code accessible. Making these changes will also help align the project with best practices.
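For reference, a quick way to enumerate offending lines (illustrative helper, not part of Scrapy):

```python
def long_lines(text: str, limit: int = 79) -> list[int]:
    """Return 1-based numbers of lines longer than `limit` characters."""
    return [i + 1 for i, line in enumerate(text.splitlines()) if len(line) > limit]
```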
## Describe alternatives you've considered
The only alternative would be leaving the lines as they are, given that PEP 8 allows for line lengths up to 99 characters when agreed upon by a team.
* This may still reduce readability and could potentially create inconsistencies in the codebase
* This is especially true for new contributors or those using standard tooling that expects shorter lines.
## Additional context
I appreciate you taking my suggestion into consideration. While this obviously wouldn't impact the functionality of the project, it would keep the code base consistent. There were no other files I could find where the line length surpassed 100 characters.
| closed | 2024-11-23T01:23:04Z | 2024-11-23T07:39:34Z | https://github.com/scrapy/scrapy/issues/6558 | [] | Patrick-Culley | 0 |
jina-ai/clip-as-service | pytorch | 232 | Doubt: Can I use this service to obtain docvecs/Paragraph vector of an entire article | Hi all,
I am trying to obtain fixed-length doc vectors/paragraph vectors with this implementation. As mentioned in the docs, I can increase `max_seq_len` from 25 to the desired length and pass my article as input. I want to know whether this approach is right or whether there is a downside to it. Also, is there a better approach to obtaining docvecs using the BERT model?
Currently, we use the gensim library to obtain docvecs for an article. Another approach could be to use the word vectors obtained from the BERT model, one-hot encode the paragraph ids, and obtain a vector for each (similar to gensim's paragraph-vector implementation).
What do you guys suggest?
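For context, the simplest fixed-length baseline is mean-pooling the per-token vectors (dependency-free sketch; in practice the vectors would come from the BERT encoder):

```python
def mean_pool(token_vectors):
    """Average equal-length token embeddings into one document vector.

    `token_vectors` is a non-empty list of equal-length float lists
    (an illustrative stand-in for encoder output).
    """
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]
```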
**Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version:
- Python version:
- `bert-as-service` version:
- GPU model and memory:
- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start YOUR_SERVER_ARGS
```
and calling the server via:
```python
bc = BertClient(YOUR_CLIENT_ARGS)
bc.encode()
```
Then this issue shows up:
... | closed | 2019-02-08T09:33:08Z | 2020-10-30T18:53:18Z | https://github.com/jina-ai/clip-as-service/issues/232 | [] | kapilkd13 | 4 |
Lightning-AI/pytorch-lightning | deep-learning | 19,738 | Loading from a checkpoint does not work properly in distributed training | ### Bug description
I train my model on multiple GPUs and save it with the `checkpoint callback` and `save_hyperparameters()`.
I get a directory which looks like this, so this part seems to work flawlessly:
```
/epoch=5
--/checkpoint
----mp_rank_00_model_states.pt
----zero_pp_rank_0_mp_rank_00_optim_states.pt
----zero_pp_rank_1_mp_rank_00_optim_states.pt
...
```
When I try to load the checkpoint with `MyModel.load_from_checkpoint()` on any of these files I only get errors. The tutorials only point me to some apparently outdated examples with a .ckpt file, which does not exist in these log directories.
Loading from the model directory does not help either.
When I load mp_rank_00_model_states.pt I get:
```
File "/.../projects/classifier_lightning/venv/lib/python3.10/site-packages/lightning/pytorch/core/saving.py", line 180, in _load_state
keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict)
KeyError: 'state_dict'
```
When I load the other files, I get a Lightning version error, even though it runs in the same environment.
So - how do I load my model from a checkpoint?
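(For context, the `mp_rank_*`/`zero_pp_rank_*` files are DeepSpeed ZeRO shards. Assuming the DeepSpeed strategy was used, Lightning ships a converter that consolidates the shards into a single file that `load_from_checkpoint()` understands; the paths below are placeholders:)

```python
# Hedged sketch, assuming the DeepSpeed strategy produced these shards;
# the checkpoint directory path is a placeholder.
from lightning.pytorch.utilities.deepspeed import (
    convert_zero_checkpoint_to_fp32_state_dict,
)

convert_zero_checkpoint_to_fp32_state_dict(
    "lightning_logs/.../epoch=5",  # dir containing the "checkpoint" folder
    "consolidated.ckpt",           # single-file output
)
# model = MyModel.load_from_checkpoint("consolidated.ckpt")
```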
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-04-04T14:31:02Z | 2024-04-04T14:31:02Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19738 | [
"bug",
"needs triage"
] | asusdisciple | 0 |
onnx/onnx | pytorch | 5,793 | Generic representation of datatypes | A way to specify data types generically by their size and, for floating-point types, by properties such as mantissa and exponent width.
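For instance, a generic floating-point format could be described by a couple of integer parameters, roughly like this (purely illustrative, not an ONNX proposal):

```python
# Illustrative only: a generic parameterization of floating-point formats.
from dataclasses import dataclass


@dataclass(frozen=True)
class FloatSpec:
    """Generic description of a floating-point format (illustrative)."""
    exponent_bits: int
    mantissa_bits: int

    @property
    def total_bits(self) -> int:
        return 1 + self.exponent_bits + self.mantissa_bits  # +1 for the sign bit


FLOAT32 = FloatSpec(exponent_bits=8, mantissa_bits=23)
BFLOAT16 = FloatSpec(exponent_bits=8, mantissa_bits=7)
```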
## Ref
- https://github.com/onnx/onnx/issues/5776 | closed | 2023-12-04T20:14:26Z | 2024-12-26T06:44:42Z | https://github.com/onnx/onnx/issues/5793 | [
"module: spec",
"stale"
] | justinchuby | 0 |
plotly/dash | dash | 2,436 | [BUG] html.iframe not loading properly when not in initial active tab | Unsure if this behaviour is intentional, so making an issue just in case.
```
dash 2.8.1
dash-bootstrap-components 1.4.0
dash-core-components 2.0.0
dash-extensions 0.0.71
```
- OS: Ubuntu 20.04.4 LTS
- Browser Firefox 110.0 (also tried on Chrome)
**Describe the bug**
I have a tab containing an `html.Iframe`. When that tab is the default active tab on startup, everything works correctly. But if it is not the active tab, the HTML in the iframe fails to load properly, because properties such as the component height and width are zero at load time and the embedded code relies on them. Can/should loading of the `html.Iframe` component be deferred until it is visible, in case any contained code relies on the parent container being fully laid out? I currently work around this by only creating the `html.Iframe` when the tab is selected for the first time.
MWE not loading correctly
```
from dash import Dash, html, Output, Input
import dash_bootstrap_components as dbc
app = Dash(external_stylesheets=[dbc.themes.DARKLY])
app.layout= html.Div([
dbc.Tabs(
children=[
dbc.Tab(
label="test tab"
),
dbc.Tab(
html.Iframe(
src="assets/test.html",
),
label="html iframe tab",
),
]
),
]
)
if __name__ == "__main__":
app.run_server(host="127.0.0.1",debug=True, port=8052)
```
MWE loading correctly
```
from dash import Dash, html, Output, Input
import dash_bootstrap_components as dbc
app = Dash(external_stylesheets=[dbc.themes.DARKLY])
app.layout= html.Div([
dbc.Tabs(
children=[
dbc.Tab(
html.Iframe(
src="assets/test.html",
),
label="html iframe tab",
),
dbc.Tab(
label="test tab"
),
]
),
]
)
if __name__ == "__main__":
app.run_server(host="127.0.0.1",debug=True, port=8052)
```
Callback workaround
```
from dash import Dash, html, Output, Input, State
import dash_bootstrap_components as dbc

app = Dash(external_stylesheets=[dbc.themes.DARKLY])

app.layout = html.Div([
    dbc.Tabs(
        id="state-tabs",
        children=[
            dbc.Tab(
                label="test tab"
            ),
            dbc.Tab(
                label="html iframe tab",
                id="html-iframe-tab"
            ),
        ]
    ),
])

@app.callback(
    Output("html-iframe-tab", "children"),
    Input("state-tabs", "active_tab"),
    State("html-iframe-tab", "children")
)
def update_html_iframe(active_tab, html_iframe):
    # dbc.Tabs assigns default tab ids "tab-0", "tab-1", ... in order,
    # so "tab-1" is the iframe tab here.
    if active_tab == "tab-1":
        return html.Iframe(src="assets/test.html")
    return html_iframe

if __name__ == "__main__":
    app.run_server(host="127.0.0.1", debug=True, port=8052)
```
**Expected behavior**
The behaviour of the `html.Iframe` component should be the same regardless of whether the parent tab is active on load.
| closed | 2023-02-28T14:23:05Z | 2024-07-25T13:04:51Z | https://github.com/plotly/dash/issues/2436 | [] | toastisme | 2 |
vastsa/FileCodeBox | fastapi | 307 | 验证码复制 | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
When copying a file's pickup code, could the website address be included as well? | open | 2025-03-18T03:27:57Z | 2025-03-18T03:27:57Z | https://github.com/vastsa/FileCodeBox/issues/307 | [] | gulugulubengbang | 0 |
widgetti/solara | flask | 508 | task decorator & cancelled error | Hello! Nice work on this; very cool.
I am using (and potentially misusing) the task decorator functionality, and I am getting the errors below on the bleeding-edge version. It takes a few rounds of sliding the slider before it errors, maybe because I'm creating so many tasks? I tried wrapping different parts of the code in try/excepts, to no avail.
```python
import asyncio

from solara.lab import task
from reacton import use_state
import numpy as np
import solara
@task()
async def debounce_update(value):
await asyncio.sleep(1)
return value
@solara.component
def Page():
im_idx, set_im_idx = use_state(0)
plotly_im_idx, set_plotly_im_idx = use_state(0)
def on_slider_change(value):
set_im_idx(value)
if debounce_update.pending:
debounce_update.cancel()
debounce_update(value)
if debounce_update.finished:
new_idx = debounce_update.value
if new_idx == im_idx:
set_plotly_im_idx(new_idx)
slider = solara.SliderInt(
label="Image Index",
min=0,
max=len(image_data) - 1,
step=1,
value=im_idx,
on_value=on_slider_change,
)
with solara.Card() as main:
solara.VBox([slider])
if debounce_update.finished:
print("finished")
if debounce_update.cancelled:
print("cancelled")
if debounce_update.pending:
print("pending")
if debounce_update.error:
print("ERRRRRROOOOOOOR")
return main
```
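For reference, the cancel-and-restart debounce pattern in isolation looks roughly like this with plain asyncio (illustrative, outside Solara):

```python
# Illustrative debounce-by-cancel, independent of Solara: each new call
# cancels the still-pending previous one, so only the last value within
# the delay window "wins".
import asyncio


async def debounced(value, delay, state):
    task = state.get("task")
    if task is not None and not task.done():
        task.cancel()

    async def waiter():
        await asyncio.sleep(delay)
        state["result"] = value

    state["task"] = asyncio.ensure_future(waiter())
```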
<details><summary>Details</summary>
<p>
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
finished
finished
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
finished
finished
pending
pending
pending
pending
pending
pending
pending
pending
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
thread_event_loop.run_until_complete(current_task)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
await runner()
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
self._last_value = value = await self.function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
await asyncio.sleep(1)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
return await future
^^^^^^^^^^^^
asyncio.exceptions.CancelledError
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
thread_event_loop.run_until_complete(current_task)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
await runner()
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
self._last_value = value = await self.function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
await asyncio.sleep(1)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
return await future
^^^^^^^^^^^^
asyncio.exceptions.CancelledError
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
thread_event_loop.run_until_complete(current_task)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
await runner()
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
self._last_value = value = await self.function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
await asyncio.sleep(1)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
return await future
^^^^^^^^^^^^
asyncio.exceptions.CancelledError
finished
finished
</p>
</details>
| open | 2024-02-18T05:51:34Z | 2024-02-19T18:21:23Z | https://github.com/widgetti/solara/issues/508 | [
"bug",
"good first issue",
"help wanted"
] | swelborn | 3 |
automl/auto-sklearn | scikit-learn | 1,071 | AutoSklearn2Regressor | Any plans on adding an AutoSklearn2Regressor based on the latest paper? | open | 2021-02-01T00:40:42Z | 2021-11-17T10:39:45Z | https://github.com/automl/auto-sklearn/issues/1071 | [
"enhancement"
] | kaiserdan | 3 |
davidteather/TikTok-Api | api | 503 | [BUG] - Empty response from Tiktok | I am trying to call the `byUser` API endpoint with either
```
tik_tok_api = TikTokApi.get_instance()
tik_tok_api.byUsername(handle, 1)
```
Or
```
tik_tok_api = TikTokApi.get_instance(use_selenium=True)
return tik_tok_api.byUsername(handle, 1)
```
Both of these give me the error `Empty response from Tiktok to https://m.tiktok.com/api/item_list/?aid=1988&app_name=tiktok_web&device_platform=web&referer=&root_referer=&user_agent=Mozilla%252F5.0%2B%28iPhone%253B%2BCPU%2BiPhone%2BOS%2B12_2%2Blike%2BMac%2BOS%2BX%29%2BAppleWebKit%252F605.1.15%2B%28KHTML%2C%2Blike%2BGecko%29%2BVersion%252F13.0%2BMobile%252F15E148%2BSafari%252F604.1&cookie_enabled=true&screen_width=1168&screen_height=1449&browser_language=&browser_platform=&browser_name=&browser_version=&browser_online=true&ac=4g&timezone_name=&appId=1233&appType=m&isAndroid=False&isMobile=False&isIOS=False&OS=windows&count=1&id=6747935906352907269&type=1&secUid=MS4wLjABAAAAv3zolJLlWp-WbKXqSZwVSflDdwcbjPADRG-dhb68k30dQjkFpkRs4HiMvWeeIyVv&maxCursor=0&minCursor=0&sourceType=8&appId=1233®ion=US&priority_region=US&language=en&verifyFp=verify_khr3jabg_V7ucdslq_Vrw9_4KPb_AJ1b_Ks706M8zIJTq&did=9097006475678094878&_signature=_02B4Z6wo00f01Ulb0UgAAIBDdvNPPswkFIlJStXAADKA23`
Any thoughts on how to fix this? It seems that this API endpoint is broken. And I would like to be able to retrieve TikToks for a given handle.
Is there a possibility that TikTok is blocking requests from us now that there have been too many?
Is there some kind of workaround for this by logging in to TikTok and doing something with VerifyFP (I saw this in other issues). | closed | 2021-02-16T17:00:47Z | 2021-03-19T16:20:02Z | https://github.com/davidteather/TikTok-Api/issues/503 | [
"bug"
] | jaoxford | 5 |
yeongpin/cursor-free-vip | automation | 14 | Keeps prompting me to log in after only two conversations today | 
用了 手動運行重置機器 的代码还是一样


| closed | 2025-01-13T06:38:25Z | 2025-01-14T06:54:24Z | https://github.com/yeongpin/cursor-free-vip/issues/14 | [] | lookoupai | 16 |
mjhea0/flaskr-tdd | flask | 59 | delete_entry doesn't check if a post actually exists | Not sure if this is intended behavior, but: The `test_delete_message` test will always succeed (prior to implementing `login_required`) regardless of whether or not there are existing posts as long as that route is correctly defined because `delete_entry` doesn't send an error if the `post_id` doesn't actually exist in the database, meaning that regardless of how many posts you actually have, if you go to `/delete/1` or `/delete/9999999999999`, you'll always get the JSON saying Post Deleted. (The flash message doesn't do anything, either, as it doesn't send you to an HTML page.)
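A framework-free sketch of the missing existence check (names here are hypothetical, not the tutorial's actual code); in the Flask route, the same guard would return a 404 or an error payload instead of the unconditional success message:

```python
def delete_entry(store: dict, post_id: int) -> dict:
    """Delete a post only if it exists; report failure otherwise."""
    if post_id not in store:
        # Without this branch, every post_id "succeeds".
        return {"status": 0, "message": "Post not found"}
    del store[post_id]
    return {"status": 1, "message": "Post Deleted"}

posts = {1: "hello"}
print(delete_entry(posts, 9999999999999))  # {'status': 0, 'message': 'Post not found'}
print(delete_entry(posts, 1))              # {'status': 1, 'message': 'Post Deleted'}
```

With the guard in place, a test hitting `/delete/9999999999999` would get a failure response instead of a false "Post Deleted".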
Found this out coz I was having problems implementing tests for search, until I remembered that `setUp` and `tearDown` are run before and after _every_ test and not at the start and at the end of the entire test suite. | closed | 2020-02-28T13:34:04Z | 2020-10-14T00:04:32Z | https://github.com/mjhea0/flaskr-tdd/issues/59 | [] | ren1982 | 1 |
streamlit/streamlit | data-visualization | 10,347 | Support Polars objects for cache hashing | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
When passing a Polars dataframe to a function that is decorated with `@st.cache_data`, you get the following error:
```
UnhashableParamError: Cannot hash argument 'my_df' (of type polars.dataframe.frame.DataFrame) in 'my_func'.
```
Follow-up of https://github.com/streamlit/streamlit/issues/5088#issuecomment-2338389918
### Why?
I want to be able to cache functions that have a Polars dataframe as input.
### How?
There is already support for Pandas dataframes:
https://github.com/streamlit/streamlit/blob/9914751b0d59f55cb03c3e871e90f65264895431/lib/streamlit/runtime/caching/hashing.py#L434-L453
Support for Polars dataframes could be implemented the same way.
Alternative: Use https://github.com/narwhals-dev/narwhals for a dataframe-agnostic implementation.
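Until built-in support lands, recent Streamlit versions let callers pass a `hash_funcs` mapping to `st.cache_data` as a workaround; the core of any such hash function is a stable digest of the frame's serialized bytes. A stdlib sketch of that digest step (the Polars call in the comment is an assumption about its API, verify against your version):

```python
import hashlib
import pickle

def stable_digest(payload: bytes) -> str:
    """Digest serialized bytes into a stable cache key."""
    return hashlib.md5(payload).hexdigest()

# With Polars you would feed it deterministic bytes, e.g. something like
#   stable_digest(df.write_ipc(None).getvalue())
# and register it via st.cache_data(hash_funcs={pl.DataFrame: ...}).

# Determinism demonstrated with plain pickled data:
rows = {"x": [1, 2, 3], "y": [4.0, 5.0, 6.0]}
assert stable_digest(pickle.dumps(rows)) == stable_digest(pickle.dumps(rows))
```

Built-in support would follow the same shape as the pandas branch linked above: serialize the frame deterministically, then feed the bytes into the hasher.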
### Additional Context
_No response_ | closed | 2025-02-05T10:42:18Z | 2025-03-04T02:00:23Z | https://github.com/streamlit/streamlit/issues/10347 | [
"type:enhancement",
"feature:cache",
"feature:cache-hash-func"
] | BartSchuurmans | 3 |
PrefectHQ/prefect | data-science | 17,348 | Resolving `DaskTaskRunner` task futures can cause a global client error in context hydration. | ### Bug summary
Resolving the result of a `PrefectDaskFuture` returned by a task submitted to a `DaskTaskRunner` randomly causes a `RuntimeError: No global client found and no address provided` error to be raised in the calling flow/task.
It's significantly easier to reproduce in my cloud environment, with roughly a 30-50% chance of occurrence in some flows; reproducing locally is much more difficult, but I've included a flow and traceback that managed to reproduce it. Both the cloud and local flows use a `distributed.LocalCluster` and have similar structure and types.
Lastly, it seems like it might be related to this PR: https://github.com/PrefectHQ/prefect/pull/15341, though I've seen this raised in both flows and tasks calling `PrefectDaskFuture.result`.
Flow for reproducing locally (though it has a very low occurrence rate it seems):
```python
from prefect import flow, task, serve
import dask.dataframe as dd
import numpy as np
from prefect_dask.utils import get_dask_client
from prefect_dask.task_runners import DaskTaskRunner
@task
def generate_data() -> np.ndarray:
return np.random.random(size=1_000_000)
@task(retries=5, retry_delay_seconds=1)
def load_dataframe(xs, ys) -> dd.DataFrame:
with get_dask_client():
return (
dd.DataFrame.from_dict(
{
"x": xs,
"y": ys,
}
)
.mean()
.compute()
)
@task
def load_dataframes() -> dict[int, dd.DataFrame]:
xs = generate_data()
ys = generate_data()
tasks = {}
for idx in range(50):
tasks[idx] = load_dataframe.submit(xs, ys)
results = {}
for idx, task_future in tasks.items():
results[idx] = task_future.result()
return results
@flow(task_runner=DaskTaskRunner(cluster_class="distributed.LocalCluster"))
def nested_fanout_flow():
results = load_dataframes()
return results
if __name__ == "__main__":
nested_deploy = nested_fanout_flow.to_deployment(
name="nested-fanout-deployment", work_pool_name="local"
)
serve(nested_deploy)
```
Traceback:
```python
Task run failed with exception: RuntimeError('No global client found and no address provided') - Retries are exhausted
Traceback (most recent call last):
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect/task_engine.py", line 805, in run_context
yield self
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect/task_engine.py", line 1387, in run_task_sync
engine.call_task_fn(txn)
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect/task_engine.py", line 828, in call_task_fn
result = call_with_parameters(self.task.fn, parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect/utilities/callables.py", line 208, in call_with_parameters
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/global_client_err.py", line 39, in load_dataframes
results[idx] = task_future.result()
^^^^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect_dask/task_runners.py", line 132, in result
future_result = self._wrapped_future.result(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/distributed/client.py", line 401, in result
return self.client.sync(self._result, callback_timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect_dask/client.py", line 62, in wrapper_func
return run_task_sync(*args, **kwargs)
^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect/task_engine.py", line 1383, in run_task_sync
with engine.start(task_run_id=task_run_id, dependencies=dependencies):
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect/task_engine.py", line 751, in start
with self.initialize_run(task_run_id=task_run_id, dependencies=dependencies):
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect/task_engine.py", line 663, in initialize_run
with hydrated_context(self.context):
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect/context.py", line 97, in hydrated_context
task_runner = stack.enter_context(flow.task_runner.duplicate())
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 526, in enter_context
result = _enter(cm)
^^^^^^^^^^^^^^^^^
File "/Users/kzvezdarov/git/prefect-dask-test/.venv/lib/python3.12/site-packages/prefect_dask/task_runners.py", line 416, in __enter__
raise RuntimeError("No global client found and no address provided")
^^^^^^^^^^^^^^^^^
RuntimeError: No global client found and no address provided
```
### Version info
```Text
Version: 3.2.9
API version: 0.8.4
Python version: 3.12.9
Git commit: 27eb408c
Built: Fri, Feb 28, 2025 8:12 PM
OS/Arch: darwin/arm64
Profile: local
Server type: ephemeral
Pydantic version: 2.10.6
Server:
Database: sqlite
SQLite version: 3.49.1
Integrations:
prefect-dask: 0.3.3
```
### Additional context
Full flow run logs for the reproduced local case:
[turquoise-coati.csv](https://github.com/user-attachments/files/19057253/turquoise-coati.csv) | open | 2025-03-03T17:42:10Z | 2025-03-03T17:42:10Z | https://github.com/PrefectHQ/prefect/issues/17348 | [
"bug"
] | kzvezdarov | 0 |
dgtlmoon/changedetection.io | web-scraping | 2,715 | Price detection (once it crosses the "lower" threshold) only works for the first time | **Describe the bug**
When using price detection with notifications, I only receive a notification the first time the price drops below the trigger price I have set.
**Version**
v0.47.03
**To Reproduce**
Steps to reproduce the behavior:
configuration that I have:

The notification I received for the price drop from 310€ to 283€, which is OK:

But subsequent lower prices are not notified, even though they appear as detected:

Here is the watcher API response, where the last-changed timestamp is 1 October, the date of the notification:
```
"restock": {
    "in_stock": true,
    "price": 279.89,
    "currency": "EUR",
    "original_price": 279.89,
    "availability": "instock"
},
"restock_settings": {
    "in_stock_processing": "all_changes",
    "price_change_min": 290.0,
    "price_change_max": null,
    "price_change_threshold_percent": 2.0,
    "follow_price_changes": true
},
"history_n": 2,
"last_changed": 1727782967,
```
Here is the response of the watcher's latest-change history call, GET /history/latest:
`In Stock: True - Price: 283.89`
**Expected behavior**
Receive a price-drop notification for the current price of 279.89€ as well, not just for the first drop.
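One interaction worth checking (an assumption about the settings' semantics, not a confirmed reading of the code): `price_change_threshold_percent: 2.0` may also gate drops that happen after the first notification. Comparing the two drops:

```python
def percent_change(old_price: float, new_price: float) -> float:
    """Relative price change between two observations, in percent."""
    return abs(new_price - old_price) / old_price * 100

# The drop that was notified: 310.00 -> 283.89
first = percent_change(310.00, 283.89)
# The drop that was only detected: 283.89 -> 279.89
second = percent_change(283.89, 279.89)

print(round(first, 2))   # 8.42  -> above the 2% threshold
print(round(second, 2))  # 1.41  -> below the 2% threshold
```

If the threshold applies to follow-up changes too, the missing second notification would be configuration rather than a bug; temporarily lowering the threshold below 1.4% would confirm or rule that out.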
| closed | 2024-10-15T08:35:00Z | 2025-01-09T21:59:56Z | https://github.com/dgtlmoon/changedetection.io/issues/2715 | [
"Change detection algorithms",
"triage",
"restock-detection",
"restock-price-monitoring"
] | lluisd | 5 |
psf/requests | python | 6,071 | CURL_CA_BUNDLE= disables certificate verification | <!-- Summary. -->
I'm not the first to notice this, see:
https://stackoverflow.com/questions/48391750/disable-python-requests-ssl-validation-for-an-imported-module
Which implies people have even relied on the current behavior as a hack ... but I think it's pretty clear that the current behavior is an accidental bug, which should be fixed (for requests 3?)
Vaguely related to #3829
## Expected Result
An empty-string CURL_CA_BUNDLE should use default system verification, the same way as:
* An unset CURL_CA_BUNDLE
* An empty-string or unset REQUESTS_CA_BUNDLE
* Behavior of curl/libcurl with an empty-string or unset CURL_CA_BUNDLE
## Actual Result
Empty CURL_CA_BUNDLE disables certificate verification
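The mechanism appears to be the environment-merging step in `requests.sessions`, where the two CA-bundle variables are chained with `or`. A simplified sketch of that merge (not the exact requests code) showing how an empty string survives the chain and later reads as falsy, i.e. verification off:

```python
def resolve_verify(verify, environ):
    # Simplified sketch: when verify is left at its default,
    # the CA-bundle environment variables are consulted.
    if verify is True or verify is None:
        verify = (environ.get("REQUESTS_CA_BUNDLE")
                  or environ.get("CURL_CA_BUNDLE"))
    return verify

# CURL_CA_BUNDLE="" survives the `or` chain as an empty string,
# which downstream code treats like verify=False:
print(repr(resolve_verify(True, {"CURL_CA_BUNDLE": ""})))  # ''
print(repr(resolve_verify(True, {})))                      # None
```

The fix would be to treat an empty string the same as an unset variable before it can shadow the default `verify=True`.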
## Reproduction Steps
* Set CURL_CA_BUNDLE to an empty value, try to fetch a self-signed or invalid HTTPS endpoint => success | closed | 2022-02-23T21:54:42Z | 2022-12-20T22:25:04Z | https://github.com/psf/requests/issues/6071 | [] | owtaylor | 24 |
liangliangyy/DjangoBlog | django | 721 | The database connection is set to localhost instead of the configured host; the author should fix this | <!--
If you don't carefully check the items below, I may close your issue directly.
提问之前,建议先阅读 https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**我确定我已经查看了** (标注`[ ]`为`[x]`)
- [x] [DjangoBlog的readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [配置说明](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [ ] [其他 Issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**我要申请** (标注`[ ]`为`[x]`)
- [ ] BUG 反馈
- [ ] 添加新的特性或者功能
- [ ] 请求技术支持
| closed | 2024-06-13T07:19:54Z | 2024-10-28T08:47:08Z | https://github.com/liangliangyy/DjangoBlog/issues/721 | [] | Jiangfengyuh | 2 |
man-group/arctic | pandas | 837 | Deleting snapshots is very slow when the VersionStore contains many snapshots | open | 2020-01-07T15:42:38Z | 2020-01-07T15:46:23Z | https://github.com/man-group/arctic/issues/837 | [] | cozmacib | 1 | |
onnx/onnx | tensorflow | 6,589 | TypeError: unsupported operand type(s) for //: 'NoneType' and 'int' | # Bug Report
### Describe the bug
I am trying to convert Nvidia NeMo's FilterbankFeaturesTA class to ONNX. Here is my code -
```
from nemo.collections.asr.parts.preprocessing.features import (
FilterbankFeatures,
FilterbankFeaturesTA,
make_seq_mask_like,
)
_model = FilterbankFeaturesTA(
sample_rate= 16000,
# window_size = 0.02,
# window_stride = 0.01,
n_window_size = None,
n_window_stride = None,
window = "hann",
normalize = "per_feature",
n_fft = None,
preemph = 0.97,
# features = 64,
lowfreq = 0,
highfreq = None,
log = True,
log_zero_guard_type = "add",
log_zero_guard_value = 2 ** -24,
dither = 1e-5,
pad_to = 16,
frame_splicing = 1,
exact_pad = False,
pad_value = 0,
mag_power = 2.0,
rng = None,
nb_augmentation_prob = 0.0,
nb_max_freq = 4000,
# use_torchaudio = False,
mel_norm = "slaney",
stft_exact_pad = False,
stft_conv = False,
)
_model.eval()
example_input_1 = torch.randn(1, 18432) # Input for x1
example_input_2 = torch.randn(18432) # Input for x2
# _model(example_input_1, example_input_2)
example_out = _model.forward(example_input_1, example_input_2,)
# example_out
onnx_file_path = "preprocessor.onnx"
args = (example_input_1, example_input_2)
# kwargs = {"seq_len": example_input_2}
onnx_model, _ = torch.onnx.dynamo_export(
_model, # Model to export
*args,
# **kwargs,
export_options=torch.onnx.ExportOptions(
dynamic_shapes=True,
),
)
# Save the ONNX model to file
onnx_model.save(onnx_file_path)
```
Running this code gives me the following error -
```
{
"name": "TypeError",
"message": "unsupported operand type(s) for //: 'NoneType' and 'int'",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[66], line 9
1 # trying to export features.py FilterbankFeatures to onnx for web inference
2 # from nemo.collections.asr.parts.preprocessing import FilterbankFeatures
3 from nemo.collections.asr.parts.preprocessing.features import (
4 FilterbankFeatures,
5 FilterbankFeaturesTA,
6 make_seq_mask_like,
7 )
----> 9 _model = FilterbankFeaturesTA(
10 sample_rate= 16000,
11 # window_size = 0.02,
12 # window_stride = 0.01,
13 n_window_size = None,
14 n_window_stride = None,
15 window = \"hann\",
16 normalize = \"per_feature\",
17 n_fft = None,
18 preemph = 0.97,
19 # features = 64,
20 lowfreq = 0,
21 highfreq = None,
22 log = True,
23 log_zero_guard_type = \"add\",
24 log_zero_guard_value = 2 ** -24,
25 dither = 1e-5,
26 pad_to = 16,
27 frame_splicing = 1,
28 exact_pad = False,
29 pad_value = 0,
30 mag_power = 2.0,
31 rng = None,
32 nb_augmentation_prob = 0.0,
33 nb_max_freq = 4000,
34 # use_torchaudio = False,
35 mel_norm = \"slaney\",
36 stft_exact_pad = False,
37 stft_conv = False,
38 )
40 _model.eval()
42 example_input_1 = torch.randn(1, 18432) # Input for x1
File ~/Documents/aakhor/asr/NeMo/nemo/collections/asr/parts/preprocessing/features.py:555, in __init__(self, sample_rate, n_window_size, n_window_stride, normalize, nfilt, n_fft, preemph, lowfreq, highfreq, log, log_zero_guard_type, log_zero_guard_value, dither, window, pad_to, pad_value, mel_norm, use_grads, max_duration, frame_splicing, exact_pad, nb_augmentation_prob, nb_max_freq, mag_power, rng, stft_exact_pad, stft_conv)
553 self.dither = dither
554 self.pad_to = pad_to
--> 555 self.pad_value = pad_value
556 self.n_fft = n_fft
557 self._mel_spec_extractor: torchaudio.transforms.MelSpectrogram = torchaudio.transforms.MelSpectrogram(
558 sample_rate=self._sample_rate,
559 win_length=self.win_length,
(...)
568 wkwargs={\"periodic\": False},
569 )
File ~/miniconda3/envs/nemo/lib/python3.11/site-packages/torchaudio/transforms/_transforms.py:587, in MelSpectrogram.__init__(self, sample_rate, n_fft, win_length, hop_length, f_min, f_max, pad, n_mels, window_fn, power, normalized, wkwargs, center, pad_mode, onesided, norm, mel_scale)
585 self.n_fft = n_fft
586 self.win_length = win_length if win_length is not None else n_fft
--> 587 self.hop_length = hop_length if hop_length is not None else self.win_length // 2
588 self.pad = pad
589 self.power = power
TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'"
}
```
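For reference, the traceback reduces to a floor-division on `None`: `torchaudio.transforms.MelSpectrogram.__init__` defaults `hop_length` to `win_length // 2`, and `n_window_size=None` propagates a `None` `win_length` into that expression. A minimal stdlib reduction (the function name here is made up):

```python
def default_hop_length(win_length=None, hop_length=None):
    # Mirrors the defaulting logic in MelSpectrogram.__init__:
    # hop_length falls back to win_length // 2.
    return hop_length if hop_length is not None else win_length // 2

try:
    default_hop_length(win_length=None)
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for //: 'NoneType' and 'int'

print(default_hop_length(win_length=400))  # 200
```

So passing concrete `n_window_size`/`n_window_stride` values (or the duration-based parameters, if this class accepts them) should avoid the crash, though that only sidesteps the constructor's handling of `None` rather than fixing it.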
### System information
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241218
[pip3] open_clip_torch==2.29.0
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchdiffeq==0.2.5
[pip3] torchmetrics==1.6.0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 1.24.4 py311h64a7726_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] open-clip-torch 2.29.0 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
### Reproduction instructions
1. Clone the NeMo github repo.
2. Run the code from above.
### Expected behavior
The model should export to ONNX successfully.
| closed | 2024-12-19T16:24:34Z | 2025-01-14T20:51:24Z | https://github.com/onnx/onnx/issues/6589 | [
"bug",
"topic: converters"
] | kabyanil | 1 |
jmcnamara/XlsxWriter | pandas | 826 | Bug: write_rich_text prints misleading warning | ### Current behavior
I've tried to use write_rich_string feature to specify cell format like this:
`worksheet.write_rich_string('A1', bold, 'bold', center)`
However, the function prints
> UserWarning: You must specify more than 2 format/fragments for rich strings. Ignoring input in write_rich_string().
and also does not write anything to the cell. I believe this warning is misleading as I specified 3 format/fragments in my function call.
### Expected behavior
I expect one of the two following results:
- the function writes bold centered text
or
- the warning explains that my input is not acceptable
### Sample code to reproduce
```python
import xlsxwriter
workbook = xlsxwriter.Workbook('rich_strings.xlsx')
worksheet = workbook.add_worksheet()
worksheet.set_column('A:A', 30)
# Set up some formats to use.
bold = workbook.add_format({'bold': True})
center = workbook.add_format({'align': 'center'})
# Write some strings with multiple formats.
worksheet.write_rich_string('A1', bold, 'bold', center)
workbook.close()
```
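The likely explanation (an assumption based on XlsxWriter's documented behaviour, not a reading of its source): a trailing `Format` is interpreted as the optional *cell* format, so only `bold, 'bold'` remain as rich-string fragments, which fails the "more than 2 fragments" check. A stand-in sketch of that argument parsing:

```python
class Format:
    """Stand-in for xlsxwriter.format.Format."""

def split_rich_args(args):
    # A trailing Format is treated as the cell format, not a fragment.
    tokens = list(args)
    cell_format = tokens.pop() if tokens and isinstance(tokens[-1], Format) else None
    return tokens, cell_format

bold, center = Format(), Format()
fragments, cell_fmt = split_rich_args([bold, "bold", center])
print(len(fragments))      # 2 -> triggers the "more than 2" warning
print(cell_fmt is center)  # True
```

For a single string that is both bold and centered, a combined format with a plain `write()` call is probably the intended route: `worksheet.write('A1', 'bold', workbook.add_format({'bold': True, 'align': 'center'}))`. The warning text could then clarify that the trailing format was consumed as the cell format.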
### Environment
```markdown
- XlsxWriter version:3.0.1
- Python version:3.8.3
```
### Any other information
_No response_
### OpenOffice and LibreOffice users
- [X] I have tested the output file with Excel. | closed | 2021-09-08T18:00:49Z | 2021-09-08T18:59:38Z | https://github.com/jmcnamara/XlsxWriter/issues/826 | [
"bug",
"wont_fix"
] | tyshchuk | 3 |
slackapi/python-slack-sdk | asyncio | 1,240 | Update `chat_unfurl` to support `source`/`unfurl_id` parameters | Per the [API documentation](https://api.slack.com/methods/chat.unfurl) the `chat.unfurl` method should support `source` and `unfurl_id` as identifiers, instead of `channel` and `ts`. Currently the SDK method does not accept those parameters and requires `channel` and `ts`.
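A sketch of the either/or identifier handling the SDK method could adopt (a hypothetical helper illustrating the documented parameter pairs, not actual SDK code):

```python
import json

def build_unfurl_params(unfurls, channel=None, ts=None,
                        source=None, unfurl_id=None):
    """Accept either (channel, ts) or (source, unfurl_id), per the API docs."""
    has_channel_pair = channel is not None and ts is not None
    has_source_pair = source is not None and unfurl_id is not None
    if has_channel_pair == has_source_pair:
        raise ValueError("pass exactly one of (channel, ts) or (source, unfurl_id)")
    params = {"unfurls": json.dumps(unfurls)}
    if has_channel_pair:
        params.update(channel=channel, ts=ts)
    else:
        params.update(source=source, unfurl_id=unfurl_id)
    return params

print(sorted(build_unfurl_params({}, source="composer", unfurl_id="Uxx.1")))
# ['source', 'unfurl_id', 'unfurls']
```

Validating the pairs up front keeps the error local to the SDK instead of surfacing as an API-side `invalid_arguments` response.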
### Category
- [x] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [ ] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
| closed | 2022-07-19T00:56:24Z | 2022-07-19T01:52:15Z | https://github.com/slackapi/python-slack-sdk/issues/1240 | [
"bug",
"enhancement",
"web-client",
"Version: 3x"
] | angrychimp | 1 |
mirumee/ariadne-codegen | graphql | 342 | [feature request] multiple .graphql files for queries | Hey! We could really use this feature; our queries are huge, and having them in a single file is getting hard to manage. For now my workaround is a script that reads several files from a given directory and outputs their content into a single .graphql file; it'd be nice for the tool to do this out of the box.
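For anyone needing the same workaround in the meantime, the concatenation step is only a few lines of stdlib Python (file names and layout here are assumptions about a typical project):

```python
from pathlib import Path

def combine_graphql(src_dir: str, out_file: str) -> int:
    """Concatenate every *.graphql file in src_dir into one file.

    Returns the number of files combined; sorting keeps the
    output deterministic between runs.
    """
    parts = [p.read_text() for p in sorted(Path(src_dir).glob("*.graphql"))]
    Path(out_file).write_text("\n\n".join(parts) + "\n")
    return len(parts)
```

Run it before the codegen step and point the configured queries path at the combined file.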
Cheers, Matt | closed | 2025-01-24T05:12:34Z | 2025-01-24T16:24:29Z | https://github.com/mirumee/ariadne-codegen/issues/342 | [] | wiredmatt | 2 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 35 | N + 1 round trip problem | Does this library handle nested models (joins) in a single query from the server to the DB?
For example
```
user {
id
posts {
id
}
}
``` | open | 2017-02-16T16:34:58Z | 2022-09-02T16:07:46Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/35 | [] | itaied246 | 25 |