# Desktop ComfyUI (Windows) executable won't accept parameters on its command line

comfyanonymous/ComfyUI, issue #7087

### Your question
**Situation**
I'm new to ComfyUI and installed the _Desktop_ ComfyUI version on Windows 11.
Since my NVIDIA GPU needs the '--disable-cuda-malloc' argument, I have to add it to the program's startup. According to [README.md](https://github.com/hiddenswitch/ComfyUI/blob/master/README.md) the argument can be passed on the command line, but that does not seem to work.
1. First, I changed _ComfyUI.exe_ to _ComfyUI.exe --disable-cuda-malloc_ in the shortcut on my desktop (see the image included).
2. Second, I tried to run the program from the DOS prompt, again using the _ComfyUI.exe --disable-cuda-malloc_ command line, also without result. The program starts, but runs without the '--disable-cuda-malloc' argument.
What I _could_ see in the DOS prompt, however, is an extra log; maybe someone here can analyze it?
I added the full _DOS-prompt log_ below.
Note that running _ComfyUI.exe -h_ or _--help_ also _only_ starts the program and does _not_ display the [program syntax](https://github.com/hiddenswitch/ComfyUI/blob/master/README.md).
With the _portable_ version I have no problem running and creating images (as commented in #6843).
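For context, the Python backend itself does define this flag: if main.py ever received it, standard argparse handling would pick it up (and `-h` would print usage). A minimal sketch of that parsing pattern, illustrative only — the real parser lives in comfy/cli_args.py with many more options:

```python
import argparse

# Minimal sketch of how a flag like --disable-cuda-malloc is parsed on the
# Python side. This mirrors the general argparse pattern of ComfyUI's
# comfy/cli_args.py; it is NOT the actual parser.
parser = argparse.ArgumentParser(description="flag-parsing sketch")
parser.add_argument(
    "--disable-cuda-malloc",
    action="store_true",
    help="do not use the cudaMallocAsync allocator backend",
)

# If the backend process actually receives the flag, argparse records it:
args_with_flag = parser.parse_args(["--disable-cuda-malloc"])
print(args_with_flag.disable_cuda_malloc)  # True

# If a launcher swallows the flag, the backend sees an empty argv:
args_without_flag = parser.parse_args([])
print(args_without_flag.disable_cuda_malloc)  # False
```

This is why the symptom points at the Electron wrapper rather than the backend: the backend never sees the flag at all, so argparse has nothing to record.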
**Programs and hardware**
- ComfyUI version: 0.3.18
- ComfyUI_frontend version: 1.10.17
- Python version: 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
- Embedded Python: false
- Pytorch version: 2.6.0+cu126
- RAM total: 35.85 GB
- NVIDIA GeForce MX110
NVIDIA App-version: 11.0.2.337
Driver version: Game Ready - 572.60 - Thu Feb 27, 2025
Total graphics memory: 20,405 MB ≈ 20 GB
Dedicated graphics memory: 2,048 MB ≈ 2 GB GDDR5
- (Dutch) Windows 11 Home, version: 24H2 (Build 26100.3323).
**Related articles:**
- At first I thought this issue was described in #6843, but that is not the case: that issue is about the _Portable_ ComfyUI version, not the _Desktop_ version I'm using.
- Adding the '--disable-cuda-malloc' argument to the _Portable_ version's 'run_nvidia_gpu.bat' is described in #1845.
## Command in the DOS-prompt
C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\ComfyUI.exe --disable-cuda-malloc
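One way to check whether the flag actually helps (a diagnostic sketch, not an official fix): the launcher log below shows the exact "Running command" line the desktop app uses to start its Python backend, so that command can be reproduced by hand with `--disable-cuda-malloc` appended, guaranteeing the backend receives it. The paths here are the ones from this machine's log and will differ per install:

```python
import subprocess  # used only if you uncomment the launch line below

# Rebuild the backend launch command from the desktop app's "Running command"
# log line, with the missing flag appended. Paths are specific to this
# machine (hypothetical for any other install).
cmd = [
    r"D:\ComfyUI\.venv\Scripts\python.exe",
    r"C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\main.py",
    "--base-directory", r"D:\ComfyUI",
    "--listen", "127.0.0.1",
    "--port", "8000",
    "--disable-cuda-malloc",  # the argument the Electron wrapper drops
]
print(" ".join(cmd))
# subprocess.run(cmd)  # uncomment to actually start the backend with the flag
```

Note that `%UserName%` is written here exactly as it appears in the log; cmd.exe would expand it, but Python will not, so it has to be replaced with the real user name before running.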
## DOS-prompt-log
15:16:49.173 > Starting app v0.4.26
15:16:49.180 > Initializing Sentry
15:16:49.414 > App ready
15:16:49.419 > Queueing event desktop:app_ready with properties null
15:16:49.422 > Getting config: windowStyle
15:16:49.423 > Getting config: installState
15:16:51.766 > Getting config: installState
15:16:51.766 > Getting config: basePath
15:16:51.769 > Getting config: selectedDevice
15:16:51.769 > Using uv at C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\uv\win\uv.exe
15:16:51.770 > Install state: installed
15:16:51.771 > Validating installation. Recorded state: [installed]
15:16:51.771 > Getting config: basePath
15:16:51.773 > Running direct process command: pip install --dry-run -r C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\requirements.txt
15:16:51.774 > Running command: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\uv\win\uv.exe pip install --dry-run -r C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\requirements.txt in D:\ComfyUI
Audited 21 packages in 16ms
Would make no changes
15:16:52.089 > Running direct process command: pip install --dry-run -r C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager\requirements.txt
15:16:52.090 > Running command: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\uv\win\uv.exe pip install --dry-run -r C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager\requirements.txt in D:\ComfyUI
Audited 11 packages in 10ms
Would make no changes
15:16:52.444 > Validation result: isValid:true, state:installed {
inProgress: false,
installState: 'installed',
basePath: 'OK',
venvDirectory: 'OK',
pythonInterpreter: 'OK',
uv: 'OK',
pythonPackages: 'OK',
git: 'OK',
vcRedist: 'OK'
}
15:16:52.447 > Setting up GPU context
15:16:53.980 > Getting config: versionConsentedMetrics
15:16:53.981 > Tracking desktop:app_ready with properties {
distinct_id: '83b30f99-60f5-4479-b3a4-97f95d66b694',
time: 1741184209419,
'$os': 'win32'
}
15:16:54.007 > Initializing todesktop
15:16:54.014 > Server start
15:16:54.020 > Tracking comfyui:server_start_start with properties {
distinct_id: '83b30f99-60f5-4479-b3a4-97f95d66b694',
time: 1741184214020,
'$os': 'win32'
}
15:16:54.036 > Running command: D:\ComfyUI\.venv\Scripts\python.exe C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\main.py --user-directory D:\ComfyUI\user --input-directory D:\ComfyUI\input --output-directory D:\ComfyUI\output --front-end-root C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app --base-directory D:\ComfyUI --extra-model-paths-config C:\Users\%UserName%\AppData\Roaming\ComfyUI\extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000 in D:\ComfyUI
15:16:54.074 > Received renderer-ready message!
15:16:54.075 > Sending queued message {
channel: 'validation-update',
data: {
inProgress: false,
installState: 'installed',
basePath: 'OK',
venvDirectory: 'OK',
pythonInterpreter: 'OK',
uv: 'OK',
pythonPackages: 'OK',
git: 'OK',
vcRedist: 'OK'
}
}
15:16:54.084 > Sending queued message { channel: 'loading-progress', data: { status: 'starting-server' } }
Adding extra search path custom_nodes D:\ComfyUI\custom_nodes
Adding extra search path download_model_base D:\ComfyUI\models
Adding extra search path custom_nodes C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes
Setting output directory to: D:\ComfyUI\output
Setting input directory to: D:\ComfyUI\input
Setting user directory to: D:\ComfyUI\user
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-03-05 15:16:59.183
** Platform: Windows
** Python version: 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
** Python executable: D:\ComfyUI\.venv\Scripts\python.exe
** ComfyUI Path: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI
** ComfyUI Base Folder Path: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI
** User directory: D:\ComfyUI\user
** ComfyUI-Manager config path:
D:\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: D:\ComfyUI\user\comfyui.log
Prestartup times for custom nodes:
8.4 seconds: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager
Checkpoint files will always be loaded safely.
Total VRAM 2048 MB, total RAM 36715 MB
pytorch version: 2.6.0+cu126
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce MX110 : cudaMallocAsync
Using pytorch attention
ComfyUI version: 0.3.18
[Prompt Server] web root: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app
### Loading: ComfyUI-Manager (V3.27)
[ComfyUI-Manager] network_mode: public
### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)
Import times for custom nodes:
0.0 seconds: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\websocket_image_save.py
0.0 seconds: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager
WARNING: this card most likely does not support cuda-malloc, if you get "CUDA error" please run ComfyUI with: --disable-cuda-malloc
Starting server
To see the GUI go to: http://127.0.0.1:8000
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
15:17:10.203 > Python server is ready
15:17:10.204 > Tracking comfyui:server_start_end with properties {
distinct_id: '83b30f99-60f5-4479-b3a4-97f95d66b694',
time: 1741184230204,
'$os': 'win32'
}
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
FETCH ComfyRegistry Data: 5/36
FETCH ComfyRegistry Data: 10/36
got prompt
model weight dtype torch.float32, manual cast: None
model_type EPS
FETCH ComfyRegistry Data: 15/36
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
Requested to load SD1ClipModel
loaded completely 9.5367431640625e+25 235.84423828125 True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
!!! Exception during processing !!! CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Traceback (most recent call last):
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 1542, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 1509, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sample.py", line 43, in sample
sampler = comfy.samplers.KSampler(model, steps=steps, device=model.load_device, sampler=sampler_name, scheduler=scheduler, denoise=denoise, model_options=model.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\samplers.py", line 1060, in __init__
self.set_steps(steps, denoise)
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\samplers.py", line 1081, in set_steps
self.sigmas = self.calculate_sigmas(steps).to(self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Prompt executed in 3.34 seconds
15:17:23.140 > Tracking execution with properties {
distinct_id: '83b30f99-60f5-4479-b3a4-97f95d66b694',
time: 1741184243140,
'$os': 'win32',
status: 'failed'
}
FETCH ComfyRegistry Data: 20/36
15:17:24.012 > @todesktop/runtime: alwaysResolve: Promise timed out after 30000ms
15:17:24.078 > @todesktop/runtime: AutoUpdater: Setting up UpdaterAgent
15:17:24.078 > @todesktop/runtime: AutoUpdater: UpdaterAgent: UpdaterAgent: _uninstallSquirrelWindowsAppIfItExists()
15:17:24.079 > @todesktop/runtime: AutoUpdater: UpdaterAgent: UpdaterAgent: Does Squirrel.Windows uninstall marker exist? false C:\Users\%UserName%\AppData\Local\@comfyorg\comfyui-electron\.shouldUninstall
15:17:24.079 > @todesktop/runtime: AutoUpdater: checking for update on interval
15:17:24.079 > @todesktop/runtime: AutoUpdater: checking for update on launch
15:17:24.079 > @todesktop/runtime: AutoUpdater: _check called {
source: 'auto-check-on-launch',
pendingCheckSources: [ 'auto-check-on-launch' ]
}
15:17:24.080 > @todesktop/runtime: AutoUpdater: _actuallyPerformCheck called
15:17:24.080 > @todesktop/runtime: Checking for update
15:17:24.081 > @todesktop/runtime: AutoUpdater: checking-for-update
15:17:25.226 > @todesktop/runtime: Update for version 0.4.26 is not available (latest version: 0.4.26, downgrade is allowed).
15:17:25.227 > @todesktop/runtime: AutoUpdater: update-not-available {
version: '0.4.26',
files: [
{
url: 'ComfyUI Setup 0.4.26 - Build 2503047uzec04mv-x64.exe',
sha512: '9KyjTjvdSPKaX4xtHjcnUSktQoKES7kqKBZ2I4meCVRdDvStKeT6pLL3N33flNbpSPbJJNiwq1vhR6Ii0AvKpg==',
size: 149735432
}
],
path: 'ComfyUI Setup 0.4.26 - Build 2503047uzec04mv-x64.exe',
sha512: '9KyjTjvdSPKaX4xtHjcnUSktQoKES7kqKBZ2I4meCVRdDvStKeT6pLL3N33flNbpSPbJJNiwq1vhR6Ii0AvKpg==',
releaseDate: '2025-03-04T01:04:14.072Z'
}
15:17:25.228 > @todesktop/runtime: AutoUpdater: UpdaterAgent: UpdaterAgent: Analysing autoUpdater.checkForUpdates result {
currentVersion: '0.4.26',
latestVersion: '0.4.26',
updateInfo: {
version: '0.4.26',
files: [ [Object] ],
path: 'ComfyUI Setup 0.4.26 - Build 2503047uzec04mv-x64.exe',
sha512: '9KyjTjvdSPKaX4xtHjcnUSktQoKES7kqKBZ2I4meCVRdDvStKeT6pLL3N33flNbpSPbJJNiwq1vhR6Ii0AvKpg==',
releaseDate: '2025-03-04T01:04:14.072Z'
}
}
15:17:25.229 > @todesktop/runtime: AutoUpdater: No update available
FETCH ComfyRegistry Data: 25/36
FETCH ComfyRegistry Data: 30/36
FETCH ComfyRegistry Data: 35/36
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
nightly_channel:
https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[DONE]
[ComfyUI-Manager] All startup tasks have been completed.
15:19:11.855 > App window closed.
15:19:11.882 > Quitting ComfyUI because window all closed
15:19:11.885 > Before-quit: Killing Python server
15:19:11.887 > Killing ComfyUI python server.
15:19:11.888 > Tracking desktop:app_quit with properties {
distinct_id: '83b30f99-60f5-4479-b3a4-97f95d66b694',
time: 1741184351888,
'$os': 'win32',
reason: {},
exitCode: 0
}
15:19:11.895 > Python process exited with code null and signal SIGTERM
### Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node ID:** 3
- **Node Type:** KSampler
- **Exception Type:** RuntimeError
- **Exception Message:** CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
## Stack Trace
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 1542, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 1509, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sample.py", line 43, in sample
sampler = comfy.samplers.KSampler(model, steps=steps, device=model.load_device, sampler=sampler_name, scheduler=scheduler, denoise=denoise, model_options=model.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\samplers.py", line 1060, in __init__
self.set_steps(steps, denoise)
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\samplers.py", line 1081, in set_steps
self.sigmas = self.calculate_sigmas(steps).to(self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
## System Information
- **ComfyUI Version:** 0.3.18
- **Arguments:** C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\main.py --user-directory D:\ComfyUI\user --input-directory D:\ComfyUI\input --output-directory D:\ComfyUI\output --front-end-root C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app --base-directory D:\ComfyUI --extra-model-paths-config C:\Users\%UserName%\AppData\Roaming\ComfyUI\extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000
- **OS:** nt
- **Python Version:** 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.6.0+cu126
## Devices
- **Name:** cuda:0 NVIDIA GeForce MX110 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 2147352576
- **VRAM Free:** 1768056423
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
2025-03-05T15:16:54.776054 - Adding extra search path custom_nodes D:\ComfyUI\custom_nodes
2025-03-05T15:16:54.776054 - Adding extra search path download_model_base D:\ComfyUI\models
2025-03-05T15:16:54.776054 - Adding extra search path custom_nodes C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes
2025-03-05T15:16:54.776054 - Setting output directory to: D:\ComfyUI\output
2025-03-05T15:16:54.776054 - Setting input directory to: D:\ComfyUI\input
2025-03-05T15:16:54.776054 - Setting user directory to: D:\ComfyUI\user
2025-03-05T15:16:55.494915 - [START] Security scan
2025-03-05T15:16:59.059004 - [DONE] Security scan
2025-03-05T15:16:59.183734 - ## ComfyUI-Manager: installing dependencies done.
2025-03-05T15:16:59.183734 - ** ComfyUI startup time: 2025-03-05 15:16:59.183
2025-03-05T15:16:59.183734 - ** Platform: Windows
2025-03-05T15:16:59.183734 - ** Python version: 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
2025-03-05T15:16:59.184755 - ** Python executable: D:\ComfyUI\.venv\Scripts\python.exe
2025-03-05T15:16:59.184755 - ** ComfyUI Path: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI
2025-03-05T15:16:59.184755 - ** ComfyUI Base Folder Path: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI
2025-03-05T15:16:59.184755 - ** User directory: D:\ComfyUI\user
2025-03-05T15:16:59.481104 - ** ComfyUI-Manager config path: D:\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-03-05T15:16:59.482234 - ** Log path: D:\ComfyUI\user\comfyui.log
2025-03-05T15:17:03.130976 -
Prestartup times for custom nodes:
2025-03-05T15:17:03.130976 - 8.4 seconds: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager
2025-03-05T15:17:03.130976 -
2025-03-05T15:17:05.482083 - Checkpoint files will always be loaded safely.
2025-03-05T15:17:05.914687 - Total VRAM 2048 MB, total RAM 36715 MB
2025-03-05T15:17:05.914687 - pytorch version: 2.6.0+cu126
2025-03-05T15:17:05.915720 - Set vram state to: NORMAL_VRAM
2025-03-05T15:17:05.915720 - Device: cuda:0 NVIDIA GeForce MX110 : cudaMallocAsync
2025-03-05T15:17:07.657708 - Using pytorch attention
2025-03-05T15:17:09.445715 - ComfyUI version: 0.3.18
2025-03-05T15:17:09.485574 - [Prompt Server] web root: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\web_custom_versions\desktop_app
2025-03-05T15:17:10.042358 - ### Loading: ComfyUI-Manager (V3.27)
2025-03-05T15:17:10.043364 - [ComfyUI-Manager] network_mode: public
2025-03-05T15:17:10.043364 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)
2025-03-05T15:17:10.051480 -
Import times for custom nodes:
2025-03-05T15:17:10.052636 - 0.0 seconds: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\websocket_image_save.py
2025-03-05T15:17:10.052636 - 0.0 seconds: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\custom_nodes\ComfyUI-Manager
2025-03-05T15:17:10.052636 -
2025-03-05T15:17:10.052636 -
WARNING: this card most likely does not support cuda-malloc, if you get "CUDA error" please run ComfyUI with: --disable-cuda-malloc
2025-03-05T15:17:10.073001 - Starting server
2025-03-05T15:17:10.073001 - To see the GUI go to: http://127.0.0.1:8000
2025-03-05T15:17:10.096336 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-03-05T15:17:10.108397 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-03-05T15:17:10.159730 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-03-05T15:17:10.223342 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-03-05T15:17:10.411650 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-03-05T15:17:13.500866 - FETCH ComfyRegistry Data: 5/362025-03-05T15:17:13.500866 -
2025-03-05T15:17:17.085932 - FETCH ComfyRegistry Data: 10/362025-03-05T15:17:17.085932 -
2025-03-05T15:17:19.520050 - got prompt
2025-03-05T15:17:19.891812 - model weight dtype torch.float32, manual cast: None
2025-03-05T15:17:19.894805 - model_type EPS
2025-03-05T15:17:20.408691 - FETCH ComfyRegistry Data: 15/362025-03-05T15:17:20.408691 -
2025-03-05T15:17:21.144905 - Using pytorch attention in VAE
2025-03-05T15:17:21.149928 - Using pytorch attention in VAE
2025-03-05T15:17:21.321412 - VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
2025-03-05T15:17:21.477885 - Requested to load SD1ClipModel
2025-03-05T15:17:21.488717 - loaded completely 9.5367431640625e+25 235.84423828125 True
2025-03-05T15:17:21.491594 - CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
2025-03-05T15:17:22.856857 - !!! Exception during processing !!! CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-03-05T15:17:22.861119 - Traceback (most recent call last):
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 1542, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\nodes.py", line 1509, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\sample.py", line 43, in sample
sampler = comfy.samplers.KSampler(model, steps=steps, device=model.load_device, sampler=sampler_name, scheduler=scheduler, denoise=denoise, model_options=model.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\samplers.py", line 1060, in __init__
self.set_steps(steps, denoise)
File "C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\resources\ComfyUI\comfy\samplers.py", line 1081, in set_steps
self.sigmas = self.calculate_sigmas(steps).to(self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-03-05T15:17:22.864630 - Prompt executed in 3.34 seconds
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
{"last_node_id":9,"last_link_id":9,"nodes":[{"id":9,"type":"SaveImage","pos":[1538,51],"size":[210,58],"flags":{"collapsed":false},"order":6,"mode":0,"inputs":[{"name":"images","localized_name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.18"},"widgets_values":["ComfyUI"]},{"id":4,"type":"CheckpointLoaderSimple","pos":[16,49],"size":[315,98],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","localized_name":"MODEL","type":"MODEL","links":[1],"slot_index":0},{"name":"CLIP","localized_name":"CLIP","type":"CLIP","links":[3,5],"slot_index":1},{"name":"VAE","localized_name":"VAE","type":"VAE","links":[8],"slot_index":2}],"properties":{"cnr_id":"comfy-core","ver":"0.3.18","Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["v1-5-pruned-emaonly-fp16.safetensors"]},{"id":5,"type":"EmptyLatentImage","pos":[502,741],"size":[315,106],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","localized_name":"LATENT","type":"LATENT","links":[2],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.18","Node name for S&R":"EmptyLatentImage"},"widgets_values":[512,512,1]},{"id":8,"type":"VAEDecode","pos":[1273,53],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"name":"samples","localized_name":"samples","type":"LATENT","link":7},{"name":"vae","localized_name":"vae","type":"VAE","link":8}],"outputs":[{"name":"IMAGE","localized_name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.18","Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":6,"type":"CLIPTextEncode","pos":[395,237],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":2,"mode":0,"inputs":[{"name":"clip","localized_name":"clip","type":"CLIP","link":3}],"outputs":[{"name":"CONDITIONING","localized_name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.18","Node name for 
S&R":"CLIPTextEncode"},"widgets_values":["beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"]},{"id":7,"type":"CLIPTextEncode","pos":[382,476],"size":[425.27801513671875,180.6060791015625],"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","localized_name":"clip","type":"CLIP","link":5}],"outputs":[{"name":"CONDITIONING","localized_name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.18","Node name for S&R":"CLIPTextEncode"},"widgets_values":["text, watermark"]},{"id":3,"type":"KSampler","pos":[925,190],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","localized_name":"model","type":"MODEL","link":1},{"name":"positive","localized_name":"positive","type":"CONDITIONING","link":4},{"name":"negative","localized_name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","localized_name":"latent_image","type":"LATENT","link":2}],"outputs":[{"name":"LATENT","localized_name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.18","Node name for S&R":"KSampler"},"widgets_values":[119113864191836,"randomize",20,8,"euler","normal",1]}],"links":[[1,4,0,3,0,"MODEL"],[2,5,0,3,3,"LATENT"],[3,4,1,6,0,"CLIP"],[4,6,0,3,1,"CONDITIONING"],[5,4,1,7,0,"CLIP"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[8,4,2,8,1,"VAE"],[9,8,0,9,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.1,"offset":[-12.520661157024943,-15.446280991735643]},"node_versions":{"comfy-core":"0.3.18"}},"version":0.4}
## Additional Context
Command used at the Windows DOS-prompt: C:\Users\%UserName%\AppData\Local\Programs\@comfyorgcomfyui-electron\ComfyUI.exe --disable-cuda-malloc
```
### Other
 | closed | 2025-03-05T15:32:17Z | 2025-03-05T18:08:12Z | https://github.com/comfyanonymous/ComfyUI/issues/7087 | [
"User Support"
] | C-Denninger | 6 |
sinaptik-ai/pandas-ai | data-science | 1,170 | cannot pickle '_thread.RLock' object when save Agent Object to Redis | ### System Info
I use the latest version of PandasAI.
OS: WINDOWS
### 🐛 Describe the bug
How can I save an Agent object to Redis successfully?
My goal is to serve multiple users, each with their own context. The idea is that when a user sends a conversation_id, the server loads the matching Agent object from Redis so that I can call its chat() function. For example:
```python
df = Agent([df], config={"llm": llm})
df.chat('Which are the 5 happiest countries?')

def save_agent_to_redis(agent, conversation_id):
    """
    Serialize and save the Agent object to Redis.
    """
    redis_conn = get_redis_connection()
    agent_bytes = pickle.dumps(agent)
    redis_conn.set(conversation_id, agent_bytes)
```
But when I try to save the agent to Redis, the error I get is `TypeError: cannot pickle '_thread.RLock' object`. This error occurs when you try to pickle (serialize) an object that contains a thread lock or other non-picklable objects. Pickling is the process of converting a Python object into a byte stream that can be saved to a file or sent over a network (in my case, saved to Redis).
I have tried some libraries specialized for serialization (pickle, dill) but without success.
The Agent class is also quite special: it automatically creates a new conversation_id every time an Agent is created, and we do not control that id. So if we serialize only some attributes, ignore the rest, and then recreate the object, it ends up with a different conversation_id than before.
Can anyone give me suggestions, please?
| closed | 2024-05-21T08:13:27Z | 2024-09-10T16:05:29Z | https://github.com/sinaptik-ai/pandas-ai/issues/1170 | [
"bug"
] | dobahoang | 6 |
microsoft/nni | tensorflow | 5,536 | If model has two inputs, how to set dummy_input of ModelSpeedup | I am trying to prune Siam-U-Net, which needs two inputs because of its siamese structure. Can nni compress such a structure? If so, how should I set the `dummy_input` parameter of the `ModelSpeedup` function?
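As far as I can tell from nni's docs, `ModelSpeedup` accepts `dummy_input` as a tuple of tensors for multi-input models (one tensor per `forward` argument). A minimal sketch with a hypothetical two-input stub standing in for Siam-U-Net; the `ModelSpeedup` call is left commented because it also needs a real masks file from the pruner:

```python
import torch
import torch.nn as nn

class SiamStub(nn.Module):
    """Hypothetical two-input stand-in for a siamese network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 4)   # shared weights for both branches
        self.head = nn.Linear(8, 2)

    def forward(self, a, b):
        fa, fb = self.encoder(a), self.encoder(b)
        return self.head(torch.cat([fa, fb], dim=-1))

model = SiamStub()
# One dummy tensor per forward() argument, packed into a tuple:
dummy_input = (torch.randn(1, 8), torch.randn(1, 8))
out = model(*dummy_input)  # sanity check: the tuple matches forward's signature

# Hypothetical speedup call (assumes nni 2.x; "mask.pth" comes from the pruner):
# from nni.compression.pytorch import ModelSpeedup
# ModelSpeedup(model, dummy_input=dummy_input, masks_file="mask.pth").speedup_model()
```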
Environment:
- NNI version:2.10
- Training service (local|remote|pai|aml|etc):local
- Client OS:linux
- Server OS (for remote mode only):
- Python version:3.8
- PyTorch/TensorFlow version:pytorch
- Is conda/virtualenv/venv used?:conda
- Is running in Docker?:no | open | 2023-04-29T15:55:59Z | 2023-06-14T01:39:09Z | https://github.com/microsoft/nni/issues/5536 | [] | Sugar929 | 8 |
streamlit/streamlit | python | 10,128 | Pre set selections for `st.dataframe` | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Is there any way to pre-set the selection on a dataframe or data_editor?
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | open | 2025-01-08T10:19:34Z | 2025-02-12T04:20:14Z | https://github.com/streamlit/streamlit/issues/10128 | [
"type:enhancement",
"feature:st.dataframe"
] | StarDustEins | 6 |
StructuredLabs/preswald | data-visualization | 144 | Logging Inconsistencies | Logging practices vary across files:

This improvement will make the codebase more consistent and maintainable with minimal risk of introducing new issues. It also sets a standard for future logging implementations. | open | 2025-03-02T05:50:26Z | 2025-03-02T05:50:26Z | https://github.com/StructuredLabs/preswald/issues/144 | [] | aaryan182 | 0 |
alteryx/featuretools | data-science | 1,767 | Add Docker install instructions | - We should add installation instructions for Docker (to our install page):
```dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y build-essential python3-pip python3-dev
RUN pip -q install pip --upgrade
RUN pip install featuretools
``` | closed | 2021-11-01T15:37:14Z | 2021-11-17T17:15:28Z | https://github.com/alteryx/featuretools/issues/1767 | [] | gsheni | 2 |
bauerji/flask-pydantic | pydantic | 16 | Support union types in request body | Is there a way to have a request which can dynamically accept two (or more) different models?
e.g.
```
@validate(body=Union[ModelA, ModelB])
def post():
```
Is it possible for the deserialisation to then be dynamic, so that the function can check `request.body_params` using `isinstance`? | closed | 2020-09-11T15:47:54Z | 2020-10-01T14:55:32Z | https://github.com/bauerji/flask-pydantic/issues/16 | [] | eboddington | 1 |
thtrieu/darkflow | tensorflow | 487 | Is it a bug in yolo/train.py? | Not every grid cell will contain an object, so a cell may predict no objects. But in [yolo/train.py](https://github.com/thtrieu/darkflow/blob/master/darkflow/net/yolo/train.py#L66), `tf.reduce_max(iou, [2], True)` returns the max value over the two bboxes even when the cell contains no object, so `best_box = tf.equal(iou, tf.reduce_max(iou, [2], True))` is true for at least one of the two bboxes of every cell.
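The claim is easy to check with plain numpy: equality with the row max marks at least one True per row, even for an all-zero row (the axis/keepdims arguments mirror `tf.reduce_max(iou, [2], True)`):

```python
import numpy as np

# Rows = grid cells, columns = the two predicted bboxes.
# Cell 0 has no object (all-zero IOU); cell 1 overlaps box 1 best.
iou = np.array([[0.0, 0.0],
                [0.3, 0.7]])

best_box = iou == iou.max(axis=1, keepdims=True)
# best_box[0] is [True, True]: the empty cell still "selects" best boxes.
# best_box[1] is [False, True].
```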
Do you agree with me ? | open | 2017-12-24T16:00:10Z | 2017-12-30T06:01:01Z | https://github.com/thtrieu/darkflow/issues/487 | [] | gauss-clb | 2 |
ResidentMario/missingno | data-visualization | 28 | Saving output as .bmp | This is most likely a very Python newbie question, but unfortunately I haven't managed to get it working: how does one save the output to an image file? | closed | 2017-05-05T10:10:55Z | 2017-06-26T19:44:34Z | https://github.com/ResidentMario/missingno/issues/28 | [] | Arty2 | 1 |
jonaswinkler/paperless-ng | django | 441 | Web-UI Login not working after installation | Hi there,
I have followed the setup guide "Install Paperless from Docker Hub" and installed the PostgreSQL / Tika version on an Archlinux x86 server.
After creating the superuser everything seems to run normally.
But I cannot log in to the Web-UI via HTTP - it gets stuck on "Loading".
This is the error I see in the Docker container logs:
```172.18.0.1 - - [25/Jan/2021:10:08:03 +0000] "GET / HTTP/1.1" 200 769 "http://192.168.1.2:8000/accounts/login/?next=/" "Mozilla/5.0 (Linux; Android 11; Pixel 5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.93 Mobile Safari/537.36"
[2021-01-25 10:08:04 +0000] [40] [ERROR] Socket error processing request.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/sync.py", line 134, in handle
self.handle_request(listener, req, client, addr)
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/sync.py", line 190, in handle_request
util.reraise(*sys.exc_info())
File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 625, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/sync.py", line 178, in handle_request
resp.write_file(respiter)
File "/usr/local/lib/python3.7/site-packages/gunicorn/http/wsgi.py", line 396, in write_file
if not self.sendfile(respiter):
File "/usr/local/lib/python3.7/site-packages/gunicorn/http/wsgi.py", line 386, in sendfile
sent += os.sendfile(sockno, fileno, offset + sent, count)
OSError: [Errno 22] Invalid argument
```
I tried it with several browsers and clients (Desktop/mobile) without luck.
I have also tried the Paperless Android app - it runs fine! I can also add files via consumer directory.
Any idea what's wrong? | closed | 2021-01-25T10:13:22Z | 2021-01-26T18:54:03Z | https://github.com/jonaswinkler/paperless-ng/issues/441 | [
"bug",
"documentation"
] | igno2k | 7 |
aidlearning/AidLearning-FrameWork | jupyter | 217 | Couldn't install remote-development extension for vscode | I can't find the Remote Development extension in VS Code.
Some other similar extensions report: "xxx" is not available in OpenVSCode Server for the web. | closed | 2022-10-31T05:31:09Z | 2024-01-17T07:33:27Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/217 | [] | donggoing | 1 |
jowilf/starlette-admin | sqlalchemy | 582 | Bug: Filter parameters are not applied after switching to another view | **Describe the bug**
After returning to a page with configured Filter parameters, records are not filtered until the parameters are changed again.
**To Reproduce**
1. Go to the demo site, to the "Blog Posts" view: https://starlette-admin-demo.jowilf.com/admin/sqla/post/list
2. Select Filter, "Title, Contains, test"

Result: Filter is applied, records are filtered. - **correct**.
3. Go to another view, e.g. "Comments", get back to "Blog Posts".
Result: Filter is set, but records are not filtered - **wrong**.

**Environment (please complete the following information):**
- starlette_admin~=0.14.1
- ORM/ODMs: [SQLAlchemy, MongoEngine]
also can be reproduced on the official demo site. | open | 2024-09-29T21:37:26Z | 2024-09-30T18:49:35Z | https://github.com/jowilf/starlette-admin/issues/582 | [
"bug"
] | evgenybf | 2 |
coqui-ai/TTS | deep-learning | 3,148 | [Bug] XTTS v2.0 finetuning - wrong checkpoint links | ### Describe the bug
Hi there,
I believe that in the new XTTS v2.0 fine tuning recipe, there needs to be a change to the following lines:
```
TOKENIZER_FILE_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v2.0/vocab.json"
XTTS_CHECKPOINT_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v2.0/model.pth"
```
It's impossible to reach these URLs.
Thanks.
### To Reproduce
```
python recipes/ljspeech/xtts_v2/train_gpt_xtts.py
```
### Expected behavior
Training
### Logs
```shell
/home/raph/repos/TTS/TTS/tts/layers/xtts/trainer/dataset.py:10: UserWarning: Torchaudio's I/O functions now support par-call bakcend dispatch. Importing backend implementation directly is no longer guaranteed to work. Please use `backend` keyword with load/save/info function, instead of calling the udnerlying implementation directly.
from torchaudio.backend.soundfile_backend import load as torchaudio_soundfile_load
/home/raph/repos/TTS/TTS/tts/layers/xtts/trainer/dataset.py:11: UserWarning: Torchaudio's I/O functions now support par-call bakcend dispatch. Importing backend implementation directly is no longer guaranteed to work. Please use `backend` keyword with load/save/info function, instead of calling the udnerlying implementation directly.
from torchaudio.backend.sox_io_backend import load as torchaudio_sox_load
/home/raph/miniconda3/envs/TTS/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
Traceback (most recent call last):
File "/home/raph/repos/TTS/recipes/ljspeech/xtts_v2/train_gpt_xtts.py", line 232, in <module>
main()
File "/home/raph/repos/TTS/recipes/ljspeech/xtts_v2/train_gpt_xtts.py", line 204, in main
model = GPTTrainer.init_from_config(config)
File "/home/raph/repos/TTS/TTS/tts/layers/xtts/trainer/gpt_trainer.py", line 500, in init_from_config
return GPTTrainer(config)
File "/home/raph/repos/TTS/TTS/tts/layers/xtts/trainer/gpt_trainer.py", line 79, in __init__
self.xtts.tokenizer = VoiceBpeTokenizer(self.args.tokenizer_file)
File "/home/raph/repos/TTS/TTS/tts/layers/xtts/tokenizer.py", line 540, in __init__
self.tokenizer = Tokenizer.from_file(vocab_file)
Exception: expected value at line 1 column 1
~/repos/TTS main !1 ?3 vim recipes/ljspeech
```
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cu121",
"TTS": "0.20.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.13",
"version": "#98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2023-11-06T17:06:50Z | 2023-12-12T06:56:07Z | https://github.com/coqui-ai/TTS/issues/3148 | [
"bug"
] | rlenain | 4 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 242 | Is it suitable for training on Google's English dataset? | I'm not sure whether this project can be trained on Google's English datasets. If it is not a good fit, could you recommend some projects that are? Many thanks. | open | 2021-05-17T04:15:46Z | 2021-05-17T11:54:38Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/242 | [] | minicarbon | 1 |
mwaskom/seaborn | data-visualization | 3,701 | Feature Request: Continuous axes heat map | Feature Request:
Continuous axes heat map.
This would function similarly to the existing heatmap feature but allow for continuous axes rather than purely categorical.
On the backend, it would behave more similarly to a 2d histplot, but instead of performing a count of data the function would accept an array_like containing values or perhaps keywords corresponding to aggregators (e.g. 'min', 'max', etc.). A special case would be 'count' which would behave like a regular histogram.
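In the meantime, the requested behavior can be approximated with plain numpy and then drawn with matplotlib's `pcolormesh`. A minimal sketch, where the helper name and the `mean` aggregator are illustrative choices, not seaborn API:

```python
import numpy as np

def binned_aggregate(x, y, values, bins=10):
    """Aggregate `values` onto a 2-D grid over continuous x/y axes.

    Returns (grid, x_edges, y_edges); bins with no data are NaN.
    """
    sums, x_edges, y_edges = np.histogram2d(x, y, bins=bins, weights=values)
    counts, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    with np.errstate(invalid="ignore"):
        grid = sums / counts          # mean per bin; 0/0 -> NaN for empty bins
    return grid, x_edges, y_edges

x = np.array([0.1, 0.2, 0.9])
y = np.array([0.1, 0.1, 0.9])
v = np.array([1.0, 3.0, 5.0])
grid, xe, ye = binned_aggregate(x, y, v, bins=2)
# grid[0, 0] == 2.0 (mean of 1.0 and 3.0), grid[1, 1] == 5.0, other bins NaN.
# plt.pcolormesh(xe, ye, grid.T) would then render it as a heatmap.
```

Swapping `np.histogram2d` weights for other statistics (min, max, count) gives the other aggregators mentioned above.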
Many thanks for your excellent work maintaining an excellent library. | closed | 2024-05-31T03:55:38Z | 2025-01-26T15:39:56Z | https://github.com/mwaskom/seaborn/issues/3701 | [] | HThawley | 1 |
sherlock-project/sherlock | python | 1,653 | Träwelling | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [x] I'm requesting support for a new site
- [x] I've checked for similar site support requests including closed ones
- [x] I've checked that the site I am requesting has not been removed in the past and is not documented in [removed_sites.md](https://github.com/sherlock-project/sherlock/blob/master/removed_sites.md)
- [x] The site I am requesting support for is not a pornographic website
- [x] I'm only requesting support of **one** website (create a separate issue for each site)
## Description
<!--
Provide the url to the website and the name of the website.
If there is anything else you want to mention regarding the site support request include that in this section.
-->
URL:
https://traewelling.de/
| closed | 2022-12-28T21:10:51Z | 2023-02-16T19:50:34Z | https://github.com/sherlock-project/sherlock/issues/1653 | [
"site support request"
] | Phyroks | 0 |
asacristani/fastapi-rocket-boilerplate | pydantic | 9 | Feature Suggestion: Kafka Integration | closed | 2023-10-10T07:18:22Z | 2024-04-04T21:58:15Z | https://github.com/asacristani/fastapi-rocket-boilerplate/issues/9 | [
"enhancement"
] | SamOyeAH | 1 | |
pyeventsourcing/eventsourcing | sqlalchemy | 177 | requests library dependency before 2.20 have a security vulnerability | Hello,
The requests library before 2.20 has a security vulnerability that was fixed in 2.20. We should bump up the library to 2.20 if possible. At the moment it currently has this requirement:
`requests<=2.19.99999` | closed | 2019-07-26T18:21:38Z | 2019-07-26T21:10:23Z | https://github.com/pyeventsourcing/eventsourcing/issues/177 | [] | fearedbliss | 1 |
wkentaro/labelme | computer-vision | 536 | Why does the view go blank while loading images? | Hi, I am using labelme 3.18.0. When I use the keyboard shortcuts D or A to scan through my own dataset images quickly, a blank window appears in the center between one image and the next, which makes it inconvenient to check my annotated data quickly. Can anyone help me deal with this? Thanks. | closed | 2020-01-02T03:25:55Z | 2022-06-25T15:38:56Z | https://github.com/wkentaro/labelme/issues/536 | [
"issue::bug"
] | chegnyanjun | 1 |
pytest-dev/pytest-xdist | pytest | 592 | Should master terminology be replaced with controller? | I saw in the changelog that references to slave were removed in 2.0.0. I also noticed that functions referencing the counterpart to slave, master, were introduced.
Should master be replaced with a more neutral term such as controller (or something similar)?
It is also worth noting that git is moving away from master as the default branch and [GitHub is moving to main as the default branch](https://github.com/github/renaming).
| open | 2020-08-27T12:12:30Z | 2021-02-07T20:50:48Z | https://github.com/pytest-dev/pytest-xdist/issues/592 | [] | bashtage | 1 |
jofpin/trape | flask | 363 | Login screen disappears | Running terminal in Mac. trape (stable) v2.0
Used python3 trape.py -u http://www.google.com -p 7070
Login screen shows for 0.5 seconds with the
Lure for the victims:
Control Panel Link:
Your Access key:
Then it disappears. Continues to show Loading trape... | open | 2022-07-13T17:48:21Z | 2023-03-12T03:11:58Z | https://github.com/jofpin/trape/issues/363 | [] | voelspriet | 1 |
twopirllc/pandas-ta | pandas | 151 | PyInstaller can't properly bundle pandas_ta on windows | **Which version are you running? The lastest version is on Github. Pip is for major releases.**
* Version: 0.2.23b0
**Upgrade.**
* Already done.
**Describe the bug**
When I bundle my script using PyInstaller, it creates the bundled EXE just fine. But when I open the EXE I get the following traceback:
```
Traceback (most recent call last):
  File "main.py", line 3, in <module>
  File "C:\Users\my_username\miniconda3\envs\talib\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 623, in exec_module
  File "pandas_ta\__init__.py", line 96, in <module>
  File "C:\Users\my_username\miniconda3\envs\my_env\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 623, in exec_module
  File "pandas_ta\core.py", line 12, in <module>
ImportError: cannot import name 'version' from 'pandas_ta' (C:\Users\my_username\AppData\Local\Temp\_MEI42442\pandas_ta\__init__.pyc)
[4208] Failed to execute script main
```
**To Reproduce**
1. Import pandas_ta in a project.
2. Try to create bundled EXE using PyInstaller.
3. It doesn't matter if you use onefile or not.
4. Try to run the bundled EXE file.
**Expected behavior**
* The bundled EXE would run without a traceback regarding pandas_ta.
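A common PyInstaller workaround for packages whose submodules or data files are missed by analysis is a custom hook. Whether it resolves this specific `version` import depends on how pandas_ta ships that module, so treat this as a sketch: save it as `hook-pandas_ta.py` and build with `pyinstaller --additional-hooks-dir=. main.py` (newer PyInstaller versions can achieve roughly the same with `--collect-all pandas_ta`):

```python
# hook-pandas_ta.py -- PyInstaller hook file (build configuration, not app code)
from PyInstaller.utils.hooks import collect_data_files, collect_submodules

# Bundle every pandas_ta submodule so imports resolved at runtime still work.
hiddenimports = collect_submodules("pandas_ta")

# Bundle non-.py files the package may read at import time (e.g. a version file).
datas = collect_data_files("pandas_ta", include_py_files=True)
```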
| closed | 2020-10-18T16:20:43Z | 2020-10-31T15:25:32Z | https://github.com/twopirllc/pandas-ta/issues/151 | [
"enhancement",
"help wanted"
] | kolahghermezi | 5 |
agronholm/anyio | asyncio | 125 | TLS server only performs handshake after 'accept' is called | Hi,
I would expect the following code to work:
```python
import anyio
import ssl
async def main():
server_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
client_context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
client_context.load_verify_locations(cafile="cert.pem")
async with await anyio.create_tcp_server(1234, ssl_context=server_context) as server:
async with await anyio.connect_tcp("localhost", 1234, ssl_context=client_context, autostart_tls=True) as client:
print("connected") # This line is never reached
conn = await server.accept()
anyio.run(main)
```
However, it gets stuck in `anyio.connect_tcp`. If I disable `autostart_tls` it connects fine but gets stuck when I call `start_tls` on the client. It seems like the server is only able to perform the TLS handshake if I call `accept` on the server object, as the following code works fine:
```python
import anyio
import ssl


async def task(server):
    conn = await server.accept()
    await conn.close()


async def main():
    server_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    server_context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
    client_context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    client_context.load_verify_locations(cafile="cert.pem")
    async with await anyio.create_tcp_server(1234, ssl_context=server_context) as server:
        async with anyio.create_task_group() as tg:
            await tg.spawn(task, server)
            async with await anyio.connect_tcp("localhost", 1234, ssl_context=client_context, autostart_tls=True) as client:
                print("connected")  # This works fine


anyio.run(main)
```
I don't think this is the intended behavior?
The behavior seems to be the same across all backends. | closed | 2020-07-14T12:17:48Z | 2020-08-04T20:52:59Z | https://github.com/agronholm/anyio/issues/125 | [
"design"
] | kinnay | 16 |
pywinauto/pywinauto | automation | 905 | Why don't support the newest python? | ## Expected Behavior
## Actual Behavior
## Steps to Reproduce the Problem
1.
2.
3.
## Short Example of Code to Demonstrate the Problem
## Specifications
- Pywinauto version:
- Python version and bitness:
- Platform and OS:
| closed | 2020-04-01T12:37:13Z | 2020-04-02T13:34:08Z | https://github.com/pywinauto/pywinauto/issues/905 | [
"duplicate",
"invalid"
] | TechForBad | 4 |
iMerica/dj-rest-auth | rest-api | 500 | I want to change the link in the password reset email to the frontend url. | The link for password reset in the sent email cannot be changed from the backend URL.
I would like to use a frontend URL such as
'http://localhost:3000/password/reset/confirm/<str:uidb64>/<str:token>/'
Thank you.
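For context, a commonly suggested workaround is to register a URL pattern named `password_reset_confirm` that redirects to the frontend, since that is the URL name reversed when the reset email link is built. A configuration sketch (untested; the path and the frontend origin are assumptions):

```python
# urls.py -- a sketch, not a verified fix; names and paths are assumptions
from django.urls import path
from django.views.generic import RedirectView

urlpatterns = [
    # The reset email reverses the URL named "password_reset_confirm",
    # so pointing that name at a redirect sends users to the frontend.
    path(
        "password/reset/confirm/<str:uidb64>/<str:token>/",
        RedirectView.as_view(
            url="http://localhost:3000/password/reset/confirm/%(uidb64)s/%(token)s/"
        ),
        name="password_reset_confirm",
    ),
]
```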
| closed | 2023-04-05T09:55:27Z | 2023-04-10T06:30:03Z | https://github.com/iMerica/dj-rest-auth/issues/500 | [] | agent-Y | 1 |
Nemo2011/bilibili-api | api | 564 | [Feature request] Send free popularity tickets in a live room | In a live room you earn 25 free popularity tickets for every 5 minutes of watching; streamers are ranked within their category at the top of every hour, and the ticket count is reset at midnight each day. It would be nice if `LiveRoom` had a function to send these tickets.
Get the current number of tickets:
```
https://api.live.bilibili.com/xlive/general-interface/v1/rank/getUserPopularTicketsNum?ruid=780791&source=0
```
Response:
```
{
"code": 0,
"message": "0",
"ttl": 1,
"data": {
"pay_ticket": {
"num": 0,
"limit": 1000,
"exchange": 7
},
"free_ticket": {
"num": 25,
"limit": 0,
"exchange": 0
},
"popular_gift": true
}
}
```
Send the tickets (the payload contains ruid, csrf_token, csrf, visit_id):
```
https://api.live.bilibili.com/xlive/general-interface/v1/rank/popularRankFreeScoreIncr
```
Response:
```
{"code":0,"message":"0","ttl":1,"data":{"num":25}}
``` | closed | 2023-11-14T16:18:04Z | 2023-11-16T04:18:12Z | https://github.com/Nemo2011/bilibili-api/issues/564 | [
"need",
"feature"
] | TZFC | 3 |
piskvorky/gensim | data-science | 2,662 | BM25 Average IDF returns negative even with Epsilon correction |
#### Description
Currently the BM25 algorithm uses the correction formula described by [Barrios et al.](https://arxiv.org/pdf/1602.03606.pdf#page=4) when a calculated IDF is negative. However, this solution returns a negative value when the average IDF is also negative, creating an issue. Perhaps the IDF should be 0 when both the word's IDF and the average IDF are negative.
#### Code to reproduce
```
>>> from gensim.summarization.bm25 import BM25
>>> corpus = [
['people', 'drink', 'bar'],
['bear', 'consume', 'drink']
]
>>> BM25(corpus).idf
{'people': 0.0, 'drink': -0.08047189562170502, 'bar': 0.0, 'bear': 0.0, 'consume': 0.0}
```
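The arithmetic behind that negative value can be reproduced with a few lines (a simplified sketch of the IDF computation using the default `EPSILON = 0.25`, not gensim's actual code):

```python
import math

def bm25_idf(corpus, epsilon=0.25):
    # Document frequency of each word.
    n_docs = len(corpus)
    doc_freq = {}
    for doc in corpus:
        for word in set(doc):
            doc_freq[word] = doc_freq.get(word, 0) + 1
    # Classic BM25 IDF: log(N - df + 0.5) - log(df + 0.5).
    raw = {w: math.log(n_docs - f + 0.5) - math.log(f + 0.5)
           for w, f in doc_freq.items()}
    average_idf = sum(raw.values()) / len(raw)
    # Barrios et al. correction: replace a negative IDF with
    # epsilon * average_idf -- still negative when average_idf < 0.
    return {w: v if v >= 0 else epsilon * average_idf for w, v in raw.items()}

corpus = [['people', 'drink', 'bar'], ['bear', 'consume', 'drink']]
print(bm25_idf(corpus)['drink'])  # ≈ -0.0805, matching the value above
```

Since every word except 'drink' has IDF 0 here, the average IDF is negative, and the "correction" leaves 'drink' negative as well.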
| closed | 2019-10-31T14:02:50Z | 2021-09-13T13:16:48Z | https://github.com/piskvorky/gensim/issues/2662 | [] | roodrallec | 3 |
marcomusy/vedo | numpy | 247 | How to change the interaction mode to 2D | Hello,
In Paraview, it's possible to change the interaction mode between 3D and 2D as shown below:

In vedo, the interaction mode is 3D, how can I change it to 2D?
Thank you | closed | 2020-11-17T05:21:44Z | 2020-11-22T16:33:20Z | https://github.com/marcomusy/vedo/issues/247 | [] | OpenFoam-User | 4 |
huggingface/datasets | tensorflow | 7,020 | Casting list array to fixed size list raises error | When trying to cast a list array to fixed size list, an AttributeError is raised:
> AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
Steps to reproduce the bug:
```python
import pyarrow as pa
from datasets.table import array_cast
arr = pa.array([[0, 1]])
array_cast(arr, pa.list_(pa.int64(), 2))
```
Stack trace:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-6cb90a1d8216> in <module>
3
4 arr = pa.array([[0, 1]])
----> 5 array_cast(arr, pa.list_(pa.int64(), 2))
~/huggingface/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1802 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1803 else:
-> 1804 return func(array, *args, **kwargs)
1805
1806 return wrapper
~/huggingface/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str)
1920 else:
1921 array_values = array.values[
-> 1922 array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length
1923 ]
1924 return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size)
AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
``` | closed | 2024-07-03T07:54:49Z | 2024-07-03T08:41:56Z | https://github.com/huggingface/datasets/issues/7020 | [
"bug"
] | albertvillanova | 0 |
tensorpack/tensorpack | tensorflow | 1,069 | How to train/fine-tune flownet with new dataset? | How can I train/fine-tune FlowNet with a new dataset? Kindly help. | closed | 2019-02-05T06:19:10Z | 2019-02-21T18:58:22Z | https://github.com/tensorpack/tensorpack/issues/1069 | [
"usage"
] | chowkamlee81 | 4 |
httpie/cli | api | 921 | Custom header list or json in http request is possible? | I have a situation where I have an incoming JSON with a varying number of custom headers that I want to pass to my httpie command from a Python script.
Right now the httpie command allows space-separated headers, as follows:
http httpbin.org/headers User-Agent:Bacon/1.0 'Cookie:valued-visitor=yes;foo=bar' X-Foo:Bar Referer:https://httpie.org
My requirement is unusual because I have a varying number of headers in my incoming request and use Python's subprocess.run to execute the command, as follows:
headerVal = [{"X-My-Header":"value1"},{"X-Other-Header":"value2"}] and the command below won't work
x = subprocess.run([HTTP, AUTH_TYPE, AUTH, URL, headerVal], capture_output=True)
If I pass the headers separately, it works perfectly fine, as below:
header1="X-My-Header:value1"
header2="X-Other-Header:value2"
x = subprocess.run([HTTP, AUTH_TYPE, AUTH, URL, header1, header2], capture_output=True)
My problem is that I don't know how many custom headers I will get, so I can't prepare the above command dynamically. I was wondering if httpie has a way to pass a list of headers, or JSON of headers, like we do when posting a JSON body?
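For what it's worth, the varying list of header dicts can be flattened into httpie's `Name:value` arguments before the call, however many there are (a sketch; the command and header values are stand-ins for the poster's variables):

```python
import subprocess

headers = [{"X-My-Header": "value1"}, {"X-Other-Header": "value2"}]
# Flatten the list of single-entry dicts into httpie-style "Name:value" args.
header_args = [f"{key}:{value}" for item in headers for key, value in item.items()]
cmd = ["http", "httpbin.org/headers", *header_args]
print(header_args)  # ['X-My-Header:value1', 'X-Other-Header:value2']
# x = subprocess.run(cmd, capture_output=True)  # uncomment to actually send
```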
Thanks for your time!
| closed | 2020-05-20T18:15:01Z | 2020-05-20T19:26:36Z | https://github.com/httpie/cli/issues/921 | [] | lbindal | 1 |
miguelgrinberg/Flask-Migrate | flask | 2 | alembic.ini file location is wrong in message shown during "init" command | The output of the "init" command directs the operator to customize the `alembic.ini` file. The message shows that the location of this file is that of the root of the project, but Flask-Migrate puts this file inside the _migrations_ folder instead.
| closed | 2013-09-13T14:37:49Z | 2013-09-15T22:03:59Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/2 | [
"bug"
] | miguelgrinberg | 0 |
mars-project/mars | numpy | 2,938 | Add more logs for debugging |
The current logs are not enough for debugging; we need more information, such as which method is chosen when tiling merge or groupby operands. These details are quite useful when looking into performance issues.
| closed | 2022-04-20T05:21:02Z | 2022-04-22T09:18:52Z | https://github.com/mars-project/mars/issues/2938 | [
"type: enhancement"
] | hekaisheng | 0 |
JohnSnowLabs/nlu | streamlit | 175 | nlu.load function m1_chip parameter is not passed on correctly | The `m1_chip` parameter in `nlu.load` *(in __init__.py)* is passed on to `get_open_source_spark_context` and there used in `sparknlp.start(gpu=gpu, m1=True)`. However, `sparknlp.start` takes only the parameter `apple_silicon`.

| open | 2023-05-11T06:47:03Z | 2023-05-11T06:57:02Z | https://github.com/JohnSnowLabs/nlu/issues/175 | [] | Priapos1004 | 0 |
QingdaoU/OnlineJudge | django | 326 | A strange bug | When the problem statement is too long, a page rendering bug appears.
See for yourself: [example problem](http://orzzhouwc.top:85/problem/P0001)
PaddlePaddle/PaddleHub | nlp | 2,260 | Hello, after installing paddlepaddle and paddlehub, importing paddlehub raises an error |
- Version and environment information
1) PaddleHub and PaddlePaddle versions: paddlehub-2.3.1, paddlepaddle-2.4.2
2) System environment: Windows, Python 3.9.13
After installing paddlepaddle and paddlehub,
importing paddlehub raises the following error:

| closed | 2023-05-30T07:59:00Z | 2023-09-20T11:02:32Z | https://github.com/PaddlePaddle/PaddleHub/issues/2260 | [] | data2 | 3 |
chaos-genius/chaos_genius | data-visualization | 306 | Anomaly drill down graphs only display integer values | The data points present in all the anomaly prediction graphs are all integers even when they are meant to be float.
**Chaos Genius version**: 0.1.2-alpha
**OS Version / Instance**: AWS EC2
**Deployment type**: Docker
**Current behavior**
As you can see from the screenshots, the values are all integers, which is the wrong output.


**Expected behavior**
The expected output is shown in the screenshots below. Values are now of the correct type: float.
![Screenshot 2021-10-12 at 10 52 03 PM](https://user-images.githubusercontent.com/88689345/137000637-9811777d-f1e5-4ad0-8b6e-d04a55c8b41c.png)
![Screenshot 2021-10-12 at 10 52 27 PM](https://user-images.githubusercontent.com/88689345/137000663-2c4c894b-5158-4e1d-b1f0-fedfc03db0e0.png)
| closed | 2021-10-12T17:01:16Z | 2021-10-14T04:49:45Z | https://github.com/chaos-genius/chaos_genius/issues/306 | [
"🐛 bug",
"P1"
] | Amatullah | 1 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 5 | Training error | The model builds successfully, but an error is raised as soon as training starts, as follows:
```
Invalid argument: Saw a non-null label (index >= num_classes - 1) following a null label, batch: 0 num_classes: 1415 labels:
Traceback (most recent call last):
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\client\session.py", line 1361, in _do_call
return fn(*args)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _run_fn
target_list, status, run_metadata)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Saw a non-null label (index >= num_classes - 1) following a null label, batch: 0 num_classes: 1415 labels:
[[Node: ctc/CTCLoss = CTCLoss[ctc_merge_repeated=true, ignore_longer_outputs_than_inputs=false, preprocess_collapse_repeated=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ctc/Log/_1203, ctc/ToInt64/_1205, ctc/ToInt32_2/_1207, ctc/ToInt32_1/_1209)]]
[[Node: training/Adadelta/gradients/lstm_1/while/Softmax_grad/mul_1/_959 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_5057_training/Adadelta/gradients/lstm_1/while/Softmax_grad/mul_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](^_clooptraining/Adadelta/gradients/NextIteration_7/_252)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ".\SpeechModel.py", line 345, in <module>
ms.TrainModel(datapath, epoch = 2, batch_size = 8, save_step = 1)
File ".\SpeechModel.py", line 161, in TrainModel
self._model.fit_generator(yielddatas, save_step)
File "G:\asr\asrvenv\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "G:\asr\asrvenv\lib\site-packages\keras\engine\training.py", line 2224, in fit_generator
class_weight=class_weight)
File "G:\asr\asrvenv\lib\site-packages\keras\engine\training.py", line 1883, in train_on_batch
outputs = self.train_function(ins)
File "G:\asr\asrvenv\lib\site-packages\keras\backend\tensorflow_backend.py", line 2478, in __call__
**self.session_kwargs)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
run_metadata_ptr)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\client\session.py", line 1137, in _run
feed_dict_tensor, options, run_metadata)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\client\session.py", line 1355, in _do_run
options, run_metadata)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\client\session.py", line 1374, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Saw a non-null label (index >= num_classes - 1) following a null label, batch: 0 num_classes: 1415 labels:
[[Node: ctc/CTCLoss = CTCLoss[ctc_merge_repeated=true, ignore_longer_outputs_than_inputs=false, preprocess_collapse_repeated=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ctc/Log/_1203, ctc/ToInt64/_1205, ctc/ToInt32_2/_1207, ctc/ToInt32_1/_1209)]]
[[Node: training/Adadelta/gradients/lstm_1/while/Softmax_grad/mul_1/_959 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_5057_training/Adadelta/gradients/lstm_1/while/Softmax_grad/mul_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](^_clooptraining/Adadelta/gradients/NextIteration_7/_252)]]
Caused by op 'ctc/CTCLoss', defined at:
File ".\SpeechModel.py", line 342, in <module>
ms = ModelSpeech(datapath)
File ".\SpeechModel.py", line 44, in __init__
self._model = self.CreateModel()
File ".\SpeechModel.py", line 109, in CreateModel
loss_out = Lambda(self.ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred, labels, input_length, label_length])
File "G:\asr\asrvenv\lib\site-packages\keras\engine\topology.py", line 619, in __call__
output = self.call(inputs, **kwargs)
File "G:\asr\asrvenv\lib\site-packages\keras\layers\core.py", line 663, in call
return self.function(inputs, **arguments)
File ".\SpeechModel.py", line 135, in ctc_lambda_func
return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
File "G:\asr\asrvenv\lib\site-packages\keras\backend\tensorflow_backend.py", line 3950, in ctc_batch_cost
sequence_length=input_length), 1)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\ops\ctc_ops.py", line 158, in ctc_loss
ignore_longer_outputs_than_inputs=ignore_longer_outputs_than_inputs)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\ops\gen_ctc_ops.py", line 231, in _ctc_loss
name=name)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\framework\ops.py", line 3271, in create_op
op_def=op_def)
File "G:\asr\asrvenv\lib\site-packages\tensorflow\python\framework\ops.py", line 1650, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Saw a non-null label (index >= num_classes - 1) following a null label, batch: 0 num_classes: 1415 labels:
[[Node: ctc/CTCLoss = CTCLoss[ctc_merge_repeated=true, ignore_longer_outputs_than_inputs=false, preprocess_collapse_repeated=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ctc/Log/_1203, ctc/ToInt64/_1205, ctc/ToInt32_2/_1207, ctc/ToInt32_1/_1209)]]
[[Node: training/Adadelta/gradients/lstm_1/while/Softmax_grad/mul_1/_959 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_5057_training/Adadelta/gradients/lstm_1/while/Softmax_grad/mul_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](^_clooptraining/Adadelta/gradients/NextIteration_7/_252)]]
```
Is there a problem with my input data format? My data looks like this:
1. dict.txt (from the Tsinghua dataset)
```
$ head dict.txt
SIL sil
<SPOKEN_NOISE> sil
啊 aa a1
啊 aa a2
啊 aa a4
啊 aa a5
啊啊啊 aa a2 aa a2 aa a2
啊啊啊 aa a5 aa a5 aa a5
阿 aa a1
阿 ee e1
```
2. train.wav.lst
```
$ head train.wav.lst
A11_000 A11_0.wav
A11_001 A11_1.wav
A11_010 A11_10.wav
A11_100 A11_100.wav
A11_102 A11_102.wav
A11_103 A11_103.wav
A11_104 A11_104.wav
A11_105 A11_105.wav
A11_106 A11_106.wav
A11_107 A11_107.wav
```
3. train.syllable.txt
```
$ head train.syllable.txt
A11_000 绿 是 阳春 烟 景 大块 文章 的 底色 四月 的 林 峦 更是 绿 得 鲜活 秀媚 诗意 盎然
A11_001 他 仅 凭 腰部 的 力量 在 泳道 上下 翻腾 蛹 动 蛇行 状 如 海豚 一直 以 一头 的 优势 领先
A11_010 炮眼 打好 了 炸药 怎么 装 岳 正 才 咬 了 咬牙 倏 地 脱去 衣服 光膀子 冲进 了 水 窜 洞
A11_100 可 谁知 纹 完 后 她 一 照镜子 只见 左下 眼睑 的 线 又 粗 又 黑 与 右侧 明显 不对称
A11_102 一进门 我 被 惊呆 了 这 户 名叫 庞 吉 的 老农 是 抗美援朝 负伤 回乡 的 老兵 妻子 长年 有病 家徒四壁 一贫如洗
A11_103 走出 村子 老远 老远 我 还 回头 张望 那个 安宁 恬静 的 小院 那个 使 我 终身 难忘的 小院
A11_104 二月 四日 住进 新 西门外 罗家 碾 王家 冈 朱自清 闻讯 特地 从 东门外 赶来 庆贺
A11_105 单位 不是我 老爹 开 的 凭什么 要 一 次 二 次 照顾 我 我 不能 把 自己 的 包袱 往 学校 甩
A11_106 都 用 草帽 或 胳膊肘 护 着 碗 趔 趔趄 趄 穿过 烂 泥塘 般 的 院坝 跑回 自己 的 宿 舍去 了
A11_107 香港 演艺圈 欢迎 毛阿敏 加盟 无线 台 与 华星 一些 重大 的 演唱 活动 都 邀请 她 出场 有几次 还 特意 安排 压轴 演出
```
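For reference, this CTC error usually means some label index collided with the blank token, which `tf.nn.ctc_loss` reserves as index `num_classes - 1`; a quick scan of the label arrays can confirm it (a diagnostic sketch assuming padded integer label arrays, not code from this repository):

```python
import numpy as np

def find_bad_ctc_labels(labels, num_classes):
    # CTC reserves num_classes - 1 for the blank, so any real label with
    # index >= num_classes - 1 triggers the error seen in the traceback.
    labels = np.asarray(labels)
    return np.argwhere(labels >= num_classes - 1)

print(find_bad_ctc_labels([[5, 1414, 7]], num_classes=1415))  # [[0 1]]
```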
Thanks to anyone who takes a look at my problem. Also, could I add you on QQ to discuss? | closed | 2018-04-09T01:06:47Z | 2018-05-12T07:04:40Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/5 | [] | ZJUGuoShuai | 10 |
explosion/spaCy | deep-learning | 13,475 | User Warning Transformer with Torch | ## How to reproduce the behaviour
`nlp = spacy.load("en_core_web_sm")`
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Ubuntu 22.04
* Python Version Used: 3.11
* spaCy Version Used: 3.7.2 - latest / 4.0.0-dev
* Environment Information: IPyNotebook
I caught the warning when loading the pre-defined model in both versions of spaCy (latest / dev):
python3.11/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
| open | 2024-05-03T11:24:30Z | 2024-11-04T06:35:59Z | https://github.com/explosion/spaCy/issues/13475 | [
"feat / ux",
"feat / transformer"
] | hdaipteam | 4 |
django-import-export/django-import-export | django | 1,518 | `import_obj()` declaration can be improved for consistency | Throughout the code base, the param name `row` is used to define the incoming row (even though this could be JSON or YAML). In [`import_obj()`](https://github.com/django-import-export/django-import-export/blob/905839290016850327658bbee790314d4854f8a6/import_export/resources.py#L549) it is named `data`, which is less descriptive. We should standardise variable naming for better readability. | closed | 2022-12-02T11:32:47Z | 2023-10-10T19:45:46Z | https://github.com/django-import-export/django-import-export/issues/1518 | [
"enhancement",
"good first issue",
"v4"
] | matthewhegarty | 0 |
sanic-org/sanic | asyncio | 2,679 | Use custom logging functions or support dynamic logging directories | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Title (Chinese): Use a custom logging function, or support dynamically setting the log save directory
Unable to get logs to be written to a file by date
(Chinese)
无法让日志按照日期写入文件
### Describe the solution you'd like
I want to store logs by date in a specific directory
There are two ways to solve this
1. Use my custom logging function
2. sanic provides an interface for configuring log file output
(Chinese)
I want to store the logs by date in a specific directory.
There are currently two ways to solve this:
1. Use my custom logging function
2. sanic provides an interface for configuring log file output
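For context, Sanic's loggers (`sanic.root`, `sanic.access`, `sanic.error`) are standard `logging` loggers, so date-based log files can already be achieved with the stdlib's `TimedRotatingFileHandler`; a sketch, where the log directory and logger choice are stand-ins:

```python
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

log_dir = os.path.join(tempfile.gettempdir(), "sanic-logs")  # stand-in directory
os.makedirs(log_dir, exist_ok=True)

# Rotate at midnight so each day's records land in a dated file
# (sanic.log.YYYY-MM-DD after rotation), keeping the last 7 days.
handler = TimedRotatingFileHandler(
    os.path.join(log_dir, "sanic.log"), when="midnight", backupCount=7
)
handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
logging.getLogger("sanic.root").addHandler(handler)
```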
### Additional context
_No response_ | closed | 2023-02-11T05:51:43Z | 2023-02-12T11:57:52Z | https://github.com/sanic-org/sanic/issues/2679 | [
"feature request"
] | David-xian66 | 3 |
jmcnamara/XlsxWriter | pandas | 1,105 | feature request: url_write to support numbers | ### Feature Request
Currently, [write_url](https://xlsxwriter.readthedocs.io/worksheet.html#write_url) only supports strings. When the "string" is a number, Excel displays the "number stored as text" warning in every cell.
Can `write_url` support numbers, or be affected by a global `{'strings_to_numbers': True}`?

| closed | 2024-12-16T18:00:09Z | 2024-12-17T09:12:14Z | https://github.com/jmcnamara/XlsxWriter/issues/1105 | [
"feature request"
] | ryan-cpi | 1 |
flasgger/flasgger | flask | 146 | OpenAPI 3.0 | https://www.youtube.com/watch?v=wBDSR0x3GZo | open | 2017-08-10T17:42:13Z | 2020-07-16T10:23:14Z | https://github.com/flasgger/flasgger/issues/146 | [
"hacktoberfest"
] | rochacbruno | 10 |
huggingface/transformers | python | 36,009 | Qwen-2-VL generates inconsistent logits between `generate()` and `__call__ ()` for multi-modal queries | ### System Info
- `transformers` version: 4.49.0.dev0 (commit: 62db3e, for Qwen2.5-VL)
- Platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NO
- Using GPU in script?: YES
- GPU type: NVIDIA A100 80GB PCIe
### Who can help?
@ArthurZucker @zucchini-nlp
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
When using Qwen2-VL or Qwen2.5-VL with multi-modal queries, the logits generated inside `generate()` differ non-negligibly from the ones produced directly by the `__call__()` method.
The code follows the example introduced in [Qwen2-VL docs on Huggingface](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct#using-%F0%9F%A4%97--transformers-to-chat).
```python
import torch
from qwen_vl_utils import process_vision_info
from transformers import (Qwen2_5_VLForConditionalGeneration,
Qwen2_5_VLProcessor)
model_name = "Qwen/Qwen2.5-VL-7B-Instruct"
processor = Qwen2_5_VLProcessor.from_pretrained(model_name)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_name,
torch_dtype=torch.bfloat16,
device_map="auto")
messages = [{
"role": "user",
"content": [
{"type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
{"type": "text", "text": "Describe this image."},
]
}]
text = processor.apply_chat_template(messages,
tokenize=False,
add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
```
`generate()` produces the same logits as `__call__()` with `use_cache=True` and an appropriate `cache_position` similar to what is done in `generate()` method.
However, the logits from `__call__()` without cache, which are usually used for training, are different from the ones used in `generate()`.
```python
with torch.no_grad():
logits_in_generate = model.generate(**inputs, max_new_tokens=5,
output_logits=True,
return_dict_in_generate=True).logits
logits_from_call = model(**inputs).logits
logits_from_call_with_cache = model(**inputs, use_cache=True,
cache_position=torch.arange(inputs.input_ids.shape[1])).logits
# Check logits of the first generated tokens (First 8 vocabularies only)
print(f'generate():\n{logits_in_generate[0][0, :8]}')
# tensor([ 9.5625, 14.3750, 11.1875, 9.9375, 9.7500, 5.8125, 8.2500, 12.0625],
# device='cuda:0')
print(f'__call__():\n{logits_from_call[0, -1, :8]}')
# tensor([11.0000, 16.1250, 12.8125, 11.5625, 11.1875, 6.9375, 9.5625, 13.2500],
# device='cuda:0', dtype=torch.bfloat16)
print(f'__call__() with cache:\n{logits_from_call_with_cache[0, -1, :8]}')
# tensor([ 9.5625, 14.3750, 11.1875, 9.9375, 9.7500, 5.8125, 8.2500, 12.0625],
# device='cuda:0', dtype=torch.bfloat16)
```
If a query contains no image, the logits from `generate()` and `__call__()` are the same.
```python
messages = [{
"role": "user",
"content": [
{"type": "text", "text": "Describe this image."},
]
}]
# ... same as above ...
image_inputs = None
# ... same as above ...
# These are the same
print(f'generate():\n{logits_in_generate[0][0, :8]}')
print(f'__call__(): {logits_from_call[0, -1, :8]}')
print(f'__call__() with cache: {logits_from_call_with_cache[0, -1, :8]}')
```
It is unclear if this is purely a precision problem or involves other issues.
A related discussion can be found in #25420, in particular [this comment](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535), but in my case, switching to higher precision does not resolve the issue, and visual tokens appear to affect the logits unexpectedly.
### Expected behavior
Consistent logits are used in `generate()` and `__call__()`. | closed | 2025-02-03T01:14:00Z | 2025-02-04T08:48:30Z | https://github.com/huggingface/transformers/issues/36009 | [
"bug",
"VLM"
] | waffoo | 2 |
sktime/sktime | scikit-learn | 7,953 | [BUG] NaiveForecaster.fit has different behavior on two identical dataframes. | **Describe the bug**
I have two (seemingly) identical hierarchical dataframes, but calling NaiveForecaster.fit() on a slice of one dataframe results in error, but no error on the other dataframe.
**To Reproduce**
```python
import pandas as pd
from pandas import Timestamp
from datetime import datetime
from sktime.split import ExpandingGreedySplitter
from sktime.forecasting.naive import NaiveForecaster
########################## Create df_does_not_work ##########################
df_dict = {'Value': {('L1',
'L2',
'Customer_1',
Timestamp('2024-08-31 00:00:00')): 10,
('L1', 'L2', 'Customer_1', Timestamp('2024-09-30 00:00:00')): 11,
('L1',
'L2',
'Customer_1',
Timestamp('2024-10-31 00:00:00')): 12,
('L1', 'L2', 'Customer_1', Timestamp('2024-11-30 00:00:00')): 13,
('L1', 'L2', 'Customer_1', Timestamp('2024-12-31 00:00:00')): 14,
('L1',
'L2',
'Customer_1',
Timestamp('2025-01-31 00:00:00')): 15,
('L1', 'L2', 'Customer_1', Timestamp('2025-02-28 00:00:00')): 16}}
df_does_not_work = pd.DataFrame.from_dict(df_dict)
df_does_not_work.index.names=['Level_1', 'Level_2', 'Customer', 'Date']
########################## Create df_works ##########################
dates = pd.date_range(datetime(2024, 8, 31), datetime(2025, 2, 28), freq='ME', name='Date')
df_works = pd.DataFrame({
'Value':[i+10 for i in range(len(dates))]
}, index=dates)
# Make multiindex
df_works['Level_1'] = 'L1'
df_works['Level_2'] = 'L2'
df_works['Customer'] = 'Customer_1'
df_works.set_index(['Level_1','Level_2','Customer', df_works.index], inplace=True)
########################## Test ##########################
def test(df):
test_size = 2
folds = 4
cv = ExpandingGreedySplitter(test_size=test_size, folds=folds, step_length=1)
forecaster = NaiveForecaster()
# Get all windows
windows = list(cv.split_series(df))
windows.reverse()
for i in range(len(windows)):
(X,y) = windows[i]
fh = [j+1 for j in range(test_size)]
# Make predictions, calculate errors
y_pred = forecaster.fit(X, fh=fh).predict()
test(df_works) # Runs without error
test(df_does_not_work) # Results in error
```
**Expected behavior**
As both dataframes are the same, `test(df_works)` and `test(df_does_not_work)` should both run without error, and should both return the same value.
**Additional context**
The error occurs when `i=3`, which is the window where `X` has 2 rows of data.
`df_works == df_does_not_work` returns `True` for all index values, and
`df_works.index == df_does_not_work.index` also returns `True` for all values.
**Versions**
<details>
Python dependencies:
pip: 25.0
sktime: 0.36.0
sklearn: 1.6.1
skbase: 0.12.0
numpy: 2.0.1
scipy: 1.15.1
pandas: 2.2.3
matplotlib: 3.10.0
joblib: 1.4.2
numba: None
statsmodels: 0.14.4
pmdarima: 2.0.4
statsforecast: None
tsfresh: None
tslearn: None
torch: None
tensorflow: None
</details>
| open | 2025-03-08T16:55:13Z | 2025-03-11T12:52:36Z | https://github.com/sktime/sktime/issues/7953 | [
"bug"
] | gbilleyPeco | 11 |
psf/requests | python | 6,804 | XML gets shortened when submitting a post request | I am trying to send an XML document in a POST request to a SOAP endpoint of a TMS server.
The code is the following:
```
response = requests.post(
url,
data=xml_output,
headers={"Content-Type": "text/xml"},
timeout=60,
cert=(cert_file, decrypted_key_file),
)
```
The server returns a response saying that it got an unexpected end of input at line x, char y. The error depends on the size of the XML, but is reproducible.
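One plausible cause worth noting (an assumption, not a confirmed diagnosis): if `xml_output` is a `str` containing non-ASCII characters such as "PAWEŁ" or "Üllő", the advertised `Content-Length` can end up reflecting the character count while the bytes on the wire are UTF-8, so the server stops reading before the document ends. Passing explicit UTF-8 bytes removes the ambiguity; a self-contained sketch with a stand-in body and endpoint:

```python
import requests

xml_output = '<?xml version="1.0"?><name>PAWEŁ, Üllő</name>'  # stand-in body

prepared = requests.Request(
    "POST",
    "https://example.invalid/soap",  # stand-in endpoint, nothing is sent
    data=xml_output.encode("utf-8"),  # explicit bytes: length is a byte count
    headers={"Content-Type": "text/xml; charset=utf-8"},
).prepare()

# With non-ASCII text, the byte count exceeds the character count,
# which is exactly the kind of mismatch that truncates the body.
print(int(prepared.headers["Content-Length"]), len(xml_output))
```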
I am attaching 2 XMLs as examples:
EX 1:
```
<?xml version="1.0" ?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tms="urn:CDM/tmsIntegrationService/" xmlns:sh="http://www.unece.org/cefact/namespaces/StandardBusinessDocumentHeader">
<soapenv:Header>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
<wsse:UsernameToken>
<wsse:Username>xxx_xxx</wsse:Username>
<wsse:Password>xxxxxxxxxxxxxxxxxxxx</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
<soapenv:Body>
<tms:transportInstructionMessage>
<sh:StandardBusinessDocumentHeader>
<sh:HeaderVersion>1</sh:HeaderVersion>
<sh:Sender>
<sh:Identifier>PAWEŁ ZABŁOCKI ZABŁOCKI I PARTNERZY</sh:Identifier>
</sh:Sender>
<sh:Receiver>
<sh:Identifier>Test Identifier</sh:Identifier>
</sh:Receiver>
<sh:DocumentIdentification>
<sh:Standard>GS1</sh:Standard>
<sh:TypeVersion>3.2</sh:TypeVersion>
<sh:InstanceIdentifier>100002</sh:InstanceIdentifier>
<sh:Type>Transport Instruction</sh:Type>
<sh:CreationDateAndTime>2024-10-01T07:58:43Z</sh:CreationDateAndTime>
</sh:DocumentIdentification>
<sh:BusinessScope>
<sh:Scope>
<sh:Type>EDIcustomerNumber</sh:Type>
<sh:InstanceIdentifier>90000050</sh:InstanceIdentifier>
</sh:Scope>
<sh:Scope>
<sh:Type>fileType</sh:Type>
<sh:InstanceIdentifier>IF</sh:InstanceIdentifier>
</sh:Scope>
<sh:Scope>
<sh:Type>department</sh:Type>
<sh:InstanceIdentifier>62</sh:InstanceIdentifier>
</sh:Scope>
<sh:Scope>
<sh:Type>application</sh:Type>
<sh:InstanceIdentifier>LOGI</sh:InstanceIdentifier>
</sh:Scope>
</sh:BusinessScope>
</sh:StandardBusinessDocumentHeader>
<transportInstruction>
<creationDateTime>2024-10-01T07:58:43Z</creationDateTime>
<documentStatusCode>ORIGINAL</documentStatusCode>
<documentActionCode>ADD</documentActionCode>
<transportInstructionIdentification>
<entityIdentification>12345_test/012934535</entityIdentification>
</transportInstructionIdentification>
<transportInstructionFunction>SHIPMENT</transportInstructionFunction>
<logisticServicesSeller/>
<logisticServicesBuyer>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">PAWEŁ ZABŁOCKI ZABŁOCKI I PARTNERZY</additionalPartyIdentification>
</logisticServicesBuyer>
<transportInstructionShipment>
<additionalShipmentIdentification additionalShipmentIdentificationTypeCode="refopd">LC48</additionalShipmentIdentification>
<note languageCode="EN" noteTypeCode="INF"/>
<note languageCode="EN" noteTypeCode="INF">Fragile goods !</note>
<note languageCode="EN" noteTypeCode="INF">Towar niepiętrowalny</note>
<note languageCode="EN" noteTypeCode="INF"/>
<note languageCode="EN" noteTypeCode="INF">Fragile goods !</note>
<note languageCode="EN" noteTypeCode="INF">Towar niepiętrowalny</note>
<receiver>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">3b045ae1-edec-485d-9</additionalPartyIdentification>
<address>
<city>Üllő</city>
<countryCode>HU</countryCode>
<name>AUCHAN MAGYARORSZÁG</name>
<postalCode>2225</postalCode>
<streetAddressOne>Zsaróka út 8</streetAddressOne>
</address>
</receiver>
<shipper>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">ad11a7fc-0442-4b03-8</additionalPartyIdentification>
<address>
<city>WIRY</city>
<countryCode>PL</countryCode>
<name>PAWEŁ ZABŁOCKI ZABŁOCKI I PARTNERZY</name>
<postalCode>62-051</postalCode>
<streetAddressOne>KASZTANOWA 12</streetAddressOne>
</address>
</shipper>
<shipTo>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">3b045ae1-edec-485d-9</additionalPartyIdentification>
<address>
<city>Üllő</city>
<countryCode>HU</countryCode>
<name>AUCHAN MAGYARORSZÁG</name>
<postalCode>2225</postalCode>
<streetAddressOne>Zsaróka út 8</streetAddressOne>
</address>
<contact>
<contactTypeCode>BJ</contactTypeCode>
<personName>John Doe</personName>
<communicationChannel>
<communicationChannelCode>EMAIL</communicationChannelCode>
<communicationValue>john.doe@mail.com</communicationValue>
</communicationChannel>
<communicationChannel>
<communicationChannelCode>TELEPHONE</communicationChannelCode>
<communicationValue>+391234123490</communicationValue>
</communicationChannel>
</contact>
</shipTo>
<shipFrom>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">ad11a7fc-0442-4b03-8</additionalPartyIdentification>
<address>
<city>WIRY</city>
<countryCode>PL</countryCode>
<name>PAWEŁ ZABŁOCKI ZABŁOCKI I PARTNERZY</name>
<postalCode>62-051</postalCode>
<streetAddressOne>KASZTANOWA 12</streetAddressOne>
</address>
</shipFrom>
<transportInstructionTerms>
<transportServiceCategoryType>30</transportServiceCategoryType>
<logisticService>
<logisticServiceRequirementCode logisticServiceTypeCode="parameters">ForPlanning</logisticServiceRequirementCode>
</logisticService>
<logisticService>
<logisticServiceRequirementCode logisticServiceTypeCode="productType">PROD01</logisticServiceRequirementCode>
</logisticService>
<logisticService>
<logisticServiceRequirementCode logisticServiceTypeCode="parameters">ROD</logisticServiceRequirementCode>
</logisticService>
<logisticService>
<logisticServiceRequirementCode logisticServiceTypeCode="parameters">ROP</logisticServiceRequirementCode>
</logisticService>
</transportInstructionTerms>
<plannedDespatch>
<logisticEventPeriod>
<beginDate>2024-10-27</beginDate>
<beginTime>01:10:00</beginTime>
<endDate>2024-10-27</endDate>
<endTime>10:00:00</endTime>
</logisticEventPeriod>
</plannedDespatch>
<transportReference>
<entityIdentification>12345_test/012934535</entityIdentification>
<transportReferenceTypeCode>customerRef</transportReferenceTypeCode>
</transportReference>
<transportInstructionShipmentItem>
<lineItemNumber>1</lineItemNumber>
<transportCargoCharacteristics>
<cargoTypeCode>neam</cargoTypeCode>
<cargoTypeDescription languageCode="PL">Smycze</cargoTypeDescription>
<totalGrossVolume measurementUnitCode="MTQ">7.5</totalGrossVolume>
<totalGrossWeight measurementUnitCode="KGM">5.000</totalGrossWeight>
<totalLoadingLength measurementUnitCode="PP">2.8000000000000003</totalLoadingLength>
<totalPackageQuantity measurementUnitCode="Euro pallet 120x80">7</totalPackageQuantity>
<totalItemQuantity measurementUnitCode="Euro pallet 120x80">7</totalItemQuantity>
</transportCargoCharacteristics>
</transportInstructionShipmentItem>
</transportInstructionShipment>
</transportInstruction>
</tms:transportInstructionMessage>
</soapenv:Body>
</soapenv:Envelope>
```
The error here is "unexpected end of input block" at row 161, col 15.
EX 2:
```
<?xml version="1.0" ?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tms="urn:CDM/tmsIntegrationService/" xmlns:sh="http://www.unece.org/cefact/namespaces/StandardBusinessDocumentHeader">
<soapenv:Header>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
<wsse:UsernameToken>
<wsse:Username>xxx_xxx</wsse:Username>
<wsse:Password>xxxxxxxxxxxxxxxxxxxx</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
<soapenv:Body>
<tms:transportInstructionMessage>
<sh:StandardBusinessDocumentHeader>
<sh:HeaderVersion>1</sh:HeaderVersion>
<sh:Sender>
<sh:Identifier>PAWEŁ ZABŁOCKI ZABŁOCKI I PARTNERZY</sh:Identifier>
</sh:Sender>
<sh:Receiver>
<sh:Identifier>Test Identifier</sh:Identifier>
</sh:Receiver>
<sh:DocumentIdentification>
<sh:Standard>GS1</sh:Standard>
<sh:TypeVersion>3.2</sh:TypeVersion>
<sh:InstanceIdentifier>100002</sh:InstanceIdentifier>
<sh:Type>Transport Instruction</sh:Type>
<sh:CreationDateAndTime>2024-10-01T08:06:49Z</sh:CreationDateAndTime>
</sh:DocumentIdentification>
<sh:BusinessScope>
<sh:Scope>
<sh:Type>EDIcustomerNumber</sh:Type>
<sh:InstanceIdentifier>90000050</sh:InstanceIdentifier>
</sh:Scope>
<sh:Scope>
<sh:Type>fileType</sh:Type>
<sh:InstanceIdentifier>IF</sh:InstanceIdentifier>
</sh:Scope>
<sh:Scope>
<sh:Type>department</sh:Type>
<sh:InstanceIdentifier>62</sh:InstanceIdentifier>
</sh:Scope>
<sh:Scope>
<sh:Type>application</sh:Type>
<sh:InstanceIdentifier>LOGI</sh:InstanceIdentifier>
</sh:Scope>
</sh:BusinessScope>
</sh:StandardBusinessDocumentHeader>
<transportInstruction>
<creationDateTime>2024-10-01T08:06:49Z</creationDateTime>
<documentStatusCode>ORIGINAL</documentStatusCode>
<documentActionCode>ADD</documentActionCode>
<transportInstructionIdentification>
<entityIdentification>GV-RD-0099703/1</entityIdentification>
</transportInstructionIdentification>
<transportInstructionFunction>SHIPMENT</transportInstructionFunction>
<logisticServicesSeller/>
<logisticServicesBuyer>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">PAWEŁ ZABŁOCKI ZABŁOCKI I PARTNERZY</additionalPartyIdentification>
</logisticServicesBuyer>
<transportInstructionShipment>
<additionalShipmentIdentification additionalShipmentIdentificationTypeCode="refopd">LC49</additionalShipmentIdentification>
<note languageCode="EN" noteTypeCode="INF"/>
<note languageCode="EN" noteTypeCode="INF">Fragile goods !</note>
<note languageCode="EN" noteTypeCode="INF">Towar niepiętrowalny</note>
<note languageCode="EN" noteTypeCode="INF"/>
<note languageCode="EN" noteTypeCode="INF">Fragile goods !</note>
<note languageCode="EN" noteTypeCode="INF">Towar niepiętrowalny</note>
<receiver>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">669ff966-c2ca-4745-8</additionalPartyIdentification>
<address>
<city>Hrádek</city>
<countryCode>CZ</countryCode>
<name>Borgers Hradek</name>
<postalCode>33842</postalCode>
<streetAddressOne>Rokycanska ulice 223/II</streetAddressOne>
</address>
</receiver>
<shipper>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">ad11a7fc-0442-4b03-8</additionalPartyIdentification>
<address>
<city>WIRY</city>
<countryCode>PL</countryCode>
<name>PAWEŁ ZABŁOCKI ZABŁOCKI I PARTNERZY</name>
<postalCode>62-051</postalCode>
<streetAddressOne>KASZTANOWA 12</streetAddressOne>
</address>
</shipper>
<shipTo>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">669ff966-c2ca-4745-8</additionalPartyIdentification>
<address>
<city>Hrádek</city>
<countryCode>CZ</countryCode>
<name>Borgers Hradek</name>
<postalCode>33842</postalCode>
<streetAddressOne>Rokycanska ulice 223/II</streetAddressOne>
</address>
<contact>
<contactTypeCode>BJ</contactTypeCode>
<personName>John Doe</personName>
<communicationChannel>
<communicationChannelCode>EMAIL</communicationChannelCode>
<communicationValue>john.doe@mail.com</communicationValue>
</communicationChannel>
<communicationChannel>
<communicationChannelCode>TELEPHONE</communicationChannelCode>
<communicationValue>+391231212340</communicationValue>
</communicationChannel>
</contact>
</shipTo>
<shipFrom>
<additionalPartyIdentification additionalPartyIdentificationTypeCode="searchname">ad11a7fc-0442-4b03-8</additionalPartyIdentification>
<address>
<city>WIRY</city>
<countryCode>PL</countryCode>
<name>PAWEŁ ZABŁOCKI ZABŁOCKI I PARTNERZY</name>
<postalCode>62-051</postalCode>
<streetAddressOne>KASZTANOWA 12</streetAddressOne>
</address>
</shipFrom>
<transportInstructionTerms>
<transportServiceCategoryType>30</transportServiceCategoryType>
<logisticService>
<logisticServiceRequirementCode logisticServiceTypeCode="parameters">ForPlanning</logisticServiceRequirementCode>
</logisticService>
<logisticService>
<logisticServiceRequirementCode logisticServiceTypeCode="productType">PROD01</logisticServiceRequirementCode>
</logisticService>
<logisticService>
<logisticServiceRequirementCode logisticServiceTypeCode="parameters">ROD</logisticServiceRequirementCode>
</logisticService>
<logisticService>
<logisticServiceRequirementCode logisticServiceTypeCode="parameters">ROP</logisticServiceRequirementCode>
</logisticService>
</transportInstructionTerms>
<plannedDespatch>
<logisticEventPeriod>
<beginDate>2024-10-10</beginDate>
<beginTime>08:00:00</beginTime>
<endDate>2024-10-10</endDate>
<endTime>22:00:00</endTime>
</logisticEventPeriod>
</plannedDespatch>
<transportReference>
<entityIdentification>GV-RD-0099703/1</entityIdentification>
<transportReferenceTypeCode>customerRef</transportReferenceTypeCode>
</transportReference>
<transportInstructionShipmentItem>
<lineItemNumber>1</lineItemNumber>
<transportCargoCharacteristics>
<cargoTypeCode>neam</cargoTypeCode>
<cargoTypeDescription languageCode="PL">Smycze</cargoTypeDescription>
<totalGrossVolume measurementUnitCode="MTQ">0.00</totalGrossVolume>
<totalGrossWeight measurementUnitCode="KGM">24.500</totalGrossWeight>
<totalLoadingLength measurementUnitCode="PP">2.8000000000000003</totalLoadingLength>
<totalPackageQuantity measurementUnitCode="Euro pallet 120x80">7</totalPackageQuantity>
<totalItemQuantity measurementUnitCode="Euro pallet 120x80">7</totalItemQuantity>
</transportCargoCharacteristics>
</transportInstructionShipmentItem>
</transportInstructionShipment>
</transportInstruction>
</tms:transportInstructionMessage>
</soapenv:Body>
</soapenv:Envelope>
```
Here the row is the same, but the column is different.
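For what it's worth, a quick local well-formedness check with Python's stdlib parser reports the same row/col coordinates the server complains about (here `payload` stands in for the full envelope string):

```python
import xml.etree.ElementTree as ET

def check_xml(payload: str):
    """Return None if payload is well-formed XML, else the (row, col) of the parse error."""
    try:
        ET.fromstring(payload)
        return None
    except ET.ParseError as err:
        return err.position  # (line, column) as reported by expat

# a deliberately truncated document, similar to an envelope cut off before its closing tags:
print(check_xml("<soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'>"))
```

Running this on each request body before sending would show whether the truncation happens on our side or on the server's.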
| closed | 2024-10-01T09:02:13Z | 2024-10-27T18:49:31Z | https://github.com/psf/requests/issues/6804 | [] | danster99 | 1 |
CTFd/CTFd | flask | 1,742 | Put Vue in production mode | ```
You are running Vue in development mode.
Make sure to turn on production mode when deploying for production.
See more tips at https://vuejs.org/guide/deployment.html
```
This error is showing up in the console and I have no idea why. | closed | 2020-11-25T03:35:41Z | 2020-11-25T07:33:48Z | https://github.com/CTFd/CTFd/issues/1742 | [
"help wanted"
] | ColdHeat | 1 |
python-visualization/folium | data-visualization | 1,619 | Support continent/country/city names | **Is your feature request related to a problem? Please describe.**
It is not related to any problem. It just occurred to me while experimenting that there could be something like this where we don't have to provide the lat and long; instead, we could just type in the place name and show the map for that place.
**Describe the solution you'd like**
Write code in the package that can extract the lat and long from a website, so we can just pass the city/country, etc. to draw the map.
**Describe alternatives you've considered**
We could also store a dataset in a repository and pull the information from there, which is one way to do it.
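The lookup half of that alternative can be sketched without touching folium at all: a name-to-coordinates resolver whose output feeds straight into `folium.Map(location=...)`. The mini-gazetteer below is purely illustrative; a real version could wrap a geocoder such as geopy, or the dataset stored in a repository as suggested above.

```python
# illustrative gazetteer only; real data would come from a geocoder or a bundled dataset
PLACES = {
    "paris": (48.8566, 2.3522),
    "tokyo": (35.6762, 139.6503),
    "nairobi": (-1.2921, 36.8219),
}

def place_to_latlon(name):
    """Resolve a city/country name to (lat, lon), ready for folium.Map(location=...)."""
    try:
        return PLACES[name.strip().lower()]
    except KeyError:
        raise ValueError(f"unknown place: {name!r}") from None

print(place_to_latlon("Paris"))  # -> (48.8566, 2.3522)
```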
| closed | 2022-09-23T00:19:35Z | 2022-11-04T10:05:43Z | https://github.com/python-visualization/folium/issues/1619 | [] | shampa-dutta | 1 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 160 | Add hugging_face models with the context window | closed | 2024-05-06T11:30:12Z | 2024-05-06T12:50:18Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/160 | [] | VinciGit00 | 1 | |
proplot-dev/proplot | matplotlib | 462 | When will cartopy 0.23, numpy 2.0, and matplotlib 3.5+ be supported? | I come over every day to check when the updates are available. | closed | 2024-08-04T13:56:44Z | 2024-08-04T13:57:08Z | https://github.com/proplot-dev/proplot/issues/462 | [] | ybmy001 | 0 |
ExpDev07/coronavirus-tracker-api | fastapi | 299 | API is down | The API is currently unavailable again.

| closed | 2020-04-22T06:12:25Z | 2020-04-22T06:22:38Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/299 | [
"bug"
] | l-dietrich | 1 |
QuivrHQ/quivr | api | 2,954 | Test Github | closed | 2024-08-07T10:46:51Z | 2024-08-07T10:47:03Z | https://github.com/QuivrHQ/quivr/issues/2954 | [] | StanGirard | 1 | |
jina-ai/clip-as-service | pytorch | 141 | Is there any example code for fine-tuning a model for example5.py? | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
This project is very handy, and I want to make use of it quickly. I ran example5, and the results are poor when using only the original BERT model (chinese_L-12_H-768_A-12). Could you please share the fine-tuning scripts for that dataset (https://github.com/thunlp/CAIL)? Thanks! | closed | 2018-12-18T07:13:57Z | 2018-12-25T14:06:17Z | https://github.com/jina-ai/clip-as-service/issues/141 | [] | ruanwz | 1 |
pandas-dev/pandas | python | 60,603 | BUG: Dropped Index Name on Aggregated Index Column with MultiIndexed DataFrame | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
df = pandas.DataFrame({"A" : [1,2,3,1,2,3], "B": [4,5,6,4,5,6], "C" : [7,8,9,7,8,9]})
df = df.set_index(['A', 'B'])
df.groupby(lambda x : x[0]).aggregate('sum')
```
### Issue Description
When using a lambda to group a DataFrame by, even though the lambda is using a pre-existing index from the DataFrame, the resulting index in the aggregation does not take its name. Comparatively, using ```df.groupby('A')``` in the above example would yield an index with the index name 'A'. As best as I can tell, this happens because of [this](https://github.com/pandas-dev/pandas/blob/8a5344742c5165b2595f7ccca9e17d5eff7f7886/pandas/core/groupby/ops.py#L766), which requests ```name[0]``` from the ```BaseGrouper``` object, which ultimately seems to get its names from the underlying ```Grouping``` object. However, when getting that [name](https://github.com/pandas-dev/pandas/blob/8a5344742c5165b2595f7ccca9e17d5eff7f7886/pandas/core/groupby/grouper.py#L550), since ```self.grouping_vector``` is a ```MultiIndex```, the ```elif isinstance(self.grouping_vector, Index):``` branch is taken, which leads to a request for the ```.name``` attribute of a ```MultiIndex```, which returns ```None```. This names of index columns are dropped even if a list of lambdas is provided in order to produce multiple levels of a MultiIndex in the aggregation output, i.e. even if ```len(self.groupings) > 1``` in the ```BaseGrouper``` object
Apologies in advance if this is intended behaviour.
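For what it's worth, grouping by the index level directly (rather than through a callable) does keep the name, which is the workaround I am using for now, sketched on the same frame:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3, 1, 2, 3], "B": [4, 5, 6, 4, 5, 6], "C": [7, 8, 9, 7, 8, 9]})
df = df.set_index(["A", "B"])

named = df.groupby(level="A").sum()          # resulting index keeps the name 'A'
unnamed = df.groupby(lambda x: x[0]).sum()   # resulting index name is dropped (this issue)

print(named.index.name, unnamed.index.name)  # -> A None
```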
### Expected Behavior
```python
df = pandas.DataFrame({"A" : [1,2,3,1,2,3], "B": [4,5,6,4,5,6], "C" : [7,8,9,7,8,9]})
df = df.set_index(['A', 'B'])
df.groupby('A')
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.1
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18363
machine : AMD64
processor : AMD64 Family 21 Model 96 Stepping 1, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_Canada.1252
pandas : 2.2.3
numpy : 2.2.1
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| open | 2024-12-24T23:07:41Z | 2024-12-26T22:02:54Z | https://github.com/pandas-dev/pandas/issues/60603 | [
"Bug",
"Groupby",
"Closing Candidate"
] | RahimD | 1 |
JaidedAI/EasyOCR | deep-learning | 578 | multi GPUs Issue | Hi, I tried this but still had issues:
```
/katakana/notebooks/vision/ocr_hybrid.py in easy_ocr_no_table(img, device)
69 img = prep_img_for_easyocr(img)
70 img = remove_table_from_scan(img)
---> 71 result = run_easy_ocr(img, device)
72 return result
73
~/katakana/notebooks/vision/ocr_hybrid.py in run_easy_ocr(img, device)
60 if easyocr_reader is None:
61 load_easyocr_reader(device) # to prevent slow import
---> 62 result = easyocr_reader.readtext(
63 img, link_threshold=0.99, text_threshold=0.7, add_margin=0.02
64 )
~/.local/lib/python3.8/site-packages/easyocr/easyocr.py in readtext(self, image, decoder, beamWidth, batch_size, workers, allowlist, blocklist, detail, rotation_info, paragraph, min_size, contrast_ths, adjust_contrast, filter_ths, text_threshold, low_text, link_threshold, canvas_size, mag_ratio, slope_ths, ycenter_ths, height_ths, width_ths, y_ths, x_ths, add_margin, output_format)
383 img, img_cv_grey = reformat_input(image)
384
--> 385 horizontal_list, free_list = self.detect(img, min_size, text_threshold,\
386 low_text, link_threshold,\
387 canvas_size, mag_ratio,\
~/.local/lib/python3.8/site-packages/easyocr/easyocr.py in detect(self, img, min_size, text_threshold, low_text, link_threshold, canvas_size, mag_ratio, slope_ths, ycenter_ths, height_ths, width_ths, add_margin, reformat, optimal_num_chars)
273 img, img_cv_grey = reformat_input(img)
274
--> 275 text_box_list = get_textbox(self.detector, img, canvas_size, mag_ratio,
276 text_threshold, link_threshold, low_text,
277 False, self.device, optimal_num_chars)
~/.local/lib/python3.8/site-packages/easyocr/detection.py in get_textbox(detector, image, canvas_size, mag_ratio, text_threshold, link_threshold, low_text, poly, device, optimal_num_chars)
92 result = []
93 estimate_num_chars = optimal_num_chars is not None
---> 94 bboxes_list, polys_list = test_net(canvas_size, mag_ratio, detector,
95 image, text_threshold,
96 link_threshold, low_text, poly,
~/.local/lib/python3.8/site-packages/easyocr/detection.py in test_net(canvas_size, mag_ratio, net, image, text_threshold, link_threshold, low_text, poly, device, estimate_num_chars)
43 # forward pass
44 with torch.no_grad():
---> 45 y, feature = net(x)
46
47 boxes_list, polys_list = [], []
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
151 for t in chain(self.module.parameters(), self.module.buffers()):
152 if t.device != self.src_device_obj:
--> 153 raise RuntimeError("module must have its parameters and buffers "
154 "on device {} (device_ids[0]) but found one of "
155 "them on device: {}".format(self.src_device_obj, t.device))
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:3
```
_Originally posted by @akbir in https://github.com/JaidedAI/EasyOCR/issues/10#issuecomment-902039791_
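A workaround that sometimes helps with this class of `DataParallel` error (an assumption on my side, not verified on the setup above) is to make only one GPU visible before torch/easyocr are imported:

```python
import os

# Must run before `import torch` / `import easyocr`, otherwise it has no effect:
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # this process now only sees GPU 0

# then proceed as usual:
# import easyocr
# reader = easyocr.Reader(["en"], gpu=True)
```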
I have the exact same issue when using multiple GPUs. | closed | 2021-10-28T07:49:06Z | 2023-02-28T11:49:19Z | https://github.com/JaidedAI/EasyOCR/issues/578 | [] | anhncs | 4 |
graphql-python/graphene-django | graphql | 829 | Performance issues with large data sets and pagination. | It is known that GraphQL is not the fastest API when you have many objects, see: https://github.com/graphql-python/graphene/issues/268
If you have many objects, you want to use pagination with `DjangoConnectionField`.
I have run some benchmarks for this:
1. There are 1000 items and I fetch 100 of them:
```
django v1.11.26
graphene v2.1.8
graphene-django v2.7.1
Use timeit statement: graphene.Schema(query=Query).execute('{Items(first: 100) {edges {node {id}}}}')
Run one timeit call... takes: 13.3 ms
timeit... use 5 * 75 loop...
max...: 10.84 ms
median: 10.66 ms
min...: 10.61 ms
cProfile stats for one request: 30148 function calls (28688 primitive calls) in 0.019 seconds
```
2. There are only 100 items and I didn't use a `DjangoConnectionField`:
```
django v1.11.26
graphene v2.1.8
graphene-django v2.7.1
Use timeit statement: graphene.Schema(query=Query).execute('{Items {id}}')
Run one timeit call... takes: 13.3 ms
timeit... use 5 * 106 loop...
max...: 7.54 ms
median: 7.48 ms
min...: 7.29 ms
cProfile stats for one request: 18556 function calls (17830 primitive calls) in 0.012 seconds
```
Graphene already makes a lot of calls anyway, but there is also a very clear difference between these two variants.
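The timing harness behind those numbers is roughly the following (schema setup elided; any zero-argument callable can be measured the same way):

```python
import timeit

def bench(run, repeat=5, number=75):
    """Return (min, median, max) per-call seconds for a zero-argument callable."""
    totals = sorted(timeit.timeit(run, number=number) / number for _ in range(repeat))
    return totals[0], totals[len(totals) // 2], totals[-1]

# e.g. bench(lambda: graphene.Schema(query=Query).execute('{Items(first: 100) {edges {node {id}}}}'))
lo, med, hi = bench(lambda: sum(range(1000)), repeat=3, number=50)
print(f"min {lo * 1e3:.2f} ms, median {med * 1e3:.2f} ms, max {hi * 1e3:.2f} ms")
```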
| open | 2019-12-19T15:40:44Z | 2020-08-28T04:23:16Z | https://github.com/graphql-python/graphene-django/issues/829 | [
"✨enhancement",
"help wanted"
] | jedie | 3 |
slackapi/python-slack-sdk | asyncio | 1,304 | ssl_context is not passed from async web_client to aiohttp socket client |
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The Slack SDK version
3.18.1
#### Python runtime version
3.8
#### OS info
Linux version 4.15.0-196-generic (buildd@lcy02-amd64-018) (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #207-Ubuntu SMP Thu Oct 27 21:24:58 UTC 2022
### Expected result:
I expect to be able to provide an SSL context for the websockets to use.
I can provide one to the AsyncWebClient:
```python
AsyncWebClient(token=BOT_TOKEN, ssl=ssl_context)
```
and I want to use the same context for the websocket.
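For completeness, the context itself is nothing exotic: stdlib defaults plus our CA bundle (the `cafile` path here is redacted and purely illustrative):

```python
import ssl

ssl_context = ssl.create_default_context()  # optionally: cafile="/path/to/ca-bundle.pem"
ssl_context.check_hostname = True
ssl_context.verify_mode = ssl.CERT_REQUIRED

# client = AsyncWebClient(token=BOT_TOKEN, ssl=ssl_context)   # honoured for Web API calls
# ...but the same context never reaches aiohttp's ws_connect (this issue)
```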
### Actual result:
<img width="449" alt="image" src="https://user-images.githubusercontent.com/71178874/202699045-40e27a15-9abb-4b3f-b4f7-c615079b7c8e.png">
The ssl_context is not passed to ws_connect, which leads to the above error:
```
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)>
```
| closed | 2022-11-18T12:48:16Z | 2022-11-28T07:26:44Z | https://github.com/slackapi/python-slack-sdk/issues/1304 | [
"bug",
"Version: 3x",
"socket-mode",
"area:async"
] | giwrgos-skouras | 3 |
gradio-app/gradio | data-science | 10,105 | The message format of examples of multimodal chatbot is different from that of normal submission | ### Describe the bug
When you click the example image inside the Chatbot component of the following app
```py
import gradio as gr
def run(message, history):
print(message)
return "aaa"
demo = gr.ChatInterface(
fn=run,
examples=[
[
{
"text": "Describe the image.",
"files": ["cats.jpg"],
},
],
],
multimodal=True,
type="messages",
cache_examples=False,
)
demo.launch()
```

the printed message format looks like this:
```
{'text': 'Describe the image.', 'files': [{'path': '/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg', 'url': 'https://hysts-debug-multimodal-chat-examples.hf.space/gradio_api/file=/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg', 'size': None, 'orig_name': 'cats.jpg', 'mime_type': 'image/jpeg', 'is_stream': False, 'meta': {'_type': 'gradio.FileData'}}]}
```
But when you submit the same input from the textbox component at the bottom, it looks like this:
```
{'text': 'Describe the image.', 'files': ['/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg']}
```
This inconsistency is problematic. I think the latter is the correct and expected format.
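Until the two shapes are unified, a small shim at the top of `fn` can coerce either form to the plain-paths variant (a hypothetical helper, not part of gradio):

```python
def normalize_message(message):
    """Coerce files to plain path strings, whether they arrive as str or as FileData dicts."""
    files = [f["path"] if isinstance(f, dict) else f for f in message.get("files", [])]
    return {"text": message.get("text", ""), "files": files}

from_example = {"text": "Describe the image.",
                "files": [{"path": "/tmp/cats.jpg", "url": "...", "meta": {"_type": "gradio.FileData"}}]}
from_textbox = {"text": "Describe the image.", "files": ["/tmp/cats.jpg"]}
print(normalize_message(from_example) == normalize_message(from_textbox))  # -> True
```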
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
https://huggingface.co/spaces/hysts-debug/multimodal-chat-examples
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio==5.7.1
```
### Severity
I can work around it | closed | 2024-12-03T08:35:43Z | 2024-12-07T15:51:01Z | https://github.com/gradio-app/gradio/issues/10105 | [
"bug"
] | hysts | 0 |
mage-ai/mage-ai | data-science | 5,512 | API TRIGGER - "Skip run if previous run still in progress" option request | **Is your feature request related to a problem? Please describe.**
**Describe the solution you'd like**
Please provide the option "Skip run if previous run still in progress" for the API trigger section. This option is not present for API Triggers
**CAPTURE:**

| open | 2024-10-18T16:38:12Z | 2024-10-18T22:51:30Z | https://github.com/mage-ai/mage-ai/issues/5512 | [
"feature"
] | Arthidon | 0 |
ageitgey/face_recognition | python | 715 | How to deal with 1000 face images | * face_recognition version: 1.2.3.
* Python version: 2.7
* Operating System: Mac
### Description
I want to recognize the faces of guests on board at a hotel and update their check-in status in a database. I have images of 1,000 guests. Will this library work with 1K images? Is there any performance impact, or is there a better way to do this?
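For scale, matching one probe against 1000 precomputed encodings is a single vectorized distance computation. Here is a sketch (numpy only; `known` stands for the (1000, 128) array you would get by running `face_recognition.face_encodings` once per guest photo and stacking the results):

```python
import numpy as np

def identify(probe, known, names, tolerance=0.6):
    """Return (name, distance) of the closest enrolled guest, or (None, distance) if too far."""
    distances = np.linalg.norm(known - probe, axis=1)  # same metric as face_recognition.face_distance
    best = int(np.argmin(distances))
    name = names[best] if distances[best] <= tolerance else None
    return name, float(distances[best])

# toy 2-D "encodings" just to exercise the function
known = np.array([[0.0, 0.0], [3.0, 4.0]])
print(identify(np.array([0.0, 0.0]), known, ["alice", "bob"]))  # -> ('alice', 0.0)
```

At 1000 guests the enrolled array is small enough that each lookup is effectively instant; the slow part remains detecting and encoding the probe face itself.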
| closed | 2019-01-08T04:20:33Z | 2020-01-08T07:16:09Z | https://github.com/ageitgey/face_recognition/issues/715 | [] | verma171 | 3 |
modelscope/data-juicer | streamlit | 420 | AssertionError | ### Before Asking 在提问之前
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully. 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引。
- [X] I have pulled the latest code of main branch to run again and the problem still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
### Search before asking 先搜索,再提问
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的问题。
### Question
<img width="361" alt="111111" src="https://github.com/user-attachments/assets/e1a708b8-4aa9-475f-ab60-be81c7636946">
I get the error shown above when running `python tools/analyze_data.py --config xxx.yaml` to analyze the data. I had run this once before without any problem, but today it reports this error. What could be the cause? Any guidance would be appreciated.
### Additional 额外信息
_No response_ | closed | 2024-09-09T01:54:30Z | 2024-09-09T05:05:31Z | https://github.com/modelscope/data-juicer/issues/420 | [
"bug",
"question"
] | abchbx | 1 |
flairNLP/flair | nlp | 2,828 | SentenceTransformerDocumentEmbeddings model in Spanish? |
Hello, I am working on a project in Spanish and need to obtain embeddings for some sentences in my corpus. For this I use `SentenceTransformerDocumentEmbeddings`, but I noticed that these models are pre-trained in English.
Is there a version of these models pre-trained in Spanish?
The English models I am referring to are listed in this sheet:
https://docs.google.com/spreadsheets/d/14QplCdTCDwEmTqrn1LH4yrbKvdogK4oQvYO1K1aPR5M/edit#gid=0 | closed | 2022-06-22T01:26:37Z | 2022-11-01T15:04:41Z | https://github.com/flairNLP/flair/issues/2828 | [
"question",
"wontfix"
] | fmafelipe | 2 |
wandb/wandb | data-science | 9,010 | [Bug-App]: Point Cloud visualization misses points (max_point limit?) | ### Describe the bug
Hi there,
there appears to be a bug with the visualization of large point clouds.
The uploaded Object3D should contain 385,000 points; however, the white parts are incomplete.
The first screenshot below shows the result of uploading the white+red Object3D (385,000 points), and the second contains only the white points (190,000 points).


I also checked the uploaded pts.json file and it contains all 385,000 points, so it may be a bug on the visualization side. If I upload only white or blue, the visualizations work properly.
Perhaps there is a max_point limit that overrides the white ones (they are first in the data structure)?
**UPDATE**: Swapping white+red => red+white in the np.array yields this upload visualization. So it seems only the last parts up to a max point limit are shown. (Also same results on other browsers / devices)

(Also hint to my other feature proposal, while you're at it ;) https://github.com/wandb/wandb/issues/9009)
Kind regards | open | 2024-12-04T11:30:30Z | 2024-12-17T10:01:13Z | https://github.com/wandb/wandb/issues/9010 | [
"ty:bug",
"a:app"
] | mokrueger | 4 |
Kav-K/GPTDiscord | asyncio | 155 | [BUG] Docker env errors with 10.3.2 + Image Size concerns | **Describe the bug**
The docker image has grown quite a lot - Was this expected?
```
kaveenk/gpt3discord latest b4c0677089fe 15 hours ago 4.07GB
```
I also updated to latest_release and latest and got the following error (so I am guessing I'm on 10.3.2):
```
Loading environment from .env
Loading environment from /opt/gpt3discord/etc/environment
Loading environment from None
Attempting to retrieve the settings DB
Retrieved the settings DB
Traceback (most recent call last):
File "/opt/gpt3discord/bin/gpt3discord.py", line 14, in <module>
from cogs.search_service_cog import SearchService
File "/usr/local/lib/python3.9/site-packages/cogs/search_service_cog.py", line 13, in <module>
ALLOWED_GUILDS = EnvService.get_allowed_guilds()
File "/usr/local/lib/python3.9/site-packages/services/environment_service.py", line 87, in get_allowed_guilds
allowed_guilds = [int(guild) for guild in allowed_guilds]
File "/usr/local/lib/python3.9/site-packages/services/environment_service.py", line 87, in <listcomp>
allowed_guilds = [int(guild) for guild in allowed_guilds]
ValueError: invalid literal for int() with base 10: ''
```
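Not the library's actual code, but the failing line suggests an empty string is slipping into the guild list (e.g. an unset or blank `ALLOWED_GUILDS`); a defensive parse would look something like:

```python
import os

def get_allowed_guilds(raw=None):
    raw = raw if raw is not None else os.environ.get("ALLOWED_GUILDS", "")
    # drop empty fragments so ALLOWED_GUILDS="" or a trailing comma can't hit int("")
    return [int(part) for part in (p.strip() for p in raw.split(",")) if part]

print(get_allowed_guilds("811050810460078100"))  # -> [811050810460078100]
print(get_allowed_guilds(""))                    # -> []
```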
I'm guessing there might be config changes since I last did an update - My config:
```
cooper@us:/containers$ cat /containers/gpt3discord/env
DATA_DIR="/data"
OPENAI_TOKEN="FOO"
DISCORD_TOKEN="FOO"
ALLOWED_GUILDS="811050810460078100"
ALLOWED_ROLES="Admin,gpt"
DEBUG_GUILD="811050810460078100"
DEBUG_CHANNEL="1058174617287663689"
# This is the channel that auto-moderation alerts will be sent to
MODERATIONS_ALERT_CHANNEL="1058174617287663689"
# People with the roles in ADMIN_ROLES can use admin commands like /clear-local, and etc
ADMIN_ROLES="Server Admin,Owner,Special People"
# People with the roles in DALLE_ROLES can use commands like /dalle draw or /dalle imgoptimize
DALLE_ROLES="Server Admin,Special People,@everyone"
# People with the roles in GPT_ROLES can use commands like /gpt ask or /gpt converse
GPT_ROLES="Special People,@everyone"
WELCOME_MESSAGE="Long ass message removing for paste"
```
Have I missed something there? I noticed after the update that a lot more env vars are set to nothing in the docker container itself - that might be the bug here, resetting my `ALLOWED_GUILDS`? Has anyone else had an issue upgrading?
(Sadly I don't know the version I was on, but I might roll back to a 9.x.x and see if it works for now)
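For what it's worth, the traceback is consistent with a blank `ALLOWED_GUILDS` reaching the parser. Assuming the service splits the variable on commas (which I have not verified), an empty string splits into `[""]`, not `[]`:

```python
allowed_guilds = ""                   # what a blank/overridden env var looks like
guilds = allowed_guilds.split(",")    # -> [""], one empty entry, not []

try:
    parsed = [int(g) for g in guilds]
except ValueError:
    parsed = None                     # int("") raises: invalid literal for int() with base 10: ''
```

So if the container really is overriding `ALLOWED_GUILDS` with an empty value, that would produce exactly this `ValueError`.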
Thanks in advance | closed | 2023-02-17T20:56:39Z | 2023-02-24T02:52:45Z | https://github.com/Kav-K/GPTDiscord/issues/155 | [
"bug"
] | cooperlees | 16 |
tflearn/tflearn | tensorflow | 979 | download vgg16.tflearn | how can i download vgg16.tflearn? | open | 2017-12-13T23:59:56Z | 2018-01-16T22:53:00Z | https://github.com/tflearn/tflearn/issues/979 | [] | mhabab | 2 |
jofpin/trape | flask | 146 | Trape loops setup | Whenever I start trape using `python trape.py` and I supply values for the ngrok token and google maps api it sasy that the configuration was successful but then it loops and asks for the values again. This is an endless loop of supplying the values and then reentering them. Anyone know how to fix? | closed | 2019-04-04T11:14:19Z | 2021-06-17T15:12:53Z | https://github.com/jofpin/trape/issues/146 | [] | Soutcast | 2 |
quantumlib/Cirq | api | 6,326 | Improve `__str__` and `__repr__` for `SingleQubitCliffordGate` | **Is your feature request related to a use case or problem? Please describe.**
Both the string and repr operators of SingleQubitCliffordGate fall back to the parent class, which gives a confusing representation, e.g.
```py3
>>> import cirq
>>> repr(cirq.ops.SingleQubitCliffordGate.X)
"cirq.CliffordGate.from_clifford_tableau(cirq.CliffordTableau(1,rs=np.array([False, True], dtype=np.dtype('bool')), xs=np.array([[True], [False]], dtype=np.dtype('bool')),zs=np.array([[False], [True]], dtype=np.dtype('bool')), initial_state=0))"
>>> str(cirq.ops.SingleQubitCliffordGate.X)
"cirq.CliffordGate.from_clifford_tableau(cirq.CliffordTableau(1,rs=np.array([False, True], dtype=np.dtype('bool')), xs=np.array([[True], [False]], dtype=np.dtype('bool')),zs=np.array([[False], [True]], dtype=np.dtype('bool')), initial_state=0))"
```
**Describe the solution you'd like**
the representation should be simpler. For example for `cirq.ops.SingleQubitCliffordGate.X` it should be
`cirq.ops.SingleQubitCliffordGate(_clifford_tableau=cirq.CliffordTableau(1, xs=np.array([[True], [False]]), zs=np.array([[False], [True]])))`
**What is the urgency from your perspective for this issue? Is it blocking important work?**
P3 - I'm not really blocked by it, it is an idea I'd like to discuss / suggestion based on principle | closed | 2023-10-24T21:06:17Z | 2025-01-15T16:24:41Z | https://github.com/quantumlib/Cirq/issues/6326 | [
"good first issue",
"kind/feature-request",
"triage/accepted",
"good for learning"
] | NoureldinYosri | 13 |
indico/indico | sqlalchemy | 6,785 | Wizard error at Contact email | **Describe the bug**
I want to install Indico on a server for development, but when I run the setup wizard I get this problem.
 | closed | 2025-03-02T22:16:55Z | 2025-03-02T23:03:48Z | https://github.com/indico/indico/issues/6785 | [
"bug"
] | aforouz | 2 |
pytest-dev/pytest-qt | pytest | 501 | ModelTester for recursive tree models | I´m running into a recursive loop when using the ModelTester with recursive tree models.
QAbstractItemModelTester gained an option to not call fetchMore with Qt6.4. ( https://doc.qt.io/qt-6/qabstractitemmodeltester.html#setUseFetchMore )
Since the _qt_tester attribute of ModelTester is only set after calling ModelTester.check(), there is no way to use that option.
Would be nice if that option could get added.
Thank you! | open | 2023-07-01T11:42:55Z | 2023-08-10T01:18:08Z | https://github.com/pytest-dev/pytest-qt/issues/501 | [] | phil65 | 5 |
zappa/Zappa | django | 551 | [Migrated] Unable to access request.form | Originally from: https://github.com/Miserlou/Zappa/issues/1463 by [pickfire](https://github.com/pickfire)
## Context
Zappa fail to recognize `application/x-www-form-urlencoded` and access `request.form`.
## Expected Behavior
`request.form` should not be empty and `request.data` should be empty. And I believe the same goes for `request.args`, `request.files`, `request.values` ... http://flask.pocoo.org/docs/0.12/api/#incoming-request-data
## Actual Behavior
`request.form` is empty but `request.data` isn't.
## Possible Fix
Parse `response.form` from `response.data` if `Content-Type` is `application/x-www-form-urlencoded` in https://github.com/Miserlou/Zappa/blob/master/zappa/handler.py#L516?
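For illustration, the standard library can already do that parse; a minimal sketch of the idea, not the actual handler change (in a proper fix Werkzeug would populate `request.form` itself once the body and Content-Type survive the handler):

```python
from urllib.parse import parse_qs

# what `curl -d k=v -d tag=a -d tag=b` sends as the request body
body = b"k=v&tag=a&tag=b"
form = parse_qs(body.decode("utf-8"))
# parse_qs maps each field name to a *list* of values
```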
## Steps to Reproduce
```python
from flask import Flask, request
app = Flask(__name__)
@app.route('/', methods=['POST'])
def root():
return str(request.form)
if __name__ == '__main__':
app.run()
```
1. `python3 -m venv venv`
2. `source venv/bin/activate`
3. `pip install zappa flask`
4. `python app.py`
5. `curl 127.0.0.1:5000 -d k=v # this should not show anything`
6. `zappa init`
7. `zappa deploy`
8. `curl aws -d k=v`
## Your Environment
* Zappa version used: `0.45.1`
* Operating System and Python version: `Linux cti 4.15.13-1-ARCH #1 SMP PREEMPT Sun Mar 25 11:27:57 UTC 2018 x86_64 GNU/Linux`
* The output of `pip freeze`:
```
argcomplete==1.9.2
base58==0.2.4
boto3==1.6.14
botocore==1.9.14
certifi==2018.1.18
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
Flask==0.12.2
Flask-CAS==1.0.1
future==0.16.0
hjson==3.0.1
idna==2.6
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
ldap3==2.4.1
MarkupSafe==1.0
placebo==0.8.1
pyasn1==0.4.2
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.13
six==1.11.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.2.1
Unidecode==1.0.22
urllib3==1.22
Werkzeug==0.12
wsgi-request-logger==0.4.6
xmltodict==0.11.0
zappa==0.45.1
```
(not using python-ldap here since python-ldap 3.0.0 was not packaged for lambda-packages)
* Link to your project (optional): Closed
* Your `zappa_settings.py`:
```
{
"dev": {
"app_function": "app.app",
"aws_region": "ap-southeast-1",
"profile_name": "default",
"project_name": "security",
"runtime": "python3.6",
"s3_bucket": "zappa-waxc8fhf7"
}
}
```
| closed | 2021-02-20T12:22:38Z | 2022-07-16T07:09:47Z | https://github.com/zappa/Zappa/issues/551 | [] | jneves | 1 |
scikit-learn/scikit-learn | machine-learning | 31,049 | RFC adopt narwhals for dataframe support | At least as of [SLEP018](https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep018/proposal.html), scikit-learn supports dataframes passed as `X`. In #25896 is a further place of current discussions.
This issue is to discuss whether or not, or in which form, a future scikit-learn should depend on [narwhals](https://github.com/narwhals-dev/narwhals) for general dataframe support.
`+` wide df support
`+` less maintenance within scikit-learn
`-` external dependency
@scikit-learn/core-devs @MarcoGorelli | open | 2025-03-21T13:15:28Z | 2025-03-24T16:50:13Z | https://github.com/scikit-learn/scikit-learn/issues/31049 | [
"RFC"
] | lorentzenchr | 3 |
ultralytics/ultralytics | pytorch | 19,655 | yolov8-model get the obb, but how can I crop the obb? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I used the yolov8-obb model to get the OBB of an image, but when I used the following code to crop the OBB, the result is wrong.

The plotted image is:

the crop image is:

What's wrong with the cropped image?
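Not speaking for the Ultralytics API, but one common way to crop an OBB is to sample the image along the box's rotated axes (in practice you would use `cv2.warpAffine` on the rotation implied by the box, or perspective-warp its four corner points). A dependency-free sketch; the `(cx, cy, w, h, angle)` parameter layout is my assumption about your OBB output:

```python
import math

def crop_obb(image, cx, cy, w, h, angle_deg):
    """Crop a w x h oriented box centred at (cx, cy), rotated by
    angle_deg, from `image` (a list of rows).  Each output pixel is
    sampled from the source via the inverse rotation (nearest
    neighbour); samples outside the image become 0."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    rows, cols = len(image), len(image[0])
    out = []
    for j in range(h):
        row = []
        for i in range(w):
            # offset from the crop centre, rotated into source coordinates
            dx, dy = i - (w - 1) / 2.0, j - (h - 1) / 2.0
            sx = cx + dx * cos_a - dy * sin_a
            sy = cy + dx * sin_a + dy * cos_a
            xi, yi = int(round(sx)), int(round(sy))
            row.append(image[yi][xi] if 0 <= yi < rows and 0 <= xi < cols else 0)
        out.append(row)
    return out

# toy "image": pixel value encodes its (row, col) position
img = [[r * 10 + c for c in range(10)] for r in range(10)]
patch = crop_obb(img, 4.5, 4.5, 4, 4, 0)     # axis-aligned 4x4 crop
patch90 = crop_obb(img, 4.5, 4.5, 4, 4, 90)  # same box rotated 90 degrees
```

If a crop from the real pipeline still looks wrong, the usual suspects are a swapped (w, h) pair, the angle being in radians rather than degrees, or the angle sign convention.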
### Additional
_No response_ | open | 2025-03-12T03:46:46Z | 2025-03-12T09:59:20Z | https://github.com/ultralytics/ultralytics/issues/19655 | [
"question",
"OBB"
] | wuchaotao | 2 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 7 | Training data and label | I want to know what "dict.txt", train/, test/, dev/ and trans/label are ...
They do not match the THCHS-30 data which is downloaded from the website.
Can you tell me or share your dataset? Thx | closed | 2018-04-14T15:35:26Z | 2018-05-04T04:48:05Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/7 | [] | cnzdc | 2 |
jwkvam/bowtie | plotly | 86 | check speed of rendering chart with plotly | compare the timings for svg and gl with scatter and scattergl
https://plot.ly/javascript/reference/#scattergl
may need to add some more features to plotlywrapper | open | 2017-01-26T21:43:32Z | 2018-03-03T19:55:20Z | https://github.com/jwkvam/bowtie/issues/86 | [
"low-priority"
] | jwkvam | 0 |
marshmallow-code/apispec | rest-api | 54 | Bad serialization with load_operations_from_docstring | Hello,
I noticed a problem while using the `apispec.utils.load_operations_from_docstring` method.
While trying to parse the following docstring:
``` python
"""
Fetch multiple stuff
---
get:
description: Returns stuff
responses:
200:
description: A list of stuff.
produces: [ application/json ]
schema:
type: array
items:
$ref: "#/definitions/Stuff"
"""
```
Using this code:
``` python
spec.add_path(path=Path(path="/api/stuff", operations=load_operations_from_docstring(method.__doc__)))
```
The `200` response was serialized as an `int`, and not as a `string`. As a result, the JSON was not valid.
I did a little workaround, a path_helper you can find below. Maybe this should be included as a default path_helper, or the behavior should be fixed somehow.
``` python
def yaml_serializer(apispec, **kwargs):
def replace_nums(d):
        for k, v in list(d.items()):  # iterate over a copy: keys are deleted below
if isinstance(k, int):
d[str(k)] = v
del d[k]
if isinstance(v, dict):
replace_nums(v)
replace_nums(kwargs['path'])
return kwargs['path']
spec.register_path_helper(yaml_serializer)
```
(Also, writing 'items: StuffSchema' in the docstring didn't work as expected, so I had to add the "$ref" line manually.)
I'm not at ease enough with the project to make a PR yet, but I thought I should tell you guys !
EDIT: along with the fix for the badly formatted integer keys, here is the fix for the badly formatted "schema" value:
``` python
def yaml_serializer(apispec, **kwargs):
def replace_nums(d):
        for k, v in list(d.items()):  # iterate over a copy: keys are deleted below
if isinstance(k, int):
d[str(k)] = v
del d[k]
if isinstance(v, dict):
replace_nums(v)
def add_schema_ref(d):
for k, v in d.items():
if k == "schema":
if not isinstance(v, dict):
d[k] = {'$ref': '#/definitions/' + v.replace('Schema', '')}
                elif isinstance(v.get('items'), str):  # dict schema whose 'items' is still a bare name
schema = v['items']
v['items'] = {'$ref': '#/definitions/' + schema.replace('Schema', '')}
elif isinstance(v, dict):
add_schema_ref(v)
replace_nums(kwargs['path'])
add_schema_ref(kwargs['path'])
return kwargs['path']
spec.register_path_helper(yaml_serializer)
```
| closed | 2016-02-22T14:20:26Z | 2016-03-04T13:02:57Z | https://github.com/marshmallow-code/apispec/issues/54 | [] | martinlatrille | 3 |
microsoft/MMdnn | tensorflow | 388 | mxnet BlockGrad doesn't support | (DL) room@room-MS-7A93:~/PycharmProject/insightface/models/MobileFaceNet$ mmtoir -f mxnet -n model-y1-softmax-vggface2-symbol.json -w model-y1-softmax-vggface2-0117.params -d model_converted_softmax_0117_vggface2/mobilefacenet --inputShape 3,112,112
Warning: MXNet Parser has not supported operator null with name data.
Warning: convert the null operator with name [data] into input layer.
pre_fc1
Warning: MXNet Parser has not supported operator BlockGrad with name blockgrad0.
Traceback (most recent call last):
File "/home/room/anaconda3/envs/DL/bin/mmtoir", line 11, in <module>
sys.exit(_main())
File "/home/room/anaconda3/envs/DL/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 194, in _main
ret = _convert(args)
File "/home/room/anaconda3/envs/DL/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 117, in _convert
parser.run(args.dstPath)
File "/home/room/anaconda3/envs/DL/lib/python3.6/site-packages/mmdnn/conversion/common/DataStructure/parser.py", line 22, in run
self.gen_IR()
File "/home/room/anaconda3/envs/DL/lib/python3.6/site-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 265, in gen_IR
self.rename_UNKNOWN(current_node)
File "/home/room/anaconda3/envs/DL/lib/python3.6/site-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 376, in rename_UNKNOWN
raise NotImplementedError()
NotImplementedError
| closed | 2018-08-30T02:38:55Z | 2018-09-03T05:04:08Z | https://github.com/microsoft/MMdnn/issues/388 | [] | aguang1201 | 5 |
ray-project/ray | python | 51,596 | [CG, Core] Illegal memory access with Ray 2.44 and vLLM v1 pipeline parallelism | ### What happened + What you expected to happen
We got the following errors when running vLLM v1 PP>1 with Ray 2.44. It was working fine with Ray 2.43.
```
ERROR 03-21 10:34:30 [core.py:343] File "/home/ray/default/vllm/vllm/v1/worker/gpu_model_runner.py", line 1026, in execute_model
ERROR 03-21 10:34:30 [core.py:343] self.intermediate_tensors[k][:num_input_tokens].copy_(
ERROR 03-21 10:34:30 [core.py:343] RuntimeError: CUDA error: an illegal memory access was encountered
ERROR 03-21 10:34:30 [core.py:343] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
ERROR 03-21 10:34:30 [core.py:343] For debugging consider passing CUDA_LAUNCH_BLOCKING=1
ERROR 03-21 10:34:30 [core.py:343] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions / Dependencies
- Python 3.11
- CUDA 12.4
- NVIDIA L4 / L40S GPUs
- Ray 2.44
- vLLM 0.8.1 (or any newer commits)
### Reproduction script
```python
from vllm import LLM, SamplingParams
# Sample prompts.
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.0, max_tokens=50)
# Create an LLM.
llm = LLM(
model="Qwen/Qwen2.5-0.5B-Instruct",
distributed_executor_backend="ray",
pipeline_parallel_size=2,
enforce_eager=False,
)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
Run the script with
```
VLLM_USE_V1=1 python run.py
```
### Issue Severity
High: It blocks me from completing my task. | open | 2025-03-21T17:37:42Z | 2025-03-21T21:35:18Z | https://github.com/ray-project/ray/issues/51596 | [
"bug",
"P0",
"core"
] | comaniac | 0 |
blb-ventures/strawberry-django-plus | graphql | 188 | FieldDoesNotExist when using field via mixin | Given:
```python
# from strawberry_django import field as django_field
# from strawberry_django import type as django_type
from strawberry_django_plus.gql.django import field as django_field
from strawberry_django_plus.gql.django import type as django_type
class UrnFieldMixin:
urn: str = django_field()
@django_type(models.Foo, filters=TrayTypeFilter, pagination=True)
class Foo(UrnFieldMixin):
...
```
I get:
```
../../../Vcs/django/django/db/models/options.py:669: in get_field
return self.fields_map[field_name]
E KeyError: 'field'
During handling of the above exception, another exception occurred:
testing/api/test_foo.py:3: in <module>
from csd.api.schema import schema
.../schema.py:11: in <module>
from ...api_types import Baz
.../api_types.py:11: in <module>
from ....api_types import Bar
.../api_types.py:51: in <module>
@django_type(models.Foo, filters=FooFilter, pagination=True)
../../../Vcs/strawberry-django-plus/strawberry_django_plus/type.py:379: in wrapper
return _process_type(
../../../Vcs/strawberry-django-plus/strawberry_django_plus/type.py:276: in _process_type
fields = list(_get_fields(django_type).values())
../../../Vcs/strawberry-django-plus/strawberry_django_plus/type.py:210: in _get_fields
fields[name] = _from_django_type(
../../../Vcs/strawberry-django-plus/strawberry_django_plus/type.py:149: in _from_django_type
model_field = get_model_field(
../../../Vcs/strawberry-graphql-django/strawberry_django/fields/types.py:252: in get_model_field
raise e
../../../Vcs/strawberry-graphql-django/strawberry_django/fields/types.py:235: in get_model_field
return model._meta.get_field(field_name)
../../../Vcs/django/django/db/models/options.py:671: in get_field
raise FieldDoesNotExist(
E django.core.exceptions.FieldDoesNotExist: Foo has no field named 'field', did you mean ...
```
When not using a mixin, or when not using strawberry-django-plus it works.
I am happy to debug this further, but would appreciate some pointer(s). | open | 2023-03-23T15:57:05Z | 2023-03-27T14:39:43Z | https://github.com/blb-ventures/strawberry-django-plus/issues/188 | [] | blueyed | 5 |
InstaPy/InstaPy | automation | 6,462 | post_page[0]["shortcode_media"] KeyError: 0 | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
like and comment posts by tag
## Current Behavior
several hours ago script runs well, but suddenly:
INFO [2022-01-19 22:53:14] [kat_berlin_12] Like# [1/109]
INFO [2022-01-19 22:53:14] [kat_berlin_12] https://www.instagram.com/p/CY7db85vmmC/
INFO [2022-01-19 22:53:19] [kat_berlin_12] post_page: {'items': [{'taken_at': 1642632681, 'pk': 2754925061235173762, 'id': '2754925061235173762_49929679974', 'device_timestamp': 1642632562281324, 'media_type': 1, 'code': 'CY7db85vmmC', 'client_cache_key': .....
INFO [2022-01-19 22:53:20] [kat_berlin_12] Sessional Live Report:
|> No any statistics to show
[Session lasted 3.0 minutes]
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-01-19 22:53:20] [kat_berlin_12] Session ended!
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
Traceback (most recent call last):
File "test.py", line 20, in <module>
session.like_by_tags(["animegirl", "anime"], amount=100)
File "/root/insta/lib/python3.8/site-packages/instapy/instapy.py", line 1980, in like_by_tags
inappropriate, user_name, is_video, reason, scope = check_link(
File "/root/insta/lib/python3.8/site-packages/instapy/like_util.py", line 619, in check_link
media = post_page[0]["shortcode_media"]
KeyError: 0
trying:
local machine (pyCharm + Windows)
remote server (Ubuntu 20.4)
6 or 7 different accounts,
but no luck
code:
```
# -*- coding: utf-8 -*-
from instapy import InstaPy
from instapy import smart_run
session = InstaPy(username='username', password='password', headless_browser=True)
with smart_run(session):
session.set_do_like(enabled=True, percentage=84)
session.set_do_comment(enabled=True, percentage=24)
session.set_comments(['kawaiii!!!😍',
'Cool🔥🔥🔥🔥',
'❤❤❤❤❤Just amaZZZing',
'Awesome❤',
'Really Cool🔥🔥🔥',
'😍I like your stuff😍'])
session.like_by_tags(["animegirl", "anime"], amount=40)
```
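For what it's worth, the INFO log above shows `post_page` is now a dict of the form `{'items': [...]}` rather than a list, which is exactly why `post_page[0]` raises `KeyError: 0`. A sketch of the shape change (field values copied from the log; the old `shortcode_media` access comes from the traceback):

```python
# new-style payload, as printed in the INFO log above
post_page = {"items": [{"code": "CY7db85vmmC", "media_type": 1}]}

# old access pattern from like_util.check_link -- breaks on a dict payload
try:
    media = post_page[0]["shortcode_media"]
except KeyError as exc:
    failed_key = exc.args[0]   # 0, matching the traceback

# the posts now live under the "items" key instead
first_post = post_page["items"][0]
```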
| closed | 2022-01-19T23:22:58Z | 2022-01-25T06:42:31Z | https://github.com/InstaPy/InstaPy/issues/6462 | [] | Goli777 | 54 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,283 | Revise globaleaks.init script to properly use variable LISTENING_IP while setting iptables rules | ### What version of GlobaLeaks are you using?
5.0.18
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
Currently the iptables rules set by the /etc/init.d/globaleaks script seem not to consider the LISTENING_IP defined in /etc/default/globaleaks.
### Proposed solution
_No response_ | open | 2024-10-28T15:14:17Z | 2024-10-28T15:14:18Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4283 | [
"T: Bug",
"C: Startup Scripts"
] | evilaliv3 | 0 |
xorbitsai/xorbits | numpy | 672 | BUG: xorbits.DataFrames drop all columns that were not used in a calculation. | ### Describe the bug
When calling the following code
```python3
import pandas as pd
import numpy as np
import xorbits.pandas as xpd
df = pd.DataFrame({'a' : np.random.uniform(0,1,1000),
'b' : np.random.uniform(1,2,1000)})
df.to_parquet('test.pq')
del df
df = xpd.read_parquet('test.pq')
print(df.keys())
a = df['a'].to_numpy()
print(a.mean())
print(df.keys())
```
I get the output:
```python3
Index(['a', 'b'], dtype='object')
0.5050958861272348
Index(['a'], dtype='object')
```
If I instead call `(df['a']*df['c']).to_numpy()` then `a` and `c` are kept but `b` is still dropped. It looks like xorbits is dropping all columns that are not used in a calculation from the dataframe.
### To Reproduce
To help us to reproduce this bug, please provide information below:
1. Your Python version: 3.10.10
2. The version of Xorbits you use: 0.5.1
3. Versions of crucial packages, such as numpy, scipy and pandas: (numpy: '1.24.2', pandas: '1.4.0')
4. Full stack of the error: no stack is created as no crash is caused.
5. Minimized code to reproduce the error: see above.
### Expected behavior
All columns in the dataframe remaining accessible.
### Additional context
None | closed | 2023-08-26T01:04:00Z | 2023-09-08T03:04:48Z | https://github.com/xorbitsai/xorbits/issues/672 | [
"bug"
] | MarcelHoh | 4 |
d2l-ai/d2l-en | tensorflow | 1,945 | colab gives error code: '@save' is not an allowed annotation – allowed values include [@param, @title, @markdown]. | When I open the google colab files (pytorch or mx), I get this error:
'@save' is not an allowed annotation – allowed values include [@param, @title, @markdown].
This happens with all the colab files, in the specific case, this happens with the chapter 13 colab:
kaggle-cifar10.ipynb

| open | 2021-10-25T12:25:39Z | 2021-11-11T07:09:21Z | https://github.com/d2l-ai/d2l-en/issues/1945 | [] | g-i-o-r-g-i-o | 2 |
allenai/allennlp | data-science | 5,254 | pretrained_transformer_indexer sets token_ids and mask to different lengths | In `pretrained_transformer_indexer` the method `tokens_to_indices` adds `token_ids`, `mask` and `type_ids` to the output dict.
https://github.com/allenai/allennlp/blob/a6cfb1221520fca7a5cc55bef001c6a79a6a3e2f/allennlp/data/token_indexers/pretrained_transformer_indexer.py#L94
This is then passed to `_postprocess_output` which potentially resizes token_ids and type_ids (i.e. # Strips original special tokens), but it returns `segment_concat_mask` instead of `mask`. `mask` is now the same length as `token_ids`.
Also note that `_postprocess_output` only executes if _max_length is None. So special tokens only get stripped in this case? And the indexer sets `self._num_added_start_tokens` and `self._num_added_end_tokens` to 1 regardless of whether the tokenizer has include_special_tokens=true or false.
The issue I'm running into is that I'm using a CrfTagger with a pretrained_transformer, setting the encoder to pass_through. The pass_through encoder accepts token_ids and mask, which are now misaligned, so an exception is thrown.
It seems that the only combination which works is to not set `max_length` (in which case `segment_concat_mask` doesn't get added).
I'm using the json below to construct the tokenizer, token_indexer and model.
```
{
"tokenizer": {
"type": "pretrained_transformer",
"model_name": "/path/to/custom/model"
},
"token_indexers": {
"tokens": {
"type": "pretrained_transformer",
"model_name": "/path/to/custom/model",
"max_length": 128
}
}
}
{
"text_field_embedder":{
"token_embedders":{
"tokens": {
"type":"pretrained_transformer",
"model_name":"path/to/custom/model",
"max_length":128
}
}
},
"encoder":{
"type":"pass_through",
"input_dim":128
},
"label_encoding":"BIOUL",
"constrain_crf_decoding":true,
"include_start_end_transitions":true,
"dropout":0.5,
"verbose_metrics":false,
"calculate_span_f1":true
}
```
| closed | 2021-06-11T07:20:24Z | 2021-06-15T22:42:38Z | https://github.com/allenai/allennlp/issues/5254 | [
"bug"
] | david-waterworth | 5 |
LibrePhotos/librephotos | django | 635 | storybook for the front-end | **Describe the enhancement you'd like**
Add Storybook to the development process.
**Describe why this will benefit the LibrePhotos**
Changing inputs and behaviour of components can be done in an isolated environment while remaining sure that all consumers adhere to the components' api
**Additional context**
N/A
| open | 2022-09-09T13:21:17Z | 2022-09-26T17:36:18Z | https://github.com/LibrePhotos/librephotos/issues/635 | [
"enhancement",
"frontend"
] | polaroidkidd | 5 |
miguelgrinberg/flasky | flask | 367 | Change SQLite into MySQL database. | Hello Miguel!
I'm beginner and I'm trying to create website with MySQL database. I have tried with many ways on the Internet. But I did not succeed, do have any suggestion for me with this question?
Currently, I'm using XAMPP for running the MySQL server and HeidiSQL for managing the database.
Thank you so much
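Not an authoritative answer, but with Flask-SQLAlchemy the usual change is only the database URI; a sketch assuming the PyMySQL driver (`pip install pymysql`) and placeholder XAMPP values (the user, password and database names below are made up):

```python
# Hypothetical credentials -- substitute your own XAMPP MySQL values.
user, password, host, db = "root", "", "127.0.0.1", "flasky"
SQLALCHEMY_DATABASE_URI = f"mysql+pymysql://{user}:{password}@{host}/{db}"
```

With XAMPP you would start the MySQL server from the control panel first; everything else in the book's configuration stays the same.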
| closed | 2018-06-25T08:43:04Z | 2018-10-14T22:18:24Z | https://github.com/miguelgrinberg/flasky/issues/367 | [
"question"
] | trannhuphuan | 2 |
miguelgrinberg/Flask-Migrate | flask | 24 | Change from Integer to String not detected. | I have a column:
`code = db.Column(db.Integer(unsigned=True,zerofill=True))`
And when I change that column from `Integer` to `String` like so:
`code = db.Column(db.String())`
And I run:
`python manage.py db migrate`
the migration does not detect any changes.
Should it not detect at least a change in the column data type?
Migration looks like this after:
```
def upgrade():
### commands auto generated by Alembic - please adjust! ###
pass
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
pass
### end Alembic commands ###
```
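For context (a guess at the cause, not a confirmed diagnosis): Alembic's autogenerate historically did not compare column types unless asked to. With Flask-Migrate the switch can be passed at init time; `compare_type` is the Alembic option name, but whether your installed versions accept it here is an assumption:

```python
# Config fragment, not runnable on its own: `app` and `db` are the usual
# Flask app and SQLAlchemy instance from the rest of the project.
from flask_migrate import Migrate

# compare_type=True asks Alembic's autogenerate to also diff column types,
# so an Integer -> String change shows up as op.alter_column() in upgrade()
migrate = Migrate(app, db, compare_type=True)
```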
| closed | 2014-04-29T00:56:38Z | 2019-03-06T23:54:33Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/24 | [
"question"
] | playpianolikewoah | 13 |
collerek/ormar | sqlalchemy | 657 | Update documentation: wrong example | ```python
from fastapi import FastAPI
from sqlalchemy.ext.asyncio import create_async_engine
from .config import get_config
from .models.base import database, metadata
settings = get_config()
app = FastAPI()
engine = create_async_engine(settings.database_url, echo=True)
@app.on_event("startup")
async def startup() -> None:
# settings.database_url is 'postgresql+asyncpg://...'
# engine = sqlalchemy.create_engine(settings.database_url)
# metadata.create_all(engine)
# File "/home/sergey/.cache/pypoetry/virtualenvs/baza-express-api-dK1IMa-Y-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 597, in connect
# return self.dbapi.connect(*cargs, **cparams)
# File "/home/sergey/.cache/pypoetry/virtualenvs/baza-express-api-dK1IMa-Y-py3.10/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 758, in connect
# await_only(self.asyncpg.connect(*arg, **kw)),
# File "/home/sergey/.cache/pypoetry/virtualenvs/baza-express-api-dK1IMa-Y-py3.10/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 67, in await_only
# raise exc.MissingGreenlet(
# sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/14/xd2s)
# https://docs.sqlalchemy.org/en/14/orm/extensions/asyncio.html
async with engine.begin() as conn:
# await conn.run_sync(metadata.drop_all)
await conn.run_sync(metadata.create_all)
if not database.is_connected:
await database.connect()
@app.on_event("shutdown")
async def shutdown() -> None:
if database.is_connected:
await database.disconnect()
await engine.dispose()
```
| closed | 2022-05-07T11:02:50Z | 2022-07-19T15:16:16Z | https://github.com/collerek/ormar/issues/657 | [
"bug"
] | s3rgeym | 2 |
DistrictDataLabs/yellowbrick | scikit-learn | 435 | Improve RankD Tests | Right now the Rank1D and Rank2D tests are very basic and can be improved using the new assert image similarity and pytest testing framework mechanisms.
### Proposal/Issue
The test matrix should cover the following:
- [x] replace `make_regression` dataset with `load_energy` and update those tests
- [x] use `load_occupancy` for classification tests
- [x] algorithms: pearson, covariance, spearman (2D) and shaprio (1D)
- [x] 1D case - horizontal orientation
- [x] 1D case - vertical orientation
- [x] Test that an exception is raised for unrecognized algorithms
- [x] test the underlying rank matrix is correct
Unfortunately, we can't use `pytest.mark.parametrize` with visual test cases (yet), so we'll have to make individual tests for each.
### Code Snippet
Tests will look approximately like:
```python
def test_rank2d_bad_algorithm(self):
""""
Assert that unknown algorithms raise exception
""""
with pytest.raises(YellowbrickValueError, match="unknown algorithm"):
# do the thing
def test_rank2d_pearson_regression(self):
""""
Test Rank2D images similar with pearson scores on regression dataset
""""
data = load_energy(return_dataset=True)
oz = Rank2D(algorithm='pearson')
oz.fit_transform(data)
npt.assert_array_equal(oz.ranks_, [[]])
self.assert_images_similar(oz, tol=0.25)
```
### Background
See #68 and #429
| closed | 2018-05-16T14:27:38Z | 2020-06-21T03:25:32Z | https://github.com/DistrictDataLabs/yellowbrick/issues/435 | [
"type: technical debt",
"level: novice"
] | bbengfort | 3 |
modelscope/data-juicer | data-visualization | 146 | [Bug]: “The features can't be aligned because the key __dj_stats__ of ...” for line_length related OPs | ### Before Reporting 报告之前
- [X] I have pulled the latest code of main branch to run again and the bug still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template) 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引,并且在安装过程中没有错误发生。(否则,我们建议您使用Question模板向我们进行提问)
### Search before reporting 先搜索,再报告
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的bug报告。
### OS 系统
Ubuntu
### Installation Method 安装方式
from source
### Data-Juicer Version Data-Juicer版本
v0.1.2
### Python Version Python版本
3.8
### Describe the bug 描述这个bug
When some length-related OPs are used in the Analyzer to process a dataset containing zero-length samples, there can be unaligned features, such as `'max_line_length': Value(dtype='int64', id=None` vs. `'max_line_length': Value(dtype='float64'`
### To Reproduce 如何复现
for pile dataset, raw chunk 24, running
`export HF_DATASETS_CACHE=/mnt/data/.cache/huggingface/datasets
python /root/data-juicer/tools/hpo/execute_hpo_3sigma.py --config dj_refine_recipe_base.yaml --path_3sigma_recipe dj_refined_3_sigma/dj_refine_recipe_chunk_24.yaml --dataset_path raw/24.jsonl.zst --export_path dj_refined_3_sigma/refined_24.jsonl.zst.jsonl`
### Configs 配置信息
_No response_
### Logs 报错日志
_No response_
### Screenshots 截图

### Additional 额外信息
_No response_ | closed | 2023-12-20T12:22:11Z | 2023-12-21T06:35:51Z | https://github.com/modelscope/data-juicer/issues/146 | [
"bug"
] | yxdyc | 0 |
microsoft/qlib | deep-learning | 1,450 | 配置好 qlib-server服务器,使用 Mac 系统出现 NFS不能正常挂载的故障 | ## 🐛 Bug Description
## To Reproduce
Steps to reproduce the behavior:
1. 在Mac 系统无法正常完成 NFS 数据挂载
配置文件:
```yaml
calendar_provider:
class: LocalCalendarProvider
kwargs:
remote: True
feature_provider:
class: LocalFeatureProvider
kwargs:
remote: True
expression_provider: LocalExpressionProvider
instrument_provider: ClientInstrumentProvider
dataset_provider: ClientDatasetProvider
provider: ClientProvider
expression_cache: null
dataset_cache: null
calendar_cache: null
provider_uri: 10.71.117.61:/
mount_path: ./cn_data/
auto_mount: True
flask_server: 10.71.117.61
flask_port: 9710
```
Client configuration:
```python
import qlib
from qlib.data import D
qlib.init_from_yaml_conf('./config.yaml')
fox = D.calendar(start_time='2010-01-01', end_time='2017-12-31', freq='day')[:2]
print(fox)
```
## Expected Behavior
The generated command is shown below; on macOS this command behaves unexpectedly:
`sudo mount.nfs 10.71.117.61:/ cn_data:\\'`
## Screenshot
Changing `win` to `windows` resolved the problem:

## Environment
- Qlib version: '0.9.1.99'
- Python version: Python 3.10.9
- OS (`Windows`, `Linux`, `MacOS`): Mac OS
- Commit number (optional, please provide it if you are using the dev version):
-
```python
# qlib/config.py : line 342
if "win" in platform.system().lower():  # bug: on macOS platform.system() is "Darwin", which contains "win"
# fix:
if "windows" in platform.system().lower():
```
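A quick check confirms the root cause: `platform.system()` returns `"Darwin"` on macOS, and the substring test `"win" in ...` matches it, so macOS is mistakenly treated as Windows (a minimal sketch, not qlib code):

```python
import platform  # the module qlib/config.py uses for OS detection

# platform.system() returns "Windows", "Darwin", or "Linux" depending on the OS.
# "Darwin" contains the substring "win", so the original check takes the
# Windows-specific mount path on macOS; checking for "windows" does not.
for system in ("Windows", "Darwin", "Linux"):
    buggy = "win" in system.lower()
    fixed = "windows" in system.lower()
    print(f"{system}: buggy={buggy}, fixed={fixed}")
# Windows: buggy=True, fixed=True
# Darwin: buggy=True, fixed=False   <- the macOS misdetection
# Linux: buggy=False, fixed=False
```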
| open | 2023-02-24T11:49:30Z | 2023-03-14T15:53:46Z | https://github.com/microsoft/qlib/issues/1450 | [
"bug"
] | markthink | 2 |
roboflow/supervision | pytorch | 1,072 | Add support for limiting the number of instances in a video | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
Currently, the Supervision library provides tools for tracking objects in a video, but there is no built-in support for limiting the number of instances. This would be useful when we want to cap the number of detections for a particular class. For example, in a football video there are at most 22 players on the field at any given time, so a tracker id of 23 should never appear.
Because players collide in the video, mistracking happens. If a tracker id 23 appears while tracker id 10 disappears, and the positions of the two players are relatively close, it is safe to say that player 10 became player 23, so tracker id 23 should be updated back to 10.
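The remapping described above could be sketched as follows; the helper and its thresholds are hypothetical, not part of the supervision API:

```python
import math

def remap_new_id(new_id, new_pos, disappeared, max_instances, max_dist=50.0):
    """If a brand-new tracker id would exceed the instance cap, reassign it to
    the nearest recently-disappeared id (hypothetical helper, not supervision API)."""
    if new_id <= max_instances or not disappeared:
        return new_id
    # Find the disappeared track closest to where the new track appeared.
    old_id, old_pos = min(disappeared.items(), key=lambda kv: math.dist(kv[1], new_pos))
    if math.dist(old_pos, new_pos) <= max_dist:
        return old_id   # treat the "new" track as the old player re-acquired
    return new_id

disappeared = {10: (5.0, 5.0)}  # tracker 10 vanished at this position
print(remap_new_id(23, (6.0, 6.0), disappeared, max_instances=22))      # 10
print(remap_new_id(23, (900.0, 900.0), disappeared, max_instances=22))  # 23
```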
### Use case
This is particularly useful for tracking players on a sports field.
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-03-29T04:02:22Z | 2024-04-05T10:34:47Z | https://github.com/roboflow/supervision/issues/1072 | [
"enhancement"
] | westlinkin | 3 |
lanpa/tensorboardX | numpy | 588 | The ability to add epoch and batch number in global step | It would be great if tensorboardX had this option. It greatly helps observe mean, std and generally improves productivity.
This could be followed by an update to the data visualization, wherein clicking an epoch expands it to show the within-epoch batch updates. | closed | 2020-06-02T14:44:12Z | 2020-09-11T12:28:05Z | https://github.com/lanpa/tensorboardX/issues/588 | [
"tensorboard_frontend"
] | RSKothari | 2 |
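Until something like this exists, a common workaround for the request above is to fold epoch and batch into one monotonically increasing global step; the helper below is illustrative, not a tensorboardX API:

```python
def global_step(epoch: int, batch: int, batches_per_epoch: int) -> int:
    """Encode (epoch, batch) as one monotonically increasing step number."""
    return epoch * batches_per_epoch + batch

# Usage with the existing API (assuming a SummaryWriter named `writer`):
#   writer.add_scalar("loss", loss, global_step(epoch, batch, batches_per_epoch))

print(global_step(0, 0, 100))  # 0
print(global_step(2, 5, 100))  # 205
```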
jowilf/starlette-admin | sqlalchemy | 399 | Bug: Regression for `action_btn_class` | **Describe the bug**
We just noticed a regression from v12 in PR #348 ([361057](https://github.com/jowilf/starlette-admin/commit/361057f81a94e9d2357600d8ca90c27365f20115#diff-b98e5c56b76f6d979c93841b6b3c5b91238a4bf09f22a9ea656d54c2cc1b3293L10)), where the `action_btn_class` field is not included in the primary `def action()`.
Not sure how I missed this in my review. Will be more thorough next time!
**To Reproduce**
- Create a View with an `action`
- Pass `action_btn_class='btn-fancy'` to action decorator
- Server won't start because `@action` receives an unknown parameter
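The failure mode can be illustrated with a simplified stand-in for the decorator; the reduced signature below is illustrative, not the actual starlette-admin code:

```python
# Simplified stand-in for the @action decorator after the regression:
# action_btn_class is missing from the signature, so passing it fails at import time.
def action(name, text=None, confirmation=None):  # hypothetical reduced signature
    def wrap(fn):
        return fn
    return wrap

try:
    @action("export", text="Export", action_btn_class="btn-fancy")
    def export_rows():
        pass
except TypeError as exc:
    print(exc)  # action() got an unexpected keyword argument 'action_btn_class'
```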
**Environment (please complete the following information):**
- Starlette-Admin version: v12.0 +
- ORM/ODMs: SQLAlchemy
**Additional context**
I have a PR ready; just wanted to track the issue.
"bug"
] | mrharpo | 0 |
apache/airflow | data-science | 48,076 | Add support for active session timeout in Airflow Web UI | ### Description
Currently, Airflow only supports an inactivity-based session timeout via the `session_lifetime_minutes` config option. This handles session expiration after a period of inactivity, which is great, but it doesn't cover cases where a session should expire regardless of activity (i.e., an active session timeout).
This is a common requirement in environments with stricter security/compliance policies (e.g., sessions must expire after X hours, even if the user is active).
### Use case/motivation
Introduce a new configuration option (e.g., `session_max_lifetime_minutes`) that defines the maximum duration a session can remain valid from the time of login, regardless of user activity.
This feature will help admins better enforce time-based access control.
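The proposed check is a simple comparison against the login timestamp; the option name and helper below are hypothetical, not existing Airflow configuration:

```python
from datetime import datetime, timedelta, timezone

def session_expired(login_time: datetime, now: datetime, max_lifetime_minutes: int) -> bool:
    """True once the session exceeds its absolute lifetime, regardless of activity."""
    return now - login_time > timedelta(minutes=max_lifetime_minutes)

login = datetime(2025, 3, 21, 9, 0, tzinfo=timezone.utc)
print(session_expired(login, login + timedelta(hours=7), max_lifetime_minutes=480))  # False
print(session_expired(login, login + timedelta(hours=9), max_lifetime_minutes=480))  # True
```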
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-21T18:49:07Z | 2025-03-22T21:09:21Z | https://github.com/apache/airflow/issues/48076 | [
"kind:feature",
"area:UI",
"needs-triage"
] | bmoon4 | 2 |
gevent/gevent | asyncio | 1,522 | High-level gevent configuration/comparison overview | I am starting a new project in gevent and in general docs have been great for explaining what gevent actually does and how does it work.
What I found a little harder is comparing the different possible configurations/usages and the advantages/disadvantages of each approach. I think it would be good to have an FAQ where a user who has a good idea of how concurrency works can go a little more in-depth, and hopefully avoid learning about 10 years of Python async/stackless history and its competing implementations.
So to start:
1. Gevent supports both `PyPy` and `CPython`. Are there any advantages to using `PyPy` besides speed? `PyPy` has a native stackless implementation. Will we get improved stack traces / debugging / profiling / stability out of it? Or is there basically no difference compared to the CPython greenlet library?
2. In the event loop section, there is a line about `libev-cffi` being easier to debug. What does that mean in practice? Will we get richer traceback compared to `libev` even in `CPython`?
3. Connecting to point 2: do exceptions and tracebacks work as expected in a greenlet environment? Will we get proper tracebacks with local variables and other goodies (obviously profiling and tracing are a different story)? Are there any gotchas we should be careful about?
4. With asyncio now being standardized, what is the relationship between greenlets and asyncio? For example, in the future, will you be able to use async libraries in a 'sync' way (no await) in a gevent codebase? Let's say somebody makes an async version of a Postgres client for asyncio (not an existing green version). Is there a chance to just plug and play it into gevent-based code? Or is that just not possible?
Thanks! :) I would like to help with FAQ but since I am the one asking questions I probably am not the best person to answer them.
If anyone has more of these 'higher-level' questions, we could use this thread to compile them and perhaps create an FAQ on the website. | open | 2020-01-31T10:15:30Z | 2020-01-31T10:22:40Z | https://github.com/gevent/gevent/issues/1522 | [] | Ryner01 | 0 |
nolar/kopf | asyncio | 371 | [PR] Deprecate old K8s versions (1.12) | > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2020-06-08 19:19:30+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/371
>
## What do these changes do?
Drop old K8s versions (up to 1.12) from testing.
## Description
Minikube does not support K8s 1.12 anymore, so the builds fail:
* https://travis-ci.org/github/zalando-incubator/kopf/jobs/696167018
This does not break the code itself, and the existing releases are fine, but all new PRs will fail at this test step.
So, we can drop this old version of K8s. Instead, list 1.17 explicitly, since the "latest" now refers to 1.18.
## Issues/PRs
> Issues: #13
## Type of changes
- Mostly CI/CD automation, contribution experience
## Checklist
- [x] The code addresses only the mentioned problem, and this problem only
- [x] I think the code is well written
- [x] Unit tests for the changes exist
- [x] Documentation reflects the changes
- [x] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-08-20 20:14:02+00:00_
>
Closed in favor of https://github.com/nolar/kopf/pull/501 | closed | 2020-08-18T20:04:55Z | 2020-09-09T22:03:21Z | https://github.com/nolar/kopf/issues/371 | [
"archive",
"automation"
] | kopf-archiver[bot] | 1 |
deeppavlov/DeepPavlov | nlp | 1,690 | support latest numpy | Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first.
**What problem are we trying to solve?**:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
deeppavlov 1.6.0 requires numpy<1.24, but you have numpy 1.26.4 which is incompatible.
```
**How can we solve it?**:
```
Relax the `numpy<1.24` pin so deeppavlov can be installed alongside the latest numpy releases.
```
**Are there other issues that block this solution?**:
```
Other packages require a newer numpy, which triggers this conflict and blocks installation.
```
| open | 2024-06-02T05:29:42Z | 2025-03-10T14:45:31Z | https://github.com/deeppavlov/DeepPavlov/issues/1690 | [
"enhancement"
] | mmadhuhasa | 1 |
chaoss/augur | data-visualization | 2,020 | Broken image references in Docs | **Description:**
Broken image references in the file - [Link](https://github.com/chaoss/augur/blob/main/docs/source/getting-started/Welcome.rst)
| closed | 2022-10-31T04:10:51Z | 2022-11-21T13:07:35Z | https://github.com/chaoss/augur/issues/2020 | [] | meetagrawal09 | 0 |
babysor/MockingBird | deep-learning | 371 | Cannot run pre.py, what should I do? | D:\MockingBird-main\MockingBird-main>python pre.py D:\shujuchuli
Using data from:
D:\shujuchuli\aidatatang_200zh\corpus\train
Traceback (most recent call last):
File "pre.py", line 74, in <module>
preprocess_dataset(**vars(args))
File "D:\MockingBird-main\MockingBird-main\synthesizer\preprocess.py", line 64
, in preprocess_dataset
for v in dict_transcript:
File "D:\python\lib\codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb6 in position 10: invalid
start byte | closed | 2022-02-07T15:30:47Z | 2022-02-10T12:52:50Z | https://github.com/babysor/MockingBird/issues/371 | [] | 183954477 | 1 |
NullArray/AutoSploit | automation | 792 | Divided by zero exception62 | Error: Attempted to divide by zero.62 | closed | 2019-04-19T16:00:51Z | 2019-04-19T16:37:43Z | https://github.com/NullArray/AutoSploit/issues/792 | [] | AutosploitReporter | 0 |