| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ResidentMario/missingno | pandas | 30 | KeyError for heatmap | Hi, I get a KeyError from the **heatmap** function.
Example: [https://github.com/samuelbr/missingno/blob/master/missingno%2Bissue.ipynb](https://github.com/samuelbr/missingno/blob/master/missingno%2Bissue.ipynb)
Samuel | closed | 2017-06-23T13:22:13Z | 2017-06-26T19:42:08Z | https://github.com/ResidentMario/missingno/issues/30 | [] | samuelbr | 3 |
praw-dev/praw | api | 1,586 | Feature request: Override stream_generator limit with kwarg. | It would be nice to be able to override the stream_generator limit, which controls how many posts you get back from the selected stream. Currently it is always set to 100.
I think a reasonable solution would be to add a limit kwarg to the "stream.submissions()" function. For this to work, though, some kind of workaround is needed for the limit being automatically set in the stream_generator() function.
My work-around was to replace [util.py Lines 173-175](https://github.com/praw-dev/praw/blob/a1f7e015a8a80c08ef70069d341e45bd74f9145e/praw/models/util.py#L173)
```python
limit = 100
if before_attribute is None:
    limit -= without_before_counter
```
with:
```python
if "limit" not in function_kwargs:
    function_kwargs["limit"] = 100
if before_attribute is None:
    function_kwargs["limit"] -= without_before_counter
```
and replace [util.py Line 179](https://github.com/praw-dev/praw/blob/a1f7e015a8a80c08ef70069d341e45bd74f9145e/praw/models/util.py#L179)
```python
for item in reversed(list(function(limit=limit, **function_kwargs))):
```
with:
```python
for item in reversed(list(function(**function_kwargs))):
```
This seems to work perfectly in my context, but I am not sure that is the cleanest method for doing so.
| closed | 2020-11-23T01:28:34Z | 2020-11-23T05:23:36Z | https://github.com/praw-dev/praw/issues/1586 | [] | JonathanSourdough | 2 |
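To see the proposed behavior end to end, here is a minimal, self-contained sketch of the patched logic. This is not praw's actual code: `patched_stream_logic` and `fake_listing` are stand-ins, and the `before_attribute`/`without_before_counter` handling is simplified to plain parameters.

```python
def patched_stream_logic(function, before_attribute=None,
                         without_before_counter=0, **function_kwargs):
    # Sketch of the proposed change: only default the limit when the
    # caller did not pass one, then forward it via function_kwargs.
    if "limit" not in function_kwargs:
        function_kwargs["limit"] = 100
    if before_attribute is None:
        function_kwargs["limit"] -= without_before_counter
    for item in reversed(list(function(**function_kwargs))):
        yield item


def fake_listing(limit=100):
    # stand-in for a praw listing endpoint; returns newest-first items
    return list(range(limit))


print(list(patched_stream_logic(fake_listing, limit=3)))  # -> [2, 1, 0]
```

With a caller-supplied `limit=3` only three items are requested, while the old behavior (no `limit` kwarg) still defaults to 100.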
littlecodersh/ItChat | api | 675 | Blocking problem with itchat.run() | Before submitting, please make sure you have checked the following!
- [ ] You can log in to your WeChat account in a browser, but cannot log in with `itchat`
- [ ] I have read the [documentation][document] and followed its instructions
- [ ] My problem has not already been reported in [issues][issues]; otherwise, please report it under the existing issue
- [ ] This problem is really about `itchat`, not some other project.
- [ ] If your problem is about stability, consider trying the [itchatmp][itchatmp] project, which has extremely low requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
[paste the complete log here]
```
Your itchat version: `[fill in the version number here]`. (It can be obtained via `python -c "import itchat;print(itchat.__version__)"`)
Any other content or a more detailed description of the problem can be added below:
> [your content]
My itchat version is the latest. I want to implement a scheduled reminder: at a specified time, it sends a reminder message to the file transfer helper (filehelper). But looking at the code of itchat.run(), run actually loops over and executes the functions in a list, and register adds a function to that list, so it seems handlers can only be triggered after a message is received. Is there a good solution for this? | closed | 2018-06-03T15:37:46Z | 2018-12-21T10:29:09Z | https://github.com/littlecodersh/ItChat/issues/675 | [] | thygyc | 6 |
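One common workaround for the scheduled-reminder question above: `itchat.run()` blocks, but a timer thread started beforehand fires independently. The scheduling part below is plain standard library (the helper name `schedule_reminder` is my own); the itchat-specific call, `itchat.send(msg, toUserName="filehelper")`, is shown only in a comment.

```python
import threading


def schedule_reminder(delay_seconds, text, send):
    """Call send(text) after delay_seconds without blocking the caller."""
    timer = threading.Timer(delay_seconds, send, args=(text,))
    timer.daemon = True  # don't keep the process alive just for the timer
    timer.start()
    return timer

# With itchat, after itchat.auto_login(), `send` would be something like:
#     lambda msg: itchat.send(msg, toUserName="filehelper")
# Start the timer first, then call the blocking itchat.run().
```

For recurring reminders, the same idea works with a loop inside a `threading.Thread`, or a scheduling library, since `run()` only dispatches incoming messages.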
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 150 | Array support | Hi and thanks for this cool package!
Now for the issue: it seems when selecting an Array column, we get it back as a string.
I saw in the code that arrays are converted to string.
Can an array be represented as a Python list, similar to how it is done in Postgres?
Thanks!
Roy
| open | 2021-10-19T20:18:56Z | 2023-03-07T14:18:59Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/150 | [
"feature request"
] | royxact | 2 |
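Until native Array support lands, one stopgap on the caller side is to parse the stringified array back into a Python list. This assumes the driver returns arrays as Python-literal-style strings such as "[1, 2, 3]", which holds for numeric arrays; `parse_ch_array` is a hypothetical helper, not part of clickhouse-sqlalchemy.

```python
import ast


def parse_ch_array(value):
    # Parse a stringified ClickHouse array back into a Python list;
    # pass non-strings (already-converted values) through unchanged.
    if isinstance(value, str):
        return ast.literal_eval(value)
    return value


print(parse_ch_array("[1, 2, 3]"))  # -> [1, 2, 3]
```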
syrupy-project/syrupy | pytest | 361 | Dependabot couldn't authenticate with https://pypi.python.org/simple/ | Dependabot couldn't authenticate with https://pypi.python.org/simple/.
You can provide authentication details in your [Dependabot dashboard](https://app.dependabot.com/accounts/tophat) by clicking into the account menu (in the top right) and selecting 'Config variables'.
[View the update logs](https://app.dependabot.com/accounts/tophat/update-logs/48487395). | closed | 2020-09-24T14:29:28Z | 2020-09-25T13:51:37Z | https://github.com/syrupy-project/syrupy/issues/361 | [] | dependabot-preview[bot] | 1 |
iterative/dvc | data-science | 10,246 | Azure Blob Storage: DVC uses (wrong) environment variable instead of SAS token | # Bug Report
## Azure Blob Storage: DVC uses (wrong) environment variable instead of SAS token
## Description
If my environment variables contain a variable named `AZURE_STORAGE_ACCOUNT_KEY` (not related to the DVC Azure Blob Storage), `dvc pull` fails with the message `"Authentication error"` or `"ERROR: unexpected error - Incorrect padding: Incorrect padding"` (depending on the CLI used: PowerShell, Bash, ...)
This happens even if there is a `config.local` file containing `sas_token = <valid SAS token for DVC Azure Blob Storage>`.
This is not the expected behavior for 2 reasons:
1. The DVC docs do not mention any environment variable named AZURE_STORAGE_ACCOUNT_KEY that DVC would use. The closest names are:
- AZURE_STORAGE_CONNECTION_STRING
- AZURE_STORAGE_ACCOUNT
- AZURE_STORAGE_KEY
- AZURE_STORAGE_SAS_TOKEN
2. There is a valid `config.local` file that should be used first, according to [documentation ](https://dvc.org/doc/user-guide/data-management/remote-storage/azure-blob-storage#microsoft-azure-blob-storage):
> For custom authentication, you can set the following config params with [dvc remote modify --local](https://dvc.org/doc/command-reference/remote/modify#--local), use [environment variables](https://dvc.org/doc/user-guide/data-management/remote-storage/azure-blob-storage#authenticate-with-environment-variables), or an [Azure CLI config file](https://dvc.org/doc/user-guide/data-management/remote-storage/azure-blob-storage#authenticate-with-an-azure-cli-config-file) **(in that order).**
### Reproduce
(0. ``az logout`` to prevent any interference)
1. ``dvc remote modify --local myremote sas_token 'my_valid_sas_token'``
2. ``$Env:AZURE_STORAGE_ACCOUNT_KEY='hello' ``
3. ``dvc pull``
### Expected
Data pulled from Azure Blob Storage, thanks to point 1 above. Point 2 should not interfere.
### Environment information
```
DVC version: 2.45.1 (pip) // same bug with 3.39, I have downgraded in case it would solve the bug
-------------------------
Platform: Python 3.10.7 on Windows-10-10.0.19045-SP0
Subprojects:
dvc_data = 0.40.3
dvc_objects = 0.19.3
dvc_render = 1.0.0
dvc_task = 0.1.11
dvclive = 2.2.0
scmrepo = 0.1.11
Supports:
azure (adlfs = 2023.10.0, knack = 0.11.0, azure-identity = 1.14.1),
http (aiohttp = 3.8.6, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.6, aiohttp-retry = 2.8.3)
Cache types: hardlink
Cache directory: NTFS on C:\
Caches: local
Remotes: azure
Workspace directory: NTFS on C:\
Repo: dvc, git
```
| closed | 2024-01-18T15:05:07Z | 2024-01-19T08:57:25Z | https://github.com/iterative/dvc/issues/10246 | [] | SoCrespo | 3 |
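The precedence the report cites (remote config first, then documented environment variables) can be sketched as a tiny resolver. This is purely illustrative and not DVC's actual code (names are mine), but it shows the expected behavior: a valid `sas_token` from `config.local` wins, and `AZURE_STORAGE_ACCOUNT_KEY`, which is absent from the documented list, is never consulted.

```python
import os

DOCUMENTED_AZURE_VARS = (
    "AZURE_STORAGE_CONNECTION_STRING",
    "AZURE_STORAGE_ACCOUNT",
    "AZURE_STORAGE_KEY",
    "AZURE_STORAGE_SAS_TOKEN",
)


def resolve_azure_auth(remote_config, env=None):
    # Remote config (e.g. sas_token from config.local) should win;
    # only then should documented environment variables be consulted.
    env = os.environ if env is None else env
    if remote_config.get("sas_token"):
        return ("sas_token", remote_config["sas_token"])
    for var in DOCUMENTED_AZURE_VARS:
        if env.get(var):
            return (var, env[var])
    return (None, None)
```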
netbox-community/netbox | django | 18,527 | Allow virtual circuit terminations to have custom names/identifiers | ### NetBox version
v4.2.2
### Feature type
Data model extension
### Proposed functionality
Currently, when creating a virtual circuit termination, it is given a default name of "{Parent Virtual Circuit Name}: Peer termination", which is the same for all terminations of that specific virtual circuit. I propose the ability to specify a custom name/ID for each termination to quickly differentiate them.
### Use case
I am currently trying to add 2 custom fields for each virtual circuit, UNI A and UNI Z, which are objects referencing virtual circuit terminations. I would like to assign IDs to each of these terminations so you can quickly see the termination ID for UNI A and UNI Z within the custom fields of the virtual circuit. The current naming convention doesn't give any information on the specific termination.
### Database changes
_No response_
### External dependencies
_No response_ | open | 2025-01-29T17:31:05Z | 2025-02-27T18:33:12Z | https://github.com/netbox-community/netbox/issues/18527 | [
"type: feature",
"needs milestone",
"breaking change",
"status: backlog",
"complexity: low"
] | ebrletic | 2 |
nerfstudio-project/nerfstudio | computer-vision | 2,644 | Colab example script doesn't work ( Unable to Register cuDNN, cuFFT, and cuBLAS Factories) | Description:
I am encountering an issue while using the Google Colab nerfstudio example. The error message indicates problems with registering cuDNN, cuFFT, and cuBLAS factories, resulting in a subsequent TensorRT warning. Here is the error message:
> Copy code
> 2023-12-02 21:39:02.025230: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
> 2023-12-02 21:39:02.025298: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
> 2023-12-02 21:39:02.025339: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
> 2023-12-02 21:39:03.597118: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
> Traceback (most recent call last):
> File "/usr/local/bin/ns-train", line 5, in <module>
Thank you and best regards
briefkasten | open | 2023-12-02T21:48:27Z | 2023-12-02T21:48:27Z | https://github.com/nerfstudio-project/nerfstudio/issues/2644 | [] | briefkasten1988 | 0 |
pydantic/pydantic | pydantic | 11,485 | Make `ValidationInfo` generic | ### Initial Checks
- [x] I have searched Google & GitHub for similar requests and couldn't find anything
- [x] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
Make `ValidationInfo` generic so that users can type hint the `context` attribute. This would be defined with a type var default to avoid static type checking breaking changes:
```python
class ValidationInfo[Ctx = Any]:
context: Ctx
```
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [x] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc. | open | 2025-02-25T10:37:32Z | 2025-02-25T10:37:32Z | https://github.com/pydantic/pydantic/issues/11485 | [
"feature request"
] | Viicos | 0 |
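A runtime sketch of what the request amounts to, written with pre-PEP 696 `typing.Generic` so it runs on current Pythons. This is an illustration only, not pydantic's real `ValidationInfo`; the class name is deliberately different.

```python
from typing import Generic, TypeVar

Ctx = TypeVar("Ctx")


class ValidationInfoSketch(Generic[Ctx]):
    """Illustrative stand-in: lets users annotate the context type,
    e.g. ValidationInfoSketch[dict] or ValidationInfoSketch[MyCtx]."""

    def __init__(self, context: Ctx) -> None:
        self.context = context


info: "ValidationInfoSketch[dict]" = ValidationInfoSketch({"locale": "en"})
print(info.context["locale"])  # -> en
```

With a type var default (as proposed), existing un-parameterized uses of `ValidationInfo` would keep type-checking as `Any`, so the change stays backwards compatible for static analysis.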
modin-project/modin | pandas | 7,102 | Remove enable_api_only mode in modin logging |
In modin's LogMode, the enable_api_only and enable options look confusing. We should keep just the enable option, with which memory.log would also be generated.
_Originally posted by @YarShev in https://github.com/modin-project/modin/issues/7092#issuecomment-2003894603_ | closed | 2024-03-18T15:58:19Z | 2024-04-17T12:54:11Z | https://github.com/modin-project/modin/issues/7102 | [] | arunjose696 | 1 |
google-research/bert | tensorflow | 743 | multi-gpu horovod | With multi-GPU training via horovod, is the examples/sec value per GPU or for all GPUs? I found that the multi-GPU value is lower than the single-GPU one. Thanks | open | 2019-07-03T10:19:52Z | 2019-07-03T12:14:31Z | https://github.com/google-research/bert/issues/743 | [] | tianuna | 2 |
vaexio/vaex | data-science | 2,396 | [BUG-REPORT] vaex save error | When I use Python multiprocessing together with vaex to save text embeddings, everything is normal in the early stage of the run, but after a while the saved hdf5 becomes like this:
<img width="697" alt="image" src="https://github.com/vaexio/vaex/assets/87348960/293c3ca9-dc6a-4891-a47b-3ca498b53f42">
Everything is lost. Here is my code:
```python
import gzip
import hashlib
import json
import logging
import os
import warnings
from multiprocessing import Pool

import numpy as np
import vaex
from sentence_transformers import SentenceTransformer

warnings.filterwarnings("ignore")

matching_files = ["x1.json.gz", "x2.json.gz", "x3.json.gz", ...]
print("TOTAL # JOBS:", len(matching_files))
print(matching_files)


def save_embedding(file_path):
    cuda_num = int(file_path.split(".")[0][-4:]) % 8
    save_name = file_path.split("/")[-1].split(".")[0]
    save_path = "xxx"
    # log
    logging.basicConfig(filename=f"logs/{save_name}.log", level=logging.INFO,
                        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    logger = logging.getLogger(f"{save_name}_logger")
    print("file_path is: ", file_path, "device is: ", cuda_num, "save name is: ", save_name)
    # process
    model = SentenceTransformer("infgrad/stella-large-zh", device=f"cuda:{cuda_num}")
    all_md5 = []
    all_text = []
    content_embedding = []
    rdy_num = 0
    total_num = 0
    with gzip.open(file_path, "rt") as f:
        for line in f:
            total_num += 1
            json_data = json.loads(line.strip())
            content = json_data["content"]
            embedding = model.encode(
                sentences=content,
                batch_size=1,
                show_progress_bar=False,
                device=f"cuda:{cuda_num}",
                normalize_embeddings=True)
            text_md5 = hashlib.md5(content.encode(encoding="utf-8")).hexdigest()
            all_text.append(content)
            all_md5.append(text_md5)
            content_embedding.append(embedding)
            rdy_num += 1
            # save memory
            if total_num % 10000 == 0:
                print(f"{save_name}*** rdy_num: {rdy_num}, total_num: {total_num}")
                logger.info(
                    f"rdy_num: {rdy_num}, total_num: {total_num}, finish: {rdy_num / total_num * 100}%")
                all_text = np.array(all_text)
                all_md5 = np.array(all_md5)
                content_embedding = np.array(content_embedding)
                df = vaex.from_arrays(text=all_text, md5=all_md5, content_embedding=content_embedding)
                if os.path.exists(save_path):
                    old_df = vaex.open(save_path)
                    new_df = vaex.concat([old_df, df], resolver="strict")
                    # if not del old file, it only save new part, not all data, lost old part
                    os.remove(save_path)
                else:
                    new_df = df
                new_df.export_hdf5(save_path)
                all_md5 = []
                all_text = []
                content_embedding = []
    # last part
    if os.path.exists(save_path) and len(all_text) != 0 and len(all_md5) != 0:
        logger.info(
            f"rdy_num: {rdy_num}, total_num: {total_num}, finish: {rdy_num / total_num * 100}%")
        all_text = np.array(all_text)
        all_md5 = np.array(all_md5)
        content_embedding = np.array(content_embedding)
        df = vaex.from_arrays(text=all_text, md5=all_md5, content_embedding=content_embedding)
        old_df = vaex.open(save_path)
        new_df = vaex.concat([old_df, df], resolver="strict")
        os.remove(save_path)
        new_df.export_hdf5(save_path)


with Pool(8) as p:
    p.map(save_embedding, matching_files)
```
| open | 2023-10-16T02:41:08Z | 2023-10-16T02:46:33Z | https://github.com/vaexio/vaex/issues/2396 | [] | shawn0wang | 0 |
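A likely contributor to the data loss in the script above (an assumption, not a confirmed diagnosis): vaex dataframes are lazy and memory-map hdf5 files, so calling `os.remove(save_path)` while `old_df`/`new_df` may still reference that file, then exporting to the same path, can drop the old rows. A safer pattern is to export to a temporary file and atomically replace the target. The helper below is generic; with vaex you would pass something like `lambda d, p: d.export_hdf5(p)` as the writer.

```python
import os
import tempfile


def export_atomically(df, save_path, write):
    """Write df to a temp file in the target directory via write(df, path),
    then atomically replace save_path, so the old file is never removed
    while a lazy dataframe may still be memory-mapping it."""
    dir_name = os.path.dirname(save_path) or "."
    fd, tmp_path = tempfile.mkstemp(suffix=".hdf5", dir=dir_name)
    os.close(fd)
    try:
        write(df, tmp_path)
        os.replace(tmp_path, save_path)  # atomic rename on POSIX
    finally:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
```

The `os.replace` also means a crash mid-export leaves the previous file intact instead of a half-written one.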
ultralytics/ultralytics | machine-learning | 19,294 | Cannot access segment model in mobile hub | Hi
When I try to use my segment model I get the message that currently only detection models are supported.
Ok, but how does this fit with the remark
> @AstroCIEL Segment models also automatically are Detect models, they output both bounding boxes and segment masks.
_Originally posted by @glenn-jocher in [#14648](https://github.com/ultralytics/ultralytics/issues/14648#issuecomment-2247479874)_
Thanks for any clarification | open | 2025-02-18T09:51:38Z | 2025-02-20T23:51:27Z | https://github.com/ultralytics/ultralytics/issues/19294 | [
"question",
"HUB",
"segment"
] | metagic | 4 |
vipstone/faceai | tensorflow | 30 | list index out of range | Traceback (most recent call last):
File "face_recog.py", line 15, in <module>
face_recognition.load_image_file(path + "/" + fn))[0])
IndexError: list index out of range
[ WARN:0] terminating async callback
May I ask where the problem is here? With two images in the folder it runs fine, but after adding more images it errors out, and I just cannot find the problem. Why does list[0] raise an error? | closed | 2019-01-05T03:42:26Z | 2019-10-14T02:15:44Z | https://github.com/vipstone/faceai/issues/30 | [] | AIprogrammer | 2 |
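The traceback above is typically the symptom of `face_recognition.face_encodings` returning an empty list for an image in which no face is detected, so indexing `[0]` raises IndexError. A hedged sketch of a guard (the helper name is my own; pass it the list that `face_encodings` returned, plus the path for logging):

```python
def first_encoding_or_none(encodings, path):
    # face_encodings() returns an empty list when no face is found in
    # the image; skip such images instead of indexing [0] blindly.
    if not encodings:
        print(f"no face detected in {path}; skipping")
        return None
    return encodings[0]
```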
horovod/horovod | deep-learning | 3,690 | Update ML frameworks of released Docker images for v1.0 | Before releasing the next version, the versions in `docker/*/Dockerfile` should be updated to the latest versions. | closed | 2022-09-08T18:40:03Z | 2022-11-22T17:32:03Z | https://github.com/horovod/horovod/issues/3690 | [
"wontfix"
] | EnricoMi | 1 |
coqui-ai/TTS | pytorch | 3,572 | [Bug] The voice-cloned speaker continues with garbage after to-be-spoken text was finished or mid-sentence | ### Describe the bug
Sometimes the speech pauses, then the speaker continues, but what is spoken is neither the written text nor any language, though it is clearly the same speaker. Unless you want to create a horror movie with a disturbingly familiar voice, this behaviour is undesired. I think bark has the same issue.
### To Reproduce
```
device = "cuda" if torch.cuda.is_available() else "cpu"
was = 'tts_models/multilingual/multi-dataset/xtts_v2'
tts = TTS(model_name=was).to(device)
tts.tts_to_file(text="Some longer text", speaker_wav="some.wav", language="de", file_path="some-output.wav")
```
### Expected behavior
Only speak what's being written. | closed | 2024-02-11T14:24:17Z | 2025-01-03T08:49:01Z | https://github.com/coqui-ai/TTS/issues/3572 | [
"bug",
"wontfix"
] | Bardo-Konrad | 7 |
ivy-llc/ivy | numpy | 28,031 | Fix Ivy Failing Test: torch - creation.zeros_like | Todo list issue: #27501 | closed | 2024-01-24T18:32:36Z | 2024-01-25T09:07:38Z | https://github.com/ivy-llc/ivy/issues/28031 | [
"Sub Task"
] | vismaysur | 0 |
SYSTRAN/faster-whisper | deep-learning | 117 | Transcribe result extraction too time consuming | Hi, sorry for interrupting. And thanks for this great work!
I am trying to combine this faster whisper with Pyannote speaker diarization following the code here:
https://github.com/yinruiqing/pyannote-whisper/blob/main/pyannote_whisper/utils.py
According to my experiment,
It took 0.5442 second(s) to Whisper transcribe using this faster whisper.
It took 11.9825 second(s) to Pyannote diarize.
It took 19.0851 second(s) to merge results.
So in the merging part, most of the time is spent on the transcribe result extraction function:
It took 19.0565 second(s) to extract from whisper result.
It took 0.0284 second(s) to add spk info to text.
It took 0.0002 second(s) to merge sentences.
The transcribe extraction function (modified a little bit based on your format here) is creating a list of sentence timestamps and text contents:
```
def get_text_with_timestamp_fromsegments(transcribe_segments):
    timestamp_texts = []
    for item in transcribe_segments:
        start = item.start
        end = item.end
        text = item.text
        timestamp_texts.append((Segment(start, end), text))
    return timestamp_texts
```
So basically, this function takes most of the execution time. Wondering if you can give some suggestions on how to extract transcribe results faster here? Thank you! | closed | 2023-04-05T16:36:04Z | 2023-04-05T17:29:49Z | https://github.com/SYSTRAN/faster-whisper/issues/117 | [] | dustinjoe | 2 |
napari/napari | numpy | 7,085 | Split RGB loses affine | ### 🐛 Bug Report
When splitting an RGB layer via the context menu "Split RGB", three layers are created that have an identity transformation instead of the original layer's transformation.
This is a problem in large, spatial multi-layer datasets because the transformation is relevant. With that information lost, the split channels are no drop-in replacement for the multi-channel layer. Only an advanced user knows how to use the console to repair the layers (if the original layer was not deleted).
### 💡 Steps to Reproduce
1. Create an RGB layer and assign a non-trivial transformation (with translation, scale).
2. Context-click the layer in the layers list
3. From the context menu, select "Split RGB"
4. The split layers move to a different position than the original.
### 💡 Expected Behavior
It should preserve all layer properties, with just a single channel in data (for each original channel).
### 🌎 Environment
napari: 0.4.16.dev1413+ga4fa1fa0
Platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
System: Ubuntu 22.04.4 LTS
Python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Qt: 5.15.2
PyQt5: 5.15.10
NumPy: 2.0.0
SciPy: 1.14.0
Dask: 2024.7.0
VisPy: 0.14.3
magicgui: 0.8.3
superqt: 0.6.7
in-n-out: 0.2.1
app-model: 0.2.7
npe2: 0.7.6
OpenGL:
- GL version: 4.6.0 NVIDIA 470.256.02
- MAX_TEXTURE_SIZE: 16384
- GL_MAX_3D_TEXTURE_SIZE: 2048
Screens:
- screen 1: resolution 1920x1200, scale 1.0
- screen 2: resolution 1920x1200, scale 1.0
Optional:
- numba: 0.60.0
- triangle: 20230923
Settings path:
- $HOME/.config/napari/venv_bf275f8c69c1b33961c67ab74d465f516a14cf89/settings.yaml
Plugins:
- napari: 0.4.16.dev1413+ga4fa1fa0 (81 contributions)
- napari-console: 0.0.9 (0 contributions)
- napari-svg: 0.2.0 (2 contributions)
### 💡 Additional Context
Possibly https://github.com/napari/napari/issues/6588 is related to this. | open | 2024-07-08T20:41:40Z | 2024-07-16T07:54:07Z | https://github.com/napari/napari/issues/7085 | [
"bug"
] | aeisenbarth | 3 |
statsmodels/statsmodels | data-science | 8,543 | Please release 0.13.6 without explicit upper bound on scipy | When installing scipy and statsmodels (0.13.5), the installer chooses scipy 1.9.3, somewhat contradicting the upper bound < 1.9 you seem to impose somewhere. poetry (1.2.2) then breaks at `poetry export`. I see that in your live version you have already dropped the upper bound. Maybe you could just release it on PyPI. Many thanks
| closed | 2022-11-30T10:35:00Z | 2023-05-05T10:33:16Z | https://github.com/statsmodels/statsmodels/issues/8543 | [] | tschm | 15 |
ipython/ipython | data-science | 14,090 | PySide6.5.1 seems to have broken %gui qt | haven't been able to get to the bottom of it yet, but the 6.5.1 release of PySide6 (but NOT the 6.5.0 release) seems to have broken the %gui qt loop. With it installed, if you use `gui qt`, one seems to be unable to enter anything into the prompt anymore. (PyQt6 6.5.1 is fine) | open | 2023-05-30T17:56:51Z | 2023-05-30T17:57:34Z | https://github.com/ipython/ipython/issues/14090 | [] | tlambert03 | 0 |
sunscrapers/djoser | rest-api | 365 | WebAuthn in Djoser- W3C standard | Major browsers and platforms have built-in support for new Web standard for easy and secure logins via biometrics, mobile devices and FIDO security keys.
Now WebAuthn is W3C standard:
https://www.w3.org/2019/03/pressrelease-webauthn-rec.html | closed | 2019-03-11T09:17:27Z | 2022-01-12T23:14:42Z | https://github.com/sunscrapers/djoser/issues/365 | [] | Kub-AT | 7 |
drivendataorg/cookiecutter-data-science | data-science | 399 | Hook script failed: pre_gen_project hook script didn't exit successfully | I've installed `cookiecutter-data-science` in my global environment and tried running the `ccds` command, but it failed abruptly. I can't debug it. The error I'm getting is below:
```
ERROR: Stopping generation because pre_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)
```
The python version which I'm using is 3.9.0 and cookiecutter version is 2.6.0
I'm attaching the full set of commands below
```
PS C:\Users\HarshVardhan> cookiecutter --version
Cookiecutter 2.6.0 from c:\users\harshvardhan\appdata\local\programs\python\python39\lib\site-packages (Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)])
PS C:\Users\HarshVardhan> python --version
Python 3.9.0
PS C:\Users\HarshVardhan> ccds
You've downloaded C:\Users\HarshVardhan\.cookiecutters\cookiecutter-data-science before. Is it okay to delete and re-download it? [y/n] (y): y
project_name (project_name): demo
repo_name (demo): demo
module_name (demo): demo
author_name (Your name (or your organization/company/team)): demo
description (A short description of the project.): demo
python_version_number (3.10): 3.9.0
Select dataset_storage
1 - none
2 - azure
3 - s3
4 - gcs
Choose from [1/2/3/4] (1):
Select environment_manager
1 - virtualenv
2 - conda
3 - pipenv
4 - none
Choose from [1/2/3/4] (1):
Select dependency_file
1 - requirements.txt
2 - environment.yml
3 - Pipfile
Choose from [1/2/3] (1):
Select pydata_packages
1 - none
2 - basic
Choose from [1/2] (1):
Select open_source_license
1 - No license file
2 - MIT
3 - BSD-3-Clause
Choose from [1/2/3] (1):
Select docs
1 - mkdocs
2 - none
Choose from [1/2] (1):
Select include_code_scaffold
1 - Yes
2 - No
Choose from [1/2] (1):
ERROR: Stopping generation because pre_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)
```
Note: I've checked the environment variables for Python and everything is fine there. | open | 2024-09-29T08:34:03Z | 2025-03-05T17:15:23Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/399 | [] | phoeniXharsh | 4 |
litestar-org/litestar | pydantic | 3,029 | Bug: `exceptiongroup` backport is missing on Python 3.10 | ### Description
Code expects `exceptiongroup` backport to be installed in https://github.com/litestar-org/litestar/blob/6e4e530445eadbc1fd2f65bebca3bc68cf12f29a/litestar/utils/helpers.py#L101
However, it is only declared among the _dev_ dependencies in https://github.com/litestar-org/litestar/blob/6e4e530445eadbc1fd2f65bebca3bc68cf12f29a/pyproject.toml#L109, so after installing `litestar` it is not found, and one currently has to require it manually.
Running Python 3.10.
### Logs
Full stacktrace (failure on launch):
```
Traceback (most recent call last):
File "/home/api/.local/lib/python3.10/site-packages/litestar/utils/helpers.py", line 99, in get_exception_group
return cast("type[BaseException]", ExceptionGroup) # type:ignore[name-defined]
NameError: name 'ExceptionGroup' is not defined. Did you mean: '_ExceptionGroup'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/myproject/api/myproject/cloud_management/new_app.py", line 4, in <module>
from litestar import Litestar
File "/home/api/.local/lib/python3.10/site-packages/litestar/__init__.py", line 1, in <module>
from litestar.app import Litestar
File "/home/api/.local/lib/python3.10/site-packages/litestar/app.py", line 20, in <module>
from litestar._openapi.plugin import OpenAPIPlugin
File "/home/api/.local/lib/python3.10/site-packages/litestar/_openapi/plugin.py", line 10, in <module>
from litestar.routes import HTTPRoute
File "/home/api/.local/lib/python3.10/site-packages/litestar/routes/__init__.py", line 1, in <module>
from .asgi import ASGIRoute
File "/home/api/.local/lib/python3.10/site-packages/litestar/routes/asgi.py", line 7, in <module>
from litestar.routes.base import BaseRoute
File "/home/api/.local/lib/python3.10/site-packages/litestar/routes/base.py", line 13, in <module>
from litestar._kwargs import KwargsModel
File "/home/api/.local/lib/python3.10/site-packages/litestar/_kwargs/__init__.py", line 1, in <module>
from .kwargs_model import KwargsModel
File "/home/api/.local/lib/python3.10/site-packages/litestar/_kwargs/kwargs_model.py", line 49, in <module>
_ExceptionGroup = get_exception_group()
File "/home/api/.local/lib/python3.10/site-packages/litestar/utils/helpers.py", line 101, in get_exception_group
from exceptiongroup import ExceptionGroup as _ExceptionGroup # pyright: ignore
ModuleNotFoundError: No module named 'exceptiongroup'
```
### Litestar Version
2.5.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-01-26T14:59:04Z | 2025-03-20T15:54:22Z | https://github.com/litestar-org/litestar/issues/3029 | [
"Bug :bug:",
"Good First Issue",
"Dependencies",
"Package"
] | mtvx | 2 |
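For reference, the helper in question boils down to a version-guarded import; below is a simplified sketch of the pattern (not a verbatim copy of `litestar/utils/helpers.py`). The packaging fix the report implies is to declare the backport as a runtime dependency with a version marker, e.g. `"exceptiongroup; python_version < '3.11'"` in the regular dependencies (exact pinning is the maintainers' call), rather than dev-only.

```python
import sys


def get_exception_group():
    """Return the ExceptionGroup type: the 3.11+ builtin, or the
    `exceptiongroup` backport on older interpreters (which therefore
    must be installed at runtime, not only in dev environments)."""
    if sys.version_info >= (3, 11):
        return ExceptionGroup  # builtin since Python 3.11
    from exceptiongroup import ExceptionGroup as _ExceptionGroup
    return _ExceptionGroup
```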
ymcui/Chinese-LLaMA-Alpaca | nlp | 676 | Pre-training model selection | ### Required checks before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some problems have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, please make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues, and found no similar problem or solution
- [X] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
- [X] Model correctness check: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); if the model is wrong, correct behavior and performance cannot be guaranteed
### Issue type
Model training and fine-tuning
### Base model
Alpaca-Plus-7B
### Operating system
Windows
### Detailed description of the problem
I want to continue pre-training on top of the existing Alpaca-Plus-7B. The introduction says:

Do the "HF-format Chinese Alpaca/Alpaca-Plus" and the "Chinese Alpaca tokenizer (49954)" mentioned there refer to chinese-alpaca-plus-lora-13b and chinese-alpaca-lora-7b?
### Dependencies (required for code-related issues)
_No response_
### Run logs or screenshots
_No response_ | closed | 2023-06-27T02:50:37Z | 2023-07-06T00:10:39Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/676 | [
"stale"
] | wuhuanon | 4 |
Miserlou/Zappa | flask | 2,001 | How do I set up a CI/CD pipeline using AWS CodePipeline for a Flask application deployed using Zappa? | I need a step by step guide on setting up a CI/CD pipeline using AWS CodePipeline for a Flask application deployed using Zappa. | closed | 2020-02-14T04:54:04Z | 2020-02-24T14:23:03Z | https://github.com/Miserlou/Zappa/issues/2001 | [
"help wanted",
"documentation"
] | rupam-kundu | 2 |
Asabeneh/30-Days-Of-Python | flask | 456 | Python challenge | Hi, it's me...
I know you don't know me (but it's ok😂). I'm at the lowest level of programming. First of all, the language I started with was C++; after a while I found a new and **fabulous** language whose name is John Cena (_sorry, I'm just joking with you😂_).
Anyway, _Python_ came into my life.
After many days I reached this channel through a suggestion from my friend.
It's suitable for non-English users. | open | 2023-11-16T19:05:13Z | 2023-11-16T19:05:13Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/456 | [] | DoItLin | 0 |
ets-labs/python-dependency-injector | asyncio | 469 | SSL Certificate on https://python-dependency-injector.ets-labs.org/ has expired | Hi there,
Just wanted to let you know that https://python-dependency-injector.ets-labs.org/ was being blocked on my browser because your SSL cert has expired. My browser(s) give a warning and block traffic to the site without an express accepting of the stale cert. | closed | 2021-07-09T17:54:58Z | 2021-07-09T18:25:55Z | https://github.com/ets-labs/python-dependency-injector/issues/469 | [
"bug"
] | brianpkennedy | 3 |
FlareSolverr/FlareSolverr | api | 687 | FlareSolverr won't start. | ### Have you checked our README?
- [X] I have checked the README
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: latest form git
- Last working FlareSolverr version:
- Operating system: Manjaro Arm64 for RaspberryPi4
- Are you using Docker: [no]
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: [no]
- Are you using a Proxy: [no]
- Are you using Captcha Solver: [no]
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
I'm installing FlareSolverr from source code and it won't start.
### Logged Error Messages
```text
2023-02-02 14:28:58 INFO FlareSolverr 3.0.2
2023-02-02 14:28:58 INFO Testing web browser installation...
2023-02-02 14:28:58 INFO Chrome / Chromium path: /usr/bin/chromium
2023-02-02 14:29:00 INFO Chrome / Chromium major version: 109
Traceback (most recent call last):
File "/home/pi/FlareSolverr/src/utils.py", line 158, in get_user_agent
driver = get_webdriver()
File "/home/pi/FlareSolverr/src/utils.py", line 72, in get_webdriver
driver = uc.Chrome(options=options, driver_executable_path=driver_exe_path, version_main=version_main,
File "/home/pi/FlareSolverr/src/undetected_chromedriver/__init__.py", line 436, in __init__
super(Chrome, self).__init__(
File "/home/pi/.local/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
super().__init__(
File "/home/pi/.local/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 103, in __init__
self.service.start()
File "/home/pi/.local/lib/python3.10/site-packages/selenium/webdriver/common/service.py", line 90, in start
self._start_process(self.path)
File "/home/pi/.local/lib/python3.10/site-packages/selenium/webdriver/common/service.py", line 199, in _start_process
self.process = subprocess.Popen(
File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 8] Exec format error: '/home/pi/.local/share/undetected_chromedriver/d0c3e87b99700651_chromedriver'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/FlareSolverr/src/flaresolverr.py", line 91, in <module>
flaresolverr_service.test_browser_installation()
File "/home/pi/FlareSolverr/src/flaresolverr_service.py", line 61, in test_browser_installation
user_agent = utils.get_user_agent()
File "/home/pi/FlareSolverr/src/utils.py", line 162, in get_user_agent
raise Exception("Error getting browser User-Agent. " + str(e))
Exception: Error getting browser User-Agent. [Errno 8] Exec format error: '/home/pi/.local/share/undetected_chromedriver/d0c3e87b99700651_chromedriver'
```
### Screenshots
_No response_ | closed | 2023-02-02T06:31:17Z | 2023-12-05T22:51:14Z | https://github.com/FlareSolverr/FlareSolverr/issues/687 | [
"needs investigation"
] | so1ar | 24 |
JaidedAI/EasyOCR | deep-learning | 403 | Tajik Language |
[easyocr.zip](https://github.com/JaidedAI/EasyOCR/files/6220400/easyocr.zip)
| closed | 2021-03-29T08:43:17Z | 2021-03-29T08:48:10Z | https://github.com/JaidedAI/EasyOCR/issues/403 | [] | KhayrulloevDD | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,517 | Error on starting the application | "The application “Ultimate Vocal Remover.app” can’t be opened.
1" | open | 2024-08-17T05:14:19Z | 2024-08-17T05:14:19Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1517 | [] | AndreiZhitkov | 0 |
flasgger/flasgger | api | 469 | flasgger swag_from not working with "flask run" | Hi, not sure what the source of this issue is, but running the app via the recommended way (flask run) doesn't seem to work:
```
@swag_from('username_specs.yml', methods=['GET'])
TypeError: swag_from() got an unexpected keyword argument 'methods'
```
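For reference, this is the generic Python failure when a keyword is passed that the callable's signature doesn't declare — which usually means the installed flasgger predates the `methods` parameter, so upgrading is the typical fix (an assumption about versions, not verified). A minimal stand-in for the mechanics of the error (NOT flasgger's code):

```python
# Older-style decorator factory whose signature has no `methods` kwarg.
def swag_from(path):
    def deco(f):
        return f
    return deco

try:
    swag_from('username_specs.yml', methods=['GET'])   # same call as above
except TypeError as e:
    msg = str(e)

assert "unexpected keyword argument" in msg
```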
Any ideas? Been tinkering around with it for a while with no results. | open | 2021-03-21T15:22:49Z | 2021-04-29T19:46:35Z | https://github.com/flasgger/flasgger/issues/469 | [] | arehmandev | 1 |
lucidrains/vit-pytorch | computer-vision | 228 | EfficientFormer Request! | Can you add EfficientFormer?
https://github.com/snap-research/EfficientFormer
https://arxiv.org/pdf/2206.01191.pdf | open | 2022-07-23T10:44:09Z | 2022-08-28T22:59:40Z | https://github.com/lucidrains/vit-pytorch/issues/228 | [] | umitkacar | 1
taverntesting/tavern | pytest | 170 | !anything doesn't work in dict body matches 2 | Hi,
I'm experiencing the same problem as issue #75.
The "!anything" placeholder does not match the received JWT token.
my code:
```
test_name: Test auth - 200

marks:
  - parametrize:
      key: auth_customer_id
      vals:
        - 342

stages:
  - name: positive flow
    request:
      url: "{api_url}/auth?customer_id={auth_customer_id}"
      method: GET
      headers:
        content-type: application/json
        X-API-KEY: "{api_key}"
        X-PARTNER-ID: "{partner_id}"
    response:
      status_code: 200
      body:
        data:
          token: !anything
```
my error:
`flow_tests/test_customer_flow.tavern.yaml::Create - get id - get jwt - delete customer[flow_munchkin_id] 12:42:25 [ERROR]: (tavern.response.base:37) Value mismatch in body: Type of returned data was different than expected (expected["data"]["token"] = '<Tavern YAML sentinel for anything>', actual["data"]["token"] = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJNaW50aWdvIiwidXNhZ2UiOiJhcGkiLCJjdXN0b21lcl9pZCI6IjM2MyIsImV4cCI6MTUzMzU5MTc0NS4wfQ.fNrXrkQz1BpdNn5tUfaUNXxFLiRJqpFyYoZ_DTDsc4k')`
I am using Tavern 0.14.4 and yaml 0.15.50.
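For reference, the intended semantics of the sentinel can be sketched in plain Python (this is NOT Tavern's implementation — just what a match with `!anything` is supposed to do):

```python
# Stand-in for the !anything YAML sentinel: it should match any actual value.
ANYTHING = object()

def matches(expected, actual):
    if expected is ANYTHING:
        return True
    if isinstance(expected, dict):
        return isinstance(actual, dict) and all(
            k in actual and matches(v, actual[k]) for k, v in expected.items()
        )
    return expected == actual

# A response body like the one in the error message above.
resp = {"data": {"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.payload.sig"}}
assert matches({"data": {"token": ANYTHING}}, resp)       # sentinel matches any token
assert not matches({"data": {"token": "other"}}, resp)    # literal mismatch still fails
```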
Thanks for the help! | closed | 2018-08-06T09:47:54Z | 2018-08-20T12:40:24Z | https://github.com/taverntesting/tavern/issues/170 | [] | liamMintigo | 2 |
mlflow/mlflow | machine-learning | 14,808 | [BUG] Cannot clear text while editing in filter | ### MLflow version
2.20.4.dev0
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: mac
- **Python version**: 3.9
- **yarn version, if running the dev UI**: 1.22
### Describe the problem
- Cannot clear text while editing in filter
https://github.com/user-attachments/assets/99879a33-213d-4bab-87d2-ff05994d48e2
### Steps to reproduce the bug
- focus on the filter field in experiment tracking
- Add some text and click `x` button to clear the text
- it doesn't clear the text
### Code to generate data required to reproduce the bug
_No response_
### Is the console panel in DevTools showing errors relevant to the bug?
_No response_
### Does the network panel in DevTools contain failed requests relevant to the bug?
_No response_ | closed | 2025-03-03T11:00:24Z | 2025-03-05T12:31:40Z | https://github.com/mlflow/mlflow/issues/14808 | [
"bug",
"area/uiux",
"has-closing-pr"
] | Gumichocopengin8 | 3 |
keras-rl/keras-rl | tensorflow | 255 | DQNAgent has no attribute optimizer | When using the TensorBoard callback with histogram_freq set to 10 I get this error:
> File "venv\lib\site-packages\keras\callbacks.py", line 798, in set_model
> grads = model.optimizer.get_gradients(model.total_loss,
> AttributeError: 'DQNAgent' object has no attribute 'optimizer'
With histogram_freq set to 0 the model runs as expected.
Is there something I should be doing differently?
My model:
```
model = Sequential()
shape = (1, env.parameters)
model.add(LSTM(units=output_dim,input_shape=shape, return_sequences=True))
model.add(LSTM(units=output_dim, return_sequences=True))
model.add(LSTM(units=output_dim))
model.add(Dense(output_dim, activation='relu'))
tensor_board = TensorBoard(log_dir="./logs/{}".format(time()), histogram_freq=10, write_graph=True,
write_grads=True, write_images=True)
return model, tensor_board
```
And my agent:
```
memory = SequentialMemory(limit=1000, window_length=1)
dqn = DQNAgent(model, nb_actions=output_dim, memory=memory, enable_dueling_network=True, enable_double_dqn=True,
policy=EpsGreedyQPolicy(), nb_steps_warmup=5000)
dqn.compile(optimizer='adam', metrics=['accuracy'])
dqn.fit(env, nb_steps=10000, callbacks=[tensor_board], visualize=False)
dqn.save_weights(file, overwrite=True)
dqn.test(env, nb_episodes=5, callbacks=[tensor_board], visualize=False)
```
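The AttributeError has the generic shape of reading an attribute that a wrapper object does not forward: the TensorBoard callback reads `model.optimizer` on whatever `fit()` hands it, and the agent wrapper doesn't expose the inner Keras model's optimizer. A minimal stand-in (NOT keras-rl code; keeping `histogram_freq=0`, or pointing gradient logging at the underlying Keras model, are common workarounds — an assumption, not verified here):

```python
class InnerModel:
    optimizer = "adam"            # the compiled Keras model does have one

class AgentWrapper:               # stand-in for DQNAgent
    def __init__(self, inner):
        self.inner = inner        # no `optimizer` attribute is forwarded

agent = AgentWrapper(InnerModel())
try:
    agent.optimizer               # what the callback effectively does
except AttributeError as e:
    err = str(e)

assert "optimizer" in err
```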
What am I missing? | closed | 2018-10-05T21:45:35Z | 2021-03-31T14:10:35Z | https://github.com/keras-rl/keras-rl/issues/255 | [
"wontfix"
] | nisbus | 4 |
scikit-learn/scikit-learn | data-science | 30,220 | Missing dev changelog from the rendered website after towncrier | We should add a step to the doc build CI where we render the changelog from the existing files and have it appear under the `dev` version of the website as it did before.
This also helps with checking the rendered changelog from PRs.
cc @lesteve @glemaitre | closed | 2024-11-05T09:09:09Z | 2024-11-08T09:32:57Z | https://github.com/scikit-learn/scikit-learn/issues/30220 | [
"Build / CI"
] | adrinjalali | 2 |
huggingface/datasets | computer-vision | 7,059 | None values are skipped when reading jsonl in subobjects | ### Describe the bug
I have been fighting against my machine since this morning only to find out this is some kind of a bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
E.g., let's take this example
Here are two versions of the same dataset:
[not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz)
[buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz)
### Steps to reproduce the bug
1. Load the `buggy.tar.gz` dataset
2. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines"]`
3. Load the `not-buggy.tar.gz` dataset
4. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines"]`
### Expected behavior
Both should have 4 baseline entries:
1. Buggy should have None followed by three lists
2. Non-Buggy should have four lists, and the first one should be an empty list.
Case 1 does not work while case 2 works, despite None being accepted in positions other than the first.
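For reference, plain stdlib `json` keeps a null entry in place — a minimal sketch of the expected four-entry behaviour (the file name and coordinates are made up; only the null-vs-empty-list shape mirrors the two archives):

```python
import json

# One metadata.jsonl row per layout: null first entry vs. empty-list first entry.
buggy_row = '{"file_name": "a.jpg", "baselines": [null, [[0, 0]], [[1, 1]], [[2, 2]]]}'
fixed_row = '{"file_name": "a.jpg", "baselines": [[], [[0, 0]], [[1, 1]], [[2, 2]]]}'

# Expected: four baseline entries either way, with None preserved in place.
assert len(json.loads(buggy_row)["baselines"]) == 4
assert len(json.loads(fixed_row)["baselines"]) == 4
assert json.loads(buggy_row)["baselines"][0] is None
```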
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
| open | 2024-07-22T13:02:42Z | 2024-07-22T13:02:53Z | https://github.com/huggingface/datasets/issues/7059 | [] | PonteIneptique | 0 |
mckinsey/vizro | pydantic | 499 | Does Vizro AI support Azure Open AI gpt-4-32k? | ### Description
Hey all, I have Azure OpenAI gpt-4-32k. When I try to use it as my LLM to generate graphs on a custom dataset, I get the following error:
<img width="817" alt="image" src="https://github.com/mckinsey/vizro/assets/152766533/d02264d0-6eba-478c-b5ea-681355bf5f25">
### Expected behavior
Please help me figure out what went wrong; I have set up the env variables exactly the way the LangChain documentation suggests.
### Which package?
vizro
### Package version
0.2.0
### Python version
3.12.0
### OS
Windows 10
### How to Reproduce
<img width="817" alt="image" src="https://github.com/mckinsey/vizro/assets/152766533/2fe2ccfd-1e35-4a10-8f43-d7a33a59cfd7">
### Output
<img width="817" alt="image" src="https://github.com/mckinsey/vizro/assets/152766533/79bb154f-5b52-4a05-86f8-01ec5ec9d35b">
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-05-28T05:05:08Z | 2024-10-02T15:16:23Z | https://github.com/mckinsey/vizro/issues/499 | [
"Bug Report :bug:",
"Vizro-AI :robot:"
] | saivl2 | 5 |
httpie/http-prompt | rest-api | 214 | Cookies auto setting invalid. | Function `get_response` has been removed at https://github.com/httpie/httpie/commit/bece3c77bb51ecc55dcc4008375dc29ccd91575c .
As a result, the tracing in https://github.com/httpie/http-prompt/pull/71/files#diff-0cd13edf0d54ff93c4d296836ba8d5a69462b2d7f33113b9e0fb92a9af980803R297 no longer works. | open | 2022-10-27T19:51:50Z | 2022-10-27T19:55:13Z | https://github.com/httpie/http-prompt/issues/214 | [] | qwIvan | 0
ckan/ckan | api | 8,129 | Leverage HTMX in CKAN's Activity Stream | ## CKAN version
master
## Details
The current UI for displaying the activity stream is a really good place to leverage HTMX. This will require some refactoring of the templates, but both the `pagination` and the `filters` could be improved.
## Tasks
- [ ] Use an `hx-get` attribute in our **Activity type** `select` element to apply the filters.
- [ ] Use as well `hx-get` to move between pages instead of full page reload.
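A minimal sketch of the first task (the route, ids and option values below are placeholders, not CKAN's actual endpoints; the `hx-*` attributes are standard HTMX):

```html
<!-- Activity-type filter that swaps only the stream, no full page reload -->
<select name="activity_type"
        hx-get="/dataset/activity/example-dataset"
        hx-target="#activity-stream"
        hx-trigger="change">
  <option value="all">All activity types</option>
  <option value="dataset">Dataset changes</option>
</select>

<div id="activity-stream">
  <!-- server renders the filtered activity list partial here -->
</div>
```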
## Screenshot of the view to refactor

| closed | 2024-03-21T13:49:41Z | 2024-07-01T06:03:46Z | https://github.com/ckan/ckan/issues/8129 | [] | pdelboca | 2 |
ansible/awx | automation | 15,395 | Installing collections - Unexpected Exception, this is probably a bug: stat: path should be string, bytes, os.PathLike or integer, not GalaxyAPI | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
Adding a simple `requirements.yml` to the project results in an error while git-syncing the Project.
### AWX version
2.19.1
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [X] Collection
- [ ] CLI
- [ ] Other
### Installation method
minikube
### Modifications
no
### Ansible version
v24.6.1
### Operating system
macOS
### Web browser
Chrome
### Steps to reproduce
- Setup a Project like usual that syncs with a git repo, make sure that succeeds.
- Add the following requirements.yml to the project, try to sync again.
```yml
collections:
  # Install a collection from Ansible Galaxy.
  - name: community.general.homebrew
    version: latest
    source: https://galaxy.ansible.com
```
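As a hedged aside (an assumption, not a verified fix for this traceback): `community.general` is the collection (`homebrew` is a module inside it), and requirement entries usually carry a version constraint rather than `latest`; a commonly working shape is:

```yml
collections:
  - name: community.general
    version: ">=1.0.0"
```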
### Expected results
I would expect the homebrew collection to be installed and usable.
### Actual results
Got an error during sync
Standard error: `ERROR! Unexpected Exception, this is probably a bug: stat: path should be string, bytes, os.PathLike or integer, not GalaxyAPI`
```json
{
"changed": true,
"stdout": "ansible-galaxy [core 2.15.12]\n config file = /var/lib/awx/projects/_8__flushing/ansible.cfg\n configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\n ansible python module location = /usr/local/lib/python3.9/site-packages/ansible\n ansible collection location = /var/lib/awx/projects/.__awx_cache/_8__flushing/stage/requirements_collections\n executable location = /usr/local/bin/ansible-galaxy\n python version = 3.9.19 (main, Jun 11 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3)\n jinja version = 3.1.4\n libyaml = True\nUsing /var/lib/awx/projects/_8__flushing/ansible.cfg as config file\nReading requirement file at '/var/lib/awx/projects/_8__flushing/requirements.yml'\nthe full traceback was:\n\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.9/site-packages/ansible/cli/__init__.py\", line 659, in cli_executor\n exit_code = cli.run()\n File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 719, in run\n return context.CLIARGS['func']()\n File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 119, in method_wrapper\n return wrapped_method(*args, **kwargs)\n File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 1341, in execute_install\n requirements = self._parse_requirements_file(\n File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 818, in _parse_requirements_file\n requirements['collections'] = [\n File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 819, in <listcomp>\n Requirement.from_requirement_dict(\n File \"/usr/local/lib/python3.9/site-packages/ansible/galaxy/dependency_resolution/dataclasses.py\", line 352, in from_requirement_dict\n if req_source is not None and os.path.isdir(req_source):\n File \"/usr/lib64/python3.9/genericpath.py\", line 42, in isdir\n st = os.stat(s)\nTypeError: stat: path should be string, bytes, os.PathLike or 
integer, not GalaxyAPI",
"stderr": "ERROR! Unexpected Exception, this is probably a bug: stat: path should be string, bytes, os.PathLike or integer, not GalaxyAPI",
"rc": 250,
"cmd": [
"ansible-galaxy",
"install",
"-r",
"/var/lib/awx/projects/_8__flushing/requirements.yml",
"-vvv"
],
"start": "2024-07-23 06:16:57.156604",
"end": "2024-07-23 06:16:57.322592",
"delta": "0:00:00.165988",
"msg": "non-zero return code",
"invocation": {
"module_args": {
"chdir": "/var/lib/awx/projects/_8__flushing",
"_raw_params": "ansible-galaxy install -r /var/lib/awx/projects/_8__flushing/requirements.yml -vvv",
"_uses_shell": false,
"stdin_add_newline": true,
"strip_empty_ends": true,
"argv": null,
"executable": null,
"creates": null,
"removes": null,
"stdin": null
}
},
"stdout_lines": [
"ansible-galaxy [core 2.15.12]",
" config file = /var/lib/awx/projects/_8__flushing/ansible.cfg",
" configured module search path = ['/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']",
" ansible python module location = /usr/local/lib/python3.9/site-packages/ansible",
" ansible collection location = /var/lib/awx/projects/.__awx_cache/_8__flushing/stage/requirements_collections",
" executable location = /usr/local/bin/ansible-galaxy",
" python version = 3.9.19 (main, Jun 11 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3)",
" jinja version = 3.1.4",
" libyaml = True",
"Using /var/lib/awx/projects/_8__flushing/ansible.cfg as config file",
"Reading requirement file at '/var/lib/awx/projects/_8__flushing/requirements.yml'",
"the full traceback was:",
"",
"Traceback (most recent call last):",
" File \"/usr/local/lib/python3.9/site-packages/ansible/cli/__init__.py\", line 659, in cli_executor",
" exit_code = cli.run()",
" File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 719, in run",
" return context.CLIARGS['func']()",
" File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 119, in method_wrapper",
" return wrapped_method(*args, **kwargs)",
" File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 1341, in execute_install",
" requirements = self._parse_requirements_file(",
" File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 818, in _parse_requirements_file",
" requirements['collections'] = [",
" File \"/usr/local/lib/python3.9/site-packages/ansible/cli/galaxy.py\", line 819, in <listcomp>",
" Requirement.from_requirement_dict(",
" File \"/usr/local/lib/python3.9/site-packages/ansible/galaxy/dependency_resolution/dataclasses.py\", line 352, in from_requirement_dict",
" if req_source is not None and os.path.isdir(req_source):",
" File \"/usr/lib64/python3.9/genericpath.py\", line 42, in isdir",
" st = os.stat(s)",
"TypeError: stat: path should be string, bytes, os.PathLike or integer, not GalaxyAPI"
],
"stderr_lines": [
"ERROR! Unexpected Exception, this is probably a bug: stat: path should be string, bytes, os.PathLike or integer, not GalaxyAPI"
],
"_ansible_no_log": false
}
```
### Additional information
My ansible.cfg is not very special:
```
inventory = hosts.ini
retry_files_enabled = False
```
hosts.ini
```
[workstation]
localhost ansible_connection=local
[androidconnector]
localhost ansible_connection=local
``` | open | 2024-07-23T06:25:40Z | 2024-07-24T17:13:14Z | https://github.com/ansible/awx/issues/15395 | [
"type:bug",
"component:api",
"component:awx_collection",
"needs_triage",
"community"
] | Ruud-cb | 1 |
Kludex/mangum | asyncio | 285 | Allow option to let unhandled Exceptions propagate through the adapter | Hi, thank you for this nice tool to wrap asgi applications. It makes deploying an API to lambda very easy while using a server to emulate it locally.
However, I found out that, in case of unhandled runtime errors in my code, the adapter automatically returns a JSON response with a status code of 500. While that is totally fine, the original error gets caught in the `HTTPCycle.run` method, which makes manual handling of uncaught errors impossible (afaik). I want to let the error propagate to the Lambda execution runtime, so that the CloudWatch "ERROR" metric for the function recognizes the execution as a failure and thus the alarms are triggered.
The way it is right now, all errors are just caught and not even AWS knows that something unexpected happened.
Is there a way to fix that or something essential I am missing? One solution could be to optionally reraise all exceptions in the mentioned method.
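A minimal sketch of the opt-in re-raise idea in plain Python (NOT Mangum's API — `exceptions_enabled` is a hypothetical flag name):

```python
# Today the adapter catches everything and answers 500; an opt-in flag could
# re-raise instead, so the Lambda runtime records the invocation as failed.
def run_cycle(app, event, exceptions_enabled=False):
    try:
        return app(event)
    except Exception:
        if exceptions_enabled:
            raise  # propagate to the Lambda runtime -> CloudWatch error metric
        return {"statusCode": 500, "body": "Internal Server Error"}

assert run_cycle(lambda e: {"statusCode": 200}, {})["statusCode"] == 200
assert run_cycle(lambda e: 1 / 0, {})["statusCode"] == 500   # swallowed today
```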
Thanks 👋 | open | 2023-01-03T16:06:41Z | 2024-05-18T21:07:51Z | https://github.com/Kludex/mangum/issues/285 | [
"more info needed"
] | jb3rndt | 9 |
alteryx/featuretools | data-science | 1,800 | Why did I get different number of features? | ```
def get_feature(target_dataframe_name):
feature_matrix, feature_defs = ft.dfs(
entityset=es,
target_dataframe_name=target_dataframe_name,
agg_primitives=[
"sum",
"max",
"min",
"mean",
"std",
"count",
"num_unique",
"percent_true",
],
# trans_primitives=[],
# where_primitives=[
# "sum",
# "max",
# "min",
# "mean",
# "std",
# "count",
# "num_unique",
# "percent_true",
# ],
# seed_features=[],
max_depth=2,
)
return feature_matrix, feature_defs
```
I have 6 data tables:
```
tables = [
"T07_CASE_APPLICATION_UH",
"T07_CASE_TRANSACTION_KY_UH",
"T47_TRANSACTION",
"T47_PARTY",
"T47_ID_DEPOSIT",
"T47_INDIVIDUAL",
]
```
When I change the number of rows in the 6 data tables, the number of features generated is different. Why?
Case 1:
```
Entityset: case_scoring
DataFrames:
T07_CASE_APPLICATION_UH [Rows: 1000, Columns: 45]
T07_CASE_TRANSACTION_KY_UH [Rows: 1000, Columns: 10]
T47_TRANSACTION [Rows: 1000, Columns: 103]
T47_PARTY [Rows: 1000, Columns: 46]
T47_ID_DEPOSIT [Rows: 1000, Columns: 45]
T47_INDIVIDUAL [Rows: 1000, Columns: 45]
Relationships:
T07_CASE_TRANSACTION_KY_UH.APPLICATION_NUM -> T07_CASE_APPLICATION_UH.APPLICATION_NUM
T07_CASE_APPLICATION_UH.PARTY_ID -> T47_PARTY.PARTY_ID
T07_CASE_TRANSACTION_KY_UH.PARTY_ID -> T47_PARTY.PARTY_ID
T47_TRANSACTION.PARTY_ID -> T47_PARTY.PARTY_ID
T47_ID_DEPOSIT.PARTY_ID -> T47_PARTY.PARTY_ID
T47_INDIVIDUAL.PARTY_ID -> T47_PARTY.PARTY_ID
T07_CASE_TRANSACTION_KY_UH.TRANSACTIONKEY -> T47_TRANSACTION.TRANSACTIONKEY
```

Case 2:
```
Entityset: case_scoring
DataFrames:
T07_CASE_APPLICATION_UH [Rows: 10000, Columns: 45]
T07_CASE_TRANSACTION_KY_UH [Rows: 10000, Columns: 10]
T47_TRANSACTION [Rows: 10000, Columns: 103]
T47_PARTY [Rows: 10000, Columns: 46]
T47_ID_DEPOSIT [Rows: 10000, Columns: 45]
T47_INDIVIDUAL [Rows: 10000, Columns: 45]
Relationships:
T07_CASE_TRANSACTION_KY_UH.APPLICATION_NUM -> T07_CASE_APPLICATION_UH.APPLICATION_NUM
T07_CASE_APPLICATION_UH.PARTY_ID -> T47_PARTY.PARTY_ID
T07_CASE_TRANSACTION_KY_UH.PARTY_ID -> T47_PARTY.PARTY_ID
T47_TRANSACTION.PARTY_ID -> T47_PARTY.PARTY_ID
T47_ID_DEPOSIT.PARTY_ID -> T47_PARTY.PARTY_ID
T47_INDIVIDUAL.PARTY_ID -> T47_PARTY.PARTY_ID
T07_CASE_TRANSACTION_KY_UH.TRANSACTIONKEY -> T47_TRANSACTION.TRANSACTIONKEY
```

Case 3:
```
Entityset: case_scoring
DataFrames:
T07_CASE_APPLICATION_UH [Rows: 100000, Columns: 45]
T07_CASE_TRANSACTION_KY_UH [Rows: 100000, Columns: 10]
T47_TRANSACTION [Rows: 100000, Columns: 103]
T47_PARTY [Rows: 100000, Columns: 46]
T47_ID_DEPOSIT [Rows: 100000, Columns: 45]
T47_INDIVIDUAL [Rows: 100000, Columns: 45]
Relationships:
T07_CASE_TRANSACTION_KY_UH.APPLICATION_NUM -> T07_CASE_APPLICATION_UH.APPLICATION_NUM
T07_CASE_APPLICATION_UH.PARTY_ID -> T47_PARTY.PARTY_ID
T07_CASE_TRANSACTION_KY_UH.PARTY_ID -> T47_PARTY.PARTY_ID
T47_TRANSACTION.PARTY_ID -> T47_PARTY.PARTY_ID
T47_ID_DEPOSIT.PARTY_ID -> T47_PARTY.PARTY_ID
T47_INDIVIDUAL.PARTY_ID -> T47_PARTY.PARTY_ID
T07_CASE_TRANSACTION_KY_UH.TRANSACTIONKEY -> T47_TRANSACTION.TRANSACTIONKEY
```

Not only is the number of features different, the feature content is very different!

Please help me. Why is this happening? Thank you very much. | closed | 2021-12-06T06:08:10Z | 2021-12-08T14:18:52Z | https://github.com/alteryx/featuretools/issues/1800 | [] | jingsupo | 5 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,874 | [Bug]: ROCM 6.1 NameError: name 'amdsmi' is not defined crash at opening | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.1
./webui.sh
NameError: name 'amdsmi' is not defined
During handling of the above exception, another exception occurred:
### Steps to reproduce the problem
1. source venv/bin/activate
2. pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.1
3. ./webui.sh
4. crash
### What should have happened?
The web UI should open normally.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
I can't provide the sysinfo because I can't open the web UI, but here is my system:

### Console logs
```Shell
./webui.sh
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on b_cansin user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
python venv already activate or run without venv: /media/b_cansin/ai/ai/stable-diffusion-webui/venv
################################################################
################################################################
Launching launch.py...
################################################################
fatal: No names found, cannot describe anything.
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Version: 1.9.3
Commit hash: 7ba3923d5b494b7756d0b12f33acb3716d830b9a
Launching Web UI with arguments:
Traceback (most recent call last):
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 633, in _raw_device_count_amdsmi
amdsmi.amdsmi_init()
NameError: name 'amdsmi' is not defined
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/media/b_cansin/ai/ai/stable-diffusion-webui/launch.py", line 48, in <module>
main()
File "/media/b_cansin/ai/ai/stable-diffusion-webui/launch.py", line 44, in main
start()
File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/launch_utils.py", line 465, in start
import webui
File "/media/b_cansin/ai/ai/stable-diffusion-webui/webui.py", line 13, in <module>
initialize.imports()
File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/initialize.py", line 26, in imports
from modules import paths, timer, import_hook, errors # noqa: F401
File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/paths.py", line 60, in <module>
import sgm # noqa: F401
File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/__init__.py", line 1, in <module>
from .models import AutoencodingEngine, DiffusionEngine
File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/models/__init__.py", line 1, in <module>
from .autoencoder import AutoencodingEngine
File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/models/autoencoder.py", line 12, in <module>
from ..modules.diffusionmodules.model import Decoder, Encoder
File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/modules/__init__.py", line 1, in <module>
from .encoders.modules import GeneralConditioner
File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 5, in <module>
import kornia
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/kornia/__init__.py", line 11, in <module>
from . import augmentation, color, contrib, core, enhance, feature, io, losses, metrics, morphology, tracking, utils, x
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/kornia/x/__init__.py", line 2, in <module>
from .trainer import Trainer
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/kornia/x/trainer.py", line 11, in <module>
from accelerate import Accelerator
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/__init__.py", line 3, in <module>
from .accelerator import Accelerator
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/accelerator.py", line 35, in <module>
from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/checkpointing.py", line 24, in <module>
from .utils import (
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/utils/__init__.py", line 64, in <module>
from .modeling import (
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 30, in <module>
from ..state import AcceleratorState
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/state.py", line 47, in <module>
if is_tpu_available(check_device=False):
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/accelerate/utils/imports.py", line 84, in is_tpu_available
if torch.cuda.is_available():
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 122, in is_available
return device_count() > 0
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 834, in device_count
nvml_count = _device_count_amdsmi() if torch.version.hip else _device_count_nvml()
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 756, in _device_count_amdsmi
raw_cnt = _raw_device_count_amdsmi()
File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 634, in _raw_device_count_amdsmi
except amdsmi.AmdSmiException as e:
NameError: name 'amdsmi' is not defined
(venv) b_cansin@b-cansin-ubuntu:/media/b_cansin/ai/ai/stable-diffusion-webui$ pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.1 --upgrade
Looking in indexes: https://download.pytorch.org/whl/nightly/rocm6.1
Requirement already satisfied: torch in ./venv/lib/python3.10/site-packages (2.4.0.dev20240523+rocm6.1)
Requirement already satisfied: torchvision in ./venv/lib/python3.10/site-packages (0.19.0.dev20240523+rocm6.1)
Requirement already satisfied: torchaudio in ./venv/lib/python3.10/site-packages (2.2.0.dev20240523+rocm6.1)
Requirement already satisfied: jinja2 in ./venv/lib/python3.10/site-packages (from torch) (3.1.2)
Requirement already satisfied: typing-extensions>=4.8.0 in ./venv/lib/python3.10/site-packages (from torch) (4.11.0)
Requirement already satisfied: pytorch-triton-rocm==3.0.0+bbe6246e37 in ./venv/lib/python3.10/site-packages (from torch) (3.0.0+bbe6246e37)
Requirement already satisfied: networkx in ./venv/lib/python3.10/site-packages (from torch) (3.0)
Requirement already satisfied: sympy in ./venv/lib/python3.10/site-packages (from torch) (1.12)
Requirement already satisfied: filelock in ./venv/lib/python3.10/site-packages (from torch) (3.9.0)
Requirement already satisfied: fsspec in ./venv/lib/python3.10/site-packages (from torch) (2023.12.2)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in ./venv/lib/python3.10/site-packages (from torchvision) (9.5.0)
Requirement already satisfied: numpy in ./venv/lib/python3.10/site-packages (from torchvision) (1.26.2)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.10/site-packages (from jinja2->torch) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in ./venv/lib/python3.10/site-packages (from sympy->torch) (1.3.0)
```
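The `NameError: name 'amdsmi' is not defined` at the top occurs because torch's AMD-SMI device probe runs even though the `amdsmi` package was never imported. A quick, illustrative way to check whether the package is importable at all in the active venv (hypothetical helper, not part of torch):

```python
import importlib.util

def has_amdsmi() -> bool:
    # True only if the 'amdsmi' package can be found on sys.path
    return importlib.util.find_spec("amdsmi") is not None

print(has_amdsmi())
```

If this prints `False`, installing the `amdsmi` package into the venv (or using a torch build that does not probe AMD-SMI) may be the first thing to try.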
### Additional information
I have the latest ROCm, 6.1.1. | closed | 2024-05-23T22:48:28Z | 2024-05-24T00:01:55Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15874 | [
"bug-report"
] | KEDI103 | 1 |
harry0703/MoneyPrinterTurbo | automation | 91 | 生成16:9比例视频时候素材被莫名拉升 | 我在生成16:9比例视频时
视频原本正常16:9但是合成后出现被拉升状况

The source file of the video that gets stretched:
https://github.com/harry0703/MoneyPrinterTurbo/assets/95288331/46f60350-1ff1-42d2-8e93-628340753f55
The composited video:
https://github.com/harry0703/MoneyPrinterTurbo/assets/95288331/73a66a19-c822-40df-9107-d465dc1ccc76
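For reference, the usual way to avoid this kind of stretching when source and target ratios differ is to scale-to-cover and center-crop rather than resizing both axes independently. A small illustrative sketch of the crop arithmetic (not MoneyPrinterTurbo's actual code):

```python
def crop_box_for_aspect(w: int, h: int, target_w: int, target_h: int):
    """Return (x, y, crop_w, crop_h) matching the target aspect ratio
    without distorting the source frame."""
    target_ratio = target_w / target_h
    if w / h > target_ratio:          # source too wide: trim the sides
        crop_w = int(h * target_ratio)
        return ((w - crop_w) // 2, 0, crop_w, h)
    crop_h = int(w / target_ratio)    # source too tall: trim top/bottom
    return (0, (h - crop_h) // 2, w, crop_h)

# cropping a 16:9 frame for a 9:16 (portrait) output
print(crop_box_for_aspect(1920, 1080, 1080, 1920))  # → (656, 0, 607, 1080)
```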
| closed | 2024-03-28T07:53:04Z | 2024-04-03T11:08:03Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/91 | [
"bug"
] | SarcomTDG | 3 |
microsoft/nni | data-science | 5,441 | How to create custom filter in Search Strategy? | **Describe the issue**:
Since nn-meter is not working properly (maybe an incompatibility with the NNI v2.10 model IR?), I was trying to create a custom model filter in the RegularizedEvolution strategy, but I cannot find a way to convert the model IR passed to the filter into something I can work with (such as the corresponding PyTorch model).
Thanks for the help.
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version: 3.9
- PyTorch/TensorFlow version: 1.12
- Is conda/virtualenv/venv used?: yes
- Is running in Docker?: no
**How to reproduce it?**: | closed | 2023-03-14T16:32:18Z | 2023-04-07T08:09:14Z | https://github.com/microsoft/nni/issues/5441 | [] | chachus | 4 |
saleor/saleor | graphql | 17,393 | Bug: full text search by korean | ### What are you trying to achieve?
I would like to search for a product by name, but the result keeps coming back empty.
### Steps to reproduce the problem
1. create product with name `닥터스피지에이 아토더마 크림 120ml`
2. search the product (by dashboard or graphql query) with terms: `닥터스피지에이 `
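The search in step 2 can also be reproduced straight against the GraphQL API. A sketch of the request payload (the field names follow Saleor's public schema; the channel slug here is an assumption):

```python
import json

SEARCH_QUERY = """
query ProductSearch($term: String!, $channel: String!) {
  products(first: 10, channel: $channel, filter: { search: $term }) {
    edges { node { id name } }
  }
}
"""

def build_search_payload(term: str, channel: str = "default-channel") -> dict:
    # POST this dict as JSON to /graphql/ on the Saleor instance
    return {"query": SEARCH_QUERY, "variables": {"term": term, "channel": channel}}

payload = build_search_payload("닥터스피지에이")
print(json.dumps(payload["variables"], ensure_ascii=False))
# → {"term": "닥터스피지에이", "channel": "default-channel"}
```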
### What did you expect to happen?
It should return the product with the corresponding name.
### Logs
_No response_
### Environment
Saleor version: 3.20
OS and version: docker
| open | 2025-02-21T10:41:24Z | 2025-03-13T10:50:11Z | https://github.com/saleor/saleor/issues/17393 | [
"bug",
"question"
] | danieloffical | 1 |
Sanster/IOPaint | pytorch | 530 | [BUG] error network pls | **Model**
Models that produce the error: https://huggingface.co/Linaqruf/animagine-xl and https://huggingface.co/cagliostrolab/animagine-xl-3.1 @Sanster

| closed | 2024-05-24T19:19:26Z | 2024-12-26T01:59:33Z | https://github.com/Sanster/IOPaint/issues/530 | [
"stale"
] | 6pu8wtw6 | 2 |
iperov/DeepFaceLab | machine-learning | 5,387 | Result video duration is different from the merged | THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
The length of the video produced by the 'Merge' step should equal that of the final output from 'Get Result video.'
## Actual behavior
The length of the final video is greater than that of the merged video.
## Steps to reproduce
I merged with the following config:
mode: seamless
mask: learned-prd*learned-dst
hist_math-threshold: 255
blur_mask_modifier: 202
the rest: default
I merge frames to produce a video length x and the final 'get result video' which uses ffmpeg command produces a video of length x+.
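A quick way to sanity-check the mismatch is to compare the merged frame count against the container durations, since the video duration should just be frames divided by fps. Illustrative arithmetic only (the frame count and fps below are made-up numbers, not from this report):

```python
def expected_duration_s(frame_count: int, fps: float) -> float:
    # the merged frame sequence can only ever produce frame_count / fps seconds
    return frame_count / fps

merged_frames, fps = 3000, 25.0
print(expected_duration_s(merged_frames, fps))  # → 120.0
```

If ffprobe reports a longer final video than this, the extra length is coming from the ffmpeg step (e.g. an audio stream that is longer than the video, or a padded tail), not from the merge itself.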
## Other relevant information
- **Command lined used (if not specified in steps to reproduce)**: main.py ...
- **Operating system and version:** Windows, macOS, Linux
- **Python version:** 3.5, 3.6.4, ... (if you are not using prebuilt windows binary) | open | 2021-08-26T00:34:25Z | 2023-06-08T22:42:34Z | https://github.com/iperov/DeepFaceLab/issues/5387 | [] | jinsu35 | 1 |
vvbbnn00/WARP-Clash-API | flask | 13 | Which link can be subscribed to with Surge? | The README says Surge is supported, so I'd like to ask which of the links can be subscribed to with Surge 👀
If there isn't one, I'll open a PR for it later

| closed | 2024-01-27T15:33:51Z | 2024-01-29T07:41:11Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/13 | [
"enhancement"
] | CorrectRoadH | 4 |
arogozhnikov/einops | tensorflow | 4 | Release branches or tags? | Need to decide a policy on keeping reference for previous releases. | closed | 2018-10-31T18:06:04Z | 2018-11-01T01:01:25Z | https://github.com/arogozhnikov/einops/issues/4 | [] | arogozhnikov | 1 |
moshi4/pyCirclize | matplotlib | 80 | When I set tick_length=10, this parameter does not take effect, but it works when tick_length=2. | When I set tick_length=10, the parameter does not take effect; with tick_length=2 it does. The tick line length stops changing once the value is greater than 5. Thank you!
```python
if sector.name in genechr:
    gene_data = genenames[genenames["query_chr"] == sector.name]
    label_pos_list = gene_data["query_start"].tolist()
    labels = gene_data["genename"].tolist()
    outer_track.xticks(
        label_pos_list,
        labels,
        label_orientation="vertical",
        label_size=6,
        tick_length=10,
        line_kws=dict(ec="black"),
    )
```
| closed | 2025-01-07T07:33:11Z | 2025-01-11T16:35:42Z | https://github.com/moshi4/pyCirclize/issues/80 | [
"bug"
] | z626093820 | 1 |
2noise/ChatTTS | python | 699 | Just getting started, speech synthesis throws an error | ```py
import ChatTTS
import torch
import torchaudio
chat = ChatTTS.Chat()
chat.load(compile=False) # Set to True for better performance
texts = ["PUT YOUR 1st TEXT HERE", "PUT YOUR 2nd TEXT HERE"]
wavs = chat.infer(texts)
torchaudio.save("output1.wav", torch.from_numpy(wavs[0]), 24000)
```
Above is the code
Below is the error
```
AttributeError: partially initialized module 'ChatTTS' has no attribute 'Chat' (most likely due to a circular import)
``` | closed | 2024-08-19T09:45:06Z | 2024-10-04T04:01:32Z | https://github.com/2noise/ChatTTS/issues/699 | [
"question",
"stale"
] | liliyucai123 | 2 |
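This `partially initialized module` error in the ChatTTS report above is the classic symptom of the script itself being named `ChatTTS.py` (or a `ChatTTS.py`/`ChatTTS/` sitting next to it): Python then imports the script instead of the installed package, producing a circular import. A minimal, self-contained reproduction of that failure mode (this is an assumption about the cause, not confirmed from the report):

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # A script named after the package it imports shadows that package,
    # because the script's own directory is first on sys.path.
    path = os.path.join(d, "ChatTTS.py")
    with open(path, "w") as f:
        f.write("import ChatTTS\nchat = ChatTTS.Chat()\n")
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)

print("circular import" in proc.stderr)  # → True
```

Renaming the script (and deleting any stale `__pycache__` next to it) would be the first thing to try.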
darrenburns/posting | automation | 111 | Support for Path Variables | Hi, you're doing great work with this product!
I’d really appreciate having support for path variables, similar to what Postman offers. Using .env files doesn’t quite fit my use case, as I have different values for different requests in the same session, but with the same path name. While I could create var_1, var_2, etc., in .env, this approach quickly becomes messy and unmanageable.
P.S. It would also be great to have the ability to invoke an editor (F4) for the URL line. That would be a fantastic addition!
| closed | 2024-09-23T11:10:39Z | 2024-11-18T17:26:05Z | https://github.com/darrenburns/posting/issues/111 | [] | MrBanja | 1 |
lk-geimfari/mimesis | pandas | 986 | Release latest version with fixed types | closed | 2020-12-14T14:01:16Z | 2020-12-21T11:13:00Z | https://github.com/lk-geimfari/mimesis/issues/986 | [] | lk-geimfari | 1 | |
hpcaitech/ColossalAI | deep-learning | 5,937 | [BUG]: a directory will be made in each epoch | ### Is there an existing issue for this bug?
- [X] I have searched the existing issues
### 🐛 Describe the bug
I'm using the latest code to do SFT training with applications/Colossal-LLaMA; each epoch generates a directory, like this:
<img width="617" alt="image" src="https://github.com/user-attachments/assets/401bb2fb-0c3a-4b51-957c-f97d85352a7b">
there is my script:
<img width="1040" alt="image" src="https://github.com/user-attachments/assets/203981b4-b0d4-43e9-868a-f8c3fd725c14">
What should I do to save only the final checkpoint?
### Environment
_No response_ | closed | 2024-07-24T10:07:14Z | 2024-07-26T03:15:21Z | https://github.com/hpcaitech/ColossalAI/issues/5937 | [
"bug"
] | zhurunhua | 1 |
MagicStack/asyncpg | asyncio | 757 | Segmentation Fault in record_repr at asyncpg/protocol/record/recordobj.c:462 | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.22.0
* **PostgreSQL version**: PostgreSQL 10.16 (Ubuntu 10.16-1.pgdg18.04+1)
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: Local 10.16
* **Python version**: 3.9.0+ (heads/3.9:5b1fdcacfc, Oct 18 2020, 00:00:17) \n[GCC 7.5.0]
* **Platform**: Ubuntu 18.04 (5.4.0-73-generic #82~18.04.1-Ubuntu)
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: YES
* **If you built asyncpg locally, which version of Cython did you use?**: No
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: N/D
<!-- Enter your issue details below this comment. -->
The reproducible full project code is located here:
https://github.com/ShahriyarR/ecommerce-nuxtjs-fastapi-backend/tree/episode-5
Steps to start:
* clone the branch
* activate python3.9 venv
* do `poetry install`
* `uvicorn backend.app.main:app`
> NOTE: There is no such issue with Python 3.8, spotted only with 3.9
The `pyproject.toml`:
```toml
[tool.poetry]
name = "backend"
version = "0.1.0"
description = ""
authors = ["Shahriyar Rzayev <rzayev.sehriyar@gmail.com>"]
[tool.poetry.dependencies]
python = "^3.7"
fastapi = "^0.64.0"
gino = {extras = ["pg", "starlette"], version = "^1.0.1"}
uvicorn = "^0.13.4"
gunicorn = "^20.1.0"
alembic = "^1.6.2"
psycopg2 = "^2.8.6"
passlib = {extras = ["bcrypt"], version = "^1.7.4"}
pydantic = {extras = ["dotenv"], version = "^1.8.2"}
[tool.poetry.dev-dependencies]
pytest = "^5.2"
pytest-cov = "^2.10.1"
requests = "^2.25.1"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
```
Basically, I am trying to use FastAPI & Gino to build a simple user registration endpoint.
Start the GDB session:
```shell
(gdb) run -X tracemalloc -m uvicorn app.main:app
Starting program: /home/shako/REPOS/Learning_FastAPI/Djackets/.venv/bin/python -X tracemalloc -m uvicorn app.main:app
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
INFO: Started server process [10345]
INFO: Waiting for application startup.
[New Thread 0x7fffede85700 (LWP 10346)]
[New Thread 0x7fffed432700 (LWP 10348)]
app started
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```
Sending POST request:
```shell
curl -X POST http://127.0.0.1:8000/users/create -d '{"email": "example@gmail.com", "password": "12345789", "username": "example"}' | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 77 0 0 0 77 0 0 --:--:-- 0:52:57 --:--:-- 0
```
Got the following from gdb side:
```shell
email='example@gmail.com' username='example' email_verified=False is_active=True is_superuser=False created_at=datetime.datetime(2021, 5, 16, 12, 38, 11, 847838) updated_at=datetime.datetime(2021, 5, 16, 12, 38, 11, 847874) password='$2b$12$VfJuQRvRMeFWrvwcbgZHsO3V2gGeigcXEvIePfQEEJqzgy/LdDHxC' salt='$2b$12$3FB/lyWlyPU.WcK8K2WKgu'
[New Thread 0x7fffeca31700 (LWP 10361)]
[New Thread 0x7fffdffff700 (LWP 10362)]
Modules/gcmodule.c:2163: visit_validate: Assertion failed: PyObject_GC_Track() object is not valid
Memory block allocated at (most recent call first):
File "/usr/local/lib/python3.9/asyncio/sslproto.py", line 545
object address : 0x7fffeca32eb0
object refcount : 1
object type : 0x7fffee424a20
object type name: asyncpg.Record
object repr :
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
0x00007fffee2028ae in record_repr (v=0x7fffeca32eb0) at asyncpg/protocol/record/recordobj.c:462
462 asyncpg/protocol/record/recordobj.c: No such file or directory.
```
bt:
```c++
(gdb) bt
#0 0x00007fffee2028ae in record_repr (v=0x7fffeca32eb0) at asyncpg/protocol/record/recordobj.c:462
#1 0x00005555555fd868 in PyObject_Repr (v=v@entry=0x7fffeca32eb0) at Objects/object.c:420
#2 0x00005555555fdb39 in PyObject_Print (op=op@entry=0x7fffeca32eb0, fp=0x7ffff740a680 <_IO_2_1_stderr_>, flags=flags@entry=0) at Objects/object.c:275
#3 0x00005555555fdd48 in _PyObject_Dump (op=op@entry=0x7fffeca32eb0) at Objects/object.c:378
#4 0x00005555555fdf17 in _PyObject_AssertFailed (obj=obj@entry=0x7fffeca32eb0, expr=expr@entry=0x0, msg=msg@entry=0x55555587e860 "PyObject_GC_Track() object is not valid",
file=file@entry=0x55555587e348 "Modules/gcmodule.c", line=line@entry=2163, function=function@entry=0x55555587ec68 <__func__.16393> "visit_validate") at Objects/object.c:2192
#5 0x00005555556ec022 in visit_validate (op=<optimized out>, parent_raw=parent_raw@entry=0x7fffeca32eb0) at Modules/gcmodule.c:2162
#6 0x00007fffee2021c2 in record_traverse (o=0x7fffeca32eb0, visit=0x5555556ebfe6 <visit_validate>, arg=0x7fffeca32eb0) at asyncpg/protocol/record/recordobj.c:125
#7 0x00005555556ee67b in PyObject_GC_Track (op_raw=op_raw@entry=0x7fffeca32eb0) at Modules/gcmodule.c:2188
#8 0x00007fffee20347b in ApgRecord_New (type=type@entry=0x7fffee424a20 <ApgRecord_Type>, desc=desc@entry=0x7fffeca60370, size=size@entry=10)
at asyncpg/protocol/record/recordobj.c:57
#9 0x00007fffee1bc186 in __pyx_f_7asyncpg_8protocol_8protocol_22PreparedStatementState__decode_row (__pyx_v_self=0x7fffecaeb680, __pyx_v_cbuf=<optimized out>,
__pyx_v_buf_len=<optimized out>) at asyncpg/protocol/protocol.c:52281
#10 0x00007fffee1bcf90 in __pyx_f_7asyncpg_8protocol_8protocol_12BaseProtocol__decode_row (__pyx_v_self=<optimized out>, __pyx_v_buf=<optimized out>, __pyx_v_buf_len=<optimized out>)
at asyncpg/protocol/protocol.c:68934
#11 0x00007fffee1b9860 in __pyx_f_7asyncpg_8protocol_8protocol_12CoreProtocol__parse_data_msgs (__pyx_v_self=0x7fffedefebd0) at asyncpg/protocol/protocol.c:41432
#12 0x00007fffee18d6fa in __pyx_f_7asyncpg_8protocol_8protocol_12CoreProtocol__process__bind_execute (__pyx_v_self=0x7fffedefebd0, __pyx_v_mtype=21 '\025')
at asyncpg/protocol/protocol.c:39047
#13 0x00007fffee18a683 in __pyx_f_7asyncpg_8protocol_8protocol_12CoreProtocol__read_server_messages (__pyx_v_self=0x7fffedefebd0) at asyncpg/protocol/protocol.c:37586
#14 0x00007fffee183079 in __pyx_pf_7asyncpg_8protocol_8protocol_12BaseProtocol_61data_received (__pyx_v_data=<optimized out>, __pyx_v_self=0x7fffedefebd0)
at asyncpg/protocol/protocol.c:71031
#15 __pyx_pw_7asyncpg_8protocol_8protocol_12BaseProtocol_62data_received (__pyx_v_self=0x7fffedefebd0, __pyx_v_data=<optimized out>) at asyncpg/protocol/protocol.c:5461
#16 0x000055555578cbe8 in method_vectorcall_O (func=0x7fffee48b6b0, args=0x555558338070, nargsf=<optimized out>, kwnames=<optimized out>) at Objects/descrobject.c:462
#17 0x000055555567caff in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775810, args=0x555558338070, callable=0x7fffee48b6b0, tstate=0x555555bab990)
at ./Include/cpython/abstract.h:118
#18 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775810, args=0x555558338070, callable=0x7fffee48b6b0) at ./Include/cpython/abstract.h:127
```
bt full excerpt:
```c++
(gdb) bt full
#0 0x00007fffee2028ae in record_repr (v=0x7fffeca32eb0) at asyncpg/protocol/record/recordobj.c:462
i = <optimized out>
n = 10
keys_iter = <optimized out>
writer = {buffer = 0x1, data = 0x7ffff7bc53bc <__libc_write+92>, kind = PyUnicode_1BYTE_KIND, maxchar = 0, size = 0, pos = 33, min_length = 14, min_char = 3,
overallocate = 0 '\000', readonly = 0 '\000'}
#1 0x00005555555fd868 in PyObject_Repr (v=v@entry=0x7fffeca32eb0) at Objects/object.c:420
res = <optimized out>
tstate = <optimized out>
__PRETTY_FUNCTION__ = "PyObject_Repr"
#2 0x00005555555fdb39 in PyObject_Print (op=op@entry=0x7fffeca32eb0, fp=0x7ffff740a680 <_IO_2_1_stderr_>, flags=flags@entry=0) at Objects/object.c:275
s = <optimized out>
ret = 0
__PRETTY_FUNCTION__ = "PyObject_Print"
#3 0x00005555555fdd48 in _PyObject_Dump (op=op@entry=0x7fffeca32eb0) at Objects/object.c:378
type = <optimized out>
gil = PyGILState_LOCKED
error_type = 0x0
error_value = 0x0
error_traceback = 0x0
#4 0x00005555555fdf17 in _PyObject_AssertFailed (obj=obj@entry=0x7fffeca32eb0, expr=expr@entry=0x0, msg=msg@entry=0x55555587e860 "PyObject_GC_Track() object is not valid",
file=file@entry=0x55555587e348 "Modules/gcmodule.c", line=line@entry=2163, function=function@entry=0x55555587ec68 <__func__.16393> "visit_validate") at Objects/object.c:2192
ptr = 0x7fffeca32ea0
type = <optimized out>
__func__ = "_PyObject_AssertFailed"
#5 0x00005555556ec022 in visit_validate (op=<optimized out>, parent_raw=parent_raw@entry=0x7fffeca32eb0) at Modules/gcmodule.c:2162
parent = 0x7fffeca32eb0
__func__ = "visit_validate"
#6 0x00007fffee2021c2 in record_traverse (o=0x7fffeca32eb0, visit=0x5555556ebfe6 <visit_validate>, arg=0x7fffeca32eb0) at asyncpg/protocol/record/recordobj.c:125
vret = <optimized out>
i = <optimized out>
#7 0x00005555556ee67b in PyObject_GC_Track (op_raw=op_raw@entry=0x7fffeca32eb0) at Modules/gcmodule.c:2188
op = 0x7fffeca32eb0
__func__ = "PyObject_GC_Track"
traverse = <optimized out>
#8 0x00007fffee20347b in ApgRecord_New (type=type@entry=0x7fffee424a20 <ApgRecord_Type>, desc=desc@entry=0x7fffeca60370, size=size@entry=10)
at asyncpg/protocol/record/recordobj.c:57
o = 0x7fffeca32eb0
i = <optimized out>
#9 0x00007fffee1bc186 in __pyx_f_7asyncpg_8protocol_8protocol_22PreparedStatementState__decode_row (__pyx_v_self=0x7fffecaeb680, __pyx_v_cbuf=<optimized out>,
__pyx_v_buf_len=<optimized out>) at asyncpg/protocol/protocol.c:52281
__pyx_v_codec = 0x0
__pyx_v_fnum = <optimized out>
---Type <return> to continue, or q <return> to quit---
__pyx_v_flen = <optimized out>
__pyx_v_dec_row = 0x0
__pyx_v_rows_codecs = 0x7fffeca89ef0
__pyx_v_settings = 0x7fffecaaf2f0
__pyx_v_i = <optimized out>
__pyx_v_rbuf = {buf = 0x5555587aa0dc "", len = 180}
__pyx_v_bl = <optimized out>
__pyx_v_val = 0x0
__pyx_r = 0x0
__pyx_t_1 = 0x7fffee424a20 <ApgRecord_Type>
__pyx_t_2 = 0xcdcdcdcdcdcdcdcd <error: Cannot access memory at address 0xcdcdcdcdcdcdcdcd>
__pyx_t_3 = <optimized out>
__pyx_t_4 = 0x0
__pyx_t_5 = 0x7fffeca60370
__pyx_t_6 = 0x0
__pyx_t_7 = 0x0
__pyx_t_8 = 0x0
__pyx_t_9 = 0x0
__pyx_t_10 = <optimized out>
__pyx_t_11 = 0x0
__pyx_t_12 = <optimized out>
__pyx_t_13 = <optimized out>
__pyx_t_14 = <optimized out>
__pyx_t_15 = <optimized out>
__pyx_lineno = 0
__pyx_filename = 0x0
__pyx_clineno = 0
#10 0x00007fffee1bcf90 in __pyx_f_7asyncpg_8protocol_8protocol_12BaseProtocol__decode_row (__pyx_v_self=<optimized out>, __pyx_v_buf=<optimized out>, __pyx_v_buf_len=<optimized out>)
at asyncpg/protocol/protocol.c:68934
__pyx_r = 0x0
__pyx_t_2 = <optimized out>
__pyx_t_3 = 0x0
__pyx_lineno = 0
__pyx_filename = 0x0
__pyx_clineno = 0
#11 0x00007fffee1b9860 in __pyx_f_7asyncpg_8protocol_8protocol_12CoreProtocol__parse_data_msgs (__pyx_v_self=0x7fffedefebd0) at asyncpg/protocol/protocol.c:41432
__pyx_v_buf = 0x7fffecae22d0
__pyx_v_rows = 0x7fffeca8d780
__pyx_v_decoder = 0x7fffee1bcf80 <__pyx_f_7asyncpg_8protocol_8protocol_12BaseProtocol__decode_row>
__pyx_v_try_consume_message = 0x7fffedf35590 <__pyx_f_7asyncpg_7pgproto_7pgproto_10ReadBuffer_try_consume_message>
__pyx_v_take_message_type = 0x7fffedf377e0 <__pyx_f_7asyncpg_7pgproto_7pgproto_10ReadBuffer_take_message_type>
__pyx_v_cbuf = <optimized out>
__pyx_v_cbuf_len = 182
---Type <return> to continue, or q <return> to quit---
__pyx_v_row = 0x0
__pyx_v_mem = 0x0
__pyx_r = 0x0
__pyx_t_1 = 0x0
__pyx_t_2 = 0
__pyx_t_5 = <optimized out>
__pyx_t_6 = <optimized out>
__pyx_t_9 = <optimized out>
__pyx_lineno = 0
__pyx_filename = 0x0
__pyx_clineno = 0
#12 0x00007fffee18d6fa in __pyx_f_7asyncpg_8protocol_8protocol_12CoreProtocol__process__bind_execute (__pyx_v_self=0x7fffedefebd0, __pyx_v_mtype=21 '\025')
at asyncpg/protocol/protocol.c:39047
__pyx_r = 0xcdcdcdcdcdcdcdcd
__pyx_t_1 = <optimized out>
__pyx_lineno = 68
__pyx_filename = <optimized out>
__pyx_clineno = -303043632
#13 0x00007fffee18a683 in __pyx_f_7asyncpg_8protocol_8protocol_12CoreProtocol__read_server_messages (__pyx_v_self=0x7fffedefebd0) at asyncpg/protocol/protocol.c:37586
__pyx_tstate = 0x555555bab990
__pyx_v_mtype = 68 'D'
__pyx_v_state = __pyx_e_7asyncpg_8protocol_8protocol_PROTOCOL_BIND_EXECUTE
__pyx_v_take_message = 0x7fffedf377c0 <__pyx_f_7asyncpg_7pgproto_7pgproto_10ReadBuffer_take_message>
__pyx_v_get_message_type = <optimized out>
__pyx_v_ex = 0x0
__pyx_r = 0x0
__pyx_t_1 = 0x0
__pyx_t_2 = <optimized out>
__pyx_t_3 = 1
__pyx_t_4 = __pyx_e_7asyncpg_8protocol_8protocol_PROTOCOL_BIND_EXECUTE
__pyx_t_5 = 0x555555b58b00 <_Py_NoneStruct>
__pyx_t_6 = 0x0
__pyx_t_7 = 0x0
__pyx_t_8 = 0x0
__pyx_t_9 = 0x0
__pyx_t_10 = <optimized out>
__pyx_t_11 = <optimized out>
__pyx_t_12 = 0x0
__pyx_t_13 = 0x0
__pyx_t_14 = <optimized out>
__pyx_t_15 = <optimized out>
__pyx_t_16 = <optimized out>
__pyx_t_17 = 0x0
---Type <return> to continue, or q <return> to quit---
__pyx_t_18 = 0x0
__pyx_t_19 = 0x0
__pyx_t_20 = 0x0
__pyx_t_21 = 0x0
__pyx_t_22 = 0x0
__pyx_t_23 = <optimized out>
__pyx_lineno = <optimized out>
__pyx_filename = <optimized out>
__pyx_clineno = <optimized out>
#14 0x00007fffee183079 in __pyx_pf_7asyncpg_8protocol_8protocol_12BaseProtocol_61data_received (__pyx_v_data=<optimized out>, __pyx_v_self=0x7fffedefebd0)
at asyncpg/protocol/protocol.c:71031
__pyx_r = 0x0
__pyx_t_1 = 0x0
__pyx_lineno = 0
__pyx_filename = 0x0
__pyx_clineno = 0
__pyx_r = <optimized out>
__pyx_t_1 = <optimized out>
__pyx_lineno = <optimized out>
__pyx_filename = <optimized out>
__pyx_clineno = <optimized out>
#15 __pyx_pw_7asyncpg_8protocol_8protocol_12BaseProtocol_62data_received (__pyx_v_self=0x7fffedefebd0, __pyx_v_data=<optimized out>) at asyncpg/protocol/protocol.c:5461
__pyx_r = 0x0
``` | closed | 2021-05-16T09:42:56Z | 2021-05-18T16:22:19Z | https://github.com/MagicStack/asyncpg/issues/757 | [] | ShahriyarR | 2 |
noirbizarre/flask-restplus | flask | 261 | fields.List example displays incorrectly | ## Source Code (test.py)
```python
from flask import Flask
from flask_restplus import Resource, Api, fields

app = Flask(__name__)
api = Api(app)

calendar = api.inherit('Calendar', {
    'weekDay': fields.List(fields.String(example='Mon\',\'Tue')),
})

@api.route('/calendar')
class Calendar(Resource):
    @api.expect(calendar)
    def get(self):
        pass

if __name__ == '__main__':
    app.run(debug=True)
```
## Model | Example Value
{"weekDay": ["Mon','Tue"]}
## Inquire
How can the List be displayed correctly?
{"weekDay": ["Mon","Tue"]} | open | 2017-03-19T02:35:11Z | 2017-03-19T02:35:11Z | https://github.com/noirbizarre/flask-restplus/issues/261 | [] | sperarafa | 0 |
coqui-ai/TTS | python | 3,224 | [Bug] ValueError: Model is not multi-lingual but `language` is provided. | ### Describe the bug
I encounter an error while trying to use the xtts_v2 multilingual model in a loop to generate several audio clips. This happens only for this model; VITS works fine while looping over an existing tts object.
### To Reproduce
**Reproduction code:**
```python
tts = TTS('tts_models/multilingual/multi-dataset/xtts_v2', gpu=True)
for i in range(5):
tts.tts_with_vc_to_file(text=f"Как оно {i}?", speaker_wav='/content/drive/MyDrive/Data/SPEAKER_00_voice_clips.wav', language='ru', file_path=f'/content/drive/MyDrive/Data/test{i}.wav')
```
**Error:**
```python
/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_with_vc_to_file(self, text, language, speaker_wav, file_path)
486 Output file path. Defaults to "output.wav".
487 """
--> 488 wav = self.tts_with_vc(text=text, language=language, speaker_wav=speaker_wav)
489 save_wav(wav=wav, path=file_path, sample_rate=self.voice_converter.vc_config.audio.output_sample_rate)
/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_with_vc(self, text, language, speaker_wav)
461 with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
462 # Lazy code... save it to a temp file to resample it while reading it for VC
--> 463 self.tts_to_file(text=text, speaker=None, language=language, file_path=fp.name, speaker_wav=speaker_wav)
464 if self.voice_converter is None:
465 self.load_vc_model_by_name("voice_conversion_models/multilingual/vctk/freevc24")
/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_to_file(self, text, speaker, language, speaker_wav, emotion, speed, pipe_out, file_path, **kwargs)
389 Additional arguments for the model.
390 """
--> 391 self._check_arguments(speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)
392
393 if self.csapi is not None:
/usr/local/lib/python3.10/dist-packages/TTS/api.py in _check_arguments(self, speaker, language, speaker_wav, emotion, speed, **kwargs)
240 raise ValueError("Model is not multi-speaker but `speaker` is provided.")
241 if not self.is_multi_lingual and language is not None:
--> 242 raise ValueError("Model is not multi-lingual but `language` is provided.")
243 if not emotion is None and not speed is None:
244 raise ValueError("Emotion and speed can only be used with Coqui Studio models.")
ValueError: Model is not multi-lingual but `language` is provided.
```
Then, if you remove the parameter, the call triggers an exception requiring you to add the `language` parameter back.
```python
/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_with_vc_to_file(self, text, language, speaker_wav, file_path)
486 Output file path. Defaults to "output.wav".
487 """
--> 488 wav = self.tts_with_vc(text=text, language=language, speaker_wav=speaker_wav)
489 save_wav(wav=wav, path=file_path, sample_rate=self.voice_converter.vc_config.audio.output_sample_rate)
/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_with_vc(self, text, language, speaker_wav)
461 with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
462 # Lazy code... save it to a temp file to resample it while reading it for VC
--> 463 self.tts_to_file(text=text, speaker=None, language=language, file_path=fp.name, speaker_wav=speaker_wav)
464 if self.voice_converter is None:
465 self.load_vc_model_by_name("voice_conversion_models/multilingual/vctk/freevc24")
/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_to_file(self, text, speaker, language, speaker_wav, emotion, speed, pipe_out, file_path, **kwargs)
389 Additional arguments for the model.
390 """
--> 391 self._check_arguments(speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)
392
393 if self.csapi is not None:
/usr/local/lib/python3.10/dist-packages/TTS/api.py in _check_arguments(self, speaker, language, speaker_wav, emotion, speed, **kwargs)
236 raise ValueError("Model is multi-speaker but no `speaker` is provided.")
237 if self.is_multi_lingual and language is None:
--> 238 raise ValueError("Model is multi-lingual but no `language` is provided.")
239 if not self.is_multi_speaker and speaker is not None and "voice_dir" not in kwargs:
240 raise ValueError("Model is not multi-speaker but `speaker` is provided.")
ValueError: Model is multi-lingual but no `language` is provided.
```
**Temporary fix:**
As a workaround I have put the `tts` object inside the loop, and for some reason it works with constant reinitialization. Although it eats away RAM (up to 9 GB, then stops), the error disappears and the generation works.
```python
for i in range(5):
tts = TTS('tts_models/multilingual/multi-dataset/xtts_v2', gpu=True)
tts.tts_with_vc_to_file(text=f"Как оно {i}?", speaker_wav='/content/drive/MyDrive/Data/SPEAKER_00_voice_clips.wav', language='ru', file_path=f'/content/drive/MyDrive/Data/test{i}.wav')
del tts; import gc; gc.collect(); torch.cuda.empty_cache()
```
### Expected behavior
No error while generating using same xtts_v2 object
### Environment
```shell
- TTS built from main
- Google Colab environment
```
| closed | 2023-11-15T10:17:05Z | 2023-11-27T18:42:53Z | https://github.com/coqui-ai/TTS/issues/3224 | [
"bug"
] | vitaliy-sharandin | 6 |
tableau/server-client-python | rest-api | 762 | Error connecting to Tableau server | Hello Team,
**Problem statement:** the implementation to connect and authenticate to Tableau server (2020.3) is failing starting today and this is the error message that I am receiving -
```
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): tableauqa.companyname.com:443
Error on line 66 ConnectionError HTTPSConnectionPool(host='tableauqa.companyname.com', port=443): Max retries exceeded with url: /api/3.6/auth/signin (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9306f6d080>: Failed to establish a new connection: [Errno -2] Name or service not known',))
```
This is the first time that the code is failing with this issue, as it was working well. Code snippet -
```
import tableauserverclient as tsc ## tableauserverclient 0.14.1
tableau_auth = tsc.PersonalAccessTokenAuth(token_name=token_name, personal_access_token=token_value, site_id=sitename)
server = tsc.Server(url) ## https://tableauqa.companyname.com
server.version = '3.6'
with server.auth.sign_in_with_personal_access_token(tableau_auth): ##this is line 66
print('Logged in successfully')
```
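`[Errno -2] Name or service not known` is a DNS resolution failure on the machine running the script, before any Tableau/REST logic is involved. A small illustrative check (hypothetical helper) that separates DNS from everything else:

```python
import socket
from urllib.parse import urlparse

def resolves(url: str, port: int = 443) -> bool:
    """True if the URL's hostname resolves from this machine."""
    host = urlparse(url).hostname
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

print(resolves("https://localhost"))             # → True
print(resolves("https://no-such-host.invalid"))  # → False (.invalid never resolves)
```

If this returns `False` for the server URL, the fix is environmental (VPN, hosts file, resolver config), not something in tableauserverclient.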
| closed | 2020-12-16T19:07:43Z | 2020-12-22T17:24:42Z | https://github.com/tableau/server-client-python/issues/762 | [] | iamabhishekchakraborty | 4 |
supabase/supabase-py | fastapi | 440 | Supabase Does Not Provide Access Token To Storage | **Describe the bug**
Hey All,
Thanks for the communities work on the Python SDK!!
I recently tried using Supabase storage in the python backend of our app. After a day and a half of debugging, I think I have pegged the issue. It appears that the access token is not being provided to the storage client correctly.
I have checked that the RLS is correct. I compared this to JS and found this problem.
See below
**To Reproduce**
```
supabase: Client = create_client(
supabase_url=self.supabaseUrl, supabase_key=self.supabaseAnonKey
)
response = supabase.auth.sign_in_with_password(
{"email": self.supabaseStandEmail, "password": self.supabaseStandPassword}
)
session = response.session
# Due to a bug detailed here https://github.com/supabase-community/supabase-py/issues/185
# The postgrest client needs to be reinitialized with the auth token
postgrest_client = supabase.postgrest
postgrest_client.auth(session.access_token)
# I had to do this in order to get the storage to make the request correctly.
storageSessionDict = supabase.storage.session.__dict__
storageSessionDict["_headers"]["authorization"] = (
"Bearer " + supabase.auth.get_session().__dict__["access_token"]
)
```
**Expected behavior**
I expect supabase.storage to just work like the JS SDK.
Thanks for the help! Let me know if i'm missing something.
Jack
| closed | 2023-05-15T05:52:16Z | 2023-09-28T11:52:23Z | https://github.com/supabase/supabase-py/issues/440 | [] | jackeverydayzero | 3 |
aiogram/aiogram | asyncio | 863 | Simple script that sends a message raises `RuntimeError: Event loop is closed` on Windows | ## Context
* Operating System: Windows 10
* Python Version: 3.9, 3.10
* aiogram version: 2.19, 3.0.0b2
* aiohttp version: 3.8.1
## Expected Behavior
Process finishes gracefully
## Current Behavior
```
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x00000207916D4040>
Traceback (most recent call last):
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in __del__
self.close()
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 746, in call_soon
self._check_closed()
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 510, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```
## Failure Information (for bugs)
### Steps to Reproduce
```python
import asyncio
from aiogram import Bot
TOKEN = "" # put your token here
CHAT_ID = 0 # put your user ID here
async def main():
bot = Bot(TOKEN)
await bot.send_message(CHAT_ID, "Hello there!")
await bot.session.close() # replace with `await bot.close()` on 2.x
# await asyncio.sleep(0.1)
if __name__ == '__main__':
asyncio.run(main())
```
If you uncomment `sleep` method, it works fine and finishes without traceback. I guess the problem is in aiohttp.
**The message is delivered anyway**, with or without `sleep`
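For reference, the two workarounds usually suggested for this ProactorEventLoop teardown race are a short sleep before the loop closes (as in the commented line above) and switching Windows to the selector event loop. A sketch combining both (illustrative; the selector-policy line is a common aiohttp-on-Windows fix, not something aiogram itself documents):

```python
import asyncio
import sys

if sys.platform == "win32":
    # ProactorEventLoop can close before aiohttp's SSL transports finish
    # shutting down; the selector loop does not have this teardown race.
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

async def main() -> str:
    # ... send_message / session.close() would go here ...
    await asyncio.sleep(0.1)  # give pending transport callbacks a chance to run
    return "done"

result = asyncio.run(main())
print(result)  # → done
```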
| closed | 2022-03-15T05:04:20Z | 2023-08-04T18:24:32Z | https://github.com/aiogram/aiogram/issues/863 | [
"bug",
"upstream",
"3.x",
"confirmed",
"2.x"
] | evgfilim1 | 4 |
AntonOsika/gpt-engineer | python | 911 | Placeholders and "will follow a similar structure, so I will not repeat them here for brevity." |
## Expected Behavior
It should, or should have the option to, write full solutions without placeholders or omissions.
## Current Behavior
"// Add explicit instantiation for other types as needed"
" will follow a similar structure, so I will not repeat them here for brevity."
## Failure Information
Ask it to create a solution or project with more than a few classes.
| closed | 2023-12-17T09:46:39Z | 2023-12-19T09:52:18Z | https://github.com/AntonOsika/gpt-engineer/issues/911 | [
"bug",
"triage"
] | h5kk | 1 |
streamlit/streamlit | deep-learning | 10,416 | Add primary/secondary/tertiary type to `st.popover` | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
`st.popover` is styled like a button but unlike other buttons, it doesn't have `type="primary"|"secondary"|"tertiary"`. We should add that! Especially tertiary type could allow cool interactions, e.g. like below in OpenAI's settings dialog:
https://github.com/user-attachments/assets/d5073a08-7b29-4978-b917-703a081ad53c
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | open | 2025-02-17T14:52:18Z | 2025-02-17T14:52:42Z | https://github.com/streamlit/streamlit/issues/10416 | [
"type:enhancement",
"feature:st.popover"
] | sfc-gh-jrieke | 1 |
plotly/plotly.py | plotly | 4,240 | Font size units missing in docs | Looking at https://plotly.com/python/reference/layout/#layout-font, we see
> **size**
> *Code:* `fig.update_layout(font_size=<VALUE>)`
> *Type:* number greater than or equal to 1
> *Default:* `12`
But what are the units of this number? Points? Pixels? cm? in? Experimentation shows that it's most likely points, but an official record of that would be nice.
Hopefully this doesn't take more than a couple of minutes. Thank you for your time. | open | 2023-06-09T10:48:39Z | 2025-03-18T13:46:34Z | https://github.com/plotly/plotly.py/issues/4240 | [
"bug",
"P3",
"documentation"
] | madphysicist | 1 |
google-research/bert | nlp | 1,219 | PC requirements for running BERT - do you need a GPU with lots of VRAM or just a fast CPU and lots of RAM? | Hi,
We've got a postgrad student doing a research project with BERT, among other things, and I've been asked to sort out a PC for her to work with.
The other work she needs to do (mainly GIS and some big data analysis with Python / R) needs a decent CPU, lots of RAM and fast local storage anyway.
I've read the 'Readme' for BERT, and gone through the forum, and it's not quite clear whether I need a GPU with lots of VRAM or not.
I suspect that it's not required, as the work can be done on CPU only .. but I want to be sure before spending somebody else's money. :-)
If those who have experience with running BERT (without using the Google TPUs) can let me know what they were running it on, and the performance they had, I would greatly appreciate it. :-)
If it helps .. the PC I'm looking at for the research work would include a 10-core i9 CPU, 128Gb of RAM and a 2 Tb NVMe super-fast hard drive.
My main question is whether I need to include something like a 10Gb or 12Gb VRAM GPU card as well.
Thanks in advance.
Brendon
| open | 2021-04-13T21:11:10Z | 2021-04-13T21:11:10Z | https://github.com/google-research/bert/issues/1219 | [] | brendonsly | 0 |
jupyter/nbviewer | jupyter | 1,060 | nbviewer isn't rendering my notebook and keeps giving me a 503 backend read error | Please is there a maximum size for jupyter notebooks which nbviewer can render? I have a jupyter notebook of about 200MB but can't seem to render it on nbviewer no matter what I try. As I know, the maximum size of jupyter notebook that Github will render is 25MB (without storing it as LFS), so I tried using HuggingFace, but after I input the link, it doesn't render on nbviewer, it just keeps loading for minutes to hours and then returns a 503 backend read error. I have also tried uploading the notebook in Google colab and using the link but I'm encountering the same problem.
Below is the link that HuggingFace generated for me, but it isn't rendering on nbviewer. Loads for some time and then gives me the 503 error:
https://nbviewer.org/urls/huggingface.co/spaces/OgeAno/Customer_Segmentation/resolve/main/Bank_Customer_Segmentation_17-02-24.ipynb
Please could someone suggest to me any other way I can achieve this. Any help would be appreciated. All I want is to render the Jupyter notebook on nbviewer. Thank you. | open | 2024-05-27T00:33:30Z | 2024-06-02T21:52:13Z | https://github.com/jupyter/nbviewer/issues/1060 | [] | OgeAno | 0 |
erdewit/ib_insync | asyncio | 492 | WARNING: An illegal reflective access operation has occurred | When I use IBC, these warnings are shown on macOS 12.4. Can somebody tell me how to fix them? Or is it an ibcAlpha/IBC issue? Thanks.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by feature.search.recorder.JtsMultiLookAndFeel (file:/XXX/jars/twslaunch-1012.jar) to method javax.swing.UIManager.getLAFState()
WARNING: Please consider reporting this to the maintainers of feature.search.recorder.JtsMultiLookAndFeel
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Warning: Nashorn engine is planned to be removed from a future JDK release
Warning: Nashorn engine is planned to be removed from a future JDK release | closed | 2022-07-21T12:59:51Z | 2022-07-24T03:55:17Z | https://github.com/erdewit/ib_insync/issues/492 | [] | giphoneix | 2 |
modelscope/data-juicer | streamlit | 87 | [Bug]: Cannot install simhash-py | ### Before Reporting
- [X] I have pulled the latest code of the main branch to run again and the bug still exists.
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you ask a question using the Question template.)
### Search before reporting
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs.
### OS
Ubuntu
### Installation Method
from source
### Data-Juicer Version
_No response_
### Python Version
3.9
### Describe the bug
/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/Cython/Compiler/Main.py:381: FutureWarning: Cython directive 'language_level' not set, using '3str' for now (Py3). This has changed from earlier releases! File: /tmp/pip-install-642i4337/simhash-py_707a16c4f0d24a878223a92ec6376dbd/simhash/simhash.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
Error compiling Cython file:
------------------------------------------------------------
...
import hashlib
import struct
from simhash cimport compute as c_compute
^
------------------------------------------------------------
simhash/simhash.pyx:4:0: 'simhash/compute.pxd' not found
Error compiling Cython file:
------------------------------------------------------------
...
import hashlib
import struct
from simhash cimport compute as c_compute
from simhash cimport find_all as c_find_all
^
------------------------------------------------------------
simhash/simhash.pyx:5:0: 'simhash/find_all.pxd' not found
Error compiling Cython file:
------------------------------------------------------------
...
Find the set of all matches within the provided vector of hashes.
The provided hashes are manipulated in place, but upon completion are
restored to their original state.
'''
cdef matches_t results_set = c_find_all(hashes, number_of_blocks, different_bits)
^
------------------------------------------------------------
simhash/simhash.pyx:26:9: 'matches_t' is not a type identifier
Error compiling Cython file:
------------------------------------------------------------
...
The provided hashes are manipulated in place, but upon completion are
restored to their original state.
'''
cdef matches_t results_set = c_find_all(hashes, number_of_blocks, different_bits)
cdef vector[match_t] results_vector
^
------------------------------------------------------------
simhash/simhash.pyx:27:9: 'vector' is not a type identifier
Error compiling Cython file:
------------------------------------------------------------
...
The provided hashes are manipulated in place, but upon completion are
restored to their original state.
'''
cdef matches_t results_set = c_find_all(hashes, number_of_blocks, different_bits)
cdef vector[match_t] results_vector
^
------------------------------------------------------------
simhash/simhash.pyx:27:9: 'vector' is not a type identifier
Error compiling Cython file:
------------------------------------------------------------
...
# Unpacks the binary bytes in digest into a Python integer
return struct.unpack('>Q', digest)[0] & 0xFFFFFFFFFFFFFFFF
def compute(hashes):
'''Compute the simhash of a vector of hashes.'''
return c_compute(hashes)
^
------------------------------------------------------------
simhash/simhash.pyx:17:11: 'c_compute' is not a constant, variable or function identifier
Error compiling Cython file:
------------------------------------------------------------
...
Find the set of all matches within the provided vector of hashes.
The provided hashes are manipulated in place, but upon completion are
restored to their original state.
'''
cdef matches_t results_set = c_find_all(hashes, number_of_blocks, different_bits)
^
------------------------------------------------------------
simhash/simhash.pyx:26:33: 'c_find_all' is not a constant, variable or function identifier
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-642i4337/simhash-py_707a16c4f0d24a878223a92ec6376dbd/setup.py", line 38, in <module>
setup(
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 364, in run
self.run_command("build")
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/Cython/Distutils/build_ext.py", line 130, in build_extension
new_ext = cythonize(
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
cythonize_one(*args)
File "/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: simhash/simhash.pyx
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /home/kemove/anaconda3/envs/sakura/bin/python -u -c '
exec(compile('"'"''"'"''"'"'
# This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
#
# - It imports setuptools before invoking setup.py, to enable projects that directly
# import from `distutils.core` to work with newer packaging standards.
# - It provides a clear error message when setuptools is not installed.
# - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
# setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
# manifest_maker: standard file '"'"'-c'"'"' not found".
# - It generates a shim setup.py, for handling setup.cfg-only projects.
import os, sys, tokenize
try:
import setuptools
except ImportError as error:
print(
"ERROR: Can not execute `setup.py` since setuptools is not available in "
"the build environment.",
file=sys.stderr,
)
sys.exit(1)
__file__ = %r
sys.argv[0] = __file__
if os.path.exists(__file__):
filename = __file__
with tokenize.open(__file__) as f:
setup_py_code = f.read()
else:
filename = "<auto-generated setuptools caller>"
setup_py_code = "from setuptools import setup; setup()"
exec(compile(setup_py_code, filename, "exec"))
'"'"''"'"''"'"' % ('"'"'/tmp/pip-install-642i4337/simhash-py_707a16c4f0d24a878223a92ec6376dbd/setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' bdist_wheel -d /tmp/pip-wheel-oyoenh4d
cwd: /tmp/pip-install-642i4337/simhash-py_707a16c4f0d24a878223a92ec6376dbd/
Building wheel for simhash-py (setup.py) ... error
ERROR: Failed building wheel for simhash-py
Running setup.py clean for simhash-py
Running command python setup.py clean
Building from Cython
/home/kemove/anaconda3/envs/sakura/lib/python3.9/site-packages/setuptools/dist.py:723: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
running clean
removing 'build/lib.linux-x86_64-3.9' (and everything under it)
'build/bdist.linux-x86_64' does not exist -- can't clean it
'build/scripts-3.9' does not exist -- can't clean it
Failed to build simhash-py
ERROR: Could not build wheels for simhash-py, which is required to install pyproject.toml-based projects
WARNING: There was an error checking the latest version of pip.
### To Reproduce
cd data-juicer
pip install -v -e .
### Configs
_No response_
### Logs
_No response_
### Screenshots
_No response_
### Additional
_No response_ | closed | 2023-11-17T11:31:07Z | 2023-11-23T04:01:12Z | https://github.com/modelscope/data-juicer/issues/87 | [
"bug",
"wontfix"
] | AnitaSherry | 7 |
microsoft/unilm | nlp | 1,139 | Is there a plan to release the DiT pretraining code publicly? I am very interested in it. | **Describe**
Model I am using (UniLM, MiniLM, LayoutLM ...):
| open | 2023-06-14T01:43:15Z | 2023-08-29T08:50:58Z | https://github.com/microsoft/unilm/issues/1139 | [] | Masterchenyong | 1 |
aiogram/aiogram | asyncio | 779 | Make local files wrapper bi-directional | TelegramAPIServer in local mode can map file paths to the file system. Let's make this mechanism bi-directional (server-to-client and client-to-server). | closed | 2021-12-12T16:03:46Z | 2021-12-19T03:44:15Z | https://github.com/aiogram/aiogram/issues/779 | [
"3.x"
] | JrooTJunior | 0 |
comfyanonymous/ComfyUI | pytorch | 7,266 | WanVideoTextEncode Allocation on device | ### Your question
# ComfyUI Error Report
## Error Details
- **Node ID:** 17
- **Node Type:** WanVideoImageClipEncode
- **Exception Type:** torch.cuda.OutOfMemoryError
- **Exception Message:** Allocation on device
## Stack Trace
```
File "D:\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper-main\nodes.py", line 972, in process
clip_vision.model.to(device)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
return self._apply(convert)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
param_applied = fn(param)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
return t.to(
```
## System Information
- **ComfyUI Version:** 0.3.26
- **Arguments:** D:\ComfyUI\main.py --auto-launch --preview-method auto --disable-cuda-malloc
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.3.1+cu121
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 3050 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 8589279232
- **VRAM Free:** 16989184
- **Torch VRAM Total:** 7851737088
- **Torch VRAM Free:** 16989184
## Logs
```
2025-03-16T15:49:52.560331 - File "D:\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1340, in default_cache_update
2025-03-16T15:49:52.561308 - 2025-03-16T15:49:52.561308 - await asyncio.gather(a, b, c, d, e)2025-03-16T15:49:52.561308 -
2025-03-16T15:49:52.561308 - File "D:\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1327, in get_cache
2025-03-16T15:49:52.562283 - 2025-03-16T15:49:52.562283 - json_obj = await core.get_data(uri, True)2025-03-16T15:49:52.562283 -
2025-03-16T15:49:52.562283 - File "D:\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_core.py", line 620, in get_data
2025-03-16T15:49:52.563259 - 2025-03-16T15:49:52.563259 - async with session.get(uri) as resp:2025-03-16T15:49:52.563259 -
2025-03-16T15:49:52.563259 - File "D:\ComfyUI\python\lib\site-packages\aiohttp\client.py", line 1425, in __aenter__
2025-03-16T15:49:52.564237 - 2025-03-16T15:49:52.564237 - self._resp: _RetType = await self._coro2025-03-16T15:49:52.564237 -
2025-03-16T15:49:52.564237 - File "D:\ComfyUI\python\lib\site-packages\aiohttp\client.py", line 703, in _request
2025-03-16T15:49:52.565213 - 2025-03-16T15:49:52.565213 - conn = await self._connector.connect(2025-03-16T15:49:52.565213 -
2025-03-16T15:49:52.566188 - File "D:\ComfyUI\python\lib\site-packages\aiohttp\connector.py", line 548, in connect
2025-03-16T15:49:52.566188 - 2025-03-16T15:49:52.566188 - proto = await self._create_connection(req, traces, timeout)2025-03-16T15:49:52.566188 -
2025-03-16T15:49:52.566188 - File "D:\ComfyUI\python\lib\site-packages\aiohttp\connector.py", line 1056, in _create_connection
2025-03-16T15:49:52.567164 - 2025-03-16T15:49:52.567164 - _, proto = await self._create_direct_connection(req, traces, timeout)2025-03-16T15:49:52.567164 -
2025-03-16T15:49:52.567164 - File "D:\ComfyUI\python\lib\site-packages\aiohttp\connector.py", line 1368, in _create_direct_connection
2025-03-16T15:49:52.568140 - 2025-03-16T15:49:52.568140 - raise ClientConnectorDNSError(req.connection_key, exc) from exc2025-03-16T15:49:52.568140 -
2025-03-16T15:49:52.568140 - aiohttp.client_exceptions2025-03-16T15:49:52.568140 - .2025-03-16T15:49:52.568140 - ClientConnectorDNSError2025-03-16T15:49:52.568140 - : 2025-03-16T15:49:52.569116 - Cannot connect to host raw.githubusercontent.com:443 ssl:default [getaddrinfo failed]2025-03-16T15:49:52.569116 -
2025-03-16T15:49:54.075727 - --------------
2025-03-16T15:49:54.075727 - [91m ### Mixlab Nodes: [93mLoaded
2025-03-16T15:49:54.085488 - json_repair## OK2025-03-16T15:49:54.085488 -
2025-03-16T15:49:54.097201 - ChatGPT.available True
2025-03-16T15:49:54.098177 - edit_mask.available True
2025-03-16T15:49:55.988420 - ## clip_interrogator_model not found: D:\ComfyUI\models\clip_interrogator\Salesforce\blip-image-captioning-base, pls download from https://huggingface.co/Salesforce/blip-image-captioning-base2025-03-16T15:49:55.988420 -
2025-03-16T15:49:55.988420 - ClipInterrogator.available True
2025-03-16T15:49:56.051859 - ## text_generator_model not found: D:\ComfyUI\models\prompt_generator\text2image-prompt-generator, pls download from https://huggingface.co/succinctly/text2image-prompt-generator/tree/main2025-03-16T15:49:56.051859 -
2025-03-16T15:49:56.051859 - ## zh_en_model not found: D:\ComfyUI\models\prompt_generator\opus-mt-zh-en, pls download from https://huggingface.co/Helsinki-NLP/opus-mt-zh-en/tree/main2025-03-16T15:49:56.051859 -
2025-03-16T15:49:56.051859 - PromptGenerate.available True
2025-03-16T15:49:56.052835 - ChinesePrompt.available True
2025-03-16T15:49:56.052835 - RembgNode_.available True
2025-03-16T15:49:56.059667 - ffmpeg could not be found. Using ffmpeg from imageio-ffmpeg.2025-03-16T15:49:56.059667 -
2025-03-16T15:49:56.540836 - TripoSR.available
2025-03-16T15:49:56.541812 - MiniCPMNode.available
2025-03-16T15:49:56.592564 - Scenedetect.available
2025-03-16T15:49:56.719444 - FishSpeech.available
2025-03-16T15:49:56.732132 - SenseVoice.available
2025-03-16T15:49:56.974358 - Whisper.available False
2025-03-16T15:49:56.977286 - fal-client## OK2025-03-16T15:49:56.977286 -
2025-03-16T15:49:56.989975 - FalVideo.available
2025-03-16T15:49:56.990950 - [93m -------------- [0m
2025-03-16T15:49:58.448487 - D:\ComfyUI\python\lib\site-packages\albumentations\__init__.py:13: UserWarning: A new version of Albumentations is available: 2.0.5 (you have 1.4.15). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
check_for_updates()
2025-03-16T15:49:58.556823 - Nvidia APEX normalization not installed, using PyTorch LayerNorm2025-03-16T15:49:58.557799 -
2025-03-16T15:49:58.661254 - [0;33m[ReActor][0m - [38;5;173mSTATUS[0m - [0;32mRunning v0.4.1-b11 in ComfyUI[0m2025-03-16T15:49:58.662230 -
2025-03-16T15:49:58.680774 - Torch version: 2.3.1+cu1212025-03-16T15:49:58.680774 -
2025-03-16T15:49:59.320563 - Warning: Could not load sageattention: No module named 'sageattention'2025-03-16T15:49:59.320563 -
2025-03-16T15:49:59.320563 - sageattention package is not installed2025-03-16T15:49:59.320563 -
2025-03-16T15:49:59.335204 - (pysssss:WD14Tagger) [DEBUG] Available ORT providers: TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider2025-03-16T15:49:59.335204 -
2025-03-16T15:49:59.335204 - (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider2025-03-16T15:49:59.335204 -
2025-03-16T15:49:59.370339 - Workspace manager - Openning file hash dict2025-03-16T15:49:59.370339 -
2025-03-16T15:49:59.370339 - 🦄🦄Loading: Workspace Manager (V2.1.0)2025-03-16T15:49:59.370339 -
2025-03-16T15:49:59.409379 - ------------------------------------------2025-03-16T15:49:59.409379 -
2025-03-16T15:49:59.409379 - [34mComfyroll Studio v1.76 : [92m 175 Nodes Loaded[0m2025-03-16T15:49:59.409379 -
2025-03-16T15:49:59.409379 - ------------------------------------------2025-03-16T15:49:59.410356 -
2025-03-16T15:49:59.410356 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md2025-03-16T15:49:59.410356 -
2025-03-16T15:49:59.410356 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki2025-03-16T15:49:59.410356 -
2025-03-16T15:49:59.410356 - ------------------------------------------2025-03-16T15:49:59.410356 -
2025-03-16T15:49:59.420116 - [36;20m[comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts[0m
2025-03-16T15:49:59.421091 - [36;20m[comfyui_controlnet_aux] | INFO -> Using symlinks: False[0m
2025-03-16T15:49:59.421091 - [36;20m[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider'][0m
2025-03-16T15:49:59.447442 - DWPose: Onnxruntime with acceleration providers detected2025-03-16T15:49:59.447442 -
2025-03-16T15:49:59.473795 - [1;35m### [START] ComfyUI AlekPet Nodes [1;34mv1.0.22[0m[1;35m ###[0m2025-03-16T15:49:59.473795 -
2025-03-16T15:50:03.776582 - [92mNode -> ArgosTranslateNode: [93mArgosTranslateCLIPTextEncodeNode, ArgosTranslateTextNode[0m [92m[Loading] [0m2025-03-16T15:50:03.776582 -
2025-03-16T15:50:03.787319 - [92mNode -> DeepTranslatorNode: [93mDeepTranslatorCLIPTextEncodeNode, DeepTranslatorTextNode[0m [92m[Loading] [0m2025-03-16T15:50:03.787319 -
2025-03-16T15:50:03.795126 - [92mNode -> GoogleTranslateNode: [93mGoogleTranslateCLIPTextEncodeNode, GoogleTranslateTextNode[0m [92m[Loading] [0m2025-03-16T15:50:03.795126 -
2025-03-16T15:50:03.800982 - [92mNode -> ExtrasNode: [93mPreviewTextNode, HexToHueNode, ColorsCorrectNode[0m [92m[Loading] [0m2025-03-16T15:50:03.801958 -
2025-03-16T15:50:03.805862 - [92mNode -> PoseNode: [93mPoseNode[0m [92m[Loading] [0m2025-03-16T15:50:03.805862 -
2025-03-16T15:50:03.877619 - [92mNode -> IDENode: [93mIDENode[0m [92m[Loading] [0m2025-03-16T15:50:03.877619 -
2025-03-16T15:50:04.148267 - [92mNode -> PainterNode: [93mPainterNode[0m [92m[Loading] [0m2025-03-16T15:50:04.148267 -
2025-03-16T15:50:04.148267 - [1;35m### [END] ComfyUI AlekPet Nodes ###[0m2025-03-16T15:50:04.148267 -
2025-03-16T15:50:04.687324 - [34mFizzleDorf Custom Nodes: [92mLoaded[0m2025-03-16T15:50:04.687324 -
2025-03-16T15:50:04.713676 - # 😺dzNodes: LayerStyle -> [1;33mInvalid FONT directory, default to be used. check D:\ComfyUI\custom_nodes\ComfyUI_LayerStyle\resource_dir.ini[m2025-03-16T15:50:04.713676 -
2025-03-16T15:50:04.713676 - # 😺dzNodes: LayerStyle -> [1;33mInvalid LUT directory, default to be used. check D:\ComfyUI\custom_nodes\ComfyUI_LayerStyle\resource_dir.ini[m2025-03-16T15:50:04.713676 -
2025-03-16T15:50:04.713676 - # 😺dzNodes: LayerStyle -> [1;33mFind 1 LUTs in D:\ComfyUI\custom_nodes\ComfyUI_LayerStyle\lut[m2025-03-16T15:50:04.713676 -
2025-03-16T15:50:04.714652 - # 😺dzNodes: LayerStyle -> [1;33mFind 1 Fonts in D:\ComfyUI\custom_nodes\ComfyUI_LayerStyle\font[m2025-03-16T15:50:04.714652 -
2025-03-16T15:50:04.905010 - [36;20m[comfy_mtb] | INFO -> loaded [96m83[0m nodes successfuly[0m
2025-03-16T15:50:04.905010 - [36;20m[comfy_mtb] | INFO -> Some nodes (2) could not be loaded. This can be ignored, but go to http://127.0.0.1:8188/mtb if you want more information.[0m
2025-03-16T15:50:04.946000 -
[36mEfficiency Nodes:[0m Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...[92mSuccess![0m2025-03-16T15:50:04.946000 -
2025-03-16T15:50:04.951857 - Patching UNetModel.forward2025-03-16T15:50:04.951857 -
2025-03-16T15:50:04.951857 - UNetModel.forward has been successfully patched.2025-03-16T15:50:04.951857 -
2025-03-16T15:50:04.963569 - [1;32m[Power Noise Suite]: 🦚🦚🦚 [93m[3mSqueaa-squee!!![0m 🦚🦚🦚2025-03-16T15:50:04.963569 -
2025-03-16T15:50:04.963569 - [1;32m[Power Noise Suite]:[0m Tamed [93m11[0m wild nodes.2025-03-16T15:50:04.963569 -
2025-03-16T15:50:04.976257 - Nvidia APEX normalization not installed, using PyTorch LayerNorm2025-03-16T15:50:04.976257 -
2025-03-16T15:50:05.093883 -
2025-03-16T15:50:05.093883 - [92m[rgthree] Loaded 42 magnificent nodes.[00m2025-03-16T15:50:05.093883 -
2025-03-16T15:50:05.093883 - [33m[rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.[00m2025-03-16T15:50:05.093883 -
2025-03-16T15:50:05.093883 -
2025-03-16T15:50:05.106572 - Traceback (most recent call last):
File "D:\ComfyUI\nodes.py", line 2147, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 879, in exec_module
File "<frozen importlib._bootstrap_external>", line 1016, in get_code
File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'D:\\ComfyUI\\custom_nodes\\was-node-suite-comfyui\\__init__.py'
2025-03-16T15:50:05.106572 - Cannot import D:\ComfyUI\custom_nodes\was-node-suite-comfyui module for custom nodes: [Errno 2] No such file or directory: 'D:\\ComfyUI\\custom_nodes\\was-node-suite-comfyui\\__init__.py'
2025-03-16T15:50:05.119259 - [34mWAS Node Suite: [0mBlenderNeko's Advanced CLIP Text Encode found, attempting to enable `CLIPTextEncode` support.[0m2025-03-16T15:50:05.119259 -
2025-03-16T15:50:05.119259 - [34mWAS Node Suite: [0m`CLIPTextEncode (BlenderNeko Advanced + NSP)` node enabled under `WAS Suite/Conditioning` menu.[0m2025-03-16T15:50:05.119259 -
2025-03-16T15:50:08.864124 - [34mWAS Node Suite: [0mOpenCV Python FFMPEG support is enabled[0m2025-03-16T15:50:08.864124 -
2025-03-16T15:50:08.864124 - [34mWAS Node Suite [93mWarning: [0m`ffmpeg_bin_path` is not set in `D:\ComfyUI\custom_nodes\was-node-suite-comfyui-main\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.[0m2025-03-16T15:50:08.864124 -
2025-03-16T15:50:12.708387 - [34mWAS Node Suite: [0mFinished.[0m [32mLoaded[0m [0m221[0m [32mnodes successfully.[0m2025-03-16T15:50:12.708387 -
2025-03-16T15:50:12.708387 -
[3m[93m"Believe you deserve it and the universe will serve it."[0m[3m - Unknown[0m
2025-03-16T15:50:12.708387 -
2025-03-16T15:50:12.717172 -
Import times for custom nodes:
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\websocket_image_save.py
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_AdvancedRefluxControl-main
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\ControlNet-LLLite-ComfyUI
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\FreeU_Advanced
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_TiledKSampler
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\stability-ComfyUI-nodes
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\Comfyui_TTP_Toolset-main
2025-03-16T15:50:12.717172 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-WD14-Tagger
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_InstantID
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_experiments
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\PowerNoiseSuite
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_JPS-Nodes-main
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\images-grid-comfy-plugin
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2025-03-16T15:50:12.718148 - 0.0 seconds (IMPORT FAILED): D:\ComfyUI\custom_nodes\was-node-suite-comfyui
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_essentials
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation-main
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\Comfyui_CXH_joy_caption-main
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2025-03-16T15:50:12.718148 - 0.0 seconds: D:\ComfyUI\custom_nodes\rgthree-comfy
2025-03-16T15:50:12.719124 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-GGUF-main
2025-03-16T15:50:12.719124 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_FaceAnalysis
2025-03-16T15:50:12.719124 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper-main
2025-03-16T15:50:12.719124 - 0.0 seconds: D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui
2025-03-16T15:50:12.719124 - 0.0 seconds: D:\ComfyUI\custom_nodes\comfyui-workspace-manager
2025-03-16T15:50:12.719124 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-KJNodes
2025-03-16T15:50:12.719124 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2025-03-16T15:50:12.719124 - 0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
2025-03-16T15:50:12.719124 - 0.1 seconds: D:\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-03-16T15:50:12.719124 - 0.1 seconds: D:\ComfyUI\custom_nodes\comfy_mtb
2025-03-16T15:50:12.719124 - 0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Marigold
2025-03-16T15:50:12.719124 - 0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Crystools
2025-03-16T15:50:12.719124 - 0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
2025-03-16T15:50:12.720100 - 0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2025-03-16T15:50:12.720100 - 0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-MimicMotionWrapper
2025-03-16T15:50:12.720100 - 0.1 seconds: D:\ComfyUI\custom_nodes\comfyui-reactor-node
2025-03-16T15:50:12.720100 - 0.1 seconds: D:\ComfyUI\custom_nodes\PuLID_ComfyUI
2025-03-16T15:50:12.720100 - 0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI_LayerStyle
2025-03-16T15:50:12.720100 - 0.3 seconds: D:\ComfyUI\custom_nodes\ComfyUI_FizzNodes
2025-03-16T15:50:12.720100 - 0.5 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Manager
2025-03-16T15:50:12.720100 - 0.5 seconds: D:\ComfyUI\custom_nodes\ComfyUI-SUPIR
2025-03-16T15:50:12.720100 - 0.9 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2025-03-16T15:50:12.720100 - 1.7 seconds: D:\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-master
2025-03-16T15:50:12.720100 - 3.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-AdvancedLivePortrait-main
2025-03-16T15:50:12.720100 - 4.8 seconds: D:\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
2025-03-16T15:50:12.720100 - 5.1 seconds: D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes
2025-03-16T15:50:12.720100 - 7.6 seconds: D:\ComfyUI\custom_nodes\was-node-suite-comfyui-main
2025-03-16T15:50:12.720100 -
2025-03-16T15:50:12.746452 - Starting server
2025-03-16T15:50:12.747427 - To see the GUI go to: http://127.0.0.1:8188
2025-03-16T15:50:14.725313 - D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/photoswipe-lightbox.esm.min.js
2025-03-16T15:50:14.803395 - FETCH DATA from: D:\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2025-03-16T15:50:14.845463 - D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/photoswipe.min.css
2025-03-16T15:50:14.846440 - D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/pickr.min.js
2025-03-16T15:50:15.054224 - D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/model-viewer.min.js
2025-03-16T15:50:15.077649 - D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/classic.min.css
2025-03-16T15:50:15.083505 - D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/juxtapose.min.js
2025-03-16T15:50:15.085457 - D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes\webApp\lib/juxtapose.css
2025-03-16T15:50:54.987818 - got prompt
2025-03-16T15:50:56.629816 - # 😺dzNodes: LayerStyle -> ImageScaleByAspectRatio V2 Processed 1 image(s).
2025-03-16T15:51:00.708645 - <ComfyUI-WanVideoWrapper-main.wanvideo.modules.clip.CLIPModel object at 0x00000238E5E42920>
2025-03-16T15:51:55.961112 - !!! Exception during processing !!! Allocation on device
2025-03-16T15:51:55.963064 - Traceback (most recent call last):
File "D:\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper-main\nodes.py", line 885, in process
encoder.model.to(device)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
return self._apply(convert)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
param_applied = fn(param)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
return t.to(
torch.cuda.OutOfMemoryError: Allocation on device
2025-03-16T15:51:55.963064 - Got an OOM, unloading all loaded models.
2025-03-16T15:51:55.964040 - Prompt executed in 60.97 seconds
2025-03-16T16:27:59.782680 - got prompt
2025-03-16T16:28:00.766073 - <ComfyUI-WanVideoWrapper-main.wanvideo.modules.clip.CLIPModel object at 0x00000238E5E42920>
2025-03-16T16:28:00.860753 - !!! Exception during processing !!! Allocation on device
2025-03-16T16:28:00.861750 - Traceback (most recent call last):
File "D:\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper-main\nodes.py", line 972, in process
clip_vision.model.to(device)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
return self._apply(convert)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
param_applied = fn(param)
File "D:\ComfyUI\python\lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
return t.to(
torch.cuda.OutOfMemoryError: Allocation on device
2025-03-16T16:28:00.861750 - Got an OOM, unloading all loaded models.
2025-03-16T16:28:00.862748 - Prompt executed in 1.06 seconds
```
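The traceback above is raised from `encoder.model.to(device)` (and later `clip_vision.model.to(device)`) in the WanVideoWrapper nodes, and the wrapper responds with "Got an OOM, unloading all loaded models." A minimal sketch of that catch-and-fall-back pattern is below; `OutOfMemoryError` stands in for `torch.cuda.OutOfMemoryError`, and `move_to` is a hypothetical model-placement hook, not the actual ComfyUI API:

```python
class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.cuda.OutOfMemoryError ("Allocation on device")."""

def load_with_fallback(move_to, device="cuda", fallback="cpu"):
    """Try to place a model on `device`; on OOM, fall back to `fallback`.

    In real code the except branch would also free VRAM first, e.g.
    torch.cuda.empty_cache() or ComfyUI's model unloading, before retrying.
    """
    try:
        move_to(device)
        return device
    except OutOfMemoryError:
        move_to(fallback)
        return fallback

# Usage: a fake mover that always OOMs on CUDA, mimicking the log above.
def fake_move(dev):
    if dev == "cuda":
        raise OutOfMemoryError("Allocation on device")

print(load_with_fallback(fake_move))  # → cpu
```

This is only a sketch of the recovery flow, under the assumption that enough host RAM is available for the CPU fallback; it does not change the underlying problem that the 14B model plus encoders exceed available VRAM.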
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":57,"last_link_id":76,"nodes":[{"id":17,"type":"WanVideoImageClipEncode","pos":[875.01025390625,278.4588623046875],"size":[315,266],"flags":{},"order":13,"mode":0,"inputs":[{"name":"clip_vision","localized_name":"clip_vision","label":"clip_vision","type":"CLIP_VISION","link":17},{"name":"image","localized_name":"image","label":"image","type":"IMAGE","link":71},{"name":"vae","localized_name":"vae","label":"vae","type":"WANVAE","link":21}],"outputs":[{"name":"image_embeds","localized_name":"image_embeds","label":"image_embeds","type":"WANVIDIMAGE_EMBEDS","links":[32],"slot_index":0}],"properties":{"Node name for S&R":"WanVideoImageClipEncode"},"widgets_values":[480,872,81,true,0,1,1,true]},{"id":27,"type":"WanVideoSampler","pos":[1271.9066162109375,-365.5596008300781],"size":[315,458],"flags":{},"order":14,"mode":0,"inputs":[{"name":"model","localized_name":"model","label":"model","type":"WANVIDEOMODEL","link":29},{"name":"text_embeds","localized_name":"text_embeds","label":"text_embeds","type":"WANVIDEOTEXTEMBEDS","link":30},{"name":"image_embeds","localized_name":"image_embeds","label":"image_embeds","type":"WANVIDIMAGE_EMBEDS","link":32},{"name":"samples","localized_name":"samples","label":"samples","type":"LATENT","shape":7},{"name":"feta_args","localized_name":"feta_args","label":"feta_args","type":"FETAARGS","shape":7,"link":76},{"name":"context_options","localized_name":"context_options","label":"context_options","type":"WANVIDCONTEXT","shape":7},{"name":"teacache_args","localized_name":"teacache_args","label":"teacache_args","type":"TEACACHEARGS","shape":7},{"name":"flowedit_args","localized_name":"flowedit_args","label":"flowedit_args","type":"FLOWEDITARGS","shape":7,"link":null},{"name":"slg_args","localized_name":"slg_args","label":"slg_args","type":"SLGARGS","shape":7,"link":null}],"outputs":[{"name":"samples","localized_name":"samples","label":"samples","type":"LATENT","links":[33],"slot_index":0}],"properties":{"Node name for 
S&R":"WanVideoSampler"},"widgets_values":[10,6,5,1087608934701559,"randomize",true,"dpm++",0,1,false,"default"]},{"id":32,"type":"WanVideoBlockSwap","pos":[232.6178741455078,-338.61407470703125],"size":[315,130],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"block_swap_args","localized_name":"block_swap_args","label":"block_swap_args","type":"BLOCKSWAPARGS","links":[39],"slot_index":0}],"properties":{"Node name for S&R":"WanVideoBlockSwap"},"widgets_values":[10,false,false,true]},{"id":35,"type":"WanVideoTorchCompileSettings","pos":[124.67726135253906,-627.7935180664062],"size":[390.5999755859375,178],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"torch_compile_args","localized_name":"torch_compile_args","label":"torch_compile_args","type":"WANCOMPILEARGS","links":[],"slot_index":0}],"properties":{"Node name for S&R":"WanVideoTorchCompileSettings"},"widgets_values":["inductor",false,"default",false,64,true]},{"id":37,"type":"VHS_LoadVideo","pos":[3479.03955078125,-84.91875457763672],"size":[247.455078125,262],"flags":{},"order":2,"mode":4,"inputs":[{"name":"meta_batch","localized_name":"meta_batch","label":"批次管理","type":"VHS_BatchManager","shape":7},{"name":"vae","label":"vae","type":"VAE","shape":7}],"outputs":[{"name":"IMAGE","localized_name":"图像","label":"图像","type":"IMAGE","links":[41],"slot_index":0},{"name":"frame_count","localized_name":"frame_count","label":"帧计数","type":"INT"},{"name":"audio","localized_name":"audio","label":"音频","type":"VHS_AUDIO"},{"name":"video_info","localized_name":"video_info","label":"视频信息","type":"VHS_VIDEOINFO"}],"properties":{"Node name for S&R":"VHS_LoadVideo"},"widgets_values":{"video":"WanVideo2_1_00026.mp4","force_rate":0,"force_size":"Disabled","custom_width":512,"custom_height":512,"frame_load_cap":0,"skip_first_frames":0,"select_every_nth":1,"choose video to 
upload":"image","videopreview":{"paused":false,"hidden":false,"params":{"force_rate":0,"filename":"WanVideo2_1_00026.mp4","select_every_nth":1,"frame_load_cap":0,"format":"video/mp4","skip_first_frames":0,"type":"input"},"muted":false}}},{"id":38,"type":"VHS_VideoCombine","pos":[3952.923583984375,-139.33578491210938],"size":[414.69610595703125,238],"flags":{},"order":10,"mode":4,"inputs":[{"name":"images","localized_name":"images","label":"图像","type":"IMAGE","link":41},{"name":"audio","localized_name":"audio","label":"音频","type":"VHS_AUDIO","shape":7},{"name":"meta_batch","localized_name":"meta_batch","label":"批次管理","type":"VHS_BatchManager","shape":7},{"name":"vae","label":"vae","type":"VAE","shape":7}],"outputs":[{"name":"Filenames","localized_name":"Filenames","label":"文件名","type":"VHS_FILENAMES"}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":16,"loop_count":0,"filename_prefix":"wan/WanVideo2_1","format":"image/gif","pingpong":false,"save_output":true,"videopreview":{"paused":false,"hidden":false,"params":{"filename":"WanVideo2_1_00028.gif","workflow":"WanVideo2_1_00011.png","fullpath":"N:\\AI\\ComfyUI\\output\\WanVideo2_1_00011.mp4","format":"image/gif","subfolder":"wan","type":"output","frame_rate":16}}}},{"id":42,"type":"LayerUtility: ImageScaleByAspectRatio V2","pos":[-432,207],"size":[504,330],"flags":{},"order":11,"mode":0,"inputs":[{"name":"image","localized_name":"image","label":"图像","type":"IMAGE","shape":7,"link":42},{"name":"mask","localized_name":"mask","label":"遮罩","type":"MASK","shape":7}],"outputs":[{"name":"image","localized_name":"image","label":"图像","type":"IMAGE","links":[71],"slot_index":0},{"name":"mask","localized_name":"mask","label":"遮罩","type":"MASK"},{"name":"original_size","localized_name":"original_size","label":"原始大小","type":"BOX"},{"name":"width","localized_name":"width","label":"width","type":"INT"},{"name":"height","localized_name":"height","label":"height","type":"INT"}],"properties":{"Node 
name for S&R":"LayerUtility: ImageScaleByAspectRatio V2"},"widgets_values":["original",1,1,"letterbox","lanczos","8","longest",1024,"#000000"],"color":"rgba(38, 73, 116, 0.7)"},{"id":30,"type":"VHS_VideoCombine","pos":[1700,-240],"size":[903.1542358398438,310],"flags":{},"order":16,"mode":0,"inputs":[{"name":"images","localized_name":"images","label":"图像","type":"IMAGE","link":36},{"name":"audio","localized_name":"audio","label":"音频","type":"VHS_AUDIO","shape":7},{"name":"meta_batch","localized_name":"meta_batch","label":"批次管理","type":"VHS_BatchManager","shape":7},{"name":"vae","label":"vae","type":"VAE","shape":7}],"outputs":[{"name":"Filenames","localized_name":"Filenames","label":"文件名","type":"VHS_FILENAMES"}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":16,"loop_count":0,"filename_prefix":"wan/WanVideo2_1","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"pingpong":false,"save_output":true,"videopreview":{"paused":false,"hidden":false,"params":{"filename":"WanVideo2_1_00002_tlppy_1740733355_pcdrg_1740733358.mp4","workflow":"WanVideo2_1_00011.png","fullpath":"N:\\AI\\ComfyUI\\output\\WanVideo2_1_00011.mp4","format":"video/h264-mp4","subfolder":"wan","type":"output","frame_rate":16}}}},{"id":16,"type":"WanVideoTextEncode","pos":[675.8850708007812,-36.032100677490234],"size":[400,200],"flags":{},"order":12,"mode":0,"inputs":[{"name":"t5","localized_name":"t5","label":"t5","type":"WANTEXTENCODER","link":15},{"name":"positive_prompt","label":"positive_prompt","type":"STRING","widget":{"name":"positive_prompt"},"link":75,"slot_index":1},{"name":"model_to_offload","localized_name":"model_to_offload","label":"model_to_offload","type":"WANVIDEOMODEL","shape":7,"link":null}],"outputs":[{"name":"text_embeds","localized_name":"text_embeds","label":"text_embeds","type":"WANVIDEOTEXTEMBEDS","links":[30],"slot_index":0}],"properties":{"Node name for 
S&R":"WanVideoTextEncode"},"widgets_values":["一个男人突然出现强吻女人,女人惊慌失措","",true,true,true]},{"id":56,"type":"TextInput_","pos":[-306,-165],"size":[400,200],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"STRING","localized_name":"字符串","label":"字符串","type":"STRING","shape":3,"links":[75]}],"properties":{"Node name for S&R":"TextInput_"},"widgets_values":["一个男人突然出现强吻女人,女人惊慌失措",true]},{"id":57,"type":"WanVideoEnhanceAVideo","pos":[769,-671],"size":[315,106],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"feta_args","localized_name":"feta_args","label":"feta_args","type":"FETAARGS","shape":3,"links":[76],"slot_index":0}],"properties":{"Node name for S&R":"WanVideoEnhanceAVideo"},"widgets_values":[2,0,1]},{"id":18,"type":"LoadImage","pos":[-900,205],"size":[315,314],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","localized_name":"图像","label":"图像","type":"IMAGE","links":[42],"slot_index":0},{"name":"MASK","localized_name":"遮罩","label":"遮罩","type":"MASK"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["56243ae26b44e33635cf318f81a8a01c6108f76288ee3897ef3b95e27d4ad605.webp.jpg","image"]},{"id":13,"type":"LoadWanVideoClipTextEncoder","pos":[153.7196502685547,226.0297088623047],"size":[510.6601257324219,106],"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"wan_clip_vision","localized_name":"wan_clip_vision","label":"wan_clip_vision","type":"CLIP_VISION","links":[17],"slot_index":0}],"properties":{"Node name for S&R":"LoadWanVideoClipTextEncoder"},"widgets_values":["open-clip-xlm-roberta-large-vit-huge-14_visual_fp32.safetensors","fp16","offload_device"]},{"id":11,"type":"LoadWanVideoT5TextEncoder","pos":[224.15325927734375,-34.481563568115234],"size":[377.1661376953125,130],"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[{"name":"wan_t5_model","localized_name":"wan_t5_model","label":"wan_t5_model","type":"WANTEXTENCODER","links":[15],"slot_index":0}],"properties":{"Node name for 
S&R":"LoadWanVideoT5TextEncoder"},"widgets_values":["umt5-xxl-enc-bf16.safetensors","bf16","offload_device","disabled"]},{"id":21,"type":"WanVideoVAELoader","pos":[401.8250427246094,393.2132873535156],"size":[315,82],"flags":{},"order":8,"mode":0,"inputs":[],"outputs":[{"name":"vae","localized_name":"vae","label":"vae","type":"WANVAE","links":[21,34],"slot_index":0}],"properties":{"Node name for S&R":"WanVideoVAELoader"},"widgets_values":["Wan2_1_VAE_fp32.safetensors","bf16"]},{"id":28,"type":"WanVideoDecode","pos":[1294.1009521484375,208.92510986328125],"size":[315,174],"flags":{},"order":15,"mode":0,"inputs":[{"name":"vae","localized_name":"vae","label":"vae","type":"WANVAE","link":34},{"name":"samples","localized_name":"samples","label":"samples","type":"LATENT","link":33}],"outputs":[{"name":"images","localized_name":"images","label":"images","type":"IMAGE","links":[36],"slot_index":0}],"properties":{"Node name for S&R":"WanVideoDecode"},"widgets_values":[true,272,272,144,128]},{"id":22,"type":"WanVideoModelLoader","pos":[620.3950805664062,-357.8426818847656],"size":[477.4410095214844,226.43276977539062],"flags":{},"order":9,"mode":0,"inputs":[{"name":"compile_args","localized_name":"compile_args","label":"compile_args","type":"WANCOMPILEARGS","shape":7},{"name":"block_swap_args","localized_name":"block_swap_args","label":"block_swap_args","type":"BLOCKSWAPARGS","shape":7,"link":39},{"name":"lora","localized_name":"lora","label":"lora","type":"WANVIDLORA","shape":7},{"name":"vram_management_args","localized_name":"vram_management_args","label":"vram_management_args","type":"VRAM_MANAGEMENTARGS","shape":7}],"outputs":[{"name":"model","localized_name":"model","label":"model","type":"WANVIDEOMODEL","links":[29],"slot_index":0}],"properties":{"Node name for 
S&R":"WanVideoModelLoader"},"widgets_values":["Wan2_1-I2V-14B-480P_fp8_e4m3fn.safetensors","bf16","fp8_e4m3fn","offload_device","sdpa"]}],"links":[[15,11,0,16,0,"WANTEXTENCODER"],[17,13,0,17,0,"WANCLIP"],[21,21,0,17,2,"VAE"],[29,22,0,27,0,"WANVIDEOMODEL"],[30,16,0,27,1,"WANVIDEOTEXTEMBEDS"],[32,17,0,27,2,"WANVIDIMAGE_EMBEDS"],[33,27,0,28,1,"LATENT"],[34,21,0,28,0,"VAE"],[36,28,0,30,0,"IMAGE"],[39,32,0,22,1,"BLOCKSWAPARGS"],[41,37,0,38,0,"IMAGE"],[42,18,0,42,0,"IMAGE"],[71,42,0,17,1,"IMAGE"],[75,56,0,16,1,"STRING"],[76,57,0,27,4,"FETAARGS"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7972024500000021,"offset":[662.8885121606961,772.1018519744364]},"VHS_KeepIntermediate":true,"VHS_MetadataImage":true,"0246.VERSION":[0,0,4],"VHS_latentpreviewrate":0,"VHS_latentpreview":true,"groupNodes":{},"node_versions":{"ComfyUI-WanVideoWrapper":"c83f47e4d97b5891058555df16db5e33d16afab1","ComfyUI-VideoHelperSuite":"2c25b8b53835aaeb63f831b3137c705cf9f85dce","comfy-core":"0.3.14"}},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
### Logs
```powershell
```
### Other
_No response_ | open | 2025-03-16T08:35:00Z | 2025-03-16T09:51:28Z | https://github.com/comfyanonymous/ComfyUI/issues/7266 | [
"User Support"
] | xuanyoyo | 2 |
apachecn/ailearning | python | 542 | Join us | Hi, how can I join the group chat to learn? Why can't I join? Are there any requirements? | closed | 2019-09-03T02:40:48Z | 2021-09-07T17:45:14Z | https://github.com/apachecn/ailearning/issues/542 | [] | achievejia | 1
chezou/tabula-py | pandas | 220 | Typo in readme | `tabula.convert_into_by_batch("input_directory", output_format='csv', pages='all)`
should be
`tabula.convert_into_by_batch("input_directory", output_format='csv', pages='all')`
which adds a single quote to the end of all | closed | 2020-02-28T19:55:52Z | 2020-02-28T19:56:07Z | https://github.com/chezou/tabula-py/issues/220 | [] | nubonics | 1 |
remsky/Kokoro-FastAPI | fastapi | 170 | Add Instructions to deploy | Currently it is not possible to deploy to DigitalOcean or Render. | closed | 2025-02-14T02:45:08Z | 2025-02-14T14:34:23Z | https://github.com/remsky/Kokoro-FastAPI/issues/170 | [] | chiho13 | 3
marshmallow-code/apispec | rest-api | 918 | Security scheme documentation | Hi,
I believe the optional security suite example provided in `apispec/docs/index.rst` requires the line `security=[{"ApiKeyAuth": []}]` added at line 37. If you don't add this line then the authorization header isn't added to any requests made from the Swagger UI.
Posting here (rather than forking/PR) as I'm not sure if there's a more elegant way of achieving this as part of the block at lines 51-53, but the above is functional. | closed | 2024-06-25T15:00:09Z | 2024-07-10T08:33:19Z | https://github.com/marshmallow-code/apispec/issues/918 | [
"documentation"
] | MattoElGato | 1 |
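As a reference for the issue above, the placement being suggested can be sketched with plain dictionaries. This is a minimal sketch of the resulting OpenAPI document, not apispec itself; the scheme name `ApiKeyAuth` comes from the issue, while the `X-API-Key` header name is an assumed example:

```python
import json

# Minimal sketch of the OpenAPI document the docs example should produce.
# The key point: a top-level "security" entry referencing the scheme by name,
# in addition to declaring the scheme under components/securitySchemes.
spec = {
    "openapi": "3.0.2",
    "info": {"title": "Swagger Petstore", "version": "1.0.0"},
    "components": {
        "securitySchemes": {
            # "X-API-Key" is an assumed header name for illustration.
            "ApiKeyAuth": {"type": "apiKey", "in": "header", "name": "X-API-Key"}
        }
    },
    # Without this top-level entry, Swagger UI never attaches the API key header.
    "security": [{"ApiKeyAuth": []}],
    "paths": {},
}

print(json.dumps(spec["security"]))
```

With apispec, the equivalent would likely be passing `security=[{"ApiKeyAuth": []}]` as an extra keyword to `APISpec(...)`, since extra options become top-level fields of the document.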
JaidedAI/EasyOCR | pytorch | 1,281 | Missing Confidence Score in Paragraph Mode | When extracting text from images in paragraph mode, the confidence scores for the detected text are not included in the results.
Only `bbox` and `text` are in the results.
Any idea of how to solve this? | open | 2024-07-17T20:58:30Z | 2024-07-27T13:09:39Z | https://github.com/JaidedAI/EasyOCR/issues/1281 | [] | devalnor | 2 |
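One common workaround for the issue above is to run without paragraph mode and aggregate the word-level confidences yourself. The sketch below assumes word-level results of the `(bbox, text, confidence)` shape that `paragraph=False` returns; the sample values are made up:

```python
# Hypothetical word-level results, as returned with paragraph=False:
# each entry is (bbox, text, confidence).
word_results = [
    ([[0, 0], [50, 0], [50, 10], [0, 10]], "Hello", 0.98),
    ([[55, 0], [90, 0], [90, 10], [55, 10]], "world", 0.91),
]

def paragraph_confidence(results):
    """Aggregate word-level confidences into one score per paragraph."""
    confidences = [conf for _bbox, _text, conf in results]
    return {
        "text": " ".join(text for _bbox, text, _conf in results),
        "min_confidence": min(confidences),
        "mean_confidence": sum(confidences) / len(confidences),
    }

summary = paragraph_confidence(word_results)
print(summary)
```

Grouping the words into paragraphs first (e.g. by bounding-box proximity) is left out here for brevity.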
chatanywhere/GPT_API_free | api | 68 | Can the key be refreshed? Worried about accidental leakage | As titled. | closed | 2023-07-22T11:00:33Z | 2023-07-24T13:30:56Z | https://github.com/chatanywhere/GPT_API_free/issues/68 | [] | ethanzhao2001 | 1
healthchecks/healthchecks | django | 425 | Issues with using Sendgrid SMTP API | ## Expected Behavior
Expecting to send email with login link
## Current Behavior
SMTP settings seem to error out, unclear what is causing the issue but I'm unsure any information is returned from the inability to connect.
```
healthchecks | Traceback (most recent call last):
healthchecks | File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
healthchecks | self.run()
healthchecks | File "./hc/lib/emails.py", line 23, in run
healthchecks | msg.send()
healthchecks | File "/usr/lib/python3.8/site-packages/django/core/mail/message.py", line 276, in send
healthchecks | return self.get_connection(fail_silently).send_messages([self])
healthchecks | File "/usr/lib/python3.8/site-packages/django/core/mail/backends/smtp.py", line 102, in send_messages
healthchecks | new_conn_created = self.open()
healthchecks | File "/usr/lib/python3.8/site-packages/django/core/mail/backends/smtp.py", line 69, in open
healthchecks | self.connection.login(self.username, self.password)
healthchecks | File "/usr/lib/python3.8/smtplib.py", line 723, in login
healthchecks | (code, resp) = self.auth(
healthchecks | File "/usr/lib/python3.8/smtplib.py", line 635, in auth
healthchecks | (code, resp) = self.docmd("AUTH", mechanism + " " + response)
healthchecks | File "/usr/lib/python3.8/smtplib.py", line 425, in docmd
healthchecks | return self.getreply()
healthchecks | File "/usr/lib/python3.8/smtplib.py", line 398, in getreply
healthchecks | raise SMTPServerDisconnected("Connection unexpectedly closed")
healthchecks | smtplib.SMTPServerDisconnected: Connection unexpectedly closed
```
This was working for me in previous image versions. It is unclear to me what has changed.
## Steps to Reproduce
I'm currently using Sendgrid's SMTP API and have set the following environment variables:
```
- DEFAULT_FROM_EMAIL="<email>"
- EMAIL_HOST=smtp.sendgrid.net
- EMAIL_PORT=587
- EMAIL_HOST_USER=apikey
- EMAIL_HOST_PASSWORD=<api key>
- EMAIL_USE_TLS=True
```
| closed | 2020-09-10T12:27:31Z | 2020-10-05T08:11:34Z | https://github.com/healthchecks/healthchecks/issues/425 | [] | dalanmiller | 6 |
dgtlmoon/changedetection.io | web-scraping | 2,166 | [feature] Add change detection support for tree-sitter to monitor javascript (for errors/bugs/etc and other information) | Please add support for [tree-sitter](https://tree-sitter.github.io/tree-sitter/) or something similar to support JavaScript and WebAssembly monitoring. Here is a fairly straightforward example showing how tree-sitter is used for bug bounty monitoring: https://alexvec.github.io/posts/monitoring-js-files/
Ideally, people could create their own tree-sitter queries and changedetection would do a diff on those each run. | open | 2024-02-06T18:31:22Z | 2024-07-10T11:23:08Z | https://github.com/dgtlmoon/changedetection.io/issues/2166 | [
"enhancement",
"plugin-candidate"
] | ResistanceIsUseless | 1 |
erdewit/ib_insync | asyncio | 705 | can not place order with float quantity | Hello, I wrote stock, vol, stop sell price on excel as below.
[stock.xlsx](https://github.com/erdewit/ib_insync/files/14489807/stock.xlsx)
Then I use the code below to transmit order for me. The code below can not place order on ib tws.
I think the problem might be `'vol': float`.
If I change `'vol': float` to `'vol': int`, the code places the order on IB TWS correctly.
Please advise how I can place an order with a float quantity on IB TWS. Thanks in advance.
```python
from ib_insync import *
import pandas as pd

# Connect to Interactive Brokers (IB)
ib = IB()
ib.connect('127.0.0.1', 7496, clientId=1)

df = pd.read_excel('/Users/ctu1121/Downloads/stock.xlsx', dtype={'stock': str, 'vol': float, 'price': float})  # Error, might be 'vol': float

# Iterate over each row in the DataFrame
for index, row in df.iterrows():
    # Create the contract object for the stock
    stock = Stock(row['stock'], 'SMART', 'USD')
    # Create a market buy order and set FA information
    buy_order = MarketOrder("BUY", row['vol'])
    buy_order.faGroup = '20240302'  # Specify the FA portfolio name
    # Place the market buy order
    buy_trade = ib.placeOrder(stock, buy_order)
    # Create a stop sell order
    stop_price = row['price']  # Get the stop sell price from the Excel file
    sell_order = StopOrder("SELL", row['vol'], stop_price)
    sell_order.faGroup = '20240302'  # Use the same FA portfolio name
    # Place the stop sell order
    sell_trade = ib.placeOrder(stock, sell_order)
    # Print trade information
    print(buy_trade)
    print(sell_trade)

# Disconnect from IB
ib.disconnect()
```
| closed | 2024-03-05T03:09:40Z | 2024-03-05T08:42:26Z | https://github.com/erdewit/ib_insync/issues/705 | [] | ctu1121 | 0 |
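A note on the float-quantity issue above: one possible workaround — a sketch only, under the assumption that fractional shares are simply not enabled for the account, a common reason TWS rejects non-integer quantities — is to round each volume down to a whole share count before building the orders:

```python
import math

def to_whole_shares(vol):
    """Round a possibly fractional volume down to a whole share count."""
    qty = int(math.floor(vol))
    if qty <= 0:
        raise ValueError(f"volume {vol} rounds down to zero shares")
    return qty

# Example volumes as they might come out of the Excel file.
volumes = [10.0, 3.7, 125.2]
print([to_whole_shares(v) for v in volumes])  # [10, 3, 125]
```

The result would then be passed as the quantity argument to `MarketOrder` / `StopOrder` in place of `row['vol']`.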
neuml/txtai | nlp | 109 | Add ONNX support for Embeddings and Pipelines | Add ONNX support for Embeddings and Pipelines.
Sequence to Sequence models (summarization, transcription, translation) will be added later once ONNX support for encoder-decoder models is more mature. | closed | 2021-08-27T22:38:35Z | 2022-10-17T18:04:09Z | https://github.com/neuml/txtai/issues/109 | [] | davidmezzetti | 3 |
dynaconf/dynaconf | fastapi | 1,208 | fix: CLI must discover dynaconf instance when DJANGO_SETTINGS_MODULE points to __init__.py | When `DJANGO_SETTINGS_MODULE=module` and it is `module/__init__.py` the dynaconf CLI must inspect that module and lookup for a LazySettings instance | closed | 2024-12-20T18:27:16Z | 2025-01-21T17:13:27Z | https://github.com/dynaconf/dynaconf/issues/1208 | [
"bug",
"in progress",
"aap"
] | rochacbruno | 0 |
ShishirPatil/gorilla | api | 546 | [BFCL] Is this the same data used for the leaderboard? | hello.
Thank you for making the BFCL public.
I have a question. Is the data in “Berkeley-Feature-Currency-Leaderboard/Data” the same data you used for the leaderboard (https://gorilla.cs.berkeley.edu/leaderboard.html)?
I'm curious if the data used for the leaderboard is blind data, different from github.
Thank you in advance for your response. | closed | 2024-07-24T07:26:46Z | 2024-10-16T07:38:03Z | https://github.com/ShishirPatil/gorilla/issues/546 | [
"BFCL-General"
] | hexists | 4 |
pandas-dev/pandas | data-science | 60,358 | ENH: Strip/Trim `github.event.comment.user.login` on `issue_assign` job in `comment_commands.yml` | ### Feature Type
- [X] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
Comments containing "take" with extra whitespace get skipped by the `issue_assign` job. I've run into this a couple of times because I reflexively press Enter after typing a comment, so it becomes `take\n`, which the job then skips.
### Feature Description
https://github.com/pandas-dev/pandas/blob/6a7685faf104f8582e0e75f1fae58e09ae97e2fe/.github/workflows/comment-commands.yml#L14
Add `trim()` command inside `github.event.comment.body`
```
if: (!github.event.issue.pull_request) && trim(github.event.comment.body) == 'take'
```
### Alternative Solutions
NA; Enhancement is straight forward
### Additional Context
_No response_ | closed | 2024-11-19T00:58:22Z | 2024-11-21T16:31:20Z | https://github.com/pandas-dev/pandas/issues/60358 | [
"Enhancement",
"CI"
] | KevsterAmp | 2 |
SALib/SALib | numpy | 116 | [JOSS review] Adaptations of the `contributing.md` | [JOSS review](https://github.com/openjournals/joss-reviews/issues/97)
The current contribution guidelines provide essential information to interpret the current implementation and objectives, such as the design principle of the decoupling of the individual methods instead of a OOP approach. This is of great help.
However, I suggest to adapt the guidelines as follows:
* The contributing guidelines do not provide additional information about how to contribute new code. For example, provide information on how to provide pull requests (maybe different for bug fixes versus new SA methods) and how revision of new methods is organised (it would be good to formalize this revision procedure, e.g. which default examples should be tested).
* Some additional information about how to report problems could be useful. This could maybe provide classification on the kind of issue: (1) pure python `Error` or (2) irregularities of the SA outcome (the functions work, but output is 'strange'). The first one is basically a bug as such, whereas the second one is more on the interpretation side. Making this difference to the user in the contribution guidelines could be helpful.
* The last section of the guidelines provides information on ideas for the future. I would suggest to get these out of the contribution guidelines and put these in separate issues, so progress can be checked for each of them and closed when finished. The latter is harder to track in the guidelines.
Given these comments, I do think that the addition of some new/custom [labels](https://help.github.com/articles/creating-and-editing-labels-for-issues-and-pull-requests/) could provide sufficient abilities to support these adaptations. For example, a special label for *interpretation* questions/issues and just `bug`. The ideas of new methods could be grouped with label `addmethod`, whereas other features are `enhancement`. The usage of these labels can be explained in the guidelines.
| closed | 2016-12-13T04:14:11Z | 2017-01-06T16:00:05Z | https://github.com/SALib/SALib/issues/116 | [] | stijnvanhoey | 2 |
pytest-dev/pytest-mock | pytest | 349 | mocker.patch does not change return value | In
https://github.com/different-ai/embedbase/blob/601c4c227993b8c081ddf3eeaddeca66423bd1ca/embedbase/test_end_to_end.py#L334
I'm trying to mock `embedbase.utils.get_user_id` returning `user1` but when calling the function it does return None instead, is there something I did wrong or misunderstood?
```py
mocker.patch("embedbase.utils.get_user_id", return_value="user1")
```
Thanks!
| closed | 2023-03-15T07:11:41Z | 2023-04-21T15:15:08Z | https://github.com/pytest-dev/pytest-mock/issues/349 | [
"question"
] | louis030195 | 3 |
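A frequent cause of the symptom in the issue above is patching the wrong target: `mocker.patch` (like `unittest.mock.patch`) replaces the name in the module where it is *looked up*, so if the code under test did `from embedbase.utils import get_user_id`, patching `embedbase.utils.get_user_id` leaves the already-imported reference untouched. A self-contained sketch of the principle with stdlib mocking and two throwaway modules (all module and function names here are made up for illustration):

```python
import sys
import types
from unittest import mock

# Build two fake modules: `demo_utils` defines the function,
# `demo_app` imports it by name (as `from demo_utils import get_user_id` would).
utils = types.ModuleType("demo_utils")
utils.get_user_id = lambda: None
sys.modules["demo_utils"] = utils

app = types.ModuleType("demo_app")
app.get_user_id = utils.get_user_id
app.handler = lambda: app.get_user_id()
sys.modules["demo_app"] = app

# Patching where the function is *defined* does not affect app's copy...
with mock.patch("demo_utils.get_user_id", return_value="user1"):
    wrong = app.handler()

# ...patching where it is *used* does.
with mock.patch("demo_app.get_user_id", return_value="user1"):
    right = app.handler()

print(wrong, right)  # None user1
```

Applied to the issue, that would mean patching the name as referenced from the module under test (e.g. the app module that imported it) rather than `embedbase.utils.get_user_id`.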
openapi-generators/openapi-python-client | fastapi | 671 | Optional date-time attribute with value None gets isoparse'd | **Describe the bug**
I have a similar problem to https://github.com/openapi-generators/openapi-python-client/issues/456, but in my case `date-time` attribute isn't marked as required. Yet, the model's `from_dict` method has following code:
```python
_retry_at = d.pop("retry_at", UNSET)
retry_at: Union[Unset, datetime.datetime]
if isinstance(_retry_at, Unset):
retry_at = UNSET
else:
retry_at = isoparse(_retry_at)
```
which tries to parse `None` value if there is no `retry_at` in the server response.
**To Reproduce**
Client is generated like this
```
poetry run openapi-python-client generate --path ../swagger/$(API_VERSION)/swagger.json
```
**Expected behavior**
Client doesn't try to parse absent `date-time` attributes in the response.
**OpenAPI Spec File**
```json
{
"openapi": "3.0.0",
"servers": [
{
"url": "/api/v1"
}
],
"components": {
"schemas": {
"Operation": {
"type": "object",
"required": [
"id"
],
"properties": {
"id": {
"type": "string"
},
"retry_at": {
"type": "string",
"format": "date-time"
}
}
}
}
}
}
```
**Desktop (please complete the following information):**
- OS: Linux, PopOS 22
- Python Version: 3.9.0
- openapi-python-client version: 0.11.6
| closed | 2022-09-16T21:24:31Z | 2022-09-19T17:38:38Z | https://github.com/openapi-generators/openapi-python-client/issues/671 | [
"🐞bug"
] | ololobus | 3 |
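Regarding the issue above: the generated guard only checks for `Unset`, so a literal `null` in the response still reaches the parser. A defensive version of that branch can be sketched as below — note this substitutes the stdlib `datetime.fromisoformat` for `dateutil.isoparse` and a standalone `UNSET` sentinel so the snippet runs on its own:

```python
import datetime

# Standalone stand-ins for the generated client's sentinel type.
class Unset:
    pass

UNSET = Unset()

def parse_retry_at(d):
    _retry_at = d.pop("retry_at", UNSET)
    if isinstance(_retry_at, Unset):
        return UNSET
    if _retry_at is None:  # the missing guard: pass null through untouched
        return None
    return datetime.datetime.fromisoformat(_retry_at)

print(parse_retry_at({}))                                   # sentinel for "absent"
print(parse_retry_at({"retry_at": None}))                   # None
print(parse_retry_at({"retry_at": "2022-09-16T21:24:31"}))  # parsed datetime
```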
pytest-dev/pytest-qt | pytest | 261 | Expand troubleshooting documentation | See this discussion on Twitter: https://twitter.com/VeronicaInPink/status/1129848173084717056
Ideas what to mention:
- What to do on segfaults/aborts (running pytest with `-s`, `pytest-faulthandler`)
- Missing `libxkbcommon-x11-0` dependency (e.g. on Travis) | open | 2019-05-21T11:30:42Z | 2022-05-31T15:19:28Z | https://github.com/pytest-dev/pytest-qt/issues/261 | [
"docs :book:"
] | The-Compiler | 8 |
AntonOsika/gpt-engineer | python | 211 | File name is not correct | The generated py files is not named correct.
It got the extra symbol `]` in the file extension name.
<img width="483" alt="image" src="https://github.com/AntonOsika/gpt-engineer/assets/629338/8e57a119-36bc-4c6a-baa5-7d6fac82bc72">
| closed | 2023-06-19T18:15:38Z | 2023-06-20T10:09:29Z | https://github.com/AntonOsika/gpt-engineer/issues/211 | [] | jjhesk | 1 |
zappa/Zappa | flask | 717 | [Migrated] AuthorizationScopes not supported | Originally from: https://github.com/Miserlou/Zappa/issues/1816 by [urluba](https://github.com/urluba)
## Context
When using Cognito, API Gateway provides the authorizationScopes property on the API Gateway Method to match against scopes in the access token.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-authorizationscopes
> A list of authorization scopes configured on the method. The scopes are used with a COGNITO_USER_POOLS authorizer to authorize the method invocation. The authorization works by matching the method scopes against the scopes parsed from the access token in the incoming request. The method invocation is authorized if any method scopes matches a claimed scope in the access token. Otherwise, the invocation is not authorized. When the method scope is configured, the client must provide an access token instead of an identity token for authorization purposes.
## Expected Behavior
Using a COGNITO_USERS_POOLS, we should be able to provide a list of scopes. By example:
```
{
"authorizer": {
"type": "COGNITO_USER_POOLS",
"provider_arns": [
"arn:aws:cognito-idp:{region}:{account_id}:userpool/{user_pool_id}"
],
"authorization_scopes": [
"scope_1",
"scope_2",
]
}
}
```
## Actual Behavior
All invocations are not authorized as no matching scopes are found
## Possible Fix
Enrich the CFN template with the AuthorizationScopes attribute
## Steps to Reproduce
N/A
## Your Environment
N/A | closed | 2021-02-20T12:41:02Z | 2024-04-13T18:14:30Z | https://github.com/zappa/Zappa/issues/717 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
gee-community/geemap | jupyter | 2,084 | after entering m = gee map.Map() | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
Please run the following code on your computer and share the output with us so that we can better debug your issue:
```python
import geemap
geemap.Report()
```
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
| closed | 2024-07-18T14:15:04Z | 2024-07-18T14:17:59Z | https://github.com/gee-community/geemap/issues/2084 | [
"bug"
] | douglagug | 3 |
hbldh/bleak | asyncio | 1,358 | RPi - BlueZ: client.services returning empty | * bleak version: 0.20.2
* Python version: 3.9.2
* Operating System: Linux (Raspbian running on Raspberry Pi 4)
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.55
### Description
I am trying to run a GATT server on a raspberry pi 4 and have been using the following example to get me started: https://github.com/PunchThrough/espresso-ble. The only modification I made to this code was to comment out the content of the AurthorizeService method in the Agent class to automatically return without requesting a "yes" from the user when polled by another connected device.
This server seems to run alright, I can identify it using my Android device running nRF connect, I can connect with it, and read / write to the service / characteristics.
On a second raspberry pi I am trying to run a simple BLE central to scan, detect, connect, and read/write to the previously mentioned peripheral. I am utilizing bleak for this portion of the project.
On this Pi I am running the service_explorer.py example from this repository to connect with the other device running the Vivaldi service and request a list of services / characteristics.
While it does connect to the peripheral pi, and the peripheral pi does print out that it automatically granted service request to the central pi, I see nothing printed out to the terminal.

It seems as if though client.services is returning empty.
I tried the same but using the outdated BluePy module running this script:
```
from bluepy.btle import Peripheral, BTLEException
def main():
device_mac_address = "D8:3A:DD:15:6A:87"
try:
print(f"Attemptingt to connect to device: {device_mac_address}")
p = Peripheral(device_mac_address)
for service in p.getServices():
print(f"Service: {service}")
for char in service.getCharacteristics():
print(f"\tCharacteristic: {char}")
except BTLEException as e:
print(f"Unable to connect to device {e}")
finally:
print(f"Disconnecting form device: {device_mac_address}")
p.disconnect()
if __name__ == "__main__":
main()
```
And I got the following when it was able to successfully connect.

The service / characteristics at the bottom are what I was looking for.
### Logs
Attached below is a portion of the BTMON log running on the central device (The pi running the bleak example).
[central_btmon_log.txt](https://github.com/hbldh/bleak/files/11944377/central_btmon_log.txt)
The central address is : D8:3A:DD:0C:AF:9F
The peripheral address is : D8:3A:DD:15:6A:87
The service uuid of interest is : "12634d89-d598-4874-8e86-7d042ee07ba7".
If you search for uuid in the log of the central, you can find the service when it ran the bleak example. | open | 2023-07-04T04:35:10Z | 2023-07-19T01:09:19Z | https://github.com/hbldh/bleak/issues/1358 | [
"Backend: BlueZ",
"more info required"
] | Rxb2300 | 2 |
autokey/autokey | automation | 470 | Unexpected behavior from highlevel.visgrep() | ## Classification:
Bug
## Reproducibility:
Always for weird results, sometimes it doesn't run the command at all.
## Version
AutoKey version: tested in both develop and 0.95.10
xwd: 1.0.7
visgrep: v1.09
Used GUI (Gtk, Qt, or both):
GTK, but I would assume the behavior is the same in Qt
Installed via: apt-get
Linux Distribution: ubuntu 20.04
## Summary
In `highlevel.visgrep` the screenshot file passed to `visgrep` is generated via `xwd` and then converted to a temporary png file. This leads to some unexpected results, along with arbitrarily throwing a `libpng error: Not a PNG file` when running the AutoKey script (no idea what is causing this).
Running the command that `highlevel` uses to generate screenshots produces unexpected results: windows from other workspaces appear where they would be in those workspaces, even when the screenshot is taken from a different workspace (I have a LibreOffice window in workspace 2, but running the command below in workspace 1 still displays it).
In addition this method of generating screenshots leaves weird black borders around some of the windows (I assume something to do with backshadow rendering?)
`xwd -root -silent -display :0 | convert xwd:- png:./test.png`
Produces the following screenshot;

Using `gnome-screenshot` I get the actual version of what I see on my screen:

Note that in the first screenshot, generated the way autokey does for `visgrep` there are the black outlines on windows along with windows from other workspaces visible.
## Steps to Reproduce (if applicable)
You can generate "screenshots" the way autokey does using this command:
`xwd -root -silent -display :0 | convert xwd:- png:./test.png`
## Notes
I assume this isn't the desired behavior? It seems to be a result of converting from `xwd` to `png`; the `xwd` format appears to gather information about the location of all windows, even ones in other workspaces.
I'm not sure if there is a way to change the behavior or `xwd`, from reading the man page I don't see any obvious options to do so. Maybe it would be easier to alter the code to use a different screenshot program.
`import -window root ./filename` generates screenshots the way I would expect them to be generated. (from the imagemagick library)
| open | 2020-11-13T21:50:00Z | 2023-12-23T23:31:34Z | https://github.com/autokey/autokey/issues/470 | [
"bug",
"help-wanted",
"scripting"
] | sebastiansam55 | 7 |
dmlc/gluon-cv | computer-vision | 1,777 | transform_test() - bug in original image output | When using the function `load_test()` or `transform_test()` there is a return value `origs`: "a numpy ndarray as original un-normalized color image for display."
(see here: https://cv.gluon.ai/_modules/gluoncv/data/transforms/presets/yolo.html)
That image does not look like the original though, because it is resized with the "area-based" interpolation method.
(see here: https://cv.gluon.ai/_modules/gluoncv/data/transforms/image.html#resize_short_within)
With that method, values are outside of the 0-255 range and then, when `astype('uint8')` is applied, the image values over/underflow:
```
img = timage.resize_short_within(img, short, max_size, mult_base=stride)
orig_img = img.asnumpy().astype('uint8')
```
When applying functions afterwards, like `gluoncv.utils.viz.plot_bbox()`, the image looks incorrect at some spots. | closed | 2024-01-03T15:34:32Z | 2024-04-10T06:33:52Z | https://github.com/dmlc/gluon-cv/issues/1777 | [
"Stale"
] | ninafiona | 1 |
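The fix implied by the report above is to clamp values into the 0–255 range before the `uint8` cast, since out-of-range floats otherwise wrap around modulo 256. A dependency-free sketch of the difference, using pure-Python stand-ins for the NumPy conversions:

```python
def wraparound_uint8(v):
    """Rough stand-in for what astype('uint8') does to an out-of-range value."""
    return int(v) % 256

def clipped_uint8(v):
    """Clamp into [0, 255] first, then cast - the safe conversion."""
    return int(min(255, max(0, v)))

# Area-interpolated pixel values can overshoot the valid range:
samples = [-3.2, 128.0, 260.7]
print([wraparound_uint8(v) for v in samples])  # [253, 128, 4]
print([clipped_uint8(v) for v in samples])     # [0, 128, 255]
```

With NumPy this would amount to something like `np.clip(img, 0, 255).astype('uint8')` applied to the resized image before display.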
jmcnamara/XlsxWriter | pandas | 209 | Date Format Bug: Slashes Replaced with Dashes | When I try to format the date using slashes, it replaces them with dashes.
```
>>> import xlsxwriter
>>> from dateutil import parser
>>> wb = xlsxwriter.Workbook("C:\\test.xlsx")
>>> sh = wb.add_worksheet()
>>> d = parser.parse("2014-03-02 4:30 PM")
>>> style = wb.add_format({"num_format": "YYYY/MM/DD"})
>>> sh.write(0,0,d,style)
0
>>> wb.close()
```
The date is displayed in the Excel file as:
```
2014-03-02
```
All the other formats I've tried work, it is only the slashes that it seems to be having a problem with.
| closed | 2015-01-08T00:32:19Z | 2021-08-23T17:36:08Z | https://github.com/jmcnamara/XlsxWriter/issues/209 | [
"question",
"ready to close"
] | NinjaMeTimbers | 7 |
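A likely explanation for the issue above (not confirmed in this thread) is that `/` in an Excel date format is the locale-dependent date separator, so Excel substitutes the system separator — here a dash. The usual workaround is to escape the slashes so they are treated as literal characters; a sketch of the two format strings:

```python
# Locale-dependent: Excel may swap '/' for the regional date separator.
locale_dependent = "YYYY/MM/DD"

# Escaped: backslashes force literal '/' characters in the rendered date.
literal_slashes = "YYYY\\/MM\\/DD"  # the string Excel sees is YYYY\/MM\/DD

print(literal_slashes)  # YYYY\/MM\/DD
```

With xlsxwriter this would presumably be passed as `wb.add_format({"num_format": "YYYY\\/MM\\/DD"})` (untested here).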
nl8590687/ASRT_SpeechRecognition | tensorflow | 132 | May I ask which datasets the model provided with the code was trained on? | closed | 2019-08-01T06:22:20Z | 2019-08-02T08:17:12Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/132 | [] | genghuan2005 | 1 |
alirezamika/autoscraper | automation | 77 | How to scrape "Read more" data given in the reviews? | I want to scrape the reviews from a website, but when a review has a "Read more" link, it scrapes the literal text "Read more" instead of the full content. | closed | 2022-10-16T11:51:39Z | 2022-12-11T10:30:49Z | https://github.com/alirezamika/autoscraper/issues/77 | [] | Priyasganesan | 1
pyg-team/pytorch_geometric | deep-learning | 9,739 | Error when downloading `ShapeNet` dataset. | ### 🐛 Describe the bug
I am trying to download the `ShapeNet` dataset for a project in which I need to use PyTorch Geometric. However, I am dealing with some problems when downloading the dataset. It seems that there might be an error in the url from which the dataset is downloaded or something like that, as it is refusing the connection.
To reproduce the error you can execute the following.
```python
import torch
from torch_geometric.data import Data
import urllib.error
from torch_geometric.datasets import ShapeNet
try:
dataset = ShapeNet(
root='tmp/ShapeNet',
categories = ["Airplane"]
)
except urllib.error.URLError as e:
print(f"Error while trying to download the dataset: {e}")
```
I am getting the following error:
```bash
Downloading https://shapenet.cs.stanford.edu/media/shapenetcore_partanno_segmentation_benchmark_v0_normal.zip
Error while trying to download the dataset: <urlopen error [Errno 61] Connection refused>
```
Has anyone already experienced this error? How can I solve it? I need to process the ShapeNet dataset using PyTorch Geometric.
### Versions
. | open | 2024-10-28T08:23:17Z | 2024-10-29T09:25:48Z | https://github.com/pyg-team/pytorch_geometric/issues/9739 | [
"bug"
] | nachoogriis | 1 |