| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ymcui/Chinese-BERT-wwm | tensorflow | 138 | Training corpus | May I ask whether the training corpus can be made public? | closed | 2020-08-19T02:39:06Z | 2020-08-19T03:16:09Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/138 | [] | liuwei1206 | 1 |
ultralytics/ultralytics | pytorch | 18,851 | Why are the modules named AConv and ADown? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I’m currently working on improving the `docstrings` in [block.py](https://github.com/ultralytics/ultralytics/blob/5d9535b008b92ba86e90b1a67281dbd93f53e43e/ultralytics/nn/modules/block.py) for better documentation, and I came across the `AConv` and `ADown` modules. While I know these names were carried over from the [original YOLOv9 repository](https://github.com/WongKinYiu/yolov9/blob/main/models/yolo.py), I couldn’t find a clear explanation for why they’re named this way.
https://github.com/ultralytics/ultralytics/blob/5d9535b008b92ba86e90b1a67281dbd93f53e43e/ultralytics/nn/modules/block.py#L621-L652
I’m curious about the reasoning behind the names:
1. AConv:
- Is the "A" for Average pooling (since it uses `avg_pool2d`)?
2. ADown:
- It starts with `avg_pool2d` but also uses `max_pool2d`. Why not "MDown" or something else?
- Or does it stand for something else, like `Asymmetric`?
If anyone has insights into why these names were chosen, or any theories, please share.
Looking forward to hearing your thoughts! 🧠💡
### Additional
_No response_ | open | 2025-01-23T15:38:27Z | 2025-01-23T16:04:33Z | https://github.com/ultralytics/ultralytics/issues/18851 | [
"documentation",
"question"
] | visionNoob | 2 |
huggingface/datasets | machine-learning | 7,386 | Add bookfolder Dataset Builder for Digital Book Formats | ### Feature request
This feature proposes adding a new dataset builder called bookfolder to the datasets library. This builder would allow users to easily load datasets consisting of various digital book formats, including: AZW, AZW3, CB7, CBR, CBT, CBZ, EPUB, MOBI, and PDF.
### Motivation
Currently, loading datasets of these digital book files requires manual effort. This would also lower the barrier to entry for working with these formats, enabling more diverse and interesting datasets to be used within the Hugging Face ecosystem.
### Your contribution
This feature is rather simple as it will be based on the folder-based builder, similar to imagefolder. I'm willing to contribute to this feature by submitting a PR | closed | 2025-02-08T14:27:55Z | 2025-02-08T14:30:10Z | https://github.com/huggingface/datasets/issues/7386 | [
"enhancement"
] | shikanime | 1 |
vitalik/django-ninja | django | 521 | Router.path() doesn't exist. | When I started using Django Ninja for the first time, I tried to configure operations with class-based views. But when I followed the example below, I found that `router.path()` is a non-existent function.
Has this function been replaced by another one, or is it missing?
```python
from ninja import Router
router = Router()
@router.path('/project/{project_id}/tasks')
class Tasks:
def __init__(self, request, project_id: int):
user_projects = request.user.project_set
self.project = get_object_or_404(user_projects, id=project_id)
self.tasks = self.project.task_set.all()
@router.get('/', response=List[TaskOut])
def task_list(self, request):
return self.tasks
@router.get('/{task_id}/', response=TaskOut)
def details(self, request, task_id: int):
return get_object_or_404(self.tasks, id=task_id)
@router.post('/{task_id}/complete', response=TaskOut)
def complete(self, request, task_id: int):
task = get_object_or_404(self.tasks, id=task_id)
task.completed = True
task.save()
return task
``` | closed | 2022-08-04T01:15:13Z | 2022-08-04T07:58:58Z | https://github.com/vitalik/django-ninja/issues/521 | [] | shiroXgodG | 2 |
huggingface/datasets | machine-learning | 6,597 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_description="Convert dataset to Parquet.",
create_pr=True,
token=token,
)
```
creates the additional dataset `albertvillanova/caner`. | closed | 2024-01-16T11:27:07Z | 2024-02-05T12:29:37Z | https://github.com/huggingface/datasets/issues/6597 | [
"bug"
] | albertvillanova | 6 |
django-cms/django-cms | django | 7,225 | The fields `created_by` of PageUser, PageGroup and `user`/`group` of PagePermission can cause unwanted user/group deletion | ## Description
All of these fields are defined with `on_delete=models.CASCADE`. That means if I delete User A, who previously created a `PageUser` B, user B will be deleted as well. I was never really sure what these are for anyway... see also #5517 .
## Steps to reproduce
Create User A. Log in with A, create PageUser B. Delete User A. User B is gone as well.
## Expected behaviour
User B should not be deleted. Instead, one could show "(deleted User)" and define the fields as `on_delete=models.SET_NULL`, or remove the fields completely.
## Actual behaviour
User B is gone/deleted.
Also, I discovered that newly created normal Users trigger the creation of a `PageUser` if `CMS_PERMISSION` is true. This can cause confusion, and makes it almost impossible to delete users on such a system.
## Additional information (CMS/Python/Django versions)
for discussion since 2016 #5517
## Do you want to help fix this issue?
* [ ] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [x] No, I only want to report the issue.
| closed | 2022-02-09T23:59:23Z | 2025-03-01T18:47:13Z | https://github.com/django-cms/django-cms/issues/7225 | [
"blocker",
"status: accepted"
] | benzkji | 17 |
Gozargah/Marzban | api | 1,158 | Fixed | Fixed | closed | 2024-07-21T03:58:20Z | 2024-07-22T09:00:57Z | https://github.com/Gozargah/Marzban/issues/1158 | [
"Feature"
] | mahdiismailpuri | 0 |
StackStorm/st2 | automation | 5,933 | run st2 with docker-compose, failed to connect to mongo | rabbitmq/mongo/redis are running OK, but st2 apps are always restarting.


| closed | 2023-03-13T10:03:25Z | 2023-03-18T13:13:34Z | https://github.com/StackStorm/st2/issues/5933 | [] | gudan803 | 2 |
allenai/allennlp | pytorch | 5,258 | Add a conda install for Mac | There is a [conda-forge](https://anaconda.org/conda-forge/allennlp) install for Linux. Could you please also add one for Mac?
https://conda-forge.org/#add_recipe
| closed | 2021-06-13T14:11:13Z | 2022-01-17T12:19:30Z | https://github.com/allenai/allennlp/issues/5258 | [
"Feature request"
] | codeananda | 19 |
neuml/txtai | nlp | 215 | Add Console Task | Add a new task that prints inputs and outputs to stdout. Mainly used for debugging. | closed | 2022-02-02T02:15:30Z | 2022-02-02T02:25:21Z | https://github.com/neuml/txtai/issues/215 | [] | davidmezzetti | 0 |
sanic-org/sanic | asyncio | 2,596 | Missing priority functionality in app-wide middlware | ## Description
In v22.9 the priority feature was added, but unfortunately it appears it was overlooked and there is no `priority` parameter on `register_middleware()` for `Sanic`.
I am looking to use this feature because I have a Blueprint-specific middleware which appears to run before the application-wide middleware. In my case the application-wide middleware does authorization and updates `request.ctx` which is then used in the Blueprint-specific middleware.
### Is there an existing issue for this?
- [X] I have searched the existing issues
## Environment
Sanic (22.9.1; Routing 22.8.0) is ran on Windows using ASGI.
| closed | 2022-11-05T14:29:00Z | 2022-12-19T17:14:48Z | https://github.com/sanic-org/sanic/issues/2596 | [
"bug"
] | Bluenix2 | 0 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,159 | handling of outlook safelink protection | ### Proposal
Is it possible to make the password reset link work with Outlook/Defender Safe Links protection, by allowing the link to be "clicked" two times?
### Motivation and context
When we reset a user's password for their GlobaLeaks account, they can't use the link, because it has already been "clicked" by the time they follow it.
example https://eur05.safelinks.protection.outlook.com/?url=...............................................................
| closed | 2024-08-20T12:52:27Z | 2024-10-07T14:36:10Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4159 | [] | FFD8FFE1 | 2 |
home-assistant/core | python | 141,180 | Changing water temperature not working | ### The problem
Hello,
When trying to change the hot water temperature of my gas boiler from HA, nothing happens, although when changing it via the Tado app, the change is communicated back to HA.
Would be great if I can change this value from HA.
Looking in the debug logs, nothing is crashing. The value is trying to be updated, but in the end nothing seems to be happening.
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
/
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Tado
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/tado
### Diagnostics information
[home-assistant_tado_2025-03-23T09-04-47.207Z.log](https://github.com/user-attachments/files/19408025/home-assistant_tado_2025-03-23T09-04-47.207Z.log)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-23T09:14:45Z | 2025-03-23T09:14:50Z | https://github.com/home-assistant/core/issues/141180 | [
"integration: tado"
] | Formatrick | 1 |
MaartenGr/BERTopic | nlp | 1,344 | KeyBERTInspired issue running | Hi Maarten,
I was working on :
```
from umap import UMAP
from bertopic import BERTopic

# Using a custom UMAP model
umap_model = UMAP(n_neighbors=15, n_components=2, min_dist=0.0, metric='cosine', random_state=42)

# Train our model
topic_model = BERTopic(umap_model=umap_model)
```
I am trying the following: topics generated with c-TF-IDF serve as a good first ranking of words with respect to their topic. These initial rankings of words can be considered candidate keywords for a topic, as we might change their rankings based on a representation model.
```
# Save original representations
from copy import deepcopy
original_topics = deepcopy(topic_model.topic_representations_)

def topic_differences(model, original_topics, max_length=75, nr_topics=10):
    """ For the first 10 topics, show the differences in
    topic representations between two models """
    for topic in range(nr_topics):
        # Extract top 5 words per topic per model
        og_words = " | ".join(list(zip(*original_topics[topic]))[0][:5])
        new_words = " | ".join(list(zip(*model.get_topic(topic)))[0][:5])

        # Print a 'before' and 'after'
        whitespaces = " " * (max_length - len(og_words))
        print(f"Topic: {topic}    {og_words}{whitespaces}--> {new_words}")
```
Further I tried:
```
# KeyBERTInspired
from bertopic.representation import KeyBERTInspired
from bertopic import BERTopic

representation_model = KeyBERTInspired()

# Update our topic representations
new_topic_model = BERTopic(representation_model=representation_model).fit(sentences)
```
(I got the idea from: https://zenodo.org/record/7987071)
```
# Show topic differences
topic_differences(topic_model, new_topic_model)
```
but getting error:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 10>:10 │
│ in topic_differences:7 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: 'BERTopic' object is not subscriptable
```
Here are my Questions as follows:
--- Please advise on how to fix it. Additionally, what are best practices for paying attention to topics?
--- Do you recommend cleaning and removing stopwords? I hope you can add a page with best practices.
--- Additionally, can you share the dataset maartengr/arxiv_nlp on Hugging Face?
Thanks Again!!
| closed | 2023-06-16T06:49:57Z | 2023-09-27T09:12:35Z | https://github.com/MaartenGr/BERTopic/issues/1344 | [] | andysingal | 7 |
dfki-ric/pytransform3d | matplotlib | 16 | Publish pytransform at JOSS | https://joss.theoj.org/
New publication: https://joss.theoj.org/papers/new | closed | 2018-10-29T21:44:06Z | 2019-01-31T17:28:13Z | https://github.com/dfki-ric/pytransform3d/issues/16 | [] | AlexanderFabisch | 1 |
DistrictDataLabs/yellowbrick | scikit-learn | 394 | Extend Missing Values Visualizers to plot missing values against the target (y) | Follow up item to #363
Extend the Bar and Dispersion visualizers to optionally create visualizations plotted against the target. For example, they would plot the number of missing values for each target variable.
| closed | 2018-04-26T01:58:47Z | 2018-07-24T14:40:59Z | https://github.com/DistrictDataLabs/yellowbrick/issues/394 | [
"type: feature"
] | ndanielsen | 2 |
stanford-oval/storm | nlp | 337 | [BUG]sample code spelling error | **Describe the bug**
in `readme.md` ,the sample code `costorm_runner.knowledge_base.reorganize()` should be `costorm_runner.knowledge_base.reogranize()`
| open | 2025-03-11T03:11:33Z | 2025-03-11T03:11:33Z | https://github.com/stanford-oval/storm/issues/337 | [] | JV-X | 0 |
encode/httpx | asyncio | 2,337 | ClosedResourceError intermittently when using httpx AsyncClient to send requests | We are observing `ClosedResourceError` when sending multiple requests using the httpx async client. The httpx version used is 0.19.0. Is this issue known and fixed in any later version?
Logs:
resp = await asyncio.gather(*[self.async_send_request(method,self.dest_host,self.dest_port,uri,i,payload=payload,headers=headers) for i in range(int(times) )])
File "/env/lib64/python3.9/site-packages/ocnftest_lib/scp_Pcf_client.py", line 175, in async_send_request
return await self.conn.request(method,"http://{}:{}{}".format(self.dest_host,self.dest_port,uri),data=payload,headers=headers,allow_redirects=False)
File "/env/lib64/python3.9/site-packages/httpx/_client.py", line 1494, in request
response = await self.send(
File "/env/lib64/python3.9/site-packages/httpx/_client.py", line 1586, in send
response = await self._send_handling_auth(
File "/env/lib64/python3.9/site-packages/httpx/_client.py", line 1616, in _send_handling_auth
response = await self._send_handling_redirects(
File "/env/lib64/python3.9/site-packages/httpx/_client.py", line 1655, in _send_handling_redirects
response = await self._send_single_request(request, timeout)
File "/env/lib64/python3.9/site-packages/httpx/_client.py", line 1699, in _send_single_request
) = await transport.handle_async_request(
File "/env/lib64/python3.9/site-packages/httpx/_transports/default.py", line 281, in handle_async_request
) = await self._pool.handle_async_request(
File "/env/lib64/python3.9/site-packages/httpcore/_async/connection_pool.py", line 234, in handle_async_request
response = await connection.handle_async_request(
File "/env/lib64/python3.9/site-packages/httpcore/_async/connection.py", line 148, in handle_async_request
return await self.connection.handle_async_request(
File "/env/lib64/python3.9/site-packages/httpcore/_async/http2.py", line 165, in handle_async_request
return await h2_stream.handle_async_request(
File "/env/lib64/python3.9/site-packages/httpcore/_async/http2.py", line 339, in handle_async_request
await self.send_headers(method, url, headers, has_body, timeout)
File "/env/lib64/python3.9/site-packages/httpcore/_async/http2.py", line 398, in send_headers
await self.connection.send_headers(self.stream_id, headers, end_stream, timeout)
File "/env/lib64/python3.9/site-packages/httpcore/_async/http2.py", line 274, in send_headers
await self.socket.write(data_to_send, timeout)
File "/env/lib64/python3.9/site-packages/httpcore/_backends/anyio.py", line 77, in write
return await self.stream.send(data)
File "/env/lib64/python3.9/site-packages/anyio/_backends/_asyncio.py", line 1116, in send
raise ClosedResourceError
anyio.ClosedResourceError
Sample code:
```python
    async def send_request(self, payload, times, method, uri, headers):
        resp = await asyncio.gather(*[self.async_send_request(method, self.dest_host, self.dest_port, uri, i, payload=payload, headers=headers) for i in range(int(times))])
        await self.conn.aclose()
        log.logger.info("[+] @{} {} {}".format(method, resp[int(times)-1].status_code, uri))
        return resp[int(times)-1]

    async def async_send_request(self, method, dest_host, dest_port, uri, times, payload=None, headers=None):
        if self.security == 'secure':
            return await self.conn.request(method, "https://{}:{}{}".format(self.dest_host, self.dest_port, uri), data=payload, headers=headers, allow_redirects=False)
        else:
            return await self.conn.request(method, "http://{}:{}{}".format(self.dest_host, self.dest_port, uri), data=payload, headers=headers, allow_redirects=False)

    def send_message(self):
        self.conn = httpx.AsyncClient(http2=True, http1=False, proxies=self.proxies, timeout=10)
        response = asyncio.run(self.send_request(payload, times, method, uri, headers))
```
| closed | 2022-08-12T04:58:47Z | 2023-10-21T10:17:23Z | https://github.com/encode/httpx/issues/2337 | [] | jainaj81 | 5 |
huggingface/transformers | deep-learning | 36,348 | None | closed | 2025-02-22T18:12:03Z | 2025-02-22T18:25:42Z | https://github.com/huggingface/transformers/issues/36348 | [] | Hashmapw | 0 | |
deepfakes/faceswap | deep-learning | 441 | Can't handle occluded faces, such as when using a microphone | The GAN (using perceptual loss) and Original models can't handle occluded cases, such as a microphone covering the mouth. May I know whether this is a config problem or a bug, so that it can be solved? Thank you.
| closed | 2018-06-22T01:27:09Z | 2018-06-22T08:41:00Z | https://github.com/deepfakes/faceswap/issues/441 | [] | g0147 | 5 |
iperov/DeepFaceLab | machine-learning | 666 | AMD GPU | hey guy,will it work on amd gpu? | closed | 2020-03-21T12:25:47Z | 2021-04-23T07:01:06Z | https://github.com/iperov/DeepFaceLab/issues/666 | [] | saltfishh | 4 |
yunjey/pytorch-tutorial | pytorch | 235 | ValueError: num_samples should be a positive integer value, but got num_samples=0 | Traceback (most recent call last):
File "D:/PycharmWorkspace/pytorch-tutorial/tutorials/01-basics/pytorch_basics/main.py", line 154, in <module>
train_loader = torch.utils.data.DataLoader(dataset=custom_dataset,
File "D:\anaconda3\envs\torch\lib\site-packages\torch\utils\data\dataloader.py", line 262, in __init__
sampler = RandomSampler(dataset, generator=generator) # type: ignore
File "D:\anaconda3\envs\torch\lib\site-packages\torch\utils\data\sampler.py", line 103, in __init__
raise ValueError("num_samples should be a positive integer "
ValueError: num_samples should be a positive integer value, but got num_samples=0 | open | 2021-08-26T18:07:06Z | 2022-04-12T16:37:06Z | https://github.com/yunjey/pytorch-tutorial/issues/235 | [] | zherCyber | 1 |
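For reference, this error fires because PyTorch's `RandomSampler` validates `len(dataset)` before sampling, so a dataset whose `__len__` returns 0 (typically because the data path is wrong and no samples were loaded) is rejected up front. A dependency-free sketch of that guard, with `CustomDataset` as an illustrative stand-in rather than the tutorial's class:

```python
class CustomDataset:
    """Illustrative dataset: its __len__ drives the sampler's validation."""
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

def make_random_sampler(dataset):
    # Same validation RandomSampler performs before any sampling happens
    n = len(dataset)
    if n <= 0:
        raise ValueError(
            f"num_samples should be a positive integer value, but got num_samples={n}"
        )
    return list(range(n))  # stand-in for the shuffled index stream
```

So the fix is usually not in the DataLoader call at all: check that the custom dataset actually found its files, i.e. that `len(custom_dataset)` is greater than zero before constructing the loader.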
Neoteroi/BlackSheep | asyncio | 6 | Improve built-in synch logging | in the built-in integration with sync logging,
consider (TBD):
1. use a dedicated file for each process
1. create a file on disk only if sync logging is used
1. when logging exceptions (built-in error handling), include the `id` of the process that was handling the request | closed | 2019-02-20T10:28:12Z | 2019-02-20T16:17:46Z | https://github.com/Neoteroi/BlackSheep/issues/6 | [
"enhancement"
] | RobertoPrevato | 0 |
iperov/DeepFaceLab | deep-learning | 868 | Hello, I have this bug when using the merger | When I try to change mask_mode to one of the XSeg options, most of the time to xseg-prd*xseg-dst, the console shows me this:
DeepFaceLab_Linux/DeepFaceLab/merger/MergeMasked.py:238: RuntimeWarning: invalid value encountered in multiply
out_img = img_bgr*(1-img_face_mask_a) + (out_img*img_face_mask_a)
This always shows, whether I'm working with 1 process or 4.
Can somebody help me? Thanks and best regards.
Python 3.8.3 (default, Jul 2 2020, 16:21:59)
[GCC 7.3.0] :: Anaconda, Inc. on linux | open | 2020-08-20T10:55:00Z | 2023-06-08T21:21:19Z | https://github.com/iperov/DeepFaceLab/issues/868 | [] | tembel123456 | 5 |
wiseodd/generative-models | tensorflow | 68 | Typo mistakes | Hey, first of all, thank you for your great work; it's very clear in general and helpful.
My first question: your loop enumerates V_s, which is something completely random. Shouldn't it enumerate X_mb instead?
https://github.com/wiseodd/generative-models/blob/b930d5fa9e2f69adfd4ea8ec759f38f6ce6da4c2/RBM/rbm_binary_pcd.py#L54
My second: I think in this line you have to change v_s to v_prime, like in CD; if not, why?
https://github.com/wiseodd/generative-models/blob/b930d5fa9e2f69adfd4ea8ec759f38f6ce6da4c2/RBM/rbm_binary_pcd.py#L57 | open | 2018-12-22T18:13:58Z | 2018-12-22T18:27:23Z | https://github.com/wiseodd/generative-models/issues/68 | [] | karimkalimu | 0 |
QuivrHQ/quivr | api | 3,426 | Add Relevant Parameters to file endpoint | To Parse correctly the file we need the following parameters :
* Method to use {Unstructured, LlamaParse, Megaparse-Vision}
* If Unstructured, what strategy ? {fast, auto, hi_res}
* If Unstructured, do we use LLM Format Checker ? : bool | closed | 2024-10-25T08:18:14Z | 2024-11-07T09:29:42Z | https://github.com/QuivrHQ/quivr/issues/3426 | [
"enhancement"
] | chloedia | 1 |
matterport/Mask_RCNN | tensorflow | 2,785 | ROOT_DIR does not exist. Did you forget to read the instructions above? | Hi all,
I'm new here, trying to implement this code, and faced the following error. Any help, please?

| open | 2022-03-06T05:57:36Z | 2022-03-06T05:57:36Z | https://github.com/matterport/Mask_RCNN/issues/2785 | [] | Isamalatby | 0 |
jupyter/nbgrader | jupyter | 1,159 | Unpin version of ipython in 0.5.6.dev and release 0.5.6 | The current release has `ipython` pinned to `<=6.2.1` and that is preventing us from installing `nbgrader` alongside modern packages. See https://github.com/jupyter/nbgrader/pull/1050
Looks like that limitation was already fixed. Can we get a new release? | closed | 2019-06-27T20:07:55Z | 2019-08-21T23:41:43Z | https://github.com/jupyter/nbgrader/issues/1159 | [
"maintenance"
] | ocefpaf | 12 |
Avaiga/taipy | automation | 1,605 | Scenario Selector returns an empty string when clicking on cycles | ### Description
The scenario selector returns an empty string when clicking on a cycle. This sometimes broke my code without my noticing it. It should return nothing or None, I imagine.
Run this code and click on the cycles appearing in the scenario selector. This returns an empty string.
```python
# Import necessary libraries
import pandas as pd
import taipy as tp
from taipy import Config, Scope, Frequency
import time
import datetime as dt

Config.configure_job_executions(mode="standalone", max_nb_of_workers=2)

# Function to run a Dataiku scenario
def run_something(input_1, input_2):
    datetime = dt.datetime.now()
    date = dt.date(2018, 1, 1)
    int_var = 10
    string_var = "String"
    for i in range(10):
        time.sleep(1)
        print("In loop in task", i)
    return datetime, date, int_var, string_var

data = {"toto": [i for i in range(10_000)],
        "titi": [2*i for i in range(10_000)],
        "tata": [4*i for i in range(10_000)]}

input_1_cfg = Config.configure_data_node(
    id="input_1_data_node",
    default_data=data,
)

input_2_cfg = Config.configure_data_node(
    id="input_2_data_node",
    default_data=data,
)

datetime_cfg = Config.configure_data_node(id="datetime_data_node")
date_cfg = Config.configure_data_node(id="date_data_node")
int_cfg = Config.configure_data_node(id="int_data_node")
string_cfg = Config.configure_data_node(id="string_data_node")

# Scenario and task configuration in Taipy
scenario_task_cfg = Config.configure_task(
    id="scenario_task",
    function=run_something,
    input=[input_1_cfg, input_2_cfg],
    output=[datetime_cfg, date_cfg, int_cfg, string_cfg]
)

scenario_cfg = Config.configure_scenario(
    id="scenario",
    task_configs=[scenario_task_cfg],
    frequency=Frequency.DAILY)

# GUI Markdown content
scenario_md = """
<|{scenario}|scenario_selector|>
"""

def on_change(state, var_name, var_value):
    if var_name == "scenario":
        print(type(state.scenario), state.scenario)
        print("Is type scenario str", isinstance(state.scenario, str))

# Main execution block with GUI setup
if __name__ == "__main__":
    tp.Core().run()
    scenario = tp.create_scenario(scenario_cfg)
    scenario = tp.create_scenario(scenario_cfg)

    print("Before submit")
    scenario.submit(wait=True, timeout=30)
    print("After submit")

    tp.Gui(scenario_md).run(title="Bug replication", port=3248)
```
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-07-30T07:36:07Z | 2024-08-09T07:53:28Z | https://github.com/Avaiga/taipy/issues/1605 | [
"📈 Improvement",
"🖰 GUI",
"🟨 Priority: Medium"
] | FlorianJacta | 0 |
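Until the selector stops emitting an empty string for cycle clicks, a defensive `on_change` can filter those events out. Below is a minimal sketch of the guard logic as a plain function; the names are illustrative, and in a real app this check would sit inside the Taipy `on_change` callback rather than stand alone.

```python
def selected_scenario(var_name, var_value):
    """Return the selected scenario, or None when a cycle was clicked.

    Cycle clicks currently arrive as an empty string rather than a
    Scenario object, so any string value is treated as 'no selection'.
    """
    if var_name != "scenario":
        return None
    if isinstance(var_value, str):  # "" -> a cycle, not a scenario, was clicked
        return None
    return var_value
```

With this guard, downstream code only ever sees real Scenario objects (or None), so the empty string can no longer break it silently.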
Gozargah/Marzban | api | 979 | TeleBot crash | Hello, a while after enabling the Telegram logs I started getting this error, which causes the configs to time out, and xray stops working correctly.
ERROR - TeleBot: "Infinity polling exception: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))"
This error appears in the Marzban logs as a Python crash. | closed | 2024-05-10T20:02:09Z | 2024-07-16T09:03:54Z | https://github.com/Gozargah/Marzban/issues/979 | [] | hoootan | 1 |
deepset-ai/haystack | pytorch | 8,297 | Remove Multiplexer from `Others` overview documentation page | We removed the Multiplexer from Haystack and also its documentation. We should also remove it from the `Others` overview documentation page: https://docs.haystack.deepset.ai/docs/other
https://github.com/deepset-ai/haystack/pull/8020 | closed | 2024-08-27T15:32:34Z | 2024-09-02T13:01:00Z | https://github.com/deepset-ai/haystack/issues/8297 | [
"type:documentation",
"P2"
] | julian-risch | 0 |
apache/airflow | machine-learning | 47,889 | OperatorExtra links xcom keys should be pushed to xcom db | ### Body
After #45481, we need to check if the operator extra links are being pushed to the right place and not to the custom xcom backend.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | open | 2025-03-18T06:22:41Z | 2025-03-18T06:24:56Z | https://github.com/apache/airflow/issues/47889 | [
"area:core",
"area:core-operators",
"area:task-sdk"
] | amoghrajesh | 0 |
erdewit/ib_insync | asyncio | 43 | Allow download of expired futures historical data | Allow setting of includeExpired in contract field | closed | 2018-02-09T14:09:45Z | 2018-02-09T14:30:17Z | https://github.com/erdewit/ib_insync/issues/43 | [] | cwengc | 1 |
python-restx/flask-restx | flask | 144 | Future of Restx | **Ask a question**
I'm coming from the background of Flask-RESTPlus; we have developed web apps and REST APIs that were fully functional in production, but things went really awry when we updated to Werkzeug==1.0.0, and thereafter we learnt that Flask-RESTPlus is dead ([Issue#770](https://github.com/noirbizarre/flask-restplus/issues/770)).
We are planning to upgrade from Flask-RESTPlus to RESTX, but wanted to check with the maintainers and core developers how reliable it will be from a future perspective. Since we don't want to end up in the same situation as with Flask-RESTPlus, we want to confirm whether RESTX can be trusted for a production upgrade, whether it will be supported in the long run, and that it will not meet the fate of RESTPlus.
"question"
] | min2bro | 4 |
plotly/dash-bio | dash | 464 | Update pull request template to fix outdated sections | Re: https://github.com/plotly/dash-bio/pull/459#issuecomment-575357861
> Speaking of the PR template, it looks like the "steps to take before merging" and some other parts of it are outdated -- could you open up an issue and assign it to me to ensure that gets updated?
| closed | 2020-01-16T21:40:12Z | 2020-01-24T16:34:06Z | https://github.com/plotly/dash-bio/issues/464 | [] | josegonzalez | 0 |
matterport/Mask_RCNN | tensorflow | 2,736 | I want to train a custom network | I followed the guide and trained the model using a custom dataset.
However, I work in R&D developing a custom product,
so I want to train an optimized, user-defined network.
How can I do this?
| open | 2021-12-09T00:55:50Z | 2021-12-09T00:55:50Z | https://github.com/matterport/Mask_RCNN/issues/2736 | [] | donggru | 0 |
OFA-Sys/Chinese-CLIP | computer-vision | 130 | Any good suggestions for handling long entries? | 1. The entries contain nearly 50 characters each; any suggestions on how to process them? I added punctuation marks such as \ and 。
2. How much data is needed for fine-tuning to be effective?
3. How many iterations are needed when fine-tuning on 5,000 pairs?
@yangapku | open | 2023-06-05T09:31:13Z | 2024-12-18T06:24:18Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/130 | [] | Huang9495 | 3 |
microsoft/nni | machine-learning | 4,853 | nni=2.7; torch.cuda.is_available() is False in evaluate_model. | I coded following the script at https://github.com/microsoft/nni/blob/v2.7/examples/nas/multi-trial/mnist/search.py
The nni version is 2.7.
In "if __name__ == '__main__':", torch.cuda.is_available() is True.
In the function evaluate_model(), torch.cuda.is_available() is False and os.environ["CUDA_VISIBLE_DEVICES"] is -1.
I added one line of code, "exp_config.training_service.gpu_indices = [0,1,2]",
and ran the command:
```
python search.py
```
I use NNI to search hyperparameters with nnictl. But in NAS search, I am confused about how to control the trials. | closed | 2022-05-10T06:47:37Z | 2022-05-10T07:26:10Z | https://github.com/microsoft/nni/issues/4853 | [] | DavideHe | 1 |
plotly/dash | plotly | 2,525 | [BUG] | Hi everyone,
**Describe your context**
I'm trying to create a multi-page Dash application, using a Flask server.
Here are main requirements :
dash==2.7.1
flask==2.2.2
Here is my project folder:
dashapp/
- app.py
- pages /
- un.py
- deux.py
Content of app.py :
```
from dash import html
from dash import dcc
import dash
from flask import Flask

server = Flask(__name__)
server.config.update(SECRET_KEY="dash123KeySecret")
app = dash.Dash(__name__, server=server, use_pages=True)

app.layout = html.Div(
    [
        # One Link for each page in page_registry
        html.Div(
            [
                html.Div(
                    dcc.Link(
                        f"{page['name']} - {page['path']}", href=page["path"]
                    )
                )
                for page in dash.page_registry.values()
            ]
        ),
        dash.page_container
    ]
)
```
Content of un.py :
```
from dash import html
from dash_labs.plugins import register_page

def layout():
    return html.Div(children=[
        html.H1('1')
    ])

register_page(__name__, path='/')
```
Content of deux.py :
```
from dash import html
from dash_labs.plugins import register_page

def layout():
    return html.Div(children=[
        html.H1('2')
    ])

register_page(__name__, path='/deux')
```
**bug**
When running the app, I get a 404 error on the layout of the page "deux", whereas page "un" is OK.


**Expected behavior**
As with a one-page application, the chosen layout should be displayed.
litestar-org/litestar | asyncio | 4,019 | Bug: Documentation generation adding erroneous HTTP 201 response | ### Description
Hello!
I'm using Litestar and its automatic documentation generation (the Scalar variant, specifically), and I've found that a POST request declaring an HTTP 202 response seems to be adding an erroneous, bare HTTP 201 response to the openapi.json file.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import Litestar, post
from litestar.openapi.config import OpenAPIConfig
from litestar.openapi.datastructures import ResponseSpec
from litestar.openapi.plugins import ScalarRenderPlugin
@post(
path="/",
description="Bar",
name="Foo",
responses={202: ResponseSpec(data_container=dict[str, str], description="Results of the refresh")},
sync_to_thread=True,
)
def foobar() -> None:
...
app = Litestar(
route_handlers=[foobar],
openapi_config=OpenAPIConfig(
title="FooBar",
description="FooBarBaz",
version="0.0.1",
render_plugins=[ScalarRenderPlugin()],
path="/docs",
),
)
```
### Steps to reproduce
1. Run your app (I use `uvicorn`).
2. Navigate to BASE_URL:PORT/docs
3. See erroneous HTTP 201 entry in generated documentation.
See also: `/docs/openapi.json` and see the erroneous entry in there too.
### Screenshots


### Logs
```text
```
### Litestar Version
```sh
$ pdm show litestar
Name: litestar
Latest version: 2.14.0
Latest stable version: 2.14.0
Installed version: 2.14.0
```
### Platform
- [x] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2025-02-19T23:02:41Z | 2025-02-23T18:14:35Z | https://github.com/litestar-org/litestar/issues/4019 | [
"Bug :bug:"
] | AbstractUmbra | 6 |
ets-labs/python-dependency-injector | asyncio | 73 | Review and update ExternalDependency provider docs | closed | 2015-07-13T07:33:19Z | 2015-07-16T22:15:30Z | https://github.com/ets-labs/python-dependency-injector/issues/73 | [
"docs"
] | rmk135 | 0 | |
CPJKU/madmom | numpy | 518 | On the madmom version of drum transcription | There was an automatic drum transcription [model](https://arxiv.org/pdf/1806.06676.pdf) that included in madmom, mentioned at [here](http://ifs.tuwien.ac.at/~vogl/dafx2018/#:~:text=Trained%20models%20are%20available,in%20help%20and%20documentation.). But I cannot find the 0.16.dev version on the PyPI page, nor in the documentation. Could you please provide further instruction on how to use the __madmom/bin/DrumTranscripto__ ? | closed | 2023-03-01T23:04:48Z | 2023-03-02T07:05:56Z | https://github.com/CPJKU/madmom/issues/518 | [] | nicolaus625 | 1 |
BeanieODM/beanie | pydantic | 538 | Add support for $explain operator | ### Discussed in https://github.com/roman-right/beanie/discussions/485
<div type='discussions-op-text'>
<sup>Originally posted by **suyashdeshpande** February 7, 2023</sup>
Add support for $explain operator so we can analyze query performance</div> | open | 2023-04-13T19:56:46Z | 2024-12-08T21:54:18Z | https://github.com/BeanieODM/beanie/issues/538 | [
"feature request"
] | roman-right | 1 |
idealo/imagededup | computer-vision | 203 | Filenames not being passed to plotter correctly? | Code:
```
import sys
import os
from imagededup.methods import PHash
import numpy as np
if __name__ == '__main__':
    phasher = PHash()
    encodings = phasher.encode_images(image_dir=r'mydir')
    duplicates = phasher.find_duplicates(
        encoding_map=encodings,
        max_distance_threshold=0,
        scores=True)
    from imagededup.utils import plot_duplicates
    plot_duplicates(image_dir=r'mydir',
                    duplicate_map=encodings,
                    filename='myfile.jpg')
```
Error:
```
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1764/1764 [04:28<00:00, 6.57it/s]
2023-08-01 22:47:09,489: INFO End: Calculating hashes!
/home/rob/.local/lib/python3.11/site-packages/imagededup/methods/hashing.py:317: RuntimeWarning: Parameter num_enc_workers has no effect since encodings are already provided
warnings.warn('Parameter num_enc_workers has no effect since encodings are already provided', RuntimeWarning)
2023-08-01 22:47:09,490: INFO Start: Evaluating hamming distances for getting duplicates
2023-08-01 22:47:09,490: INFO Start: Retrieving duplicates using Cython Brute force algorithm
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1233/1233 [00:00<00:00, 15854.01it/s]
2023-08-01 22:47:09,661: INFO End: Retrieving duplicates using Cython Brute force algorithm
2023-08-01 22:47:09,661: INFO End: Evaluating hamming distances for getting duplicates
Traceback (most recent call last):
File "python-script.py", line 17, in <module>
plot_duplicates(image_dir=r'mydir',
File "/home/rob/.local/lib/python3.11/site-packages/imagededup/utils/plotter.py", line 136, in plot_duplicates
_plot_images(
File "/home/rob/.local/lib/python3.11/site-packages/imagededup/utils/plotter.py", line 61, in _plot_images
ax.imshow(Image.open(image_dir / image_list[i]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/site-packages/PIL/Image.py", line 3236, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/mydir/e'
```
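A note on a possible root cause (an assumption from reading the snippet, not verified against imagededup internals): `plot_duplicates` is passed `duplicate_map=encodings` (the raw filename-to-hash map) instead of the `duplicates` dict returned by `find_duplicates`. Looking up a filename in the encodings map yields a hash *string*, and iterating over a string yields single characters, which would explain the bogus one-letter "filename" `e`:

```python
# Minimal reproduction of the failure mode, no imagededup required.
encodings = {"myfile.jpg": "e9c0f1b2a3d4e5f6"}  # filename -> perceptual hash
duplicate_map = encodings                        # wrong: should be `duplicates`

image_list = duplicate_map["myfile.jpg"]         # a hash string, not a list of files
print(list(image_list)[:3])                      # ['e', '9', 'c']
```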
Why does the script think a literal filename of `e` was a file in `mydir`? There are no files that even begin with a letter in that directory, just numbers... | closed | 2023-08-02T06:02:18Z | 2023-08-02T15:56:43Z | https://github.com/idealo/imagededup/issues/203 | [] | biggestsonicfan | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 665 | Visual artifacts | Hi,
Thank you for this great implementation!
I am using it to generate simple "maps", like a segmentation task.
I am trying your pix2pix unet256 implementation of the network with my own loader, and I get these artifacts: [sample image link](https://ibb.co/ctBykgk).
There are lots of single or dual red, green, or blue "dot" pixels spread over the prediction where it should predict no data (i.e. black).
Do you know where this could come from? Is it due to the network? Thank you
| open | 2019-06-05T15:52:00Z | 2022-06-02T06:00:14Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/665 | [] | olivier-gillet | 4 |
httpie/cli | python | 567 | error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure | New to HTTPie, this works in curl but not in HTTPie. I have not upgraded to python 3 as I am working on
AWS Lambda and it only supports python 2.7.
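For context (a likely cause worth verifying, not a certain diagnosis): AWS API Gateway endpoints require SNI and modern TLS, and Python 2.7.6 linked against an old OpenSSL often supports neither, which matches this handshake failure even though a newer curl works. A quick stdlib check of what the interpreter's SSL stack provides:

```python
import ssl

print(ssl.OPENSSL_VERSION)             # e.g. "OpenSSL 1.0.1f ..." on old systems
print(getattr(ssl, "HAS_SNI", False))  # SNI support is required by API Gateway
```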
bpruss@Ubuntu-AWSD01:~/helloworld$ http --debug https://pdffjaur49.execute-api.us-east-1.amazonaws.com/dev/
HTTPie 1.0.0-dev
Requests 2.13.0
Pygments 2.2.0
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4]
/usr/bin/python
Linux 4.2.0-35-generic
<Environment {
"colors": 8,
"config": {
"__meta__": {
"about": "u'HTTPie configuration file'",
"help": "u'https://httpie.org/docs#config'",
"httpie": "u'0.9.9'"
},
"default_options": "[]"
},
"config_dir": "/home/bpruss/.httpie",
"is_windows": false,
"stderr": "<open file '<stderr>', mode 'w' at 0x7f1140bb31e0>",
"stderr_isatty": true,
"stdin": "<open file '<stdin>', mode 'r' at 0x7f1140bb30c0>",
"stdin_encoding": "UTF-8",
"stdin_isatty": true,
"stdout": "<open file '<stdout>', mode 'w' at 0x7f1140bb3150>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
>>> requests.request(**{
"allow_redirects": false,
"auth": "None",
"cert": "None",
"data": {},
"files": {},
"headers": {
"User-Agent": "HTTPie/1.0.0-dev"
},
"method": "get",
"params": {},
"proxies": {},
"stream": true,
"timeout": 30,
"url": "u'https://pdffjaur49.execute-api.us-east-1.amazonaws.com/dev/'",
"verify": true
})
http: error: SSLError: [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure while doing GET request to URL: https://pdffjaur49.execute-api.us-east-1.amazonaws.com/dev/
Traceback (most recent call last):
File "/usr/local/bin/http", line 11, in <module>
load_entry_point('httpie==1.0.0.dev0', 'console_scripts', 'http')()
File "/usr/local/lib/python2.7/dist-packages/httpie/__main__.py", line 11, in main
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/httpie/core.py", line 227, in main
log_error=log_error,
File "/usr/local/lib/python2.7/dist-packages/httpie/core.py", line 99, in program
final_response = get_response(args, config_dir=env.config.directory)
File "/usr/local/lib/python2.7/dist-packages/httpie/client.py", line 70, in get_response
response = requests_session.request(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 497, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
bpruss@Ubuntu-AWSD01:~/helloworld$ curl https://pdffjaur49.execute-api.us-east-1.amazonaws.com/dev/
{"hello": "world Bernie"}bpruss@Ubuntu-AWSD01:~/helloworld$ | closed | 2017-03-10T13:17:20Z | 2017-03-10T13:29:33Z | https://github.com/httpie/cli/issues/567 | [] | bpruss | 1 |
babysor/MockingBird | deep-learning | 64 | Cannot use the microphone on Mac: Error opening InputStream: Internal PortAudio error | closed | 2021-08-30T13:59:32Z | 2021-08-30T14:00:47Z | https://github.com/babysor/MockingBird/issues/64 | [] | AeroXi | 1
gunthercox/ChatterBot | machine-learning | 2,325 | not able to install chatterbot | Every time I try to install ChatterBot, it shows an error. Is this happening because of my Python version? | closed | 2023-09-14T11:02:03Z | 2025-02-17T19:23:15Z | https://github.com/gunthercox/ChatterBot/issues/2325 | [] | RohitGitTech | 2
huggingface/diffusers | pytorch | 10,075 | flux: different results on different machines | Hi!
During the debugging phase, I've noticed that training the [FLUX-Controlnet-Inpainting](https://github.com/alimama-creative/FLUX-Controlnet-Inpainting) model yields different results on different machines.
Specifically, on one machine, everything trains fine,
but on another, I'm getting an error that says "Gradient for module.xxx contains NaN!".
I've double-checked and confirmed that my code is exactly the same on both machines, and that I use the same seed in set_seed().
Has anyone else encountered this same issue? I'm curious to know if there are any potential causes for this discrepancy in training results across different machines. Any insights or suggestions would be greatly appreciated. | open | 2024-12-02T07:22:00Z | 2025-01-01T15:03:03Z | https://github.com/huggingface/diffusers/issues/10075 | [
"stale"
] | D222097 | 3 |
clovaai/donut | nlp | 170 | Register Model in MLflow | Hello! I have trained my own custom Donut model and used MLflow to log parameters, metrics, and artifacts, but now I want to register the model in MLflow. Can anyone guide me on how to do that?
Thanks in advance. | open | 2023-03-28T02:27:23Z | 2023-04-06T16:19:52Z | https://github.com/clovaai/donut/issues/170 | [] | rajsaraiya009 | 1 |
deepfakes/faceswap | machine-learning | 1,348 | Faceswap doesn't use RX 7900 XT | **Describe the bug**
When trying to extract faces, Faceswap falls back to CPU with this message:
2023-08-31 15:12:03.523307: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1990] Ignoring visible gpu device (device: 0, name: AMD Radeon RX 7900 XT, pci bus id: 0000:03:00.0) with AMDGPU version : gfx1100. The supported AMDGPU versions are gfx1030, gfx900, gfx906, gfx908, gfx90a.
**To Reproduce**
Steps to reproduce the behavior:
Start face extraction with any settings (in my case it were s3fd, fan and vgg-obstructed)
**Expected behavior**
The extraction should use the RX 7900 XT
**Desktop (please complete the following information):**
- OS: Arch Linux
- Python Version 3.10.12
- Conda Version 23.5.2
- Commit ID 5d00025 | closed | 2023-08-31T13:22:09Z | 2023-10-23T00:08:54Z | https://github.com/deepfakes/faceswap/issues/1348 | [] | RPochyly | 1 |
pydata/pandas-datareader | pandas | 4 | improve coverage | At the moment it's [](https://coveralls.io/r/pydata/pandas-datareader).
_Note: several tests are skipped atm (mostly these are labelled unreliable), this may partially explain results._
| closed | 2015-01-16T08:07:46Z | 2015-11-14T19:48:32Z | https://github.com/pydata/pandas-datareader/issues/4 | [] | hayd | 1 |
thtrieu/darkflow | tensorflow | 1,213 | Convert annotation video mat files to COCO | Hello, I have a problem. I am part of a Ukrainian team that is developing a detection system that could help save many lives. I'm on the team as a data scientist. But I ran into a problem: I need to convert the video annotations in .mat format to COCO format in order to train YOLOv7. Please help.
<img width="1261" alt="Знімок екрана 2022-10-29 о 19 50 06" src="https://user-images.githubusercontent.com/104899570/198843469-5713c034-95d5-4abe-9bdd-b07d35cb854a.png">
| open | 2022-10-29T16:50:18Z | 2022-10-29T16:50:18Z | https://github.com/thtrieu/darkflow/issues/1213 | [] | UkranianAndreii | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,059 | Failing to reconstruct high value pixels with pix2pix | Hi,
Using pix2pix I've experienced bad reconstructions related to colour consistency, especially with really high-valued pixels (mainly white that becomes grey). At first I thought it could be the way I was converting the numpy array (which ranges from [-1, 1]) wrongly to 0-255:
```python
prediction = (0.5 * prediction + 0.5) * 255.0
#prediction = ((prediction + 1) / 2.0) * 255.0
```
But it is not. Here is an example → [https://imgur.com/gallery/O7hhgXN](https://imgur.com/gallery/O7hhgXN)
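Indeed, the two conversion formulas in the snippet above are algebraically identical, which is easy to confirm numerically and rules out the [-1, 1] to [0, 255] mapping as the culprit:

```python
def v1(p):  # (0.5 * p + 0.5) * 255
    return (0.5 * p + 0.5) * 255.0

def v2(p):  # ((p + 1) / 2) * 255
    return ((p + 1) / 2.0) * 255.0

samples = [-1.0, -0.5, 0.0, 0.25, 1.0]
values = [v1(p) for p in samples]
# Both formulas agree everywhere on [-1, 1]
assert all(abs(v1(p) - v2(p)) < 1e-12 for p in samples)
print(values)  # [0.0, 63.75, 127.5, 159.375, 255.0]
```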
What could it be? How can I solve it? Thanks :D | closed | 2020-06-09T13:37:13Z | 2024-02-27T18:26:41Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1059 | [] | adriacabeza | 6 |
vaexio/vaex | data-science | 1,378 | [FEATURE-REQUEST] Create OHLC from tick data | **Description**
I'm trying to create OHLC (Open, High, Low, Close) data from price tick data. I can sort of create Open, High, and Low, but Open is wrong and the times have some offset. I'm not sure about Close.
```python
import pandas as pd
import vaex
import numpy as np
import time
dates = pd.date_range("01-01-2019", "14-04-2020", freq="60s")
num = len(dates)
vdf = vaex.from_arrays(ts=pd.to_datetime(dates), x=np.random.randint(1, 1000, num))
print(vdf.head(17))
print()
# Desired output
print(vdf.to_pandas_df().resample('15Min', on='ts')['x'].ohlc())
print()
# Create Open High Low (Bit off)
vdf2 = vdf.groupby(by=vaex.BinnerTime(vdf.ts, resolution='m', every=15), agg={'O': vaex.agg.first('x', 'ts'), 'H': vaex.agg.max('x'), 'L': vaex.agg.min('x')})
print(vdf2)
print()
# Create Close?
vdf3 = vdf.groupby(by=vaex.BinnerTime(vdf.ts, resolution='m', every=15), agg={'C': vaex.agg.first('x', '-ts')})
print(vdf3)
```
Output:
```bash
# ts x
0 2019-01-01 00:00:00.000000000 5
1 2019-01-01 00:01:00.000000000 20
2 2019-01-01 00:02:00.000000000 690
3 2019-01-01 00:03:00.000000000 434
4 2019-01-01 00:04:00.000000000 686
... ... ...
12 2019-01-01 00:12:00.000000000 182
13 2019-01-01 00:13:00.000000000 530
14 2019-01-01 00:14:00.000000000 659
15 2019-01-01 00:15:00.000000000 929
16 2019-01-01 00:16:00.000000000 734
open high low close
ts
2019-01-01 00:00:00 5 894 5 659
2019-01-01 00:15:00 929 929 217 611
2019-01-01 00:30:00 424 966 41 228
2019-01-01 00:45:00 19 977 19 42
2019-01-01 01:00:00 137 989 96 686
... ... ... ... ...
2020-04-13 23:00:00 756 994 99 204
2020-04-13 23:15:00 510 847 3 3
2020-04-13 23:30:00 128 898 62 501
2020-04-13 23:45:00 920 937 54 626
2020-04-14 00:00:00 694 694 694 694
[45025 rows x 4 columns]
# ts O H L
0 2018-12-31 23:59 690 894 5
1 2019-01-01 00:14 929 929 217
2 2019-01-01 00:29 938 966 41
3 2019-01-01 00:44 904 977 19
4 2019-01-01 00:59 96 989 42
... ... ... ... ...
45,020 2020-04-13 22:59 440 994 3
45,021 2020-04-13 23:14 204 847 117
45,022 2020-04-13 23:29 263 898 3
45,023 2020-04-13 23:44 54 937 54
45,024 2020-04-13 23:59 626 694 626
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
KeyError: "Unknown variables or column: '-ts'"
During handling of the above exception, another exception occurred:
UFuncTypeError: ufunc 'negative' did not contain a loop with signature matching types dtype('<M8[ns]') -> dtype('<M8[ns]')
During handling of the above exception, another exception occurred:
...
KeyError: "Unknown variables or column: '-ts'"
During handling of the above exception, another exception occurred:
...
UFuncTypeError: ufunc 'negative' did not contain a loop with signature matching types dtype('<M8[ns]') -> dtype('<M8[ns]')
```
**Additional Context**
Trying to emulate [Pandas OHLC](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.resample.Resampler.ohlc.html)
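For reference, the aggregation being requested is easy to state independent of any dataframe library; a stdlib sketch over (minute, price) ticks shows the intended Open/High/Low/Close per 15-minute bucket:

```python
from itertools import groupby

ticks = [  # (minute offset, price), already sorted by time
    (0, 5), (1, 20), (2, 690), (14, 659),   # first 15-minute bucket
    (15, 929), (16, 734), (29, 611),        # second bucket
]

def ohlc(ticks, every=15):
    bars = []
    for bucket, group in groupby(ticks, key=lambda t: t[0] // every):
        prices = [p for _, p in group]
        # (bucket start, open, high, low, close)
        bars.append((bucket * every, prices[0], max(prices), min(prices), prices[-1]))
    return bars

bars = ohlc(ticks)
print(bars)  # [(0, 5, 690, 5, 659), (15, 929, 929, 611, 611)]
```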
| closed | 2021-05-28T01:50:25Z | 2022-03-03T12:15:28Z | https://github.com/vaexio/vaex/issues/1378 | [] | Penacillin | 19 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 507 | Defining your entry point module as "site.py" causes NoAppException | If you create a file called `site.py` which defines your Flask app, and then set the env var as `FLASK_APP="site.py"`, Flask will find the file but it can't find the application instance.
```
Traceback (most recent call last):
File "/asdf/venv/lib/python3.4/site-packages/flask/cli.py", line 48, in find_best_app
'using a factory function.' % module.__name__)
flask.cli.NoAppException: Failed to find application in module "site". Are you sure it contains a Flask application? Maybe you wrapped it in a WSGI middleware or you are using a factory function.
```
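A likely explanation (worth verifying): `site` is a standard-library module that the interpreter imports automatically at startup, so when Flask imports the module named `site` it gets the already-cached stdlib module, which of course contains no Flask application. Any stdlib-shadowing name (`site.py`, `json.py`, `email.py`, ...) can trigger this; renaming the file avoids it. The caching is easy to observe:

```python
import sys

# "site" is imported during interpreter startup, long before any
# project-local site.py could be considered.
assert "site" in sys.modules

import site  # resolves to the cached standard-library module, not ./site.py
print(site.__name__)  # site
```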
Folder structure: http://i.imgur.com/CR88kUo.png
requirements.txt
```
Flask==0.12.1
Flask-SQLAlchemy==2.2
Jinja2==2.9.6
MarkupSafe==1.0
SQLAlchemy==1.1.10
Werkzeug==0.12.1
click==6.7
itsdangerous==0.24
``` | closed | 2017-06-18T01:44:19Z | 2020-12-05T20:55:48Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/507 | [] | Kangaroux | 2 |
amisadmin/fastapi-amis-admin | fastapi | 95 | After creating a new entry in a table, it cannot be queried immediately. Is this a caching issue? | After creating a new entry in a table, it cannot be queried immediately. Is this a caching issue? | open | 2023-05-16T07:45:43Z | 2023-05-16T09:37:33Z | https://github.com/amisadmin/fastapi-amis-admin/issues/95 | [] | xuehaoweng | 1
widgetti/solara | fastapi | 599 | feat: support ibis tables (...and define a more formal protocol) | I would love to render (very large, i.e. millions of rows) ibis tables in solara.DataTable. This currently doesn't work because:
- solara calls `len(df)` on the data. ibis uses `.count()` instead, to make the execution very explicit
- the logic to go from ibis table to a list[dict] of records isn't implemented for ibis
I see a few ways to support this:
- manually add ibis support :)
- Define a more formal protocol for how solara interacts with dataframes, and then let users implement the translation layer themselves.
- Use ibis itself as your dataframe abstraction layer :) Another dependency, doesn't support vaex, probably not worth it.
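The second option could be quite small: a structural `Protocol` that each backend gets an adapter for, so the `len(df)` vs `.count()` difference lives in the adapter. A sketch with hypothetical names (an ibis adapter would presumably call `table.count().execute()` and `table.head(limit).execute()`):

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class DataFrameAdapter(Protocol):
    def row_count(self) -> int: ...
    def to_records(self, limit: int) -> list[dict[str, Any]]: ...

class ListAdapter:
    """Toy adapter over a list of dicts, standing in for a real backend."""
    def __init__(self, rows: list[dict[str, Any]]):
        self.rows = rows
    def row_count(self) -> int:
        return len(self.rows)
    def to_records(self, limit: int) -> list[dict[str, Any]]:
        return self.rows[:limit]

adapter = ListAdapter([{"a": 1}, {"a": 2}, {"a": 3}])
assert isinstance(adapter, DataFrameAdapter)  # structural check only
print(adapter.row_count(), adapter.to_records(2))  # 3 [{'a': 1}, {'a': 2}]
```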
I'm cooking up a little PR right now that at least refactors things to make them a bit cleaner, regardless of if we do something more drastic. Thank you!
| open | 2024-04-11T19:48:57Z | 2024-04-26T21:00:18Z | https://github.com/widgetti/solara/issues/599 | [] | NickCrews | 5 |
pytest-dev/pytest-cov | pytest | 161 | More tolerant in certain circumstances where report generation failed | Back in version 2.2.1 [we were more tolerant](https://github.com/pytest-dev/pytest-cov/blob/0696bc72ad6f5ce0017d6904c04141376e291467/src/pytest_cov/plugin.py#L194-L198) when something went wrong with coverage report generation:
```
if not (self.failed and self.options.no_cov_on_fail):
try:
total = self.cov_controller.summary(terminalreporter.writer)
except CoverageException as exc:
terminalreporter.writer.write('Failed to generate report: %s\n' % exc)
total = 0
```
With 2.5.1, we raise an exception instead:
```
if not self._is_slave(session) and self._should_report():
try:
self.cov_total = self.cov_controller.summary(self.cov_report)
except CoverageException as exc:
raise pytest.UsageError(
'Failed to generate report: %s\n' % exc
)
assert self.cov_total is not None, 'Test coverage should never be `None`'
```
Do you think it would make sense to be tolerant like in the old days? I have some projects that are built on multiple Python implementations with the same code base. One of the implementations (Jython) doesn't support code coverage and will raise a CoverageException. If we re-raise the exception here, the whole test run will fail.
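To make the proposal concrete, the 2.2.1-era behavior can be sketched independent of pytest-cov internals (the names here are illustrative, not the plugin's actual API):

```python
class CoverageException(Exception):
    """Stand-in for coverage.misc.CoverageException."""

def tolerant_summary(summary, write):
    """Return the coverage total, or 0 with a warning if reporting fails."""
    try:
        return summary()
    except CoverageException as exc:
        write('Failed to generate report: %s\n' % exc)
        return 0

def failing_summary():  # e.g. Jython, where coverage has no tracer
    raise CoverageException("no tracer available")

messages = []
total = tolerant_summary(failing_summary, messages.append)
print(total)     # 0
print(messages)  # ['Failed to generate report: no tracer available\n']
```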
I can send a PR for this, but I would like to get some opinions first. | closed | 2017-06-21T22:05:51Z | 2017-08-01T12:17:20Z | https://github.com/pytest-dev/pytest-cov/issues/161 | [] | huyphan | 17 |
pallets/flask | python | 4,860 | Improve JSONIFY_MIMETYPE deprecation warning | With version 2.2 of flask, `JSONIFY_MIMETYPE` was deprecated in favor of `DefaultJSONProvider`'s `mimetype` class property.
When `JSONIFY_MIMETYPE` is still used, a `DeprecationWarning` is raised:
> The 'JSONIFY_MIMETYPE' config key is deprecated and will be removed in Flask 2.3. Set 'app.json.mimetype' instead.
When following this instruction, [pyright](https://github.com/RobertCraigie/pyright-python) reports an error, though:
> error: Cannot assign member "mimetype" for type "JSONProvider": Member "mimetype" is unknown - Pyright[reportGeneralTypeIssues]
I think pyright is correct: the `mimetype` property does not exist on `JSONProvider` - as it's only meant to be a base class allowing users to customize behavior. Only its subclass `DefaultJSONProvider` has this property, and is expected to act accordingly.
So I think the deprecation hint should instead say:
> Please set flask.json.provider.DefaultJSONProvider.mimetype instead.
If you agree, I'm happy to provide a pull request!
Environment:
- Python version: 3.10.8
- Flask version: 2.2.2
- pyright version: 1.1.278
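To illustrate the distinction pyright is making, here is a stdlib-only sketch mirroring the class layout (not Flask's actual source): `mimetype` is a class attribute introduced on `DefaultJSONProvider`, not on the `JSONProvider` base class, so the deprecation message should point at the subclass:

```python
class JSONProvider:
    """Base class: customization interface only, no mimetype attribute."""

class DefaultJSONProvider(JSONProvider):
    mimetype = "application/json"

class CustomJSONProvider(DefaultJSONProvider):
    mimetype = "application/json; charset=utf-8"

assert not hasattr(JSONProvider, "mimetype")  # what pyright is flagging
print(CustomJSONProvider.mimetype)            # application/json; charset=utf-8
```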
| closed | 2022-11-07T11:12:38Z | 2023-01-10T00:05:56Z | https://github.com/pallets/flask/issues/4860 | [] | mephinet | 2 |
dsdanielpark/Bard-API | api | 57 | Response error | The response code is not 200; the response status is 302. | closed | 2023-06-08T04:44:37Z | 2023-06-08T12:26:00Z | https://github.com/dsdanielpark/Bard-API/issues/57 | [] | Ridoy302583 | 1
mwaskom/seaborn | pandas | 3,098 | Figure aspect ratio sometimes off in docs on mobile | I noticed this in portrait mode on Chrome on Android.
<details>
<summary>Example:</summary>

</details> | open | 2022-10-18T15:09:34Z | 2022-11-04T10:39:37Z | https://github.com/mwaskom/seaborn/issues/3098 | [
"docs"
] | zmoon | 0 |
fbdesignpro/sweetviz | data-visualization | 179 | module not found, or there is no such module named "sweetviz" | When trying to import Sweetviz, it says that there is no such module. I tried all the possible solutions given on the library documentation page, but still, nothing is working. | open | 2024-09-10T06:18:53Z | 2024-11-21T16:01:02Z | https://github.com/fbdesignpro/sweetviz/issues/179 | [] | Nikhil-Bhalerao | 1
pyg-team/pytorch_geometric | deep-learning | 10,004 | add thumbnail for recently added GraphTransformer tutorial | ### 📚 Describe the documentation issue
@xnuohz
<img width="824" alt="Image" src="https://github.com/user-attachments/assets/b496009e-2c84-43e9-8b8e-880020b12515" />
https://pytorch-geometric--8144.org.readthedocs.build/en/8144/tutorial/application.html
I thoroughly read through your tutorial, but I did not double-check the thumbnail. Please add it.
Related to https://github.com/pyg-team/pytorch_geometric/pull/8144/
### Suggest a potential alternative/fix
_No response_ | closed | 2025-02-05T15:59:21Z | 2025-02-10T17:31:30Z | https://github.com/pyg-team/pytorch_geometric/issues/10004 | [
"documentation"
] | puririshi98 | 0 |
influxdata/influxdb-client-python | jupyter | 231 | TypeError: query() got an unexpected keyword argument 'params' | Hi,
I'm trying to query some data according to your example (Query: using bind parameters). Instead of data, I'm getting the following error:
```
Traceback (most recent call last):
File "test.py", line 53, in <module>
tables = query_api.query('''
TypeError: query() got an unexpected keyword argument 'params'
```
Best,
Stefan | closed | 2021-04-22T07:57:19Z | 2021-04-29T09:45:49Z | https://github.com/influxdata/influxdb-client-python/issues/231 | [
"question",
"wontfix"
] | staeglis | 2 |
ydataai/ydata-profiling | jupyter | 1,596 | How to get the dataframe cleaned (processed by ydata-profiling)? | ### Missing functionality
Getting the processed clean dataframe
### Proposed feature
Give an option to get the cleaned dataframe after processing it through ydata-profiling.
### Alternatives considered
_No response_
### Additional context
_No response_ | open | 2024-05-20T10:23:29Z | 2024-06-07T17:37:18Z | https://github.com/ydataai/ydata-profiling/issues/1596 | [
"feature request 💬"
] | francesco-gariboldi | 1 |
HIT-SCIR/ltp | nlp | 211 | Why does ltp_test.exe show "stopped working" when LTP is deployed locally and called from Python? | The code is as follows:
# -*- coding:UTF-8 -*-
import os
LTP_path = "D:\\myprojects\\LTP"
model_exe = "ltp_test"
threads_num = " --threads 3"
last_stage = " --last-stage" + "all"
input_path = " --input" + "D:\\myprojects\\LTP\\file\\test.txt"
seg_lexicon = ""
pos_lexicon = ""
output_path = "D:\\myprojects\\LTP\\result\\out.txt"
command = "cd " + LTP_path + "& " + model_exe + threads_num + input_path + last_stage + " >" + output_path
os.system(command) | closed | 2017-04-06T10:48:28Z | 2017-04-13T15:22:47Z | https://github.com/HIT-SCIR/ltp/issues/211 | [] | smallflying | 7 |
littlecodersh/ItChat | api | 838 | Can send_raw_msg with type 42 send a friend's contact card? | Before submitting, please make sure you have checked the following!
- [yes ] You can log in to the WeChat account in a browser
- [yes ] I have read and followed the instructions in the [documentation][document]
- [yes ] This issue has not been reported in [issues][issues]; otherwise, please report it under the existing issue
- [yes ] This issue is indeed about `itchat`, not another project.
- [no ] If your issue is about stability, consider trying the [itchatmp][itchatmp] project, which has extremely low requirements on network stability
Your itchat version is: `[1.3.10]`.
```
itchat.send_raw_msg(42, user_info_dict, 'filehelper')
```
The response received is
```
{u'MsgID': u'', u'LocalID': u'', u'BaseResponse': {u'ErrMsg': u'', u'Ret': 1200, 'RawMsg': u''}}
```
Does this mean it cannot be used?
user_info_dict was obtained from search and has a valid UserName | open | 2019-06-10T06:27:31Z | 2019-06-16T07:41:29Z | https://github.com/littlecodersh/ItChat/issues/838 | [] | coronin | 0
deepspeedai/DeepSpeed | machine-learning | 6,629 | When --bind_cores_to_rank is on, only half of the CPUs are used | #!/bin/bash
NUM_NODES=1
NUM_GPUS=3
EXPERTS=3
deepspeed --num_nodes=${NUM_NODES}\
--num_gpus=${NUM_GPUS} \
--bind_cores_to_rank \
train_autodirect_moe.py \
--log-interval 100 \
--noisy-gate-policy 'RSample' \
--moe-param-group \
--config ../configs/autoDirectFinal/XXX.json \
--gpus ${NUM_GPUS} \
--lr 0.0003 \
--clip 1.0
This is my DeepSpeed launch script, and when I turn --bind_cores_to_rank on, only half of the CPUs are used:

When I take it off, all of the CPUs are used, but it results in lower performance:

Is there any way to make DeepSpeed use more CPUs? Thanks.
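One possible explanation worth checking (an assumption, not confirmed DeepSpeed behavior): core binding typically pins each rank to physical cores only, so with SMT/hyper-threading enabled the logical-CPU count is twice the number of cores actually bound. A quick stdlib check of what a launched process is allowed to use (`sched_getaffinity` is Linux-only, hence the guard):

```python
import os

total = os.cpu_count()  # logical CPUs visible on the machine
if hasattr(os, "sched_getaffinity"):       # Linux only
    usable = len(os.sched_getaffinity(0))  # CPUs this process may run on
else:
    usable = total

print("logical CPUs:", total, "- usable by this process:", usable)
```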
| closed | 2024-10-16T08:54:33Z | 2024-10-25T17:24:02Z | https://github.com/deepspeedai/DeepSpeed/issues/6629 | [] | GensokyoLover | 2 |
littlecodersh/ItChat | api | 61 | How can I kick a specific person out of a WeChat group? | closed | 2016-08-09T05:15:26Z | 2016-08-09T08:29:05Z | https://github.com/littlecodersh/ItChat/issues/61 | [
"question"
] | xxiyj | 1 | |
davidsandberg/facenet | computer-vision | 539 | the filter_dataset function doesn't filter the dataset | @davidsandberg I modified the filter_dataset function like this:
def filter_dataset(dataset, data_filename, percentile, min_nrof_images_per_class):
    with h5py.File(data_filename, 'r') as f:
        distance_to_center = np.array(f.get('distance_to_center'))
        label_list = np.array(f.get('label_list'))
        image_list = np.array(f.get('image_list'))
        distance_to_center_threshold = find_threshold(distance_to_center, percentile)
        indices = np.where(distance_to_center >= distance_to_center_threshold)[0]
        filtered_dataset = dataset
        removelist = []
        for i in indices:
            label = label_list[i]
            image = image_list[i]
            if image in filtered_dataset[label].image_paths:
                filtered_dataset[label].image_paths.remove(image)
        for j in range(len(filtered_dataset)):
            if len(filtered_dataset[j].image_paths) < min_nrof_images_per_class:
                removelist.append(j)
                # del (filtered_dataset[j])
        ix = sorted(list(set(removelist)), reverse=True)
        for i in ix:
            del (filtered_dataset[i])
        return filtered_dataset
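The key change over deleting inside the loop is collecting indices first and deleting them in reverse order, so earlier deletions don't shift the positions of later ones. The pattern in isolation:

```python
data = ["a", "b", "c", "d", "e"]
remove = [1, 3]

for i in sorted(set(remove), reverse=True):  # delete from the back
    del data[i]

print(data)  # ['a', 'c', 'e']
```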
| open | 2017-11-17T08:30:08Z | 2017-11-17T08:33:41Z | https://github.com/davidsandberg/facenet/issues/539 | [] | allenxcp | 0 |
inventree/InvenTree | django | 8,980 | [FR] Convert purchased items to base currency on receipt | When receiving line items against a purchase order, it would be useful to automatically convert the "unit cost" of the received stock to the internal base currency.
This provides two major benefits:
### Consistent Pricing
Internal stock value is all tracked in the same currency, making for easier comparisons / statistics / etc
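A minimal sketch of the conversion-at-receipt step being proposed (hypothetical names; InvenTree would presumably use its configured exchange-rate backend rather than a literal rate table):

```python
from decimal import Decimal

def convert_to_base(unit_cost, currency, base_currency, rates):
    """Convert a purchase unit cost into the base currency at today's rate."""
    if currency == base_currency:
        return unit_cost
    rate = Decimal(str(rates[(currency, base_currency)]))
    return (unit_cost * rate).quantize(Decimal("0.0001"))

rates = {("USD", "EUR"): "0.9"}
print(convert_to_base(Decimal("10.00"), "USD", "EUR", rates))  # 9.0000
```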
### Price Freezing
Freezes the exchange rate at the time of purchase, so that historical exchange rates do not need to be considered. | closed | 2025-01-28T21:49:55Z | 2025-03-20T14:47:39Z | https://github.com/inventree/InvenTree/issues/8980 | [
"enhancement",
"pricing",
"feature"
] | SchrodingersGat | 0 |
ultralytics/yolov5 | pytorch | 13,367 | Issue using PaddleDetection's YOLOv5 model in val.py after converting to ONNX - Error with scale_factor | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi all,
I'm working with a YOLOE plus model from PaddleDetection and trying to use its .pdparams weights in the val.py script. My process involves converting the Paddle model to ONNX and then using the .onnx model for validation (I wanted to use val.py from this repo, as I want to compare some of my previous results with the YOLOE plus). However, I am encountering an issue related to scale_factor during this process.
Issue Details:
Model: YOLOv5 from PaddleDetection (weights in .pdparams format).
Error: The script throws an error related to scale_factor.

Environment:
PaddleDetection version: 3.0.0-beta1
PaddlePaddle version: release/2.5
Question:
Is there any known compatibility issue or additional steps required when using a PaddleDetection YOLOv5 model converted to ONNX with scale_factor in the val.py script? How should I adjust the scale_factor parameter properly in the val.py scenario to ensure correct evaluation?
Any guidance or suggestions would be greatly appreciated!
Thanks in advance.
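One detail that often trips up this conversion (stated as an assumption to verify against the exporter docs, not a confirmed diagnosis): PaddleDetection exports usually take a second input named `scale_factor`, shaped (batch, 2) as (h_scale, w_scale), mapping the original image size to the network input size. Feeding it explicitly may resolve the error; the computation itself is trivial:

```python
def scale_factor(orig_h, orig_w, input_h=640, input_w=640):
    """(h_scale, w_scale) from original image size to network input size."""
    return (input_h / orig_h, input_w / orig_w)

print(scale_factor(480, 640))  # (1.3333333333333333, 1.0)
```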
### Additional
_No response_ | open | 2024-10-18T10:49:05Z | 2024-10-22T14:13:10Z | https://github.com/ultralytics/yolov5/issues/13367 | [
"question"
] | gauricollab09 | 3 |
kizniche/Mycodo | automation | 1,395 | devices/atlas_scientific_uart.py is missing the build_string(data) function. |
### Versions:
- Mycodo Version: 8.16.0
- Raspberry Pi Version: 3B
- Raspbian OS Version: Bookworm
### Reproducibility
Please list specific setup details that are involved and the steps to reproduce the behavior:
If you activate an Atlas Scientific pH sensor with the UART interface and try to calibrate, you get an exception in the log saying that the build_string(..) function is not declared. And if you check the devices/atlas_scientific_uart.py file, you will see that it is not there.
If you copy the function from the I2C version, calibration will work again.
### Expected behavior
No Exception.
| open | 2024-10-06T12:28:53Z | 2025-01-08T19:48:04Z | https://github.com/kizniche/Mycodo/issues/1395 | [
"bug"
] | domonoky | 1 |
MolSSI/cookiecutter-cms | pytest | 69 | Replacing Versioneer | The `versioneer` repository appears to be dead; however, this isn't necessarily a bad thing since `versioneer` works for all use cases and examples that we can find. In addition, versioneer is static and installed, so there are no dependency issues. However, this likely will not be the case forever, and watching for replacements like [setuptools_scm](https://github.com/pypa/setuptools_scm/issues) is something that we should continue to evaluate. | closed | 2019-02-14T14:41:28Z | 2019-06-05T15:49:36Z | https://github.com/MolSSI/cookiecutter-cms/issues/69 | [] | dgasmith | 1
neuml/txtai | nlp | 441 | Resolve application references in pipelines | Add logic to detect application pipelines that have a configuration parameter named `application`. When that parameter is passed as part of the pipeline configuration, it will be resolved to the currently running application.
The application framework already has similar logic for setting the `embeddings` on `extractor` pipelines. | closed | 2023-02-23T20:33:45Z | 2023-02-23T20:36:45Z | https://github.com/neuml/txtai/issues/441 | [] | davidmezzetti | 0 |
darrenburns/posting | automation | 41 | Support for dev certificates | In our local dev environment we use self signed certs generated with mkcert. I do not want to disable ssl verification, but wasn't able to find a way to let posting use my local root CA generated by mkcert. I've tried injection python package "truststore" to posting as well as setting python config param global.cert.
Every request to my local backend fails with the following:

Did I miss something or does posting not support adding a CA for SSL verification currently? | closed | 2024-07-16T06:37:10Z | 2024-07-17T09:04:57Z | https://github.com/darrenburns/posting/issues/41 | [] | Persi | 9 |
charlesq34/pointnet | tensorflow | 150 | about prepare data | Hi,
When I sample points (using the command that the function utils/data_pre_util.py/get_sampling_command provides),
pointnet-master/utils/third_party/mesh_sampling/build/pcsample ../data/my_data/veg.txt ../data/my_data/veg.ply -n_samples 2048 -leaf_size 0.005000
the following error occurs:
pointnet-master/utils/third_party/mesh_sampling/build/pcsample: No such file or directory
so could you provide the file third_party/mesh_sampling/build/pcsample?
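As a stopgap while the binary is unavailable, uniform random sampling to a fixed 2048 points can be done in plain Python (a sketch only — this is not equivalent to pcsample's leaf-size based sampling, but it produces fixed-size point clouds):

```python
import random

def sample_points(points, n_samples=2048):
    # Sketch: uniformly sample n_samples points; fall back to sampling
    # with replacement if the cloud has fewer points than requested.
    if len(points) >= n_samples:
        return random.sample(points, n_samples)
    return [random.choice(points) for _ in range(n_samples)]

cloud = [(random.random(), random.random(), random.random()) for _ in range(5000)]
print(len(sample_points(cloud)))  # → 2048
```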
Thanks in advance. | open | 2018-10-29T08:15:11Z | 2019-05-16T11:18:12Z | https://github.com/charlesq34/pointnet/issues/150 | [] | programmerkiki | 3 |
facebookresearch/fairseq | pytorch | 5,533 | Wav2Vec2 Pretraining | ## ❓ Questions and Help
#### I want to perform wav2vec2 pretraining from scratch, and while following the documentation at https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec it is mentioned that all audio clips should be in a single directory. The issue is that I have too much data to keep in a single directory.
I have data in multiple directories on different disks and can't move the complete dataset into a single directory due to storage constraints. Is it possible to pretrain the model in this scenario? | open | 2024-08-08T07:49:08Z | 2024-09-02T13:43:22Z | https://github.com/facebookresearch/fairseq/issues/5533 | [
"question",
"needs triage"
] | rajeevbaalwan | 3 |
dunossauro/fastapi-do-zero | pydantic | 34 | [Lesson 10] - Cover the CI environment-variables part | As raised in Telegram conversations by Jorge Luiz Plautz, the lesson did not show a way to work around loading environment variables during CI, which leads to several errors when running the tests! | closed | 2023-10-21T22:20:11Z | 2023-12-09T20:20:57Z | https://github.com/dunossauro/fastapi-do-zero/issues/34 | [] | dunossauro | 3 |
comfyanonymous/ComfyUI | pytorch | 6,704 | Failed to import Omnigen | ### Your question
I have tried everything I can think of, and I don't know what else to do to get OmniGen running.
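To narrow it down, I ran a small stdlib-only check with the embedded python.exe (nothing ComfyUI-specific; the idea is just to see the real import failure instead of the node's generic "Failed to import OmniGen" message, which swallows the underlying exception):

```python
import importlib
import traceback

def try_import(name):
    # Attempt the import and print the actual failure reason
    # instead of a generic "failed to import" wrapper message.
    try:
        importlib.import_module(name)
        return True
    except Exception:
        traceback.print_exc()
        return False

# Replace "os" with the module name the custom node actually imports
# (e.g. the OmniGen package), running from the node's directory.
print(try_import("os"))  # → True
```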
### Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node ID:** 21
- **Node Type:** ailab_OmniGen
- **Exception Type:** RuntimeError
- **Exception Message:** Failed to import OmniGen. Please check if the code was downloaded correctly.
## Stack Trace
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 293, in execute
obj = class_def()
^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\AILab_OmniGen.py", line 57, in __init__
raise RuntimeError("Failed to import OmniGen. Please check if the code was downloaded correctly.")
## System Information
- **ComfyUI Version:** 0.3.13
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.6.0+cpu
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4090 Laptop GPU : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 17170956288
- **VRAM Free:** 15778971648
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
2025-02-05T01:42:47.552782 -
2025-02-05T01:42:47.553783 - ** ComfyUI startup time:2025-02-05T01:42:47.553783 - 2025-02-05T01:42:47.553783 - 2025-02-05 01:42:47.5532025-02-05T01:42:47.553783 -
2025-02-05T01:42:47.553783 - ** Platform:2025-02-05T01:42:47.553783 - 2025-02-05T01:42:47.553783 - Windows2025-02-05T01:42:47.553783 -
2025-02-05T01:42:47.553783 - ** Python version:2025-02-05T01:42:47.553783 - 2025-02-05T01:42:47.553783 - 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]2025-02-05T01:42:47.553783 -
2025-02-05T01:42:47.553783 - ** Python executable:2025-02-05T01:42:47.553783 - 2025-02-05T01:42:47.553783 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\python.exe2025-02-05T01:42:47.553783 -
2025-02-05T01:42:47.553783 - ** ComfyUI Path:2025-02-05T01:42:47.553783 - 2025-02-05T01:42:47.553783 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI2025-02-05T01:42:47.553783 -
2025-02-05T01:42:47.553783 - ** ComfyUI Base Folder Path:2025-02-05T01:42:47.553783 - 2025-02-05T01:42:47.554879 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI2025-02-05T01:42:47.554879 -
2025-02-05T01:42:47.563747 - ** User directory:2025-02-05T01:42:47.564799 - 2025-02-05T01:42:47.564799 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user2025-02-05T01:42:47.564799 -
2025-02-05T01:42:47.564799 - ** ComfyUI-Manager config path:2025-02-05T01:42:47.564799 - 2025-02-05T01:42:47.564799 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-02-05T01:42:47.564799 -
2025-02-05T01:42:47.564799 - ** Log path:2025-02-05T01:42:47.564799 - 2025-02-05T01:42:47.564799 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\user\comfyui.log2025-02-05T01:42:47.564799 -
2025-02-05T01:43:02.999124 -
Prestartup times for custom nodes:
2025-02-05T01:43:03.000122 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-easy-use
2025-02-05T01:43:03.000122 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2025-02-05T01:43:03.000122 - 16.6 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-02-05T01:43:03.001122 -
2025-02-05T01:43:07.500847 - Checkpoint files will always be loaded safely.
2025-02-05T01:43:07.652497 - Total VRAM 16376 MB, total RAM 65273 MB
2025-02-05T01:43:07.654511 - pytorch version: 2.6.0+cpu
2025-02-05T01:43:07.655516 - Set vram state to: NORMAL_VRAM
2025-02-05T01:43:07.656515 - Device: cuda:0 NVIDIA GeForce RTX 4090 Laptop GPU : cudaMallocAsync
2025-02-05T01:43:08.616665 - Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
2025-02-05T01:43:10.204264 - ComfyUI version: 0.3.13
2025-02-05T01:43:10.227646 - [Prompt Server] web root: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\web
2025-02-05T01:43:10.827586 - Imported AddPaddingAdvanced successfully
2025-02-05T01:43:10.828585 - Imported AddPaddingBase successfully
2025-02-05T01:43:10.828585 - Imported AD_ImageResize successfully
2025-02-05T01:43:10.829629 - Imported AD_MockupMaker successfully
2025-02-05T01:43:10.830635 - Imported AD_PosterMaker successfully
2025-02-05T01:43:10.830635 - Imported AD_PromptSaver successfully
2025-02-05T01:43:10.831634 - Imported ComfyUI-FofrToolkit successfully
2025-02-05T01:43:11.260473 - Imported ComfyUI-ImageCaptioner successfully
2025-02-05T01:43:11.261472 - Imported ComfyUI-imageResize successfully
2025-02-05T01:43:11.261472 - Imported ComfyUI-textAppend successfully
2025-02-05T01:43:11.262480 - Imported imagecreatemask successfully
2025-02-05T01:43:11.262480 - Imported multiline_string successfully
2025-02-05T01:43:11.277030 - Adding2025-02-05T01:43:11.277030 - 2025-02-05T01:43:11.277030 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes2025-02-05T01:43:11.277030 - 2025-02-05T01:43:11.277030 - to sys.path2025-02-05T01:43:11.278029 -
2025-02-05T01:43:11.346493 - Could not find efficiency nodes2025-02-05T01:43:11.346966 -
2025-02-05T01:43:11.373414 - [36;20m[comfyui_controlnet_aux] | INFO -> Using ckpts path: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts[0m
2025-02-05T01:43:11.374422 - [36;20m[comfyui_controlnet_aux] | INFO -> Using symlinks: False[0m
2025-02-05T01:43:11.374422 - [36;20m[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider'][0m
2025-02-05T01:43:11.716660 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
2025-02-05T01:43:11.736621 - Loaded ControlNetPreprocessors nodes from2025-02-05T01:43:11.736621 - 2025-02-05T01:43:11.736621 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux2025-02-05T01:43:11.737621 -
2025-02-05T01:43:11.737621 - Could not find AdvancedControlNet nodes2025-02-05T01:43:11.737621 -
2025-02-05T01:43:11.738622 - Could not find AnimateDiff nodes2025-02-05T01:43:11.742629 -
2025-02-05T01:43:11.743621 - Could not find IPAdapter nodes2025-02-05T01:43:11.743621 -
2025-02-05T01:43:11.748003 - Could not find VideoHelperSuite nodes2025-02-05T01:43:11.748003 -
2025-02-05T01:43:11.750012 - ### Loading: ComfyUI-Impact-Pack (V8.7.1)2025-02-05T01:43:11.750012 -
2025-02-05T01:43:11.780133 - ### Loading: ComfyUI-Impact-Pack (V8.7.1)2025-02-05T01:43:11.780133 -
2025-02-05T01:43:11.793462 - Loaded ImpactPack nodes from2025-02-05T01:43:11.793462 - [Impact Pack] Wildcards loading done.2025-02-05T01:43:11.793462 - 2025-02-05T01:43:11.793462 -
2025-02-05T01:43:11.793462 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack2025-02-05T01:43:11.793462 -
2025-02-05T01:43:11.794724 - [Impact Pack] Wildcards loading done.2025-02-05T01:43:11.794724 -
2025-02-05T01:43:11.801723 - ### Loading: ControlnetAux (V0.3 beta)2025-02-05T01:43:11.805229 -
2025-02-05T01:43:12.183684 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
2025-02-05T01:43:12.185115 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\timm\models\registry.py:4: FutureWarning: Importing from timm.models.registry is deprecated, please import via timm.models
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.models", FutureWarning)
2025-02-05T01:43:12.186121 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_5m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_5m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
return register_model(fn_wrapper)
2025-02-05T01:43:12.186121 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_11m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_11m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
return register_model(fn_wrapper)
2025-02-05T01:43:12.187120 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
return register_model(fn_wrapper)
2025-02-05T01:43:12.187120 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_384 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_384. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
return register_model(fn_wrapper)
2025-02-05T01:43:12.187120 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\controlnet_aux\segment_anything\modeling\tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_512 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_512. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
return register_model(fn_wrapper)
2025-02-05T01:43:12.407270 - [Crystools [0;32mINFO[0m] Crystools version: 1.21.0
2025-02-05T01:43:12.439286 - [Crystools [0;32mINFO[0m] CPU: Intel(R) Core(TM) i9-14900HX - Arch: AMD64 - OS: Windows 11
2025-02-05T01:43:12.451445 - [Crystools [0;32mINFO[0m] Pynvml (Nvidia) initialized.
2025-02-05T01:43:12.451445 - [Crystools [0;32mINFO[0m] GPU/s:
2025-02-05T01:43:12.460597 - [Crystools [0;32mINFO[0m] 0) NVIDIA GeForce RTX 4090 Laptop GPU
2025-02-05T01:43:12.460597 - [Crystools [0;32mINFO[0m] NVIDIA Driver: 571.96
2025-02-05T01:43:12.647841 - [33mModule 'diffusers' load failed. If you don't have it installed, do it:[0m2025-02-05T01:43:12.647841 -
2025-02-05T01:43:12.648841 - [33mpip install diffusers[0m2025-02-05T01:43:12.648841 -
2025-02-05T01:43:12.976670 - [34m[ComfyUI-Easy-Use] server: [0mv1.2.7 [92mLoaded[0m2025-02-05T01:43:12.977676 -
2025-02-05T01:43:12.977676 - [34m[ComfyUI-Easy-Use] web root: [0mC:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-easy-use\web_version/v2 [92mLoaded[0m2025-02-05T01:43:12.980671 -
2025-02-05T01:43:13.066947 - ### Loading: ComfyUI-Impact-Pack (V8.7.1)2025-02-05T01:43:13.071948 -
2025-02-05T01:43:13.074455 - [Impact Pack] Wildcards loading done.2025-02-05T01:43:13.089554 -
2025-02-05T01:43:13.310748 - ### Loading: ComfyUI-Manager (V3.11.2)
2025-02-05T01:43:13.408578 - ### ComfyUI Revision: 3104 [016b219d] *DETACHED | Released on '2025-02-04'
2025-02-05T01:43:13.888897 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-02-05T01:43:13.899447 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-02-05T01:43:13.900437 - here: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-tbox2025-02-05T01:43:13.900437 -
2025-02-05T01:43:13.901695 - Using ckpts path: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-tbox\..\..\models\annotator2025-02-05T01:43:13.901695 -
2025-02-05T01:43:13.903132 - Using symlinks: False2025-02-05T01:43:13.903132 -
2025-02-05T01:43:13.903132 - Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']2025-02-05T01:43:13.903640 -
2025-02-05T01:43:13.963358 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-02-05T01:43:13.991839 - ------------------------------------------2025-02-05T01:43:13.991839 -
2025-02-05T01:43:13.993343 - [34mComfyroll Studio v1.76 : [92m 175 Nodes Loaded[0m2025-02-05T01:43:13.993343 -
2025-02-05T01:43:13.994346 - ------------------------------------------2025-02-05T01:43:13.994346 -
2025-02-05T01:43:13.994346 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md2025-02-05T01:43:13.994346 -
2025-02-05T01:43:13.994346 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki2025-02-05T01:43:13.994346 -
2025-02-05T01:43:13.994346 - ------------------------------------------2025-02-05T01:43:13.994346 -
2025-02-05T01:43:13.997347 -
[32mInitializing ControlAltAI Nodes[0m2025-02-05T01:43:14.006463 -
2025-02-05T01:43:14.032479 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-02-05T01:43:14.113314 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-02-05T01:43:14.149667 - 设置插件环境...2025-02-05T01:43:14.149667 -
2025-02-05T01:43:14.153660 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Searge_LLM\Searge_LLM_Node.py", line 13, in <module>
Llama = importlib.import_module("llama_cpp_cuda").Llama
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 90, in import_module
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1324, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'llama_cpp_cuda'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2112, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Searge_LLM\__init__.py", line 1, in <module>
from .Searge_LLM_Node import *
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Searge_LLM\Searge_LLM_Node.py", line 15, in <module>
Llama = importlib.import_module("llama_cpp").Llama
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 90, in import_module
ModuleNotFoundError: No module named 'llama_cpp'
2025-02-05T01:43:14.154177 - Cannot import C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Searge_LLM module for custom nodes: No module named 'llama_cpp'
2025-02-05T01:43:14.195644 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2112, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\LCM_Inpaint_Outpaint_Comfy\__init__.py", line 1, in <module>
from .LCM_Nodes import NODE_CLASS_MAPPINGS
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\LCM_Inpaint_Outpaint_Comfy\LCM_Nodes.py", line 6, in <module>
from .LCM.lcm_pipeline_inpaint import LatentConsistencyModelPipeline_inpaint, LCMScheduler_X
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\LCM_Inpaint_Outpaint_Comfy\LCM\lcm_pipeline_inpaint.py", line 26, in <module>
from diffusers import AutoencoderKL, ConfigMixin, DiffusionPipeline, SchedulerMixin, UNet2DConditionModel, logging, AsymmetricAutoencoderKL
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\__init__.py", line 5, in <module>
from .utils import (
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\__init__.py", line 38, in <module>
from .dynamic_modules_utils import get_class_from_dynamic_module
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\dynamic_modules_utils.py", line 28, in <module>
from huggingface_hub import cached_download, hf_hub_download, model_info
ImportError: cannot import name 'cached_download' from 'huggingface_hub' (C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\__init__.py). Did you mean: 'hf_hub_download'?
2025-02-05T01:43:14.196642 - Cannot import C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\LCM_Inpaint_Outpaint_Comfy module for custom nodes: cannot import name 'cached_download' from 'huggingface_hub' (C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\__init__.py)
2025-02-05T01:43:14.329375 - [31mWAS Node Suite [0mError: [0mStyles file `C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\styles.csv` does not exist.[0m2025-02-05T01:43:14.330367 -
2025-02-05T01:43:14.796534 - [34mWAS Node Suite: [0mOpenCV Python FFMPEG support is enabled[0m2025-02-05T01:43:14.796534 -
2025-02-05T01:43:14.796534 - [34mWAS Node Suite [93mWarning: [0m`ffmpeg_bin_path` is not set in `C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\pr-was-node-suite-comfyui-47064894\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.[0m2025-02-05T01:43:14.796534 -
2025-02-05T01:43:15.211176 - [34mWAS Node Suite: [0mFinished.[0m [32mLoaded[0m [0m220[0m [32mnodes successfully.[0m2025-02-05T01:43:15.211176 -
2025-02-05T01:43:15.211176 -
[3m[93m"The secret to getting ahead is getting started."[0m[3m - Mark Twain[0m
2025-02-05T01:43:15.211176 -
2025-02-05T01:43:15.227736 -
2025-02-05T01:43:15.227736 - [92m[rgthree-comfy] Loaded 42 magnificent nodes. 🎉[00m2025-02-05T01:43:15.227736 -
2025-02-05T01:43:15.227736 -
2025-02-05T01:43:15.240958 -
Import times for custom nodes:
2025-02-05T01:43:15.240958 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2025-02-05T01:43:15.241958 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\image-resize-comfyui
2025-02-05T01:43:15.241958 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-cropandstitch
2025-02-05T01:43:15.245481 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-02-05T01:43:15.249477 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen
2025-02-05T01:43:15.251468 - 0.0 seconds (IMPORT FAILED): C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Searge_LLM
2025-02-05T01:43:15.252478 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-styles_csv_loader
2025-02-05T01:43:15.253478 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2025-02-05T01:43:15.253478 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui
2025-02-05T01:43:15.253478 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-advanced-controlnet
2025-02-05T01:43:15.253998 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2025-02-05T01:43:15.253998 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2025-02-05T01:43:15.254995 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-florence2
2025-02-05T01:43:15.257005 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlaltai_nodes
2025-02-05T01:43:15.261003 - 0.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF
2025-02-05T01:43:15.265527 - 0.0 seconds (IMPORT FAILED): C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\LCM_Inpaint_Outpaint_Comfy
2025-02-05T01:43:15.267528 - 0.1 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-tbox
2025-02-05T01:43:15.268525 - 0.1 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_kimnodes
2025-02-05T01:43:15.268525 - 0.2 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-ollama
2025-02-05T01:43:15.269528 - 0.2 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-itools
2025-02-05T01:43:15.271527 - 0.3 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-crystools
2025-02-05T01:43:15.271527 - 0.4 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-controlnetaux
2025-02-05T01:43:15.274047 - 0.4 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-02-05T01:43:15.274047 - 0.4 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Addoor
2025-02-05T01:43:15.276046 - 0.5 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-art-venture
2025-02-05T01:43:15.279579 - 0.5 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-easy-use
2025-02-05T01:43:15.283579 - 1.0 seconds: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\pr-was-node-suite-comfyui-47064894
2025-02-05T01:43:15.284755 -
2025-02-05T01:43:15.296466 - Starting server
2025-02-05T01:43:15.300465 - To see the GUI go to: http://127.0.0.1:8188
2025-02-05T01:43:20.923606 - FETCH ComfyRegistry Data: 5/322025-02-05T01:43:20.924109 -
2025-02-05T01:43:27.349031 - FETCH ComfyRegistry Data: 10/322025-02-05T01:43:27.349031 -
2025-02-05T01:43:28.709642 - FETCH DATA from: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json2025-02-05T01:43:28.710651 - 2025-02-05T01:43:28.723455 - [DONE]2025-02-05T01:43:28.724460 -
2025-02-05T01:43:28.991258 - [ERROR] An error occurred while retrieving information for the 'controlaux_hed' node.
2025-02-05T01:43:28.995788 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:28.996783 - [ERROR] An error occurred while retrieving information for the 'controlaux_midas' node.
2025-02-05T01:43:28.997790 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:28.998782 - [ERROR] An error occurred while retrieving information for the 'controlaux_mlsd' node.
2025-02-05T01:43:28.999784 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:28.999784 - [ERROR] An error occurred while retrieving information for the 'controlaux_openpose' node.
2025-02-05T01:43:29.001790 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.001790 - [ERROR] An error occurred while retrieving information for the 'controlaux_pidi' node.
2025-02-05T01:43:29.003295 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.004301 - [ERROR] An error occurred while retrieving information for the 'controlaux_dwpose' node.
2025-02-05T01:43:29.005298 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.005298 - [ERROR] An error occurred while retrieving information for the 'controlaux_normal_bae' node.
2025-02-05T01:43:29.007299 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.007299 - [ERROR] An error occurred while retrieving information for the 'controlaux_lineart' node.
2025-02-05T01:43:29.009298 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.010306 - [ERROR] An error occurred while retrieving information for the 'controlaux_lineart_anime' node.
2025-02-05T01:43:29.011299 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.012301 - [ERROR] An error occurred while retrieving information for the 'controlaux_zoe' node.
2025-02-05T01:43:29.013307 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.013826 - [ERROR] An error occurred while retrieving information for the 'controlaux_sam' node.
2025-02-05T01:43:29.014824 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.015824 - [ERROR] An error occurred while retrieving information for the 'controlaux_leres' node.
2025-02-05T01:43:29.016831 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.017831 - [ERROR] An error occurred while retrieving information for the 'controlaux_canny' node.
2025-02-05T01:43:29.018824 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.019825 - [ERROR] An error occurred while retrieving information for the 'controlaux_content' node.
2025-02-05T01:43:29.020825 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:29.021832 - [ERROR] An error occurred while retrieving information for the 'controlaux_face_detector' node.
2025-02-05T01:43:29.021832 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 589, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\server.py", line 557, in node_info
info['input_order'] = {key: list(value.keys()) for (key, value) in obj_class.INPUT_TYPES().items()}
^^^^^^^^^^
AttributeError: 'str' object has no attribute 'keys'
2025-02-05T01:43:32.310241 - Error. No styles.csv found. Put your styles.csv in the root directory of ComfyUI. Then press "Refresh".
Your current root directory is: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI
2025-02-05T01:43:32.312242 -
2025-02-05T01:43:32.312242 - Error. No styles.csv found. Put your styles.csv in the root directory of ComfyUI. Then press "Refresh".
Your current root directory is: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI
2025-02-05T01:43:32.312242 -
2025-02-05T01:43:32.434123 - 目录不存在: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\models\yolo
2025-02-05T01:43:32.436216 - 目录不存在: C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\models\yolo
2025-02-05T01:43:32.460636 - ['flux_train_replicatej_joaercole.safetensors']2025-02-05T01:43:32.461636 -
2025-02-05T01:43:32.462634 - ['flux_train_replicatej_joaercole.safetensors']2025-02-05T01:43:32.462634 -
2025-02-05T01:43:32.560750 - C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_response.py:804: UserWarning: Synchronous compression of large response bodies (1415497 bytes) might block the async event loop. Consider providing a custom value to zlib_executor_size/zlib_executor response properties or disabling compression on it.
warnings.warn(
2025-02-05T01:43:34.385799 - FETCH ComfyRegistry Data: 15/322025-02-05T01:43:34.385799 -
2025-02-05T01:43:35.082912 - got prompt
2025-02-05T01:43:35.209331 - OmniGen code already exists2025-02-05T01:43:35.209331 -
2025-02-05T01:43:35.210331 - OmniGen models verified successfully2025-02-05T01:43:35.210331 -
2025-02-05T01:43:35.221323 - Error importing OmniGen: cannot import name 'cached_download' from 'huggingface_hub' (C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\__init__.py)2025-02-05T01:43:35.221323 -
2025-02-05T01:43:35.229327 - !!! Exception during processing !!! Failed to import OmniGen. Please check if the code was downloaded correctly.
2025-02-05T01:43:35.257333 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\AILab_OmniGen.py", line 53, in __init__
from OmniGen import OmniGenPipeline
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\OmniGen\__init__.py", line 1, in <module>
from .model import OmniGen
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\OmniGen\model.py", line 9, in <module>
from diffusers.loaders import PeftAdapterMixin
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\__init__.py", line 5, in <module>
from .utils import (
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\__init__.py", line 38, in <module>
from .dynamic_modules_utils import get_class_from_dynamic_module
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\dynamic_modules_utils.py", line 28, in <module>
from huggingface_hub import cached_download, hf_hub_download, model_info
ImportError: cannot import name 'cached_download' from 'huggingface_hub' (C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\__init__.py). Did you mean: 'hf_hub_download'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 293, in execute
obj = class_def()
^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\AILab_OmniGen.py", line 57, in __init__
raise RuntimeError("Failed to import OmniGen. Please check if the code was downloaded correctly.")
RuntimeError: Failed to import OmniGen. Please check if the code was downloaded correctly.
2025-02-05T01:43:35.262324 - Prompt executed in 0.18 seconds
2025-02-05T01:43:40.883303 - FETCH ComfyRegistry Data: 20/322025-02-05T01:43:40.884297 -
2025-02-05T01:43:47.486441 - FETCH ComfyRegistry Data: 25/322025-02-05T01:43:47.486959 -
2025-02-05T01:43:54.190765 - FETCH ComfyRegistry Data: 30/322025-02-05T01:43:54.194765 -
2025-02-05T01:43:57.401196 - FETCH ComfyRegistry Data [DONE]2025-02-05T01:43:57.401196 -
2025-02-05T01:43:57.530329 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-02-05T01:43:57.602005 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
2025-02-05T01:43:57.604002 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-02-05T01:43:57.604002 - 2025-02-05T01:43:57.911450 - [DONE]2025-02-05T01:43:57.912361 -
2025-02-05T01:44:13.118550 - got prompt
2025-02-05T01:44:13.132061 - OmniGen code already exists2025-02-05T01:44:13.132061 -
2025-02-05T01:44:13.133062 - OmniGen models verified successfully2025-02-05T01:44:13.133062 -
2025-02-05T01:44:13.147608 - Error importing OmniGen: cannot import name 'cached_download' from 'huggingface_hub' (C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\__init__.py)2025-02-05T01:44:13.147608 -
2025-02-05T01:44:13.149614 - !!! Exception during processing !!! Failed to import OmniGen. Please check if the code was downloaded correctly.
2025-02-05T01:44:13.159876 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\AILab_OmniGen.py", line 53, in __init__
from OmniGen import OmniGenPipeline
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\OmniGen\__init__.py", line 1, in <module>
from .model import OmniGen
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\OmniGen\model.py", line 9, in <module>
from diffusers.loaders import PeftAdapterMixin
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\__init__.py", line 5, in <module>
from .utils import (
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\__init__.py", line 38, in <module>
from .dynamic_modules_utils import get_class_from_dynamic_module
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\dynamic_modules_utils.py", line 28, in <module>
from huggingface_hub import cached_download, hf_hub_download, model_info
ImportError: cannot import name 'cached_download' from 'huggingface_hub' (C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\__init__.py). Did you mean: 'hf_hub_download'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 293, in execute
obj = class_def()
^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\AILab_OmniGen.py", line 57, in __init__
raise RuntimeError("Failed to import OmniGen. Please check if the code was downloaded correctly.")
RuntimeError: Failed to import OmniGen. Please check if the code was downloaded correctly.
2025-02-05T01:44:13.162876 - Prompt executed in 0.04 seconds
2025-02-05T01:47:02.553551 - got prompt
2025-02-05T01:47:02.565075 - OmniGen code already exists2025-02-05T01:47:02.566579 -
2025-02-05T01:47:02.567583 - OmniGen models verified successfully2025-02-05T01:47:02.567583 -
2025-02-05T01:47:02.578939 - Error importing OmniGen: cannot import name 'cached_download' from 'huggingface_hub' (C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\__init__.py)2025-02-05T01:47:02.579938 -
2025-02-05T01:47:02.586442 - !!! Exception during processing !!! Failed to import OmniGen. Please check if the code was downloaded correctly.
2025-02-05T01:47:02.608499 - Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\AILab_OmniGen.py", line 53, in __init__
from OmniGen import OmniGenPipeline
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\OmniGen\__init__.py", line 1, in <module>
from .model import OmniGen
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\OmniGen\model.py", line 9, in <module>
from diffusers.loaders import PeftAdapterMixin
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\__init__.py", line 5, in <module>
from .utils import (
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\__init__.py", line 38, in <module>
from .dynamic_modules_utils import get_class_from_dynamic_module
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\dynamic_modules_utils.py", line 28, in <module>
from huggingface_hub import cached_download, hf_hub_download, model_info
ImportError: cannot import name 'cached_download' from 'huggingface_hub' (C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\__init__.py). Did you mean: 'hf_hub_download'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 293, in execute
obj = class_def()
^^^^^^^^^^^
File "C:\prueba\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-OmniGen\AILab_OmniGen.py", line 57, in __init__
raise RuntimeError("Failed to import OmniGen. Please check if the code was downloaded correctly.")
RuntimeError: Failed to import OmniGen. Please check if the code was downloaded correctly.
2025-02-05T01:47:02.625003 - Prompt executed in 0.07 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```json
{"last_node_id":22,"last_link_id":18,"nodes":[{"id":16,"type":"SaveImage","pos":[847,90],"size":[619.6498413085938,665.0986938476562],"flags":{},"order":2,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":17}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI"]},{"id":21,"type":"ailab_OmniGen","pos":[410,90],"size":[394.94378662109375,492.285888671875],"flags":{},"order":1,"mode":0,"inputs":[{"name":"image_1","type":"IMAGE","link":16,"shape":7},{"name":"image_2","type":"IMAGE","link":null,"shape":7},{"name":"image_3","type":"IMAGE","link":null,"shape":7}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[17]}],"properties":{"Node name for S&R":"ailab_OmniGen"},"widgets_values":["None","the man from image_1 is sitting in a throne, cinematic photo, corporation mood, beautiful charming photo, award winning photography","Auto","Balanced",3.5,1.8,50,true,false,1024,1024,320701274182481,"randomize",1024]},{"id":11,"type":"LoadImage","pos":[101,89],"size":[279.8789978027344,371.3621826171875],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[16],"slot_index":0},{"name":"MASK","type":"MASK","links":null}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["ComfyUI_temp_mbcay_00004_.png","image"]}],"links":[[16,11,0,21,0,"IMAGE"],[17,21,0,16,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.19181765377273,"offset":[-108.66948174319516,3.162546749944121]},"VHS_latentpreview":false,"VHS_latentpreviewrate":0,"node_versions":{"comfy-core":"0.3.13","ComfyUI-OmniGen":"121073508ff04773c57bbdedabf51ff0062190ba"}},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
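Editorial note on the failure above: the traceback shows an older `diffusers` importing `cached_download`, which recent `huggingface_hub` releases no longer export (the interpreter's own hint — `Did you mean: 'hf_hub_download'?` — points the same way). A possible remedy, with the version bound being an assumption to verify against the custom node's requirements:

```shell
# Run with the portable install's embedded interpreter:
python_embeded\python.exe -m pip install "huggingface_hub<0.26"   # keeps cached_download available
# ...or move diffusers forward so it no longer imports cached_download:
python_embeded\python.exe -m pip install --upgrade diffusers
```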
### Other
_No response_ | closed | 2025-02-05T00:47:30Z | 2025-02-05T08:49:56Z | https://github.com/comfyanonymous/ComfyUI/issues/6704 | [
"User Support"
] | joako357 | 1 |
Gozargah/Marzban | api | 652 | add flow in hosts instead of users | It's better to include flow in hosts because if you use `xtls-rprx-vision`, only Reality works, and something like `vless tcp` or `vless ws` does not connect. Also, if someone forgets to include flow for a user, it can be set once in hosts instead of one by one for all users. | closed | 2023-11-22T06:06:28Z | 2023-12-31T10:57:58Z | https://github.com/Gozargah/Marzban/issues/652 | [] | ImMohammad20000 | 1 |
AirtestProject/Airtest | automation | 525 | wda.WDARequestError(status=13, value=Error Domain=XCTDaemonErrorDomain Code=14 |
**Describe the bug**
The above error is reported during execution.
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/utils/logwraper.py:72: in wrapper
res = f(*args, **kwargs)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/api.py:367: in text
G.DEVICE.text(text, enter=enter, **kwargs)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/ios/ios.py:44: in wrapper
raise err
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/ios/ios.py:37: in wrapper
return func(self, *args, **kwargs)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/airtest/core/ios/ios.py:251: in text
self.session.send_keys(text)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wda/__init__.py:644: in send_keys
return self.http.post('/wda/keys', data={'value': value})
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wda/__init__.py:144: in fetch
return self._fetch_no_alert(method, url, data)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wda/__init__.py:150: in _fetch_no_alert
return httpdo(target_url, method, data)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
url = 'http://127.0.0.1:8100/session/31BEFFB2-C374-4FD1-ADF4-9D31CA0300B6/wda/keys', method = 'post', data = {'value': ['1', '2', '\n']}
def httpdo(url, method='GET', data=None):
"""
Do HTTP Request
"""
start = time.time()
if DEBUG:
body = json.dumps(data) if data else ''
print("Shell: curl -X {method} -d '{body}' '{url}'".format(
method=method.upper(), body=body or '', url=url))
try:
response = requests.request(method,
url,
json=data,
timeout=HTTP_TIMEOUT)
except (requests.exceptions.ConnectionError,
requests.exceptions.ReadTimeout) as e:
raise
if DEBUG:
ms = (time.time() - start) * 1000
print('Return ({:.0f}ms): {}'.format(ms, response.text))
try:
retjson = response.json()
retjson['status'] = retjson.get('status', 0)
r = convert(retjson)
if r.status != 0:
> raise WDARequestError(r.status, r.value)
E wda.WDARequestError: WDARequestError(status=13, value=Error Domain=XCTDaemonErrorDomain Code=14 "Timed out after waiting 1.0s for KeyEventCompleted after sending event for '
E '." UserInfo={NSLocalizedDescription=Timed out after waiting 1.0s for KeyEventCompleted after sending event for '
E '.})
```
**To Reproduce**
Steps to reproduce the behavior:
After locating the element,
call `text()` to enter the content;
the error is raised during execution.
**Expected behavior**
**Screenshots**

**python version:** python3.6
**airtest version:** 1.0.27
**Smartphone (please complete the following information):**
iPhone 6 (iOS 11)
**Additional context**
| open | 2019-09-11T07:35:52Z | 2019-10-11T03:36:38Z | https://github.com/AirtestProject/Airtest/issues/525 | [] | agoraHan | 1 |
ivy-llc/ivy | tensorflow | 28,539 | Fix Frontend Failing Test: torch - math.tensorflow.math.reduce_prod | To-do List: https://github.com/unifyai/ivy/issues/27498 | closed | 2024-03-11T10:56:03Z | 2024-04-02T09:31:09Z | https://github.com/ivy-llc/ivy/issues/28539 | [
"Sub Task"
] | ZJay07 | 0 |
activeloopai/deeplake | computer-vision | 2,369 | [FEATURE] Allow users to disable tiling when encoding images | ## 🚨🚨 Feature Request
- [ ] Related to an existing [Issue](../issues)
- [x] A new implementation (Improvement, Extension)
### Is your feature request related to a problem?
Nope, not for deeplake users!
### If your feature will improve `deeplake`
Being able to "disable" the tiling of images when very large arrays are used (e.g. satellite imagery) would be a nice feature to have. I am currently building my own compression/decompression pipeline which crops and downscales/upscales images as they are loaded, and the fact that tiled images cannot be converted to `bytes` directly is quite limiting. The library currently throws:
```
NotImplementedError: `tobytes=True` is not supported by tiled samples as it can cause recompression.
```
...whenever a "bytes" array is requested out of a tensor dataset for large images.
### Description of the possible solution
I would assume that adding an `allow_tiling: bool = True` option to the `deeplake.core.dataset.Dataset.create_tensor` function would be the best way to expose this setting to end users. Defaulting it to `True` makes sense (it keeps the current behavior intact). Setting it to `False` should simply skip tiling, and would help me on my way to creating amazingly fast/flexible loaders for big image arrays.
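To pin down the intended semantics, here is a tiny framework-free sketch of the decision such a flag would gate — purely illustrative, not deeplake's actual internals (the `max_chunk_size` threshold and the return labels are assumptions of mine):

```python
def plan_storage(sample_nbytes: int, max_chunk_size: int, allow_tiling: bool = True) -> str:
    """Decide how a sample would be stored under the proposed flag."""
    if sample_nbytes <= max_chunk_size:
        return "single-chunk"          # small samples are unaffected by the flag
    if allow_tiling:
        return "tiled"                 # current behavior: break large samples into tiles
    return "single-chunk-oversized"    # proposed: keep one blob so tobytes=True keeps working

print(plan_storage(10, 100))                          # -> "single-chunk"
print(plan_storage(10_000, 100))                      # -> "tiled"
print(plan_storage(10_000, 100, allow_tiling=False))  # -> "single-chunk-oversized"
```

Under this reading, `allow_tiling=False` would trade tiled random access for samples that can always be handed back via `tobytes=True`.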
**Teachability, Documentation, Adoption, Migration Strategy**
The docs already do not mention anything about tiling, so I was quite surprised to see that it's used under the hood. I assume it's to speed up access to smaller image regions in such datasets, but this is limiting when "bytes" objects are expected instead of the numpy arrays. Simply changing the signature of `deeplake.core.dataset.Dataset.create_tensor` by adding a new parameter would probably not confuse users, and it would probably be the main place to look when someone's thinking about disabling tiling.
| closed | 2023-05-22T19:08:42Z | 2023-05-23T14:44:08Z | https://github.com/activeloopai/deeplake/issues/2369 | [
"enhancement"
] | plstcharles | 4 |
explosion/spaCy | data-science | 12,919 | EntityRuler match order is dependent on Python's set implementation |
This is not necessarily a bug, and I am not sure there's any elegant solution to the problem, but please hear me out. I was migrating my model from python 3.7 to 3.10, and I got some seemingly random differences. I drilled it down to `EntityRuler` behavior when different rules match the same span. I am still on spaCy `v3.1`, but I looked in the code, the same behavior exists in `v3.6`. It all drills down to this in `entityruler.py` (master branch, line 175, `match` method):
```
final_matches = set(
[(m_id, start, end) for m_id, start, end in matches if start != end]
)
```
Until this point, the match order is predictable (the order in which patterns were added). However, when a set is created, the order changes to whatever the `set` implementation dictates. Apparently, somewhere between python 3.7 and 3.10, that implementation has changed, and thus the order in which the matches are returned changed as well.
I was wondering what you guys/gals think about predictability of the returned matches here (and named entities as a result), and its dependency on `set` implementation in Python.
## How to reproduce the behaviour
```
from spacy import load as load_spacy
from spacy.pipeline import EntityRuler
nlp = load_spacy('en_core_web_sm')
# 2 patterns matching the same word, with 2 different ent_ids_ specified
pattern_1 = {'label': 'DEVICE', 'pattern': [{'LOWER': {'IN': ['bypass', 'stent']}}], 'id': 'stent'}
pattern_2 = {'label': 'DEVICE', 'pattern': [{'LOWER': {'IN': ['bypass', ]}}], 'id': 'repair'}
ruler = EntityRuler(nlp, validate=True, overwrite_ents=True)
ruler.add_patterns([pattern_1, pattern_2])
doc = nlp.tokenizer('Severe coronary atherosclerotic disease with prior bypass surgery.')
ruler(doc)
entity = doc.ents[0]
print("entity: %s, label: %s, ent_id: %s" % (entity, entity.label_, entity.ent_id_))
```
Output:
```
Python 3.7:
entity: bypass, label: DEVICE, ent_id: repair
Python 3.10:
entity: bypass, label: DEVICE, ent_id: stent
```
Code from `entityruler.py`:
```
matches = list(ruler.matcher(doc)) + list(ruler.phrase_matcher(doc))
print("Matcher:", matches)
final_matches = set([(m_id, start, end) for m_id, start, end in matches if start != end])
print("Set: ", list(final_matches))
get_sort_key = lambda m: (m[2] - m[1], -m[1])
final_matches = sorted(final_matches, key=get_sort_key, reverse=True)
print("Sorted: ", final_matches)
```
Output:
```
Python 3.7:
Matcher: [(11861990552470164073, 6, 7), (9586214660192362110, 6, 7)]
Set: [(9586214660192362110, 6, 7), (11861990552470164073, 6, 7)]
Sorted: [(9586214660192362110, 6, 7), (11861990552470164073, 6, 7)]
Python 3.10:
Matcher: [(11861990552470164073, 6, 7), (9586214660192362110, 6, 7)]
Set: [(11861990552470164073, 6, 7), (9586214660192362110, 6, 7)]
Sorted: [(11861990552470164073, 6, 7), (9586214660192362110, 6, 7)]
```
Code demonstrating implementation change in Python's `set`:
```
x = {(5, 1, 2),(4, 1, 2), (3, 1, 2), (2, 1, 2), (1, 1, 2)}
sorted(x, key=get_sort_key, reverse=True)
```
Output:
```
Python 3.7:
[(5, 1, 2), (1, 1, 2), (2, 1, 2), (4, 1, 2), (3, 1, 2)]
Python 3.10:
[(2, 1, 2), (3, 1, 2), (5, 1, 2), (4, 1, 2), (1, 1, 2)]
```
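For what it's worth, both symptoms above disappear if the deduplication preserves insertion order and the sort key is total — a sketch of the idea, not an actual spaCy patch:

```python
matches = [(11861990552470164073, 6, 7), (9586214660192362110, 6, 7)]

# Deduplicate while keeping matcher order: dict keys preserve insertion order
# (guaranteed since Python 3.7), unlike set(), whose iteration order depends
# on the hash implementation.
final_matches = list(dict.fromkeys(m for m in matches if m[1] != m[2]))

# Make the sort key total: break (length, start) ties by original matcher
# position, so the earlier match wins on every interpreter.
position = {m: i for i, m in enumerate(final_matches)}
get_sort_key = lambda m: (m[2] - m[1], -m[1], -position[m])
final_matches = sorted(final_matches, key=get_sort_key, reverse=True)

print(final_matches)  # same order under any Python version or hash seed
```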
## Your Environment
## Info about spaCy (python3.7)
- **spaCy version:** 3.1.0
- **Platform:** Darwin-21.5.0-x86_64-i386-64bit
- **Python version:** 3.7.9
- **Pipelines:** en_core_web_trf (3.1.0), en_core_web_sm (3.1.0), en_core_web_md (3.1.0), en_core_web_lg (3.1.0), eon_actionable_findings_core_web_sm (1.0.0)
## Info about spaCy (python3.10)
- **spaCy version:** 3.1.7
- **Platform:** macOS-12.4-x86_64-i386-64bit
- **Python version:** 3.10.11
- **Pipelines:** en_core_web_sm (3.1.0)
| closed | 2023-08-16T16:05:52Z | 2023-09-22T00:02:08Z | https://github.com/explosion/spaCy/issues/12919 | [
"compat",
"feat / spanruler"
] | nrodnova | 5 |
simple-login/app | flask | 1,655 | Logging out does not log out of Proton account | ## Prerequisites
- [x] I have searched open and closed issues to make sure that the bug has not yet been reported.
## Bug report
**Describe the bug**
1. Log in to SimpleLogin with a Proton account via the `Log in with Proton` button on the login screen. After logging in, you are automatically redirected to the SimpleLogin dashboard.
2. Then log out of SimpleLogin by clicking your initials on the top right, then clicking "Sign out"
3. Browse to https://account.proton.me/login?product=generic&language=en , which will automatically log you into your Proton account without authenticating.
4. Browse to https://app.simplelogin.io/auth/proton/login , which will log you into SimpleLogin without authenticating.
**Expected behavior**
- At step 3, you should not be able to access the Proton account without authenticating, since you logged out on step 2.
- At step 4, you should not be able to access your SimpleLogin account without authenticating, since you logged out on step 2.
**Environment (If applicable):**
- OS: Linux, Mac
- Browser: Brave
- Version 1.49.120
**Additional context**
This could allow someone else other than yourself access to your Proton account even though you only accessed SimpleLogin.
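The four steps describe a classic single-sign-out gap: the relying party clears its own session but never ends the identity provider's. A toy stdlib model of the state involved — illustrative only, not SimpleLogin's or Proton's actual session code:

```python
class SSOWorld:
    """Minimal model of the two session scopes in steps 1-4."""
    def __init__(self):
        self.idp_session = False   # Proton (identity provider) cookie
        self.rp_session = False    # SimpleLogin (relying party) cookie

    def login_with_proton(self):           # step 1
        self.idp_session = True            # authenticate at the IdP...
        self.rp_session = True             # ...and open the RP session via OAuth

    def sign_out_of_simplelogin(self):     # step 2
        self.rp_session = False            # only the RP session is cleared

    def visit_proton(self):                # step 3
        return self.idp_session            # IdP cookie is still valid

    def visit_sl_proton_login(self):       # step 4
        if self.idp_session:               # OAuth silently re-opens the RP session
            self.rp_session = True
        return self.rp_session

w = SSOWorld()
w.login_with_proton()
w.sign_out_of_simplelogin()
print(w.visit_proton())           # True: still signed in at Proton
print(w.visit_sl_proton_login())  # True: back in SimpleLogin without credentials
```

In this model, a fix means step 2 also ending (or at least revoking tokens for) the IdP-side session where the protocol supports it — i.e., single logout.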
| closed | 2023-03-23T03:06:24Z | 2023-03-24T00:10:04Z | https://github.com/simple-login/app/issues/1655 | [] | jbabyhacker | 1 |
plotly/jupyterlab-dash | dash | 6 | "jupyterlab_dash@0.1.0" is not compatible with the current JupyterLab | ValueError:
```
"jupyterlab_dash@0.1.0" is not compatible with the current JupyterLab
Conflicting Dependencies:
JupyterLab           Extension            Package
>=0.16.3 <0.17.0     >=0.19.1 <0.20.0     @jupyterlab/application
>=0.16.3 <0.17.0     >=0.19.2 <0.20.0     @jupyterlab/notebook
>=0.16.3 <0.17.0     >=0.19.1 <0.20.0     @jupyterlab/console
```
My jupyter lab --version:
0.32.1
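Reading the error: the running JupyterLab (0.32.1) provides core packages in the `>=0.16.3 <0.17.0` range, while the extension was built against `>=0.19.x`, so upgrading the application side is the usual way out. Which JupyterLab release maps to those core versions is an assumption to check against the extension's README:

```shell
pip install --upgrade jupyterlab               # move past 0.32.1
jupyter lab --version                          # confirm the upgrade
jupyter labextension install jupyterlab-dash   # rebuild the extension against it
```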
Kind regards, | closed | 2019-01-04T16:47:53Z | 2019-04-17T06:57:31Z | https://github.com/plotly/jupyterlab-dash/issues/6 | [] | gusseppe | 4 |
python-gitlab/python-gitlab | api | 2,444 | Create CI/CD variable with "raw=true" parameter raises error, update works | ## Description of the problem, including code/CLI snippet
If you try to create a project or group variable with the "raw" parameter set to True, it raises an error (see below):
```
from gitlab.exceptions import GitlabCreateError

def set_cicd_variable(project_or_group, key, value, is_file=False, raw=True):
    try:
        project_or_group.variables.create(
            {'key': key, 'value': value},
            variable_type='file' if is_file else 'env_var',
            raw=raw
        )
    except GitlabCreateError:
        project_or_group.variables.update(
            id=key,
            new_data={
                'value': value, 'variable_type': 'file' if is_file else 'env_var', 'raw': raw
            }
        )
```
## Expected Behavior
creation of the CI/CD variable with "raw" parameter set to "true"
## Actual Behavior
CREATE raises:
`The provided content-type 'application/octet-stream' is not supported gitlab python`
UPDATE actually works
## Specifications
- python-gitlab version: 3.12.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): 15.7
| closed | 2023-01-02T10:55:26Z | 2024-04-08T01:15:59Z | https://github.com/python-gitlab/python-gitlab/issues/2444 | [
"need info",
"stale"
] | rabbagliettiandrea | 6 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 763 | using different discriminator | Hi Dr.Zhu
Sorry to bother you again.
Is it possible to use Relativistic Discriminators for CycleGAN training?

My concern is that for CycleGAN the real and fake samples are different images, whereas RaD seems to be used only in the paired case. Are you familiar with this situation?
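For reference, the RaD discriminator loss from the screenshot can be sketched as follows (a NumPy sketch based on my reading of the relativistic GAN formulation; `real_scores`/`fake_scores` stand for pre-sigmoid critic outputs and are my own names):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rad_d_loss(real_scores, fake_scores):
    # The discriminator pushes real scores above the *average* fake score
    # and fake scores below the average real score -- note that nothing
    # here requires the real and fake batches to be paired images.
    d_real = sigmoid(real_scores - fake_scores.mean())
    d_fake = sigmoid(fake_scores - real_scores.mean())
    eps = 1e-12
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())
```

Since the loss only compares a batch of real outputs against the *mean* of a batch of fake outputs, unpaired batches seem compatible in principle — which is exactly the question being asked here.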
Thanks
| closed | 2019-09-11T02:20:48Z | 2019-09-25T08:29:32Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/763 | [] | wl082013 | 5 |
desec-io/desec-stack | rest-api | 125 | deSEC DNS Authenticator plugin for Certbot (Let's Encrypt) | Let's write a plugin like this one: https://github.com/certbot/certbot/tree/master/certbot-dns-cloudflare | closed | 2018-09-26T22:55:39Z | 2021-05-13T14:47:52Z | https://github.com/desec-io/desec-stack/issues/125 | [
"enhancement",
"help wanted",
"prio: medium",
"easy"
] | peterthomassen | 5 |
tiangolo/uwsgi-nginx-flask-docker | flask | 174 | Enabling DataDog ddtrace | We have a number of Flask APIs running as Docker containers that use this image as a base. We are in the process of implementing DataDog's ddtrace and need to launch Python as "ddtrace python3". Is there a way to hijack the entrypoint to do this? | closed | 2020-05-05T20:34:42Z | 2020-05-20T00:12:11Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/174 | [] | jwodushek | 2
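For the entrypoint question above, one hedged sketch is a thin child image that wraps the base image's start script with DataDog's launcher (assumptions on my part: the base image starts via `/start.sh`, and `ddtrace-run` is the wrapper command the `ddtrace` package installs — both worth verifying against the image and DataDog docs):

```dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.8

# Install the DataDog tracer into the image.
RUN pip install ddtrace

# Wrap the original start script so the Python processes it spawns are traced.
CMD ["ddtrace-run", "/start.sh"]
```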
deepspeedai/DeepSpeed | deep-learning | 7,044 | [BUG] DeepSpeed ZeRO-2 training hangs and times out after a fixed step | **Describe the bug**
I use DeepSpeed ZeRO-2 to train a transformer-based DiT model. However, the script always gets stuck at a fixed step after one hour of training. When I disable DeepSpeed and train with pure PyTorch DDP instead, the problem disappears. Moreover, even if I change the mixed precision from fp16 to bf16 or adjust the learning rate, the same problem occurs and the script gets stuck at the same training step.
I have also tried changing the model initialization and resuming the training, but the script still gets stuck after the same training step. For example, if the script gets stuck after training 14K steps, I save the checkpoint at the 10K step and resume training with the 10K-step checkpoint. Then, after training another 14K steps again, the script gets stuck once more.
**ds_report output**
[rank0]:[E217 07:24:07.938443115 ProcessGroupNCCL.cpp:616] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=54053, OpType=ALLREDUCE, NumelIn=497464352, NumelOut=497464352, Timeout(ms)=600000) ran for 600087 milliseconds before timing out.
[rank0]:[E217 07:24:07.938551513 ProcessGroupNCCL.cpp:1785] [PG ID 1 PG GUID 1 Rank 0] Exception (either an error or timeout) detected by watchdog at work: 54053, last enqueued NCCL work: 54054, last completed NCCL work: 54052.
[rank0]:[E217 07:24:08.990085728 ProcessGroupNCCL.cpp:1834] [PG ID 1 PG GUID 1 Rank 0] Timeout at NCCL work: 54053, last enqueued NCCL work: 54054, last completed NCCL work: 54052.
[rank0]:[E217 07:24:08.990112551 ProcessGroupNCCL.cpp:630] [Rank 0] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank0]:[E217 07:24:08.990119150 ProcessGroupNCCL.cpp:636] [Rank 0] To avoid data inconsistency, we are taking the entire process down.
[rank0]:[E217 07:24:08.991954167 ProcessGroupNCCL.cpp:1595] [PG ID 1 PG GUID 1 Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=54053, OpType=ALLREDUCE, NumelIn=497464352, NumelOut=497464352, Timeout(ms)=600000) ran for 600087 milliseconds before timing out.
Exception raised from checkTimeout at /opt/tiger/compile_path/src/code.byted.org/pytorch/pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x91 (0x7f6aa3f5fdd1 in /usr/local/lib/python3.11/dist-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x1022831 (0x7f6aa4fee831 in /usr/local/lib/python3.11/dist-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x22a (0x7f6aa500d27a in /usr/local/lib/python3.11/dist-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::watchdogHandler() + 0x22e (0x7f6aa500d92e in /usr/local/lib/python3.11/dist-packages/torch/lib/libtorch_cuda.so)
frame #4: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x143 (0x7f6aa500f5f3 in /usr/local/lib/python3.11/dist-packages/torch/lib/libtorch_cuda.so)
frame #5: <unknown function> + 0xd44a3 (0x7f6a96cba4a3 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #6: <unknown function> + 0x89144 (0x7f6ae9427144 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #7: <unknown function> + 0x1097dc (0x7f6ae94a77dc in /usr/lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
**System info (please complete the following information):**
- Python version: Python 3.11.2
- Pytorch version: 2.5.1+cu124
- Deepspeed version: 0.16.3
- Any other relevant info about your setup
**Launcher context**
python -m torch.distributed.launch --nnodes=1 --nproc_per_node=8 --master_port=12345 train_deepspeed.py --xxx
**Additional context**
Add any other context about the problem here.
| open | 2025-02-17T07:47:44Z | 2025-03-13T02:29:38Z | https://github.com/deepspeedai/DeepSpeed/issues/7044 | [
"bug",
"training"
] | leeruibin | 8 |
modelscope/modelscope | nlp | 795 | How to fine-tune the cv_resnest101_general_recognition model | **General Question**
Before asking a question, make sure you have:
* Searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* Googled your question.
* Searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
@wenmengzhou @tastelikefeet
@tastelikefeet @Jintao-Huang
| closed | 2024-03-04T07:22:26Z | 2024-03-20T02:28:46Z | https://github.com/modelscope/modelscope/issues/795 | [] | AnitaSherry | 1 |
horovod/horovod | machine-learning | 4,066 | NCCL error while training (SGD optimizer). | Framework: (TensorFlow, Keras, PyTorch, MXNet) Tensorflow
Framework version: Whatever is in latest horovod container from docker hub (2.9.2)
Horovod version: Whatever is in latest docker hub image (not sure)
MPI version: OpenMPI 4.1.4
CUDA version: Whatever is in latest docker hub image (not sure)
NCCL version:Whatever is in latest docker hub image (not sure)
Python version: 3.8.10
Spark / PySpark version: Whatever is in latest docker hub image (not sure)
Ray version: Whatever is in latest docker hub image (not sure)
OS and version: Ubuntu 20.04.5
GCC version: 9.4.0
CMake version: Whatever is in latest docker hub image (not sure)
Checklist:
Did you search issues to find if somebody asked this question before? Yes
If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
I'm running a simple 2-node test with 8 GPUs each. I modified the example script, https://github.com/horovod/horovod/blob/master/examples/tensorflow2/tensorflow2_synthetic_benchmark.py.
I'm using the latest horovod container from Docker hub (it's over a year old).
I get the following error:
```
Traceback (most recent call last):
File "/home/jelayton/BERT_SCRIPT/NEW_BERT/new_bert_04.py", line 732, in <module>
history = classifier_model.fit(train_ds,\
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.UnknownError: Graph execution error:
Detected at node 'DistributedSGD_Allreduce/cond/HorovodAllgather_grads_1_0' defined at (most recent call last):
File "/home/jelayton/BERT_SCRIPT/NEW_BERT/new_bert_04.py", line 732, in <module>
history = classifier_model.fit(train_ds,\
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1409, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1051, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1040, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1030, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 893, in train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
File "/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 537, in minimize
grads_and_vars = self._compute_gradients(
File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 136, in _compute_gradients
allreduced_grads = self._allreduce(grads, weights)
File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 218, in _allreduce
return __filtered_reduce_grads(grads, vars)
File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/__init__.py", line 184, in __filtered_reduce_grads
rg = self._allreduce_grads(rg, rv)
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 573, in allreduce_grads
if groups is not None:
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 616, in allreduce_grads
op=op,
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 616, in allreduce_grads
op=op,
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 616, in allreduce_grads
op=op,
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 398, in _allreduce_cond
return tf.cond(cond, allreduce_fn, id_fn)
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 384, in allreduce_fn
return allreduce(tensor, *args, process_set=process_set, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 102, in allreduce
if isinstance(tensor, tf.IndexedSlices):
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/__init__.py", line 117, in allreduce
indices = allgather(tensor.indices, process_set=process_set, ignore_name_scope=ignore_name_scope)
File "/usr/local/lib/python3.8/dist-packages/horovod/tensorflow/mpi_ops.py", line 222, in allgather
return MPI_LIB.horovod_allgather(tensor, name=name,
File "<string>", line 393, in horovod_allgather
Node: 'DistributedSGD_Allreduce/cond/HorovodAllgather_grads_1_0'
ncclCommInitRank failed: invalid usage
[[{{node DistributedSGD_Allreduce/cond/HorovodAllgather_grads_1_0}}]] [Op:__inference_train_function_40707]
```
I get this error for all of the 16 GPUs.
I focused on the message "ncclCommInitRank failed: invalid usage" but didn't see anything comparable anywhere.
I've tried the SGD and Adam optimizer and both give the same error message for the equivalent optimizer (SGD error message will mention SGD, Adam error message will mention Adam).
Any ideas or pointers?
Thanks! | open | 2024-08-16T18:20:29Z | 2024-08-16T18:20:29Z | https://github.com/horovod/horovod/issues/4066 | [
"bug"
] | laytonjbgmail | 0 |
chezou/tabula-py | pandas | 88 | Encoding | I cannot use the tool with UTF-8 encoding; it prints a warning and outputs only "?" characters. | closed | 2018-04-30T13:57:25Z | 2018-05-02T01:15:32Z | https://github.com/chezou/tabula-py/issues/88 | [] | bekab95 | 10
twopirllc/pandas-ta | pandas | 861 | Attribute error when using yfinance for data loading | **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
[pandas_ta version 0.3.14b0]
**Do you have _TA Lib_ also installed in your environment?**
```sh
$ pip list
```
[Not the cause]
**Have you tried the _development_ version? Did it resolve the issue?**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta.git@development
```
[I checked the development version and did not find the fix in it]
**Describe the bug**
When I try to use yfinance to fetch data, it throws an AttributeError.
**To Reproduce**
Provide sample code.
[
```python
import pandas as pd
import pandas_ta as ta

df = pd.DataFrame()  # Empty DataFrame

# Load data
# df = pd.read_csv("path/to/symbol.csv", sep=",")
# OR if you have yfinance installed
df = df.ta.ticker("aapl")
```
]
**Expected behavior**
A clear and concise description of what you expected to happen.
[
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[4], line 4
1 # Load data
2 # df = pd.read_csv("path/to/symbol.csv", sep=",")
3 # OR if you have yfinance installed
----> 4 df = df.ta.ticker("aapl")
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas_ta\core.py:851, in AnalysisIndicators.ticker(self, ticker, **kwargs)
849 ds = ds.lower() is not None and isinstance(ds, str)
850 # df = av(ticker, **kwargs) if ds and ds == "av" else yf(ticker, **kwargs)
--> 851 df = yf(ticker, **kwargs)
853 if df is None: return
854 elif df.empty:
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas_ta\utils\data\yahoofinance.py:86, in yf(ticker, **kwargs)
84 if Imports["yfinance"] and ticker is not None:
85 import yfinance as yfra
---> 86 yfra.pdr_override()
88 # Ticker Info & Chart History
89 yfd = yfra.Ticker(ticker)
AttributeError: module 'yfinance' has no attribute 'pdr_override'
]
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
[This has to do with the attempt to run .pdr_override()]
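A hedged stop-gap until the library drops the removed `pdr_override()` call: bypass `df.ta.ticker()` entirely and load the history with yfinance directly. The `yf.Ticker(...).history(...)` call below is my assumption about the current yfinance API, so verify it against your installed version:

```python
def load_history(ticker: str):
    """Fetch OHLCV history without df.ta.ticker(), avoiding pdr_override()."""
    import yfinance as yf  # assumed: a recent yfinance exposing Ticker.history
    return yf.Ticker(ticker).history(period="1y", auto_adjust=True)

# df = load_history("AAPL")
# df.ta.sma(length=10, append=True)  # pandas-ta indicators still apply
```

Alternatively, pinning an older yfinance release that still ships `pdr_override()` may work until the integration is updated.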
Thanks for using Pandas TA!
| closed | 2024-12-14T14:32:11Z | 2024-12-16T06:55:15Z | https://github.com/twopirllc/pandas-ta/issues/861 | [
"bug",
"duplicate"
] | miracle-a-osigwe | 4 |
jmcnamara/XlsxWriter | pandas | 317 | Rearranging worksheets | It's "inadvisable" to rearrange worksheets once they have been added, according to [this Stack Overflow answer](http://stackoverflow.com/a/21124112/95852). So I looked into possible reasons why, and it seems to come down to the fact that `Workbook.worksheets_objs` and `Workbook.sheetnames` are independent lists, with no API that guarantees keeping them synchronized. In particular, `Workbook._get_sheet_index` relies on `sheetnames` being unaltered except via `Workbook._add_sheet`.
Some possibilities (not mutually exclusive) that occurred to me:
- Add a public method to insert a new sheet just before a given ordinal position or sheet name.
- Convert `sheetnames` to a property with a getter that returns `[ws.name for ws in worksheets_objs]`.
- Add a data structure to `Workbook` to map sheet names to internal worksheet indices (the `index` worksheet attribute, not ordinal position), to make `Workbook._get_sheet_index` trivial or unnecessary.
- If `sheetnames` becomes a property, give it a setter that rearranges `worksheets_objs`.
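A minimal sketch of the property idea from the bullets above (deriving `sheetnames` from `worksheets_objs`), using stand-in objects rather than real XlsxWriter internals:

```python
from types import SimpleNamespace

class WorkbookSketch:
    """Toy stand-in for Workbook illustrating the derived-property idea."""

    def __init__(self, names):
        self.worksheets_objs = [SimpleNamespace(name=n) for n in names]

    @property
    def sheetnames(self):
        # Always reflects worksheets_objs, so the two lists cannot diverge.
        return [ws.name for ws in self.worksheets_objs]

wb = WorkbookSketch(["B", "A", "C"])
wb.worksheets_objs.sort(key=lambda ws: ws.name)
print(wb.sheetnames)  # -> ['A', 'B', 'C']
```

With this in place, reordering `worksheets_objs` (the "simple case" sort from the Stack Overflow answer) could no longer leave `sheetnames` stale, which is most of what `_get_sheet_index` relies on.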
Would some combination of these relatively lightweight changes be sufficient, or is the problem deeper? Your answer stated that sorting `worksheets_objs` might work for "simple" cases but not "complex" ones. At the very least, I think these changes would increase the number cases where reordering works. (And they are simple enough that I could work on them, if there is interest.)
| closed | 2015-12-15T18:57:43Z | 2016-01-05T10:56:44Z | https://github.com/jmcnamara/XlsxWriter/issues/317 | [
"question",
"ready to close"
] | jkyeung | 3 |
tqdm/tqdm | pandas | 1,259 | Force display of iterations per second (it/s) - instead of displaying inverse (s/it) based on rate | For a given code example:
```
from tqdm import tqdm
from time import sleep
for i in tqdm(range(100)):
sleep(1.5)
```
the output looks like this:
`6%|▌ | 6/100 [00:09<02:21, 1.50s/it]`
but I'd like to force it to display it/s (generally, units per unit time, and possibly vice versa), so that the output for this example (with the relevant parameter set) would look like this:
`6%|▌ | 6/100 [00:09<02:21, 0.67it/s]`
As I see it, this functionality is basically hardcoded in std.py (in the master branch as of October 2021): https://github.com/tqdm/tqdm/blob/fc69d5dcf578f7c7986fa76841a6b793f813df35/tqdm/std.py#L450
a modification of the above code line to
`rate_fmt = rate_noinv_fmt`
achieves the desired behavior, but does not parametrize when each format should be displayed. That hardcoded "1" may be the only thing that needs parametrizing (possibly as a member variable), with an option to set it to None to disable the fallback entirely, while a default value of 1 would preserve the current behavior.
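The parametrization could look roughly like this (a standalone sketch of just the formatting decision, not actual tqdm internals; `rate_threshold` is my invented name for the parameter):

```python
def format_rate(rate, rate_threshold=1.0):
    """Format a tqdm-style rate string.

    Below rate_threshold it/s, fall back to 's/it' (today's hardcoded 1);
    pass None to always display 'it/s', as requested above.
    """
    if rate_threshold is not None and rate < rate_threshold:
        return f"{1.0 / rate:5.2f}s/it"
    return f"{rate:5.2f}it/s"

print(format_rate(0.67))        # current default -> " 1.49s/it"
print(format_rate(0.67, None))  # forced it/s     -> " 0.67it/s"
```

Setting the threshold to, say, `1e-3` would then give the "only show s/it for very slow loops" behavior mentioned below.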
Is there any other way right now to "force" it/s mode (without changing tqdm code), and possibly set the "trigger" threshold? Say, for example, one might want to display seconds per iteration only if the rate is lower than 1e-3 it/s...
EDIT 1, 2 & 3: sentence formatting, spelling, punctuation | open | 2021-10-11T03:28:23Z | 2024-05-08T08:46:15Z | https://github.com/tqdm/tqdm/issues/1259 | [] | tloki | 6 |
autokey/autokey | automation | 523 | Hold mouse button? | ## Classification:
Feature, Enhancement (maybe bug)
## Reproducibility:
Always
## Version
AutoKey version: 0.95.10
Used GUI: GTK
If the problem is known to be present in more than one version, please list all of those.
Installed via: (Debian/Ubuntu Repo).
Linux Distribution: Linux Mint
## Summary
There doesn't seem to be a way to hold the mouse button down (e.g. via press/release functions) that produces behavior similar to actually holding the button. Described below is what happens when using the keyboard `press()` and `release()` functions with the `<left_mouse_button>` token.
## Expected Results
When dragging the mouse around during script execution, text should be highlighted and any draggables under the cursor should be dragged along.
## Actual Results
The cursor acts as if the left mouse button were being clicked in quick succession instead of being held; while this may be normal behavior for a key, it is not for a mouse button.
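A hedged workaround sketch for an AutoKey script: shell out to `xdotool`, which holds the button state at the X server level (assumes `xdotool` is installed; `1` is the left button):

```python
import subprocess
import time

def hold_left_button(seconds):
    # xdotool sets the physical button state directly, so the button
    # stays held for the whole interval -- unlike key-style press()/release().
    subprocess.run(["xdotool", "mousedown", "1"], check=True)
    time.sleep(seconds)
    subprocess.run(["xdotool", "mouseup", "1"], check=True)
```

Moving the cursor between the two calls (e.g. with `xdotool mousemove_relative`) should then drag rather than click repeatedly.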
## Notes
I scoured the GitHub Wiki, but came up empty. I would not have even known that `<left_mouse_button>` was a thing if not for my thorough search of an unrelated section in the Known Limitations page. This is because it lacks an entry in the Special Keys page, where I would have expected to find information on it.
| closed | 2021-03-29T20:56:20Z | 2021-04-01T11:29:52Z | https://github.com/autokey/autokey/issues/523 | [
"duplicate",
"enhancement",
"scripting"
] | thedocruby | 4 |