| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
miguelgrinberg/Flask-SocketIO | flask | 916 | Socketio doesn't work properly when Flask is streaming video | I am trying to build an RPi Zero-controlled toy car with a camera and stream video to a web page.
I found your project [flask-video-streaming](https://github.com/miguelgrinberg/flask-video-streaming/blob/master/base_camera.py), which works great, and I tried to combine it with the [ZeroBot project](https://github.com/CoretechR/ZeroBot). The ZeroBot project runs Node.js on the server side; I basically just rewrote the server side in Python.
Here is [my project](https://github.com/hyansuper/FPV_RPi_Car). Camera streaming works great, but socketio seems to be very slow or not responding: if I click the "light" button on the web page, the server side should print "on_light" and an LED connected to the RPi Zero should light up, but it doesn't.
In the file app.py, if I comment out the video_feed function, then the rest of the code works fine: the print and the LED work as expected.
I don't know what's wrong, can you help?
Thanks! | closed | 2019-03-06T19:58:57Z | 2019-06-08T08:00:21Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/916 | [
"question"
] | hyansuper | 5 |
benbusby/whoogle-search | flask | 255 | Does the pre-config apply to Heroku deployment as well? | Hi, I have a question about Heroku deployment.
https://github.com/IniUe/whoogle-search#environment-variables
`# You can set Whoogle environment variables here, but must set`
`# WHOOGLE_DOTENV=1 in your deployment to enable these values`
https://github.com/benbusby/whoogle-search/blob/develop/whoogle.env
Is it enough to remove the `#` to enable these values, or do I need to do something else to get them working?
Edit: when I use the quick-deploy button, only lines 4 to 13 appear in the config vars (we can't do any other configuration), so I can't do anything with lines 18 to 23. Or can we add new config vars after deploying the app, under Heroku settings --> config vars?
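For reference, this is what I think whoogle.env should end up looking like (just my understanding; apart from `WHOOGLE_DOTENV=1`, which is quoted above, the variable shown is only a placeholder taken from the template):

```shell
# whoogle.env -- uncomment (remove the leading '#' from) the values you want,
# and make sure the enabling flag itself is set:
WHOOGLE_DOTENV=1
# e.g. one of the template's config values, once uncommented:
# WHOOGLE_CONFIG_LANGUAGE=lang_en
```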
| closed | 2021-04-02T07:35:38Z | 2021-04-02T12:55:58Z | https://github.com/benbusby/whoogle-search/issues/255 | [
"question"
] | ghost | 4 |
tensorflow/tensor2tensor | deep-learning | 1,659 | Getting to work with MultiProblem | #### Please refer to https://github.com/tensorflow/tensor2tensor/issues/1687
----
For training models, I have separated the data generation pipeline from t2t. For that I have implemented my own problem which, in essence, already expects a created dataset.
```python
import glob
import os

from tensor2tensor.data_generators import translate
from tensor2tensor.utils import registry


@registry.register_problem
class ConfigBasedTranslationProblem(translate.TranslateProblem):
    # ...
    def training_filepaths(self, data_dir, num_shards, shuffled):
        return glob.glob(os.path.join(data_dir, '*.train.shard'))

    # ...
    def filepattern(self, data_dir, split: str, shard=None):
        split = 'dev' if split == 'eval' else split
        return os.path.join(data_dir, '*.%s.shard' % split)

    def generate_data(self, data_dir, tmp_dir, task_id=-1):
        raise NotImplementedError('Data should already be generated.')

    def prepare_to_generate(self, data_dir, tmp_dir):
        raise NotImplementedError('Data should already be generated.')

    def source_data_files(self, dataset_split):
        raise NotImplementedError('Data should already be generated.')
```
This works great so far but now that I discovered `MultiProblem` ([`multi_problem.md`](https://github.com/tensorflow/tensor2tensor/blob/master/docs/multi_problem.md)) I am facing some issues and questions I was hoping to be able to clarify here.
### The Language Model in `MultiProblem` (?)
This is not really a question but more a suggestion for code and API changes. I would provide a PR, but at the moment I am stuck with tensor2tensor 1.12, and until I have updated to 1.13 I won't have resources for it.
From ([`multi_problem.md`](https://github.com/tensorflow/tensor2tensor/blob/master/docs/multi_problem.md)) and the code I can see that the first `problem` has to be a language-model problem.
From the looks of it, this seems like a requirement which could be relaxed. The first task seems to be used just to create a vocabulary for all languages. In my implementation, I just merge all vocabularies from all tasks (here called `dataset`) into one and return a `SubwordEncoder`:
```python
datasets = self.get_datasets()
reserved_tokens = set()
subword_tokens = set()
for dataset in datasets:
    encoders = dataset.feature_encoders()
    for encoder in encoders.values():
        encoder = cast(SubwordEncoder, encoder)
        reserved_tokens.update(encoder.reserved_tokens)
        subword_tokens.update(encoder.subword_tokens)
final_subwords = list(reserved_tokens) + sorted(list(subword_tokens.difference(reserved_tokens)))
# ...
return SubwordEncoder(vocab_fp)
```
The only other occurrence of the primary task I can see is in `get_hparams()` in order to set the vocab size and modality.
Imho it could make sense to relax this requirement if `MultiProblem` itself provided feature encoders for inputs and targets. All that is required for this would be additional "merge-vocab" logic and `MultiProblem` could work without the first task having to be a language model problem.
### How does `MultiProblem` training work?
In the `MultiProblem` class we find this:
```python
task_dataset = task_dataset.map(
    lambda x: self.add_task_id(task, x, enc, hparams, is_infer))
```
and `add_task_id()` takes an `example` (here `x`) in order to create
```python
# not is_infer
inputs = example.pop("inputs")
concat_list = [inputs, [task.task_id], example["targets"]]
example["targets"] = tf.concat(concat_list, axis=0)
```
in case the problem has inputs or
```python
concat_list = [[task.task_id], example["targets"]]
example["targets"] = tf.concat(concat_list, axis=0)
```
in case the problem has no inputs.
Now, I do not quite understand why a `MultiProblem` only works on examples which provide `targets`. I was under the impression that we still simply train a `Transformer` model which gets `inputs` and `targets` presented as usual but that these samples are drawn from a set of tasks (problems/corpora) for training (see `multiproblem_per_task_threshold`).
So in theory, I should be able to create a classic `Text2TextProblem` which contains all required samples (plus a `task_id` for each sample) and it _should_ work as well, right? Or does `MultiProblem` work totally different?
The `task_id` does not set a different "_mode_" or something, it's just additional context for the model, isn't it?
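To make sure I read the snippets above correctly, here is the concatenation as a tiny self-contained sketch, with plain Python lists standing in for the tensors (the token ids are made up):

```python
task_id = 64        # made-up id for the task
inputs = [5, 6, 7]  # source token ids
targets = [8, 9]    # target token ids

# problem with inputs (not is_infer): targets become inputs + [task_id] + targets
with_inputs = inputs + [task_id] + targets

# problem without inputs: targets just get the task id prepended
without_inputs = [task_id] + targets

print(with_inputs)     # [5, 6, 7, 64, 8, 9]
print(without_inputs)  # [64, 8, 9]
```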
### The Transformer in `MultiProblem`
I noticed that the graph of the Transformer looks very different from what I know from single-problem training. In particular, I noticed that there are no encoder layers? The `body` only contains the `decoder` layers. Why is this the case, or what am I missing here?
> *Note:* It should not matter but this is from the `EvolvedTransformer` in particular

Does this mean hparams like `num_encoder_layers` are getting ignored here?
### `MultiProblem` Training Best Practice
I am not at the point where I could make use of recommendations but what I was thinking of was something like the following:
I want to test if (or how much) a small dataset can benefit from `MultiProblem` training. For this I would like to use a small `de_en` dataset. The main task of the `MultiProblem` would be to learn `en2de`. In addition, I would like to use an `en_fr` dataset, so the second task would be training on `en2fr` translation.
Setting `multiproblem_per_task_threshold` to `"95,5"` would mean that batches should consist of 95% `en2de` and 5% `en2fr` samples, is that correct? If so, can I expect improvements for the `"constant"` schedule or should I rather consider the `"pretrain"` schedule?
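Sketching out my reading of the `"95,5"` value (this is just my interpretation of the docs, not the actual t2t sampling code):

```python
threshold_str = "95,5"  # one value per task, in task order
weights = [float(x) for x in threshold_str.split(",")]
total = sum(weights)
sampling_probs = [w / total for w in weights]
print(sampling_probs)  # [0.95, 0.05] -> 95% en2de, 5% en2fr examples
```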
Any comments on that would be appreciated.
----
There are some other things I do not understand, some of which are inside the code of `MultiProblem`, e.g. the following:
```python
def dataset(self, ...):
    # ..
    if not is_training and not is_infer:
        zeros = tf.zeros([self._ADDED_EVAL_COUNT, 1], dtype=tf.int64)
        pad_data = tf.data.Dataset.from_tensor_slices({
            "targets": zeros,
            "batch_prediction_key": zeros,
            "task_id": zeros,
        })
        task_dataset = task_dataset.concatenate(pad_data)
    # ..
```
which 1) I do not understand the purpose of and 2) breaks my pipeline. I am overriding `example_reading_spec()` in my problems because I am adding things like the `corpus` name, which I use during evaluation.
```python
def example_reading_spec(self):
    data_fields = {
        'targets': tf.VarLenFeature(tf.int64),
        'corpus': tf.VarLenFeature(tf.int64)
    }
    data_items_to_decoders = None
    return data_fields, data_items_to_decoders
```
Since this padding above is hard-coded, the program crashes. I had to override `MultiProblem.dataset` and add a dummy. What is this padding good for?
----
I know this is a lot so thank you for any response to these questions.
| closed | 2019-08-13T13:30:34Z | 2019-09-05T10:00:01Z | https://github.com/tensorflow/tensor2tensor/issues/1659 | [] | stefan-falk | 0 |
scikit-learn/scikit-learn | python | 30,056 | LinearSVC does not correctly handle sample_weight under class_weight strategy 'balanced' | ### Describe the bug
LinearSVC does not pass sample weights through when computing class weights under the "balanced" strategy, leading to sample-weight invariance issues; cross-linked to meta-issue #16298.
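For context, here is my understanding of what the "balanced" heuristic should compute when sample weights are present, as a plain-numpy sketch (this is just the expected math, not the actual scikit-learn implementation):

```python
import numpy as np

def balanced_class_weights(y, sample_weight):
    # "balanced" heuristic: n_samples / (n_classes * class_count), but with
    # the counts accumulated from the sample weights instead of raw counts
    classes = np.unique(y)
    weighted_counts = np.array(
        [sample_weight[y == c].sum() for c in classes], dtype=float
    )
    n = sample_weight.sum()
    return n / (len(classes) * weighted_counts)

y = np.array([0, 0, 0, 1])
sw = np.array([1.0, 1.0, 1.0, 3.0])
# weighting the single class-1 sample by 3 balances the two classes,
# so both class weights should come out as 1.0
print(balanced_class_weights(y, sw))  # [1. 1.]
```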
### Steps/Code to Reproduce
```python
from sklearn.svm import LinearSVC
from sklearn.base import clone
from sklearn.datasets import make_classification
import numpy as np

rng = np.random.RandomState()

X, y = make_classification(
    n_samples=100,
    n_features=5,
    n_informative=3,
    n_classes=4,
    random_state=0,
)

# Create dataset with repetitions and corresponding sample weights
sample_weight = rng.randint(0, 10, size=X.shape[0])
X_resampled_by_weights = np.repeat(X, sample_weight, axis=0)
y_resampled_by_weights = np.repeat(y, sample_weight)

est_sw = LinearSVC(dual=False, class_weight="balanced").fit(X, y, sample_weight=sample_weight)
est_dup = LinearSVC(dual=False, class_weight="balanced").fit(
    X_resampled_by_weights, y_resampled_by_weights, sample_weight=None
)

np.testing.assert_allclose(est_sw.coef_, est_dup.coef_, rtol=1e-10, atol=1e-10)
np.testing.assert_allclose(
    est_sw.decision_function(X_resampled_by_weights),
    est_dup.decision_function(X_resampled_by_weights),
    rtol=1e-10,
    atol=1e-10,
)
```
### Expected Results
No error thrown
### Actual Results
```
AssertionError:
Not equal to tolerance rtol=1e-10, atol=1e-10
Mismatched elements: 20 / 20 (100%)
Max absolute difference among violations: 0.00818953
Max relative difference among violations: 0.10657042
ACTUAL: array([[ 0.157045, -0.399979, -0.050654, 0.236997, -0.313416],
[-0.038369, -0.169516, -0.239528, -0.164231, 0.29698 ],
[ 0.069654, 0.250218, 0.268922, -0.065565, -0.195888],
[-0.117921, 0.185563, 0.005148, 0.006144, 0.130577]])
DESIRED: array([[ 0.157595, -0.401087, -0.051018, 0.23653 , -0.313528],
[-0.041687, -0.169006, -0.243102, -0.16373 , 0.302628],
[ 0.065096, 0.245549, 0.260732, -0.061577, -0.188419],
[-0.117224, 0.184116, 0.004652, 0.005555, 0.130453]])
```
### Versions
```shell
System:
python: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:13:44) [Clang 16.0.6 ]
executable: /Users/shrutinath/micromamba/envs/scikit-learn/bin/python
machine: macOS-14.3-arm64-arm-64bit
Python dependencies:
sklearn: 1.6.dev0
pip: 24.0
setuptools: 70.1.1
numpy: 2.0.0
scipy: 1.14.0
Cython: 3.0.10
pandas: 2.2.2
matplotlib: 3.9.0
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 8
prefix: libopenblas
...
num_threads: 8
prefix: libomp
filepath: /Users/shrutinath/micromamba/envs/scikit-learn/lib/libomp.dylib
version: None
```
| closed | 2024-10-13T15:09:29Z | 2025-02-11T18:20:03Z | https://github.com/scikit-learn/scikit-learn/issues/30056 | [
"Bug"
] | snath-xoc | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,193 | Fields of old default whistleblower_identity and new default are shown together after version upgrade | **Describe the bug**
After upgrading GL from 4.0.54 to 4.7.17, the fields of the old default whistleblower_identity and the new default are shown together in the identity section of a previously installed tenant.
Steps to reproduce the behavior:
1. upgrading GL from 4.0.54 to 4.7.17
2. going to the identity step -> the whistleblowing identity template fields are shown together
**Expected behavior**
Only the old default whistleblower_identity should be shown, or just the new one.
**Desktop (please complete the following information):**
- OS: Ubuntu 20
- Browser: Firefox 97
- GL Version 4.7.17
**Screenshots**

| closed | 2022-03-10T16:50:06Z | 2022-03-12T22:08:41Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3193 | [
"T: Bug",
"C: Backend"
] | larrykind | 1 |
bmoscon/cryptofeed | asyncio | 341 | Example for OHLC information does not work | The example code here
https://github.com/bmoscon/cryptofeed/blob/master/examples/demo_ohlcv.py
fails with this error
`TypeError: __call__() got an unexpected keyword argument 'order_type'` | closed | 2020-11-30T01:02:18Z | 2020-11-30T04:27:06Z | https://github.com/bmoscon/cryptofeed/issues/341 | [
"bug"
] | mccoydj1 | 3 |
ymcui/Chinese-BERT-wwm | tensorflow | 92 | How can I load this model into TFBertModel? Could you provide an h5 file of the model? | closed | 2020-03-16T08:44:17Z | 2020-03-25T10:02:18Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/92 | [] | JOHNYXUU | 3 | |
plotly/dash-bio | dash | 106 | invalid plotly syntax in component factory manhattan component | The syntax in https://github.com/plotly/dash-bio/blob/master/dash_bio/component_factory/_manhattan.py#L440 and https://github.com/plotly/dash-bio/blob/master/dash_bio/component_factory/_manhattan.py#L453 may need to be updated.
when running locally (after `pip install -r requirements.txt`) I'm getting:
```
(venv3) ➜ dash-bio git:(master) python index.py
Traceback (most recent call last):
File "index.py", line 31, in <module>
for filename in appList
File "index.py", line 32, in <dictcomp>
if filename.startswith("app_") and filename.endswith(".py")
File "/Users/chelsea/Repos/dash-repos/gallery-apps/dash-bio/tests/dash/app_manhattan_plot.py", line 12, in <module>
fig = dash_bio.ManhattanPlot(df) # Feed the data to a function which creates a Manhattan Plot figure
File "/Users/chelsea/Repos/dash-repos/gallery-apps/dash-bio/dash_bio/component_factory/_manhattan.py", line 165, in ManhattanPlot
highlight_color=highlight_color
File "/Users/chelsea/Repos/dash-repos/gallery-apps/dash-bio/dash_bio/component_factory/_manhattan.py", line 440, in figure
suggestiveline = go.layout.Shape(
AttributeError: module 'plotly.graph_objs' has no attribute 'layout'
```
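If I'm reading the traceback right, `go.layout.Shape` was only added in plotly.py 3.x, so older installs hit this AttributeError. A possible workaround (my sketch, with placeholder position/style values) is to express the shape as a plain dict, which plotly accepts for layout shapes in both 2.x and 3.x:

```python
# the same kind of shape expressed as a plain dict; x0/x1 span the full
# x-axis via xref="paper", and y0/y1/line are placeholder values
suggestiveline = dict(
    type="line",
    xref="paper",
    x0=0,
    x1=1,
    y0=5,
    y1=5,
    line=dict(color="blue", width=1),
)
print(suggestiveline["type"])  # line
```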
| closed | 2019-01-17T00:16:51Z | 2019-01-21T18:01:40Z | https://github.com/plotly/dash-bio/issues/106 | [] | cldougl | 5 |
huggingface/datasets | nlp | 7,249 | How to debug | ### Describe the bug
I wanted to use my own script to handle the processing and followed the tutorial documentation, rewriting the MyDatasetConfig and MyDatasetBuilder classes (the latter containing the _info, _split_generators and _generate_examples methods). Testing with simple data produced the processed results, but when I tried more complex processing I found that I was unable to debug (even the simple samples became inaccessible). No errors are reported, and I am able to print messages from _info, _split_generators and _generate_examples, but breakpoints are never hit.
### Steps to reproduce the bug
```python
# my_dataset.py
import json

import datasets


class MyDatasetConfig(datasets.BuilderConfig):
    def __init__(self, **kwargs):
        super(MyDatasetConfig, self).__init__(**kwargs)


class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        MyDatasetConfig(
            name="default",
            version=VERSION,
            description="myDATASET"
        ),
    ]

    def _info(self):
        print("info")  # breakpoints
        return datasets.DatasetInfo(
            description="myDATASET",
            features=datasets.Features(
                {
                    "id": datasets.Value("int32"),
                    "text": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["negative", "positive"]),
                }
            ),
            supervised_keys=("text", "label"),
        )

    def _split_generators(self, dl_manager):
        print("generate")  # breakpoints
        data_file = "data.json"
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_file}
            ),
        ]

    def _generate_examples(self, filepath):
        print("example")  # breakpoints
        with open(filepath, encoding="utf-8") as f:
            data = json.load(f)
        for idx, sample in enumerate(data):
            yield idx, {
                "id": sample["id"],
                "text": sample["text"],
                "label": sample["label"],
            }
```

```python
# main.py
import os
os.environ["TRANSFORMERS_NO_MULTIPROCESSING"] = "1"

from datasets import load_dataset

dataset = load_dataset("my_dataset.py", split="train", cache_dir=None)
print(dataset[:5])
```
### Expected behavior
Execution should pause at breakpoints while debugging.
### Environment info
PyCharm
| open | 2024-10-24T01:03:51Z | 2024-10-24T01:03:51Z | https://github.com/huggingface/datasets/issues/7249 | [] | ShDdu | 0 |
qwj/python-proxy | asyncio | 141 | Custom filter functions? | Hello, is it possible to add custom filtering functions based on the content?
Basically I want to filter YouTube videos based on their metadata, something I can do by leveraging the YouTube Data API. It is ok if the request takes several seconds.
I took a quick look at the code and I think I could add this in the connect() function of the different clients (pretty much just HTTP for my use case). I also saw there is a stream reader/writer that has the content; is that accurate?
Thanks in advance. | closed | 2021-12-05T03:51:36Z | 2022-07-05T07:35:41Z | https://github.com/qwj/python-proxy/issues/141 | [] | crorella | 0 |
jina-ai/serve | deep-learning | 5,585 | Change documentation for `CONTEXT` environment variables | **Describe your proposal/problem**
<!-- A clear and concise description of what the proposal is. -->
The [docs](https://docs.jina.ai/concepts/flow/yaml-spec/#context-variables) don't specify how to use context variables in a flow yaml.
It should be made clear that, when defining a flow using the YAML specification, `VALUE_A` & `VALUE_B` should appear in the `env` key.
---
**Flow.yml**
```yaml
jtype: Flow
executors:
  - name: executor1
    uses: executor1/config.yml
    env:
      VALUE_A: 123
      VALUE_B: hello
    uses_with:
      var_a: ${{ CONTEXT.VALUE_A }}
      var_b: ${{ CONTEXT.VALUE_B }}
``` | closed | 2023-01-09T16:05:59Z | 2023-04-24T00:18:00Z | https://github.com/jina-ai/serve/issues/5585 | [
"Stale",
"area/docs"
] | npitsillos | 2 |
nalepae/pandarallel | pandas | 112 | Weird return from parallel_apply() | (duplicate of #111) | closed | 2020-10-06T03:54:32Z | 2020-10-06T03:55:15Z | https://github.com/nalepae/pandarallel/issues/112 | [] | conraddd | 0 |
HIT-SCIR/ltp | nlp | 380 | Hello, a question about training the cws.model segmentation model with otcws in version 3.3.2 | Hello, training a word segmentation model with otcws on six months of the 1998 People's Daily corpus always fails: it exits at `build-featurespace: 30% instances is extracted.` Training on text with fewer than 50,000 lines works, but training on more than 50,000 lines always fails. I would like to ask whether there are any restrictions when training a model with otcws, for example special characters not being allowed, a per-line character limit, or the whole training sample not being allowed to be too long. Looking forward to your reply!
PS: the hardware is 16 cores and 128 GB RAM; the command on Windows is otcws.exe learn --reference people1998.seg --development people1998.seg --algorithm pa --model cws.model --max-iter 10 --rare-feature-threshold 1 | closed | 2020-07-09T08:45:27Z | 2020-07-10T07:43:48Z | https://github.com/HIT-SCIR/ltp/issues/380 | [] | GuohyCoding | 1 |
nalepae/pandarallel | pandas | 7 | Implement GroupBy.parallel_apply | open | 2019-03-16T13:28:36Z | 2019-03-16T13:31:14Z | https://github.com/nalepae/pandarallel/issues/7 | [
"enhancement"
] | nalepae | 0 | |
microsoft/nni | tensorflow | 4,817 | Why does SlimPruner utilize the WeightTrainerBasedDataCollector instead of the WeightDataCollector before compressing the model? | open | 2022-04-27T11:43:29Z | 2022-04-29T01:50:48Z | https://github.com/microsoft/nni/issues/4817 | [] | songkq | 1 |
TencentARC/GFPGAN | pytorch | 176 | Some colors in black and white photo | A minor detail: in some black-and-white photos, colors appear that are not in the photo, but it seems that the model "suggests" what colors the image should have. The improvement of the V1.3 model is also remarkable, although the 1.1 model has behaved very generously with this image. I must also add that some faces have been improved but end up with Asian features (such as the women and children).
Thanks for your project, I love it.
img1 - Original
img2 - V1 model (more natural and accurate face, but colorized (in this face))
img3 - V1.3 (added colors in B&W pics)

| open | 2022-03-13T08:45:33Z | 2022-03-14T23:23:38Z | https://github.com/TencentARC/GFPGAN/issues/176 | [] | GOZARCK | 2 |
nltk/nltk | nlp | 3,149 | TclError resizing download dialog table column | When attempting to resize a column in the downloader dialog, an error is raised and the column does not resize.
Steps to reproduce:
- Run `nltk.download()` to open downloading interface
- Try resizing any of the table columns (e.g. "Identifier" in the first tab)
An example full traceback is as follows:
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/usr/lib/python3.11/tkinter/__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/nltk/draw/table.py", line 196, in _resize_column_motion_cb
lb["width"] = max(3, lb["width"] + (x1 - x2) // charwidth)
~~^^^^^^^^^
File "/usr/lib/python3.11/tkinter/__init__.py", line 1713, in __setitem__
self.configure({key: value})
File "/usr/lib/python3.11/tkinter/__init__.py", line 1702, in configure
return self._configure('configure', cnf, kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/tkinter/__init__.py", line 1692, in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: expected integer but got "21.0"
```
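For context, the root cause as far as I can tell: `charwidth` is a float, so the floor division yields a float, and Tcl only accepts an integer width. A minimal stand-alone illustration (numbers chosen to mirror the "21.0" in the traceback):

```python
width = 20        # current listbox width (an int)
charwidth = 15.0  # average character width in pixels (a float)
x1, x2 = 118, 100

new_width = max(3, width + (x1 - x2) // charwidth)
print(new_width)        # 21.0, a float, which Tcl rejects
print(type(new_width))  # <class 'float'>

fixed_width = max(3, int(width + (x1 - x2) // charwidth))
print(fixed_width)      # 21, an int, which Tcl accepts
```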
The fix for this would be to simply change [`draw/table.py:196`](https://github.com/nltk/nltk/blob/56bc4af35906fb/nltk/draw/table.py#L196) from
`lb["width"] = max(3, lb["width"] + (x1 - x2) // charwidth)`
to
`lb["width"] = max(3, int(lb["width"] + (x1 - x2) // charwidth))`
(forcing the result of the floor division to be an int rather than a float). | closed | 2023-05-04T10:38:45Z | 2023-05-08T08:23:10Z | https://github.com/nltk/nltk/issues/3149 | [] | E-Paine | 0 |
huggingface/datasets | deep-learning | 7,215 | Iterable dataset map with explicit features causes slowdown for Sequence features | ### Describe the bug
When performing a map, it's nice to be able to pass the new feature type, and it's indeed required by interleave_datasets and concatenate_datasets.
However, this can cause a major slowdown for certain types of array features, because the features get re-encoded.
This is separate from the slowdown reported in #7206
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Array3D, Sequence, Value
import numpy as np
import time

features = Features(**{"array0": Sequence(feature=Value("float32"), length=-1), "array1": Sequence(feature=Value("float32"), length=-1)})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [5000, 10000] * 25] for i in range(2)}, features=features)
```
```python
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy").map(lambda x: x)

t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
```
~1.5 s on main
```python
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy").map(lambda x: x, features=features)

t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
```
~ 3 s on main
### Expected behavior
I'm not 100% sure whether passing new feature types to formatted outputs of map should be supported or not, but assuming it should, there should be a cost-free way to specify the new feature type; knowing the feature type is required by interleave_datasets and concatenate_datasets, for example.
### Environment info
3.0.2 | open | 2024-10-10T22:08:20Z | 2024-10-10T22:10:32Z | https://github.com/huggingface/datasets/issues/7215 | [] | alex-hh | 0 |
viewflow/viewflow | django | 449 | CreateViewMixin doesn't check permissions before adding "Add new" page action |
```python
class CreateViewMixin(metaclass=ViewsetMeta):
    create_view_class = CreateModelView
    create_form_layout = DEFAULT
    create_form_class = DEFAULT
    create_form_widgets = DEFAULT

    def has_add_permission(self, user):
        return has_object_perm(user, "add", self.model)

    def get_create_view_kwargs(self, **kwargs):
        view_kwargs = {
            "form_class": first_not_default(
                self.create_form_class, getattr(self, "form_class", DEFAULT)
            ),
            "form_widgets": first_not_default(
                self.create_form_widgets, getattr(self, "form_widgets", DEFAULT)
            ),
            "layout": first_not_default(
                self.create_form_layout, getattr(self, "form_layout", DEFAULT)
            ),
            **self.create_view_kwargs,
            **kwargs,
        }
        return self.filter_kwargs(self.create_view_class, **view_kwargs)

    def get_list_page_actions(self, request, *actions):
        add_action = Action(
            name="Add new",
            url=self.reverse("add"),
            icon=Icon("add_circle", class_="material-icons mdc-list-item__graphic"),
        )
        return super().get_list_page_actions(request, *(add_action, *actions))
```
I believe get_list_page_actions should check for add permission. Right now it shows "Add new" to users that aren't allowed to add.
A related question: is there a way to override the name of the add action when using ModelViewset? Often I want it to say "Add New Blog", for example. I've just been using BaseModelViewset and then adding in the other mixins except CreateViewMixin, so that I can perform the permission check as above as well as change the add action name.
Thanks | closed | 2024-06-19T22:41:26Z | 2024-06-24T10:26:05Z | https://github.com/viewflow/viewflow/issues/449 | [] | SamuelLayNZ | 1 |
cs230-stanford/cs230-code-examples | computer-vision | 17 | Error when running build_dataset.py on Windows | On Windows, folder names in a path are joined with a backslash [ \ ] instead of a slash [ / ], like this:
> C:\Program Files\NVIDIA GPU Computing Toolkit
so build_dataset.py throws an error, because it can't split the filename from the directory.
I solved it by replacing the slash with a double backslash '\\':
`image.save(os.path.join(output_dir, filename.split('\\')[-1]))`
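As a side note, a more portable alternative (my suggestion, not code from the repo) would be to let `os.path.basename` do the platform-specific split:

```python
import os

# os.path.basename splits on the separator of the platform the script runs on,
# so the same line works on Windows ('\\') and Linux ('/') without a manual split
path = os.path.join("output_dir_example", "img_01.jpg")
print(os.path.basename(path))  # img_01.jpg
```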
Thanks. | open | 2019-04-05T11:21:30Z | 2024-01-23T11:41:54Z | https://github.com/cs230-stanford/cs230-code-examples/issues/17 | [] | Amin-Tgz | 1 |
zhiyiYo/Fluent-M3U8 | dash | 5 | 是不是下载完没有文件列表完整性校验?网络波动一下就下载不全,然后合成失败 | 特别是下载外网视频时,一旦梯子不稳断线重连一下,断线时正在下载的ts文件就一直是temp后缀,下完合成失败,打开下载文件夹一看还有temp后缀的ts文件在。能不能合成之前先进行已下载文件列表完整性校验,把下载失败的文件单独再下载? | closed | 2025-02-16T14:24:25Z | 2025-02-17T16:21:46Z | https://github.com/zhiyiYo/Fluent-M3U8/issues/5 | [
"enhancement"
] | cai1niao1 | 2 |
miguelgrinberg/microblog | flask | 62 | Problem with sending email | I have searched, compared line by line and can't for the life of me figure out what I have done wrong.
It seems the error originates in the email.py file.
```powershell
127.0.0.1 - - [03/Jan/2018 18:57:27] "GET /reset_password_request HTTP/1.1" 200 -
[2018-01-03 18:57:32,933] ERROR in app: Exception on /reset_password_request [POST]
Traceback (most recent call last):
File "c:\users\calle\pycharmprojects\flask_megatutorial\venv\lib\site-packages\flask\app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "c:\users\calle\pycharmprojects\flask_megatutorial\venv\lib\site-packages\flask\app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "c:\users\calle\pycharmprojects\flask_megatutorial\venv\lib\site-packages\flask\app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "c:\users\calle\pycharmprojects\flask_megatutorial\venv\lib\site-packages\flask\_compat.py", line 33, in reraise
raise value
File "c:\users\calle\pycharmprojects\flask_megatutorial\venv\lib\site-packages\flask\app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "c:\users\calle\pycharmprojects\flask_megatutorial\venv\lib\site-packages\flask\app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Users\Calle\PycharmProjects\flask_megatutorial\app\routes.py", line 160, in reset_password_request
send_password_reset_email(user)
File "C:\Users\Calle\PycharmProjects\flask_megatutorial\app\email.py", line 14, in send_password_reset_email
sender=app.config['ADMINS'][0],
NameError: name 'app' is not defined
127.0.0.1 - - [03/Jan/2018 18:57:34] "POST /reset_password_request HTTP/1.1" 500 -
```
I also tried it in the Flask shell as described in **10.2 Flask-Mail Usage**:
```powershell
(venv) PS C:\Users\Calle\PycharmProjects\flask_megatutorial> flask shell
Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32
App: app
Instance: C:\Users\Calle\PycharmProjects\flask_megatutorial\instance
>>> from flask_mail import Message
>>> from app import mail
>>> msg = Message('test subject', sender=app.config['ADMINS'][0],
... recipients=['your-email@example.com'])
>>> msg.body = 'text body'
>>> msg.html = '<h1>HTML body</h1>'
>>> mail.send(msg)
```
Here is the code in Gist with the files that I suspect:
[https://gist.github.com/Callero/7b7edec02ed1e6be2644b0a3703a1630](https://gist.github.com/Callero/7b7edec02ed1e6be2644b0a3703a1630)
| closed | 2018-01-03T18:16:55Z | 2018-01-04T18:44:44Z | https://github.com/miguelgrinberg/microblog/issues/62 | [
"bug"
] | Callero | 2 |
flairNLP/flair | nlp | 3,428 | [Bug]: Error message: "learning rate too small - quitting training!" | ### Describe the bug
Model training quits after epoch 1 with a "learning rate too small - quitting training!" error message even though the "patience" parameter is set to 10.
### To Reproduce
```python
# In Google Colab:
!pip install flair -qq

import os
from os import mkdir, listdir
from os.path import join, exists
import re

from torch.optim.adam import Adam

from flair.datasets import CSVClassificationCorpus
from flair.data import Corpus, Sentence
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

for embedding in ["distilbert-base-uncased"]:
    print("Training on", embedding)

    # 1a. define the column format indicating which columns contain the text and labels
    column_name_map = {1: "text", 2: "label"}

    # 1b. load the preprocessed training, development, and test sets
    corpus: Corpus = CSVClassificationCorpus(processed_dir,
                                             column_name_map,
                                             label_type="label",
                                             skip_header=True,
                                             delimiter='\t')

    # 2. create the label dictionary
    label_dict = corpus.make_label_dictionary(label_type="label")

    # 3. initialize the transformer document embeddings
    document_embeddings = TransformerDocumentEmbeddings(embedding,
                                                        fine_tune=True,
                                                        layers="all")
    #document_embeddings.tokenizer.pad_token = document_embeddings.tokenizer.eos_token

    # 4. create the text classifier
    classifier = TextClassifier(document_embeddings,
                                label_dictionary=label_dict,
                                label_type="label")

    # 5. initialize the trainer
    trainer = ModelTrainer(classifier,
                           corpus)

    # 6. start the training
    trainer.train('model/' + embedding,
                  learning_rate=1e-5,
                  mini_batch_size=8,
                  max_epochs=3,
                  patience=10,
                  optimizer=Adam,
                  train_with_dev=False,
                  save_final_model=False
                  )
```
### Expected behavior
In this case, the model should be trained for 3 epochs without reducing the learning rate. In prior cases, even when a learning rate of 1e-5 was reduced by an anneal factor of 0.5, I did not receive a "learning rate too small - quitting training!" error message.
### Logs and Stack traces
```stacktrace
2024-03-18 14:11:51,783 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,786 Model: "TextClassifier(
(embeddings): TransformerDocumentEmbeddings(
(model): DistilBertModel(
(embeddings): Embeddings(
(word_embeddings): Embedding(30523, 768)
(position_embeddings): Embedding(512, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(transformer): Transformer(
(layer): ModuleList(
(0-5): 6 x TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): Linear(in_features=768, out_features=768, bias=True)
(k_lin): Linear(in_features=768, out_features=768, bias=True)
(v_lin): Linear(in_features=768, out_features=768, bias=True)
(out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): Linear(in_features=768, out_features=3072, bias=True)
(lin2): Linear(in_features=3072, out_features=768, bias=True)
(activation): GELUActivation()
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
)
)
)
(decoder): Linear(in_features=5376, out_features=2, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
(locked_dropout): LockedDropout(p=0.0)
(word_dropout): WordDropout(p=0.0)
(loss_function): CrossEntropyLoss()
(weights): None
(weight_tensor) None
)"
2024-03-18 14:11:51,787 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,789 Corpus: 8800 train + 2200 dev + 2200 test sentences
2024-03-18 14:11:51,793 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,794 Train: 8800 sentences
2024-03-18 14:11:51,795 (train_with_dev=False, train_with_test=False)
2024-03-18 14:11:51,799 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,802 Training Params:
2024-03-18 14:11:51,804 - learning_rate: "1e-05"
2024-03-18 14:11:51,806 - mini_batch_size: "8"
2024-03-18 14:11:51,807 - max_epochs: "3"
2024-03-18 14:11:51,812 - shuffle: "True"
2024-03-18 14:11:51,813 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,814 Plugins:
2024-03-18 14:11:51,816 - AnnealOnPlateau | patience: '10', anneal_factor: '0.5', min_learning_rate: '0.0001'
2024-03-18 14:11:51,817 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,818 Final evaluation on model from best epoch (best-model.pt)
2024-03-18 14:11:51,820 - metric: "('micro avg', 'f1-score')"
2024-03-18 14:11:51,821 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,823 Computation:
2024-03-18 14:11:51,825 - compute on device: cuda:0
2024-03-18 14:11:51,835 - embedding storage: cpu
2024-03-18 14:11:51,836 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,837 Model training base path: "model/distilbert-base-uncased"
2024-03-18 14:11:51,840 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,846 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:55,845 epoch 1 - iter 110/1100 - loss 0.57600509 - time (sec): 4.00 - samples/sec: 220.19 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:11:58,978 epoch 1 - iter 220/1100 - loss 0.50393908 - time (sec): 7.13 - samples/sec: 246.84 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:01,876 epoch 1 - iter 330/1100 - loss 0.46954644 - time (sec): 10.03 - samples/sec: 263.27 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:05,276 epoch 1 - iter 440/1100 - loss 0.44181235 - time (sec): 13.43 - samples/sec: 262.14 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:08,456 epoch 1 - iter 550/1100 - loss 0.41807515 - time (sec): 16.61 - samples/sec: 264.93 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:11,447 epoch 1 - iter 660/1100 - loss 0.40403758 - time (sec): 19.60 - samples/sec: 269.41 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:14,420 epoch 1 - iter 770/1100 - loss 0.38948912 - time (sec): 22.57 - samples/sec: 272.91 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:17,914 epoch 1 - iter 880/1100 - loss 0.38118810 - time (sec): 26.07 - samples/sec: 270.09 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:21,085 epoch 1 - iter 990/1100 - loss 0.37110791 - time (sec): 29.24 - samples/sec: 270.89 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:24,027 epoch 1 - iter 1100/1100 - loss 0.36139164 - time (sec): 32.18 - samples/sec: 273.47 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:24,030 ----------------------------------------------------------------------------------------------------
2024-03-18 14:12:24,032 EPOCH 1 done: loss 0.3614 - lr: 0.000010
2024-03-18 14:12:28,158 DEV : loss 0.28874295949935913 - f1-score (micro avg) 0.9095
2024-03-18 14:12:29,719 - 0 epochs without improvement
2024-03-18 14:12:29,721 ----------------------------------------------------------------------------------------------------
2024-03-18 14:12:29,723 learning rate too small - quitting training!
2024-03-18 14:12:29,725 ----------------------------------------------------------------------------------------------------
2024-03-18 14:12:29,727 Done.
2024-03-18 14:12:29,729 ----------------------------------------------------------------------------------------------------
2024-03-18 14:12:29,733 Testing using last state of model ...
2024-03-18 14:12:33,651
Results:
- F-score (micro) 0.9132
- F-score (macro) 0.9029
- Accuracy 0.9132
By class:
              precision    recall  f1-score   support

           0     0.9184    0.9511    0.9345      1432
           1     0.9024    0.8424    0.8714       768

    accuracy                         0.9132      2200
   macro avg     0.9104    0.8968    0.9029      2200
weighted avg     0.9128    0.9132    0.9125      2200
2024-03-18 14:12:33,653 ----------------------------------------------------------------------------------------------------
```
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.2.1+cu121
##### Transformers
4.38.2
#### GPU
True | closed | 2024-03-18T14:58:03Z | 2024-03-18T16:14:55Z | https://github.com/flairNLP/flair/issues/3428 | [
"bug"
] | azkgit | 1 |
lukas-blecher/LaTeX-OCR | pytorch | 319 | Training isn't working properly | I tried to train a custom model. This model's intention was to detect matrices, so I created a dataset, tokenizer, and config.yaml file.
However, I am here for a reason. For some reason it doesn't appear to actually be training. This is the output from the following command:
```
!python -m pix2tex.train --config colab.yaml
```
Output:
```
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: 2
wandb: You chose 'Use an existing W&B account'
wandb: Logging into wandb.ai. (Learn how to deploy a W&B server locally: https://wandb.me/wandb-server)
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter, or press ctrl+c to quit:
wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
wandb: Tracking run with wandb version 0.15.10
wandb: Run data is saved locally in /content/wandb/run-20230921_163333-mj2ft4r2
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run mixed
wandb: ⭐️ View project at https://wandb.ai/frankvp_11/uncategorized
wandb: 🚀 View run at https://wandb.ai/frankvp_11/uncategorized/runs/mj2ft4r2
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
wandb: Waiting for W&B process to finish... (success).
wandb:
wandb: Run history:
wandb: train/epoch ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇███
wandb:
wandb: Run summary:
wandb: train/epoch 50
wandb:
wandb: 🚀 View run mixed at: https://wandb.ai/frankvp_11/uncategorized/runs/mj2ft4r2
wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20230921_163333-mj2ft4r2/logs
```
Can someone help me debug what went wrong? Here's the link to the colab file that I am using. To get to this point (dataset creation + training) takes ~10 minutes
https://colab.research.google.com/drive/19aGMcvZVDhjJndIIdcaWHiz0IKRk1vxE?usp=sharing
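The repeated `0it [00:00, ?it/s]` lines indicate that the training dataloader yields zero batches, so the generated dataset is most likely empty. A quick sanity check (a sketch; the path and the pickle's internal structure are assumptions, not pix2tex specifics):

```python
import pickle

def count_samples(pkl_path):
    """Count entries in a pickled dataset file; 0 would explain the '0it' output."""
    with open(pkl_path, "rb") as f:
        data = pickle.load(f)
    try:
        return len(data)
    except TypeError:            # object without __len__; fall back to iterating
        return sum(1 for _ in data)

# e.g. print(count_samples("dataset/train.pkl"))  # hypothetical path
```

If the count is 0, the problem is upstream in dataset creation (e.g. images and equations failing to pair up) rather than in the training loop itself.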
| open | 2023-09-21T16:36:56Z | 2023-09-21T16:37:19Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/319 | [] | frankvp11 | 0 |
Ehco1996/django-sspanel | django | 649 | Garbled Chinese characters in node names | **Problem description**
When adding an ss subscription in Potatso Lite, the Chinese characters are garbled.
**Related screenshot/log**

| closed | 2022-03-15T08:20:21Z | 2022-03-19T09:24:13Z | https://github.com/Ehco1996/django-sspanel/issues/649 | [
"bug"
] | dymasch | 2 |
tensorflow/tensor2tensor | machine-learning | 1,631 | Most straightforward way to train summarization on new data with a simpler format than the CNN/DM datasets? Make a new data_generator? | ### Description
I would like to train a summarizer on my own data, and I am wondering what's the most straightforward way to do this. The CNN/DailyMail datasets have a bit of an odd format, into which it seems tricky to convert regular summarization datasets (CSVs with one column for the source and one for the summary).
So from my analysis of the code, the easiest way for Tensor2Tensor to accept new summarization datasets is to develop a new data_generator, so that it can train on any CSV-formatted data, one column being the source, the other being the summary.
My plan is to use the data_generators/cnn_dailymail.py code as the base with the following alterations:
First, replace the CNN/DailyMail google drive links with my own, here
https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/cnn_dailymail.py#L37
Then, I need to alter `def example_generator` in https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/cnn_dailymail.py#L137
In such a way that it'll take my custom data and put the source and summary on one line, separated by story_summary_split_token (unless there's no sum_token).
Is this it? Or is there anything else I need to take into consideration?
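The alteration described above could be sketched like this (a sketch, not tested against tensor2tensor internals; the split-token value is an assumption and should match `story_summary_split_token` in your copy):

```python
import csv

SPLIT_TOKEN = " <summary> "  # assumed; use the same value as story_summary_split_token

def example_generator(csv_path, sum_token=True):
    """Yield one 'source <split-token> summary' line per (source, summary) CSV row."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            source, summary = row[0], row[1]
            yield source + SPLIT_TOKEN + summary if sum_token else source
```

The rest of the cnn_dailymail pipeline (splitting on the token, building the vocab) should then work unchanged on the joined lines.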
| closed | 2019-07-12T22:51:02Z | 2021-03-03T13:07:17Z | https://github.com/tensorflow/tensor2tensor/issues/1631 | [] | Santosh-Gupta | 2 |
LibreTranslate/LibreTranslate | api | 679 | Basque translation project needs update in Weblate | Comparing with English string quantity [161](https://hosted.weblate.org/projects/libretranslate/app/en/), there are less available in the Basque project: [143](https://hosted.weblate.org/projects/libretranslate/app/eu/)
For example "Albanian", "Chinese (traditional)", "Kabyle" and some other are missing.
I guess the "Basque" string has also to be added :smile:
Thank you!
| closed | 2024-09-20T23:31:15Z | 2024-09-21T16:41:37Z | https://github.com/LibreTranslate/LibreTranslate/issues/679 | [
"enhancement"
] | urtzai | 1 |
xlwings/xlwings | automation | 1,724 | while accessing worksheet.range com_error: (-2147352573, 'Member not found.', None, None) | #### OS Windows 7 professional
#### Versions of xlwings 0.24.9, Excel 2010 and Python 3.8.10
#### Describe your issue (incl. Traceback!)
The code worked fine yesterday, but today it is not working.
```python
# Your traceback here
Traceback (most recent call last):
File "C:\Users\ssp\SpyderPythonProjects\SSTrades\SSTradesAlgoZero\library\NSEtickerinExcel.py", line 108, in <module>
print(dak.range("B2").value)
File "C:\Users\ssp\AppData\Local\Programs\Python\Python38\Lib\site-packages\xlwings\main.py", line 1106, in range
return Range(impl=self.impl.range(cell1, cell2))
File "C:\Users\ssp\AppData\Local\Programs\Python\Python38\Lib\site-packages\xlwings\_xlwindows.py", line 689, in range
xl1 = self.xl.Range(arg1)
File "C:\Users\ssp\AppData\Local\Programs\Python\Python38\Lib\site-packages\xlwings\_xlwindows.py", line 70, in __call__
v = self.__method(*args, **kwargs)
File "<COMObject <unknown>>", line 2, in Range
com_error: (-2147352573, 'Member not found.', None, None)
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
pathhcurr = os.getcwd()
savepath = pathhcurr.replace("library","reference files")
xlfilepath = str(savepath) + str("\\NSE_analysis_list.xlsx")
wb = xw.Book(str(xlfilepath))
dak = wb.sheets("DataKeys")
dak.active = True
print(dak.name)
dak.range("a:b").value = None
```
| closed | 2021-10-01T00:58:54Z | 2022-02-05T20:09:49Z | https://github.com/xlwings/xlwings/issues/1724 | [] | ssprakash-seeni | 5 |
polakowo/vectorbt | data-visualization | 493 | Pulling fundamental data | Thank you to the vectorbt team for all their hard work with this great library!
I was wondering if it were possible to pull more fundamental-style data into vectorbt? I'm interested in things like total current assets, long term investments, total current liabilities, etc.? I'm not sure if there is a particular data broker that vectorbt utilizes or something else. Thanks in advance for your help! | closed | 2022-09-06T15:44:44Z | 2022-09-20T01:14:25Z | https://github.com/polakowo/vectorbt/issues/493 | [] | aclifton314 | 1 |
drivendataorg/cookiecutter-data-science | data-science | 8 | Add option to choose different data storage back ends | - S3 (get AWS settings)
- Git Large File Storage
- Git Annex
- dat
| closed | 2016-04-23T17:56:20Z | 2023-08-30T21:26:21Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/8 | [] | pjbull | 2 |
huggingface/transformers | pytorch | 36,571 | In the latest version of transformers (4.49.0), a matrix multiplication error is encountered | ### System Info
transformers version: 4.49.0
Python version: 3.10
Environment: Hugging Face Spaces
Works in: 4.48.3
Please find below the Hugging Face Space code, which works in 4.48.3 but fails in 4.49.0.
Code :
`
import os
--
| import random
| import uuid
| import gradio as gr
| import numpy as np
| from PIL import Image
| import spaces
| import torch
| from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler
| from typing import Tuple
|
| css = '''
| .gradio-container{max-width: 575px !important}
| h1{text-align:center}
| footer {
| visibility: hidden
| }
| '''
|
| DESCRIPTIONXX = """## lStation txt2Img🥠"""
|
| examples = [
|
| "A tiny reptile hatching from an egg on the mars, 4k, planet theme, --style raw5 --v 6.0",
| "An anime-style illustration of a delicious, rice biryani with curry and chilli pickle --style raw5",
| "Iced tea in a cup --ar 85:128 --v 6.0 --style raw5, 4K, Photo-Realistic",
| "A zebra holding a sign that says Welcome to Zoo --ar 85:128 --v 6.0 --style raw",
| "A splash page of Spiderman swinging through a futuristic cityscape filled with flying cars, the scene depicted in a vibrant 3D rendered Marvel comic art style.--style raw5, 4K, Photo-Realistic"
| ]
|
| MODEL_OPTIONS = {
|
| "LIGHTNING V5.0": "SG161222/RealVisXL_V5.0_Lightning",
| "LIGHTNING V4.0": "SG161222/RealVisXL_V4.0_Lightning",
| }
|
| MAX_IMAGE_SIZE = int(os.getenv("MAX_IMAGE_SIZE", "4096"))
| USE_TORCH_COMPILE = os.getenv("USE_TORCH_COMPILE", "0") == "1"
| ENABLE_CPU_OFFLOAD = os.getenv("ENABLE_CPU_OFFLOAD", "0") == "1"
| BATCH_SIZE = int(os.getenv("BATCH_SIZE", "1"))
|
| device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
|
| style_list = [
| {
| "name": "3840 x 2160",
| "prompt": "hyper-realistic 8K image of {prompt}. ultra-detailed, lifelike, high-resolution, sharp, vibrant colors, photorealistic",
| "negative_prompt": "cartoonish, low resolution, blurry, simplistic, abstract, deformed, ugly",
| },
| {
| "name": "2560 x 1440",
| "prompt": "hyper-realistic 4K image of {prompt}. ultra-detailed, lifelike, high-resolution, sharp, vibrant colors, photorealistic",
| "negative_prompt": "cartoonish, low resolution, blurry, simplistic, abstract, deformed, ugly",
| },
| {
| "name": "HD+",
| "prompt": "hyper-realistic 2K image of {prompt}. ultra-detailed, lifelike, high-resolution, sharp, vibrant colors, photorealistic",
| "negative_prompt": "cartoonish, low resolution, blurry, simplistic, abstract, deformed, ugly",
| },
| {
| "name": "Style Zero",
| "prompt": "{prompt}",
| "negative_prompt": "",
| },
| ]
|
| styles = {k["name"]: (k["prompt"], k["negative_prompt"]) for k in style_list}
| DEFAULT_STYLE_NAME = "3840 x 2160"
| STYLE_NAMES = list(styles.keys())
|
| def apply_style(style_name: str, positive: str, negative: str = "") -> Tuple[str, str]:
| if style_name in styles:
| p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
| else:
| p, n = styles[DEFAULT_STYLE_NAME]
|
| if not negative:
| negative = ""
| return p.replace("{prompt}", positive), n + negative
|
| def load_and_prepare_model(model_id):
| pipe = StableDiffusionXLPipeline.from_pretrained(
| model_id,
| torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
| use_safetensors=True,
| add_watermarker=False,
| ).to(device)
| pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
|
| if USE_TORCH_COMPILE:
| pipe.compile()
|
| if ENABLE_CPU_OFFLOAD:
| pipe.enable_model_cpu_offload()
|
| return pipe
|
| # Preload and compile both models
| models = {key: load_and_prepare_model(value) for key, value in MODEL_OPTIONS.items()}
|
| MAX_SEED = np.iinfo(np.int32).max
|
| def save_image(img):
| unique_name = str(uuid.uuid4()) + ".png"
| img.save(unique_name)
| return unique_name
|
| def randomize_seed_fn(seed: int, randomize_seed: bool) -> int:
| if randomize_seed:
| seed = random.randint(0, MAX_SEED)
| return seed
|
| @spaces.GPU(duration=60, enable_queue=True)
| def generate(
| model_choice: str,
| prompt: str,
| negative_prompt: str = "extra limbs, extra fingers, extra toes, unnatural proportions, distorted anatomy, disjointed limbs, mutated body parts, broken bones, oversized limbs, unrealistic muscles, merged faces, extra eyes, floating features, disfigured hands, incorrect joint placement, missing parts, blurry details, asymmetrical body structure, glitched textures",
| use_negative_prompt: bool = False,
| style_selection: str = DEFAULT_STYLE_NAME,
| seed: int = 1,
| width: int = 1024,
| height: int = 1024,
| guidance_scale: float = 3,
| num_inference_steps: int = 25,
| randomize_seed: bool = False,
| use_resolution_binning: bool = True,
| num_images: int = 1,
| progress=gr.Progress(track_tqdm=True),
| ):
| global models
| pipe = models[model_choice]
|
| seed = int(randomize_seed_fn(seed, randomize_seed))
| generator = torch.Generator(device=device).manual_seed(seed)
|
| prompt, negative_prompt = apply_style(style_selection, prompt, negative_prompt)
|
| options = {
| "prompt": [prompt] * num_images,
| "negative_prompt": [negative_prompt] * num_images if use_negative_prompt else None,
| "width": width,
| "height": height,
| "guidance_scale": guidance_scale,
| "num_inference_steps": num_inference_steps,
| "generator": generator,
| "output_type": "pil",
| }
|
| if use_resolution_binning:
| options["use_resolution_binning"] = True
|
| images = []
| for i in range(0, num_images, BATCH_SIZE):
| batch_options = options.copy()
| batch_options["prompt"] = options["prompt"][i:i + BATCH_SIZE]
| if "negative_prompt" in batch_options:
| batch_options["negative_prompt"] = options["negative_prompt"][i:i + BATCH_SIZE]
| images.extend(pipe(**batch_options).images)
|
| image_paths = [save_image(img) for img in images]
|
| return image_paths, seed
|
| with gr.Blocks(css=css, theme="bethecloud/storj_theme") as demo:
| gr.Markdown(DESCRIPTIONXX)
| with gr.Row():
| prompt = gr.Text(
| label="Prompt",
| show_label=False,
| max_lines=1,
| placeholder="Enter your prompt",
| container=False,
| )
| run_button = gr.Button("Run", scale=0)
| result = gr.Gallery(label="Result", columns=1, show_label=False)
|
| with gr.Row():
| model_choice = gr.Dropdown(
| label="Model Selection⬇️",
| choices=list(MODEL_OPTIONS.keys()),
| value="LIGHTNING V5.0"
| )
|
| with gr.Accordion("Advanced options", open=False, visible=True):
| style_selection = gr.Radio(
| show_label=True,
| container=True,
| interactive=True,
| choices=STYLE_NAMES,
| value=DEFAULT_STYLE_NAME,
| label="Quality Style",
| )
| num_images = gr.Slider(
| label="Number of Images",
| minimum=1,
| maximum=5,
| step=1,
| value=1,
| )
| with gr.Row():
| with gr.Column(scale=1):
| use_negative_prompt = gr.Checkbox(label="Use negative prompt", value=True)
| negative_prompt = gr.Text(
| label="Negative prompt",
| max_lines=5,
| lines=4,
| placeholder="Enter a negative prompt",
| value="(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation",
| visible=True,
| )
| seed = gr.Slider(
| label="Seed",
| minimum=0,
| maximum=MAX_SEED,
| step=1,
| value=0,
| )
| randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
| with gr.Row():
| width = gr.Slider(
| label="Width",
| minimum=512,
| maximum=MAX_IMAGE_SIZE,
| step=8,
| value=1024,
| )
| height = gr.Slider(
| label="Height",
| minimum=512,
| maximum=MAX_IMAGE_SIZE,
| step=8,
| value=1024,
| )
| with gr.Row():
| guidance_scale = gr.Slider(
| label="Guidance Scale",
| minimum=0.1,
| maximum=6,
| step=0.1,
| value=3.0,
| )
| num_inference_steps = gr.Slider(
| label="Number of inference steps",
| minimum=1,
| maximum=60,
| step=1,
| value=28,
| )
| gr.Examples(
| examples=examples,
| inputs=prompt,
| cache_examples=False
| )
|
| use_negative_prompt.change(
| fn=lambda x: gr.update(visible=x),
| inputs=use_negative_prompt,
| outputs=negative_prompt,
| api_name=False,
| )
|
| gr.on(
| triggers=[
| prompt.submit,
| negative_prompt.submit,
| run_button.click,
| ],
| fn=generate,
| inputs=[
| model_choice,
| prompt,
| negative_prompt,
| use_negative_prompt,
| style_selection,
| seed,
| width,
| height,
| guidance_scale,
| num_inference_steps,
| randomize_seed,
| num_images,
| ],
| outputs=[result, seed]
| )
|
| if __name__ == "__main__":
| demo.queue(max_size=50).launch(show_api=True)
`
Exception:

```
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 256, in thread_wrapper
    res = future.result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/user/app/app.py", line 158, in generate
    images.extend(pipe(**batch_options).images)
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 1086, in __call__
    ) = self.encode_prompt(
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py", line 406, in encode_prompt
    prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 1490, in forward
    text_embeds = self.text_projection(pooled_output)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
```

**RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::Half**
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just try to execute the above space code in a GPU enabled system.
While generating any image it fails with the exception in the description posted above.
**RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::Half**
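For what it's worth, the failure mode can be reproduced model-free (an illustration only; the assumption is that somewhere in the 4.49.0 code path the pooled CLIP activations and the `text_projection` weights end up with different dtypes, which is not a confirmed root cause):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 4, dtype=torch.float32)   # stand-in for the pooled CLIP output
w = torch.randn(3, 4, dtype=torch.float16)   # stand-in for the text_projection weight

try:
    F.linear(x, w)                           # mixed-dtype matmul is rejected
except RuntimeError as e:
    print(e)

out = F.linear(x, w.float())                 # casting either side to a common dtype resolves it
assert out.dtype == torch.float32
```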
### Expected behavior
There should not be any exception. | open | 2025-03-06T05:33:31Z | 2025-03-07T05:39:13Z | https://github.com/huggingface/transformers/issues/36571 | [
"bug"
] | idebroy | 3 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 558 | NotImplementedError when merging LoRA weights | chinese-alpaca-plus-lora-13b
chinese-llama-plus-lora-13b
chinese-llama-plus-lora-7b
When performing a single-LoRA weight merge, a NotImplementedError is raised.
(fastchat) root@estar-ESC8000-G4:~# pip list | grep*
Package Version
------------------- ------------
accelerate 0.19.0
aiofiles 23.1.0
aiohttp 3.8.4
aiosignal 1.3.1
altair 5.0.1
anyio 3.6.2
appdirs 1.4.4
async-timeout 4.0.2
attrs 23.1.0
certifi 2023.5.7
charset-normalizer 3.1.0
click 8.1.3
contourpy 1.0.7
cycler 0.11.0
docker-pycreds 0.4.0
fastapi 0.95.1
ffmpy 0.3.0
filelock 3.12.0
fonttools 4.39.4
frozenlist 1.3.3
fschat 0.2.9
fsspec 2023.5.0
gitdb 4.0.10
GitPython 3.1.31
gradio 3.23.0
h11 0.14.0
httpcore 0.17.0
httpx 0.24.0
huggingface-hub 0.14.1
idna 3.4
importlib-resources 5.12.0
Jinja2 3.1.2
jsonschema 4.17.3
kiwisolver 1.4.4
linkify-it-py 2.0.2
markdown-it-py 2.2.0
markdown2 2.4.8
MarkupSafe 2.1.2
matplotlib 3.7.1
mdit-py-plugins 0.3.3
mdurl 0.1.2
multidict 6.0.4
nh3 0.2.11
numpy 1.24.3
orjson 3.8.12
packaging 23.1
pandas 2.0.1
pathtools 0.1.2
peft 0.3.0
Pillow 9.5.0
pip 23.1.2
prompt-toolkit 3.0.38
protobuf 3.19.0
psutil 5.9.5
pydantic 1.10.7
pydub 0.25.1
Pygments 2.15.1
pyparsing 3.0.9
pyrsistent 0.19.3
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3
PyYAML 6.0
regex 2023.5.5
requests 2.30.0
rich 13.3.5
semantic-version 2.10.0
sentencepiece 0.1.97
sentry-sdk 1.23.1
setproctitle 1.3.2
setuptools 67.7.2
shortuuid 1.0.11
six 1.16.0
smmap 5.0.0
sniffio 1.3.0
starlette 0.26.1
svgwrite 1.4.3
tokenizers 0.13.3
toolz 0.12.0
torch 1.13.1+cu117
torchaudio 0.13.1+cu117
torchvision 0.14.1+cu117
tqdm 4.65.0
transformers 4.28.1
typing_extensions 4.5.0
tzdata 2023.3
uc-micro-py 1.0.2
urllib3 1.26.15
uvicorn 0.22.0
wandb 0.15.3
wavedrom 2.0.3.post3
wcwidth 0.2.6
websockets 11.0.3
wheel 0.40.0
yarl 1.9.2
zipp 3.15.0
*Please provide text logs and screenshots of the run*

- [x] **Base model**: LLaMA-Plus 13B/33B
- [x] **Operating system**: Linux
- [x] **Issue category**: model conversion and merging
- [x] **Model correctness check**: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, correct results and normal operation cannot be guaranteed.
- [x] (Required) Since the related dependencies are updated frequently, make sure to follow the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [x] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues without finding a similar problem or solution
- [x] (Required) Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
| closed | 2023-06-10T12:29:52Z | 2023-06-12T00:16:39Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/558 | [] | wuxiulike | 5 |
PrefectHQ/prefect | automation | 17,017 | Validation error when using anonymous volumes | ### Bug summary
It looks like Prefect's validation doesn't allow anonymous volumes. This is my volume configuration:
<img width="548" alt="Image" src="https://github.com/user-attachments/assets/8f91b48e-a923-4745-8a32-be386c68368f" />
That throws the following Validation error:
```
19:30:34.753 | ERROR | prefect.flow_runs.worker - Failed to submit flow run 'e8d1029c-8368-4063-a208-8bf8305b7c6e' to infrastructure.
Traceback (most recent call last):
File "/Users/anzepecar/app/.venv/lib/python3.12/site-packages/prefect/workers/base.py", line 1007, in _submit_run_and_capture_errors
configuration = await self._get_configuration(flow_run)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anzepecar/app/.venv/lib/python3.12/site-packages/prefect/workers/base.py", line 1105, in _get_configuration
configuration = await self.job_configuration.from_template_and_values(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anzepecar/app/.venv/lib/python3.12/site-packages/prefect/client/utilities.py", line 99, in with_injected_client
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anzepecar/app/.venv/lib/python3.12/site-packages/prefect/workers/base.py", line 188, in from_template_and_values
return cls(**populated_configuration)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/anzepecar/app/.venv/lib/python3.12/site-packages/pydantic/main.py", line 214, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for DockerWorkerJobConfiguration
volumes.1
Value error, Invalid volume string: '/opt/watchpointlabs/.venv' [type=value_error, input_value='/opt/watchpointlabs/.venv', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/value_error
19:30:34.769 | INFO | prefect.flow_runs.worker - Reported flow run 'e8d1029c-8368-4063-a208-8bf8305b7c6e' as crashed: Flow run could not be submitted to infrastructure:
1 validation error for DockerWorkerJobConfiguration
volumes.1
```
Is there a reason for not allowing anonymous volumes? They can be very useful for development purposes, as also mentioned in the [uv docs](https://docs.astral.sh/uv/guides/integration/docker/#mounting-the-project-with-docker-run).
I'm happy to open a PR that fixes this, let me know!
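For reference, a sketch of what a more permissive check might look like (an illustration of the rule `docker run -v` itself follows, not Prefect's actual validator; Windows drive-letter paths like `C:\x` would need extra care):

```python
def is_valid_volume(spec: str) -> bool:
    """Accept bind mounts ('src:dst[:mode]') and anonymous volumes ('/dst')."""
    parts = spec.split(":")
    if len(parts) == 1:              # anonymous volume: just a container path
        return spec.startswith("/")
    if len(parts) in (2, 3):         # named volume or bind mount, optional mode
        return all(parts[:2])
    return False

assert is_valid_volume("/opt/watchpointlabs/.venv")   # the rejected case above
assert is_valid_volume("mydata:/app/data")
assert not is_valid_volume("relative/path")
```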
### Version info
```Text
Version: 3.1.15
API version: 0.8.4
Python version: 3.12.6
Git commit: 3ac3d548
Built: Thu, Jan 30, 2025 11:31 AM
OS/Arch: darwin/arm64
Profile: local
Server type: server
Pydantic version: 2.10.6
Integrations:
prefect-docker: 0.6.2
```
### Additional context
_No response_ | closed | 2025-02-06T19:46:55Z | 2025-02-07T01:07:56Z | https://github.com/PrefectHQ/prefect/issues/17017 | [
"bug"
] | anze3db | 2 |
Lightning-AI/pytorch-lightning | pytorch | 20,249 | Shuffle order is the same across runs when using strategy='ddp' | ### Bug description
The batches and their order are the same across different executions of the script when using strategy='ddp' and dataloader with shuffle=True
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
Say you have train.py that prints the current input on each training iteration and has shuffling enabled in the
dataloader:
```python
import torch
from torch.utils.data import TensorDataset, DataLoader
import torch.nn.functional as F
import lightning.pytorch as pl
class SomeLightningModule(pl.LightningModule):
def __init__(self):
super().__init__()
self.p1 = torch.nn.Parameter(torch.tensor(0.0))
self.p2 = torch.nn.Parameter(torch.tensor(0.0))
def training_step(self, batch):
x, y = batch
print(x.item())
return F.mse_loss(x * self.p1 + self.p2, y)
def configure_optimizers(self):
optimizer = torch.optim.Adam(
self.parameters(),
)
return {
"optimizer": optimizer,
}
lightning_module = SomeLightningModule()
trainer = pl.Trainer(
strategy='ddp',
max_epochs=1,
)
train_dataset = TensorDataset(torch.arange(5).float(), torch.arange(5).float())
train_loader = DataLoader(train_dataset, shuffle=True)
trainer.fit(lightning_module, train_dataloaders=train_loader)
```
When strategy='ddp', the script will print the same numbers across different runs:
```
$ python3 train.py
4.0
0.0
1.0
3.0
2.0
$ python3 train.py
4.0
0.0
1.0
3.0
2.0
```
Such behavior can be unwanted, as people might want to try different orders of batches (e.g. to construct ensembles or get the average performance)
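For anyone hitting this: the order is deterministic because in DDP the `DistributedSampler` shuffles with a generator seeded from a fixed default seed plus the epoch number, so varying the seed between runs (e.g. via `pl.seed_everything(<different value per run>)`) varies the order. The mechanism can be illustrated with a stdlib-only sketch (no Lightning involved):

```python
import random

def epoch_order(seed: int, epoch: int, n: int = 5) -> list[int]:
    # mirrors the sampler's behavior: the shuffle is driven by
    # seed + epoch, so a fixed seed gives identical orders per run
    idx = list(range(n))
    random.Random(seed + epoch).shuffle(idx)
    return idx

# same seed -> identical order in every "run"
assert epoch_order(seed=0, epoch=0) == epoch_order(seed=0, epoch=0)
# changing the seed between runs changes the order
print(epoch_order(seed=0, epoch=0), epoch_order(seed=1, epoch=0))
```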
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- Graphics Device
- available: True
- version: 11.8
* Lightning:
- lightning: 2.2.0.post0
- lightning-utilities: 0.10.1
- pytorch-lightning: 1.7.7
- torch: 2.1.2
- torchaudio: 2.1.2
- torchmetrics: 0.10.3
- torchvision: 0.16.2
* Packages:
- absl-py: 1.3.0
- aiohttp: 3.8.3
- aiosignal: 1.3.1
- alphafold-colabfold: 2.3.6
- altair: 5.4.0
- anarci: 1.3
- antiberty: 0.1.3
- antlr4-python3-runtime: 4.9.3
- anyio: 3.5.0
- appdirs: 1.4.4
- argon2-cffi: 21.3.0
- argon2-cffi-bindings: 21.2.0
- asttokens: 2.0.5
- astunparse: 1.6.3
- async-lru: 2.0.4
- async-timeout: 4.0.2
- attrs: 22.1.0
- babel: 2.11.0
- backcall: 0.2.0
- beautifulsoup4: 4.12.2
- biopython: 1.79
- bleach: 4.1.0
- blinker: 1.5
- bottleneck: 1.3.5
- brotlipy: 0.7.0
- cached-property: 1.5.2
- cachetools: 5.2.0
- certifi: 2023.5.7
- cffi: 1.15.1
- charset-normalizer: 2.1.1
- chex: 0.1.86
- click: 8.1.3
- cmake: 3.28.3
- colabfold: 1.5.5
- colorama: 0.4.6
- comm: 0.1.2
- contextlib2: 21.6.0
- contourpy: 1.0.6
- cryptography: 38.0.3
- cycler: 0.11.0
- debugpy: 1.6.7
- decorator: 5.1.1
- deepspeed: 0.9.5
- defusedxml: 0.7.1
- dm-haiku: 0.0.12
- dm-tree: 0.1.8
- docker-pycreds: 0.4.0
- docstring-parser: 0.15
- einops: 0.8.0
- entrypoints: 0.4
- et-xmlfile: 1.1.0
- etils: 1.5.2
- exceptiongroup: 1.0.4
- executing: 0.8.3
- fastjsonschema: 2.16.2
- filelock: 3.13.1
- flatbuffers: 24.3.25
- flax: 0.8.5
- fonttools: 4.38.0
- frozenlist: 1.3.3
- fsspec: 2024.3.1
- gast: 0.6.0
- gdown: 5.1.0
- gemmi: 0.5.7
- gitdb: 4.0.9
- gitpython: 3.1.29
- gmpy2: 2.1.2
- google-auth: 2.14.1
- google-auth-oauthlib: 0.4.6
- google-pasta: 0.2.0
- grpcio: 1.49.1
- h5py: 3.11.0
- hjson: 3.1.0
- huggingface-hub: 0.22.2
- hydra-core: 1.3.2
- idna: 3.4
- immutabledict: 4.2.0
- importlib-metadata: 4.13.0
- importlib-resources: 6.1.2
- ipykernel: 6.25.0
- ipython: 8.15.0
- ipython-genutils: 0.2.0
- ipywidgets: 8.0.4
- jax: 0.3.25
- jaxlib: 0.3.25+cuda11.cudnn82
- jedi: 0.18.1
- jinja2: 3.1.2
- jmp: 0.0.4
- json5: 0.9.6
- jsonargparse: 4.27.5
- jsonschema: 4.17.3
- jupyter: 1.0.0
- jupyter-client: 7.4.9
- jupyter-console: 6.6.3
- jupyter-core: 5.5.0
- jupyter-events: 0.6.3
- jupyter-lsp: 2.2.0
- jupyter-server: 2.10.0
- jupyter-server-terminals: 0.4.4
- jupyterlab: 4.0.8
- jupyterlab-pygments: 0.1.2
- jupyterlab-server: 2.22.0
- jupyterlab-widgets: 3.0.9
- keras: 3.4.1
- kiwisolver: 1.4.4
- libclang: 18.1.1
- lightning: 2.2.0.post0
- lightning-utilities: 0.10.1
- lit: 18.1.1
- markdown: 3.4.1
- markdown-it-py: 3.0.0
- markupsafe: 2.1.1
- matplotlib: 3.6.2
- matplotlib-inline: 0.1.6
- mdurl: 0.1.2
- mistune: 2.0.4
- mkl-fft: 1.3.1
- mkl-random: 1.2.2
- mkl-service: 2.4.0
- ml-collections: 0.1.1
- ml-dtypes: 0.3.2
- mmcif-pdbx: 2.0.1
- mpi4py: 3.1.4
- mpmath: 1.3.0
- msgpack: 1.0.8
- multidict: 6.0.2
- munkres: 1.1.4
- namex: 0.0.8
- narwhals: 1.5.0
- nbclient: 0.8.0
- nbconvert: 7.10.0
- nbformat: 5.9.2
- nest-asyncio: 1.5.6
- networkx: 3.1
- ninja: 1.11.1
- notebook: 6.3.0
- notebook-shim: 0.2.3
- numexpr: 2.8.4
- numpy: 1.23.5
- oauthlib: 3.2.2
- omegaconf: 2.3.0
- openpyxl: 3.1.5
- opt-einsum: 3.3.0
- optax: 0.2.2
- optree: 0.11.0
- orbax-checkpoint: 0.5.20
- overrides: 7.4.0
- packaging: 21.3
- pandas: 1.5.3
- pandocfilters: 1.5.0
- parso: 0.8.3
- path: 16.2.0
- pathtools: 0.1.2
- pdb2pqr: 3.6.1
- pexpect: 4.8.0
- pickleshare: 0.7.5
- pillow: 9.2.0
- pip: 22.3.1
- platformdirs: 3.10.0
- ply: 3.11
- pmw: 2.0.1
- pooch: 1.6.0
- prody: 2.2.0
- prometheus-client: 0.14.1
- promise: 2.3
- prompt-toolkit: 3.0.43
- propka: 3.5.1
- protobuf: 4.21.9
- psutil: 5.9.4
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- py-cpuinfo: 9.0.0
- py3dmol: 2.0.4
- pyasn1: 0.4.8
- pyasn1-modules: 0.3.0
- pycollada: 0.8
- pycparser: 2.21
- pydantic: 1.10.11
- pydeprecate: 0.3.2
- pygments: 2.15.1
- pyjwt: 2.6.0
- pykerberos: 1.2.4
- pymol: 2.5.5
- pyopenssl: 22.1.0
- pyparsing: 3.0.9
- pyqt5: 5.15.7
- pyqt5-sip: 12.11.0
- pyrsistent: 0.20.0
- pysocks: 1.7.1
- python-dateutil: 2.8.2
- python-json-logger: 2.0.7
- pytorch-lightning: 1.7.7
- pytz: 2022.7
- pyu2f: 0.1.5
- pyyaml: 6.0
- pyzmq: 25.1.0
- qtconsole: 5.5.1
- qtpy: 2.4.1
- regex: 2023.12.25
- requests: 2.28.1
- requests-oauthlib: 1.3.1
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rich: 13.7.1
- rjieba: 0.1.11
- rsa: 4.9
- safetensors: 0.4.2
- scipy: 1.10.1
- seaborn: 0.13.2
- send2trash: 1.8.2
- sentry-sdk: 1.11.0
- setproctitle: 1.3.2
- setuptools: 59.5.0
- shortuuid: 1.0.11
- sip: 6.7.12
- six: 1.16.0
- smmap: 3.0.5
- sniffio: 1.2.0
- soupsieve: 2.5
- stack-data: 0.2.0
- sympy: 1.12
- tabulate: 0.9.0
- tensorboard: 2.16.2
- tensorboard-data-server: 0.7.2
- tensorboard-plugin-wit: 1.8.1
- tensorflow-cpu: 2.16.2
- tensorflow-io-gcs-filesystem: 0.37.0
- tensorstore: 0.1.63
- termcolor: 2.4.0
- terminado: 0.17.1
- tinycss2: 1.2.1
- tmtools: 0.2.0
- tokenizers: 0.15.2
- toml: 0.10.2
- tomli: 2.0.1
- toolz: 0.12.0
- torch: 2.1.2
- torchaudio: 2.1.2
- torchmetrics: 0.10.3
- torchvision: 0.16.2
- tornado: 6.3.3
- tqdm: 4.64.1
- trainable-folding: 0.0.0
- traitlets: 5.7.1
- transformers: 4.39.3
- triton: 2.1.0
- tunedabs: 0.0.1
- typeshed-client: 2.5.1
- typing-extensions: 4.10.0
- unicodedata2: 15.0.0
- urllib3: 1.26.11
- wandb: 0.13.5
- wcwidth: 0.2.5
- webencodings: 0.5.1
- websocket-client: 0.58.0
- werkzeug: 2.2.2
- wheel: 0.40.0
- widgetsnbextension: 4.0.5
- wrapt: 1.16.0
- yarl: 1.8.1
- zipp: 3.10.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.9.13
- release: 3.10.0-693.17.1.el7.x86_64
- version: #1 SMP Thu Jan 25 20:13:58 UTC 2018
</details>
### More info
_No response_ | open | 2024-09-05T17:40:58Z | 2024-10-25T08:54:44Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20249 | [
"bug",
"needs triage",
"ver: 2.2.x"
] | bogdanmagometa | 2 |
rthalley/dnspython | asyncio | 670 | Support DoH over HTTP/2 | `dns.query.https` currently queries DoH endpoints with HTTP/1.1 requests with no way to switch to HTTP/2. This is a problem for querying endpoints supporting only HTTP/2 (such as `odvr.nic.cz/dns-query`).
I realize that `requests` are [unlikely](https://github.com/psf/requests/issues/5757) to add HTTP/2 support and `hyper` (which provided `requests` integration) is [no longer supported](https://github.com/python-hyper/hyper), `httpx` seems hopeful (has HTTP/2 and is actively developed) but it's in beta and [doesn't provide](https://www.python-httpx.org/compatibility/) drop-in `requests.Session` replacement.
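Whichever client ends up providing HTTP/2, the DoH framing itself is client-agnostic. A stdlib sketch of building the RFC 8484 GET-style request URL (the server URL is the one from this issue):

```python
import base64

def doh_get_url(server: str, wire: bytes) -> str:
    # RFC 8484: base64url-encode the DNS wire-format message and
    # strip the padding before placing it in the "dns" parameter
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{server}?dns={b64}"

print(doh_get_url("https://odvr.nic.cz/dns-query", b"\x00\x01"))
# https://odvr.nic.cz/dns-query?dns=AAE
```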
I'm opening this issue in hope of ongoing discussion: Maybe `httpx` becomes stable enough to warrant a switch to it from `requests`. Maybe we can hack something around `libcurl` for HTTP/2 support. | closed | 2021-06-17T11:44:19Z | 2021-11-20T14:38:12Z | https://github.com/rthalley/dnspython/issues/670 | [
"Enhancement Request"
] | balaziks | 6 |
ccxt/ccxt | api | 24,928 | Greetings, similar question | > @sc0Vu Thank you, I thought that it called same everywhere. Found what I was looking for everywhere except the exchange Gate, there are such a method, it called - GET /wallet/currency_chains. But I can t find function that use it in ccxt. Maybe you would be so kind as to tell me.
_Originally posted by @AlwxDavydov in [#19706](https://github.com/ccxt/ccxt/issues/19706#issuecomment-1782779263)_
_____________________________
Hello, can you please tell me how you found these endpoints? Especially for Bybit, KuCoin, MEXC, and HTX.
huggingface/transformers | tensorflow | 36,321 | Config' object has no attribute 'get_text_config on 4.49.0 VS 4.46.0 all OK | Hello, was using ComfyUI node for BiRefNet models (https://github.com/MoonHugo/ComfyUI-BiRefNet-Hugo) with Transformers 4.46.0 and it was running all perfect. Now, when I upgrade to transformers 4.49.0 it stops with the error "Config' object has no attribute 'get_text_config". If I downgrade back to 4.46.0 version the node works well again. So I stuck on transformers 4.46.0. The case is that ComfyUI when updates, collect 4.49.0 and i need to downgrade every time. Is there any easy way to adapt the code by some manner for not getting the error "Config' object has no attribute 'get_text_config" transformers related when it upgrades from 4.46.0 to 4.49.0? What i need to change? Would be very grateful for your help. | closed | 2025-02-21T07:31:47Z | 2025-02-24T10:23:59Z | https://github.com/huggingface/transformers/issues/36321 | [] | MegaCocos | 12 |
man-group/arctic | pandas | 789 | Accessing keep_mins kwarg in _prune_previous_versions | Hello,
I was wondering how the user can set the keep_mins kwarg when removing older versions. Thank you.
https://github.com/manahl/arctic/blob/722316c0f9fa7d1d7b757483b8573b57169d97ca/arctic/store/version_store.py#L858 | closed | 2019-06-25T20:58:33Z | 2019-07-05T17:41:22Z | https://github.com/man-group/arctic/issues/789 | [] | mschrem | 6 |
Farama-Foundation/Gymnasium | api | 871 | [Bug Report] max_episode_steps is not passed to the env's spec attribute anymore | ### Describe the bug
In [previous versions of gym](https://github.com/openai/gym/blob/dcd185843a62953e27c2d54dc8c2d647d604b635/gym/envs/registration.py#L502C1-L503C1), an env registered with `max_episode_steps=N` could see its `env.spec.max_episode_steps` reflect this value.
Now this attribute is automatically [set to None](https://github.com/Farama-Foundation/Gymnasium/blob/046c76f623675e3bf4c43e701e025c676d0b420f/gymnasium/envs/registration.py#L758-L769) even if the env is explicitly [registered with this](https://github.com/vikashplus/robohive/blob/ef6f2c3deb93555d779bb3f9af0b3c21414c6bc0/robohive/envs/fm/__init__.py#L19-L28)
Would it make sense to keep the value from the registration in the env spec, or set it to None only if `max_episode_steps` is passed when `make` is called, i.e.
```python
# max_episode_steps is proper to the env
register(envname0, max_episode_steps=N)
make(envname0) # env.unwrapped.spec.max_episode_steps == N
# max_episode_steps is just there to tell us to wrap it in a TimeLimit
register(envname1, max_episode_steps=None)
make(envname1, max_episode_steps=None) # env.unwrapped.spec.max_episode_steps == None
```
Otherwise, it's hard for us to know what the env horizon is (we don't need a TimeLimit, the env is terminated at `max_episode_steps` regardless of that)
Happy to make a PR to solve this issue
cc @vikashplus
### Code example
_No response_
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2024-01-11T08:05:18Z | 2024-01-21T19:31:18Z | https://github.com/Farama-Foundation/Gymnasium/issues/871 | [
"bug"
] | vmoens | 17 |
cvat-ai/cvat | computer-vision | 8,450 | Didn't receive all labels for images when downloading dataset in YOLOv8 detection format | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Right click on project
2. Click export dataset
3. Export in YOLOv8 detection format
### Expected Behavior
I expected to receive labels for all images, but only received roughly half of them.
### Possible Solution
You can export as YOLOv8 Oriented Bounding Boxes, which gives you all the labels, but the label format is different. Also, the labels don't align with the annotated images they're attached to, e.g. frame245 has 2 shapes while the label only has one shape.
### Context
I can't train a model with incomplete data, I have 211 images and only received 115 labels.
### Environment
```Markdown
OS: Windows 11
```
| closed | 2024-09-17T13:48:11Z | 2024-10-08T16:20:05Z | https://github.com/cvat-ai/cvat/issues/8450 | [
"bug",
"need info"
] | benjiroooo | 1 |
InstaPy/InstaPy | automation | 5,941 | don't get this bot | This bot is a cheat: it follows the same people over and over, and likes and comments on their posts. I tried multiple accounts, changed IP, and blocked those people, but then it gives me an error. I didn't even give a follow command, yet it's following them, and it's always the same people. Nice cheat. Of course the bot is free, because you are making a profit with this job. NICE JOB
| closed | 2020-12-07T17:09:42Z | 2021-01-19T01:02:34Z | https://github.com/InstaPy/InstaPy/issues/5941 | [
"wontfix"
] | dphenom21 | 2 |
autokey/autokey | automation | 713 | updated libnotify 0.8.0-1 breaks autokey-gtk on arch linux | ### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Crash/Hang/Data loss
### Which Linux distribution did you use?
OS: EndeavourOS Linux x86_64
Kernel: 5.15.54-1-lts
Packages: 1569 (pacman)
Shell: zsh 5.9
Resolution: 1920x1080
WM: bspwm
CPU: Intel Core 2 Duo P8700 (2) @ 2.534GHz
GPU: AMD ATI Mobility Radeon HD 4650/5165
Memory: 1242MiB / 3892MiB
### Which AutoKey GUI did you use?
GTK
### Which AutoKey version did you use?
0.96.0-beta.10
### How did you install AutoKey?
yay -S autokey (or autokey-git)
### Can you briefly describe the issue?
simply crashing
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
1. have autokey installed
2. have libnotify 0.7.12-1 installed
3. sudo pacman -Suy (thus getting the upgraded libnotify (0.7.12-1 -> 0.8.0-1))
4. reboot
### What should have happened?
autokey-gtk should load normally
### What actually happened?
autokey-gtk crashed
### Do you have screenshots?
Not any longer, because I've fixed it by downgrading libnotify from 0.8.0-1 back to 0.7.12-1
### Can you provide the output of the AutoKey command?
```bash
Not any longer, because I've fixed it by downgrading libnotify from 0.8.0-1 back to 0.7.12-1
```
### Anything else?
My (hopefully temporary) solution has been
$ sudo downgrade libnotify
from 0.8.0-1 back to 0.7.12-1
| closed | 2022-07-16T06:27:38Z | 2022-07-20T19:42:24Z | https://github.com/autokey/autokey/issues/713 | [
"upstream bug"
] | pierostrada | 12 |
RobertCraigie/prisma-client-py | pydantic | 252 | Prompt the user to specify recursive type depth if they haven't already | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently, I imagine that most people will just use pseudo-recursive types as that is the default and they don't know that there is a better option. We should try and make this option more visible.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should output a message indicating that the recursive type depth option should be set in the schema if it is not already, e.g.
Message should be shown when:
```prisma
generator client {
provider = "prisma-client-py"
}
```
But not when the option is set:
```prisma
generator client {
provider = "prisma-client-py"
recursive_type_depth = 5
}
```
Message should look something like:
```
Some types are disabled by default due to being incompatible with Mypy, it is highly recommended
to use Pyright instead and configure Prisma Python to use recursive types to re-enable certain types:
generator client {
provider = "prisma-client-py"
recursive_type_depth = -1
}
If you need to use Mypy, you can also disable this message by explicitly setting the default value:
generator client {
provider = "prisma-client-py"
recursive_type_depth = 5
}
For more information see: https://prisma-client-py.readthedocs.io/en/stable/reference/limitations/#default-type-limitations
``` | closed | 2022-01-28T14:55:41Z | 2022-05-22T12:49:51Z | https://github.com/RobertCraigie/prisma-client-py/issues/252 | [
"kind/improvement",
"good first issue",
"level/beginner",
"priority/medium"
] | RobertCraigie | 0 |
explosion/spaCy | nlp | 13,190 | Spacy high memory consumption issue | Hello,
I am running a spaCy model with the English medium weights inside a Kubernetes pod.
From what I observe, the model takes around 500 MB after loading, and memory keeps increasing after every prediction.
Even after deleting the spaCy object, the memory is not released.
I have allotted around 1 GB of memory to my pod, but after a few hours it consumes all of it and the pod gets stuck.
Could you please suggest how to release the memory and ensure it does not grow with the number of predictions?
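One containment pattern while the root cause is investigated (a sketch with hypothetical stand-ins, not spaCy-specific API) is to rebuild the model object every N predictions so that unbounded cache growth is cut off:

```python
class RecyclingModel:
    """Reload the underlying model every `every` calls."""

    def __init__(self, load_model, every: int = 1000):
        self._load = load_model   # e.g. lambda: spacy.load("en_core_web_md")
        self._every = every
        self._count = 0
        self._model = load_model()

    def __call__(self, text):
        if self._count and self._count % self._every == 0:
            # drop the old object (and whatever it cached) for a fresh one
            self._model = self._load()
        self._count += 1
        return self._model(text)
```

The trade-off is a periodic reload cost (~seconds for a medium spaCy model) in exchange for bounded memory.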
| closed | 2023-12-08T19:00:46Z | 2023-12-11T07:45:34Z | https://github.com/explosion/spaCy/issues/13190 | [
"perf / memory"
] | nikhilcms | 1 |
Nemo2011/bilibili-api | api | 670 | [Question] How to search for more than 1000 results? | **Python version:** 3.10
**Module version:** 16.1.1
**Runtime environment:** Windows
<!-- Be sure to provide the module version and make sure it is the latest -->
---
Currently, `search.search_by_type()` can only return up to 50 pages of 20 items each, 1000 items in total. What should I do to get more data?
| closed | 2024-02-04T14:13:06Z | 2024-03-15T14:19:10Z | https://github.com/Nemo2011/bilibili-api/issues/670 | [
"question"
] | tomriddle1234 | 3 |
deepspeedai/DeepSpeed | machine-learning | 6,007 | [BUG] Trainer saves global_steps300 in LoRA training with deepspeed | **Describe the bug**
I trained Llama 2 with DeepSpeed and the Trainer on 2 GPUs, but when saving a checkpoint with the following configuration, DeepSpeed saves a large folder, global_step50, which is 44 GB. How can I automatically **not** save this folder? I just need the adapter checkpoints.

**To Reproduce**
Steps to reproduce the behavior:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "none",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 100,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
**Expected behavior**
For LoRA training, I just need adapter as checkpoints.
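If resuming training from the checkpoint is genuinely not needed, one blunt workaround (a hypothetical helper, not a DeepSpeed API; note that deleting these folders makes resuming impossible, since they hold the optimizer and partition state) is to prune the DeepSpeed state directories after each save:

```python
import os
import shutil

def prune_global_step_dirs(checkpoint_dir: str) -> list[str]:
    # remove DeepSpeed optimizer/partition state ("global_step*"
    # folders), keeping only the model/adapter files
    removed = []
    for name in sorted(os.listdir(checkpoint_dir)):
        path = os.path.join(checkpoint_dir, name)
        if name.startswith("global_step") and os.path.isdir(path):
            shutil.rmtree(path)
            removed.append(name)
    return removed
```

This could be wired into a Trainer callback that runs after each checkpoint save.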
**Screenshots**

| open | 2024-08-16T07:49:01Z | 2024-08-16T07:50:03Z | https://github.com/deepspeedai/DeepSpeed/issues/6007 | [
"bug",
"training"
] | YerongLi | 0 |
fastapi/sqlmodel | pydantic | 336 | How to reuse SelectOfScalar[Sequence] | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
import sqlalchemy
from sqlmodel import select
statement = select(Foo).join(xxx).where(xxx)
count_statement = select([sqlalchemy.func.count(Foo.id)]).join(xxx).where(xxx)
```
### Description
There are several statements that share the same subquery conditions. How can I reuse the shared select clause sequence, such as `.join(xxx).where(xxx)`?
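Since `sqlmodel.select()` builds on SQLAlchemy statements, the shared clause chain can be captured once as a function and applied to every statement that needs it. A sketch in plain SQLAlchemy Core (the `foo` table and the filter are hypothetical; with `sqlmodel.select` the pattern is the same):

```python
import sqlalchemy as sa

metadata = sa.MetaData()
foo = sa.Table(
    "foo",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String),
)

def with_common_filters(stmt):
    # the shared ".where(...)" chain, written exactly once
    return stmt.where(foo.c.name == "bar")

rows_stmt = with_common_filters(sa.select(foo))
count_stmt = with_common_filters(sa.select(sa.func.count(foo.c.id)))

print(rows_stmt)
```

Because statements are immutable builders, each call returns a new statement, so the two results don't interfere with each other.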
### Operating System
Linux, macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
Python 3.6.9
### Additional Context
_No response_ | closed | 2022-05-10T13:56:12Z | 2022-06-10T06:02:59Z | https://github.com/fastapi/sqlmodel/issues/336 | [
"question"
] | northtree | 1 |
PrefectHQ/prefect | automation | 17,513 | Add PREFECT_API_URL config setup step before the work pool creation for self hosted | ### Describe the current behavior
Currently, when a new user follows the `self-hosted` [documentation](https://docs.prefect.io/v3/tutorials/schedule) and runs `prefect worker start --pool my-work-pool`, they will get the following error:
```
:~/prefect/prefect$ prefect worker start --pool my-work-pool
Traceback (most recent call last):
File "/home/abhi/.local/lib/python3.9/site-packages/prefect/cli/_utilities.py", line 44, in wrapper
return fn(*args, **kwargs)
File "/home/abhi/.local/lib/python3.9/site-packages/prefect/cli/_types.py", line 155, in sync_fn
return asyncio.run(async_fn(*args, **kwargs))
File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/home/abhi/.local/lib/python3.9/site-packages/prefect/cli/worker.py", line 126, in start
is_queues_paused = await _check_work_queues_paused(
File "/home/abhi/.local/lib/python3.9/site-packages/prefect/cli/worker.py", line 208, in _check_work_queues_paused
wqs = await client.read_work_queues(
File "/home/abhi/.local/lib/python3.9/site-packages/prefect/client/orchestration/__init__.py", line 1141, in read_work_queues
response = await self._client.post(
File "/home/abhi/.local/lib/python3.9/site-packages/httpx/_client.py", line 1859, in post
return await self.request(
File "/home/abhi/.local/lib/python3.9/site-packages/httpx/_client.py", line 1540, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/home/abhi/.local/lib/python3.9/site-packages/prefect/client/base.py", line 354, in send
response.raise_for_status()
File "/home/abhi/.local/lib/python3.9/site-packages/prefect/client/base.py", line 162, in raise_for_status
raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__
prefect.exceptions.PrefectHTTPStatusError: Client error '403 Forbidden' for url 'https://github.com/prefecthq/demos.git/work_pools/my-work-pool/queues/filter'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
An exception occurred.
```
which indicates that `PREFECT_API_URL` is not set up, since this step is not mentioned in the documentation.
### Describe the proposed behavior
Add a `PREFECT_API_URL` configuration step before the work pool creation step for self-hosted deployments.
Something like this:
```bash
:~/prefect/prefect$ prefect config set PREFECT_API_URL="http://127.0.0.1:4200/api"
Set 'PREFECT_API_URL' to 'http://127.0.0.1:4200/api'.
Updated profile 'local'.
```
This will solve the issue
### Example Use
_No response_
### Additional context
_No response_ | open | 2025-03-17T18:01:52Z | 2025-03-17T18:01:52Z | https://github.com/PrefectHQ/prefect/issues/17513 | [
"enhancement"
] | octonawish-akcodes | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,170 | Failed to execute script 'UVR' due to unhandled exception | After updating to the newest version I'm getting the following error when trying to run the program:
`Failed to execute script 'UVR' due to unhandled exception: cannot import name '_get_cpp_backtrace' from 'torch._C' (D:\Ultilate Vocal Remover\Ultilate Vocal Remover\torch\_C.cp39-win_amd64.pyd)`
```
Traceback (most recent call last):
File "UVR.py", line 21, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
File "torch\__init__.py", line 649, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
File "torch\_tensor.py", line 12, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
File "torch\utils\__init__.py", line 6, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module
File "torch\utils\cpp_backtrace.py", line 1, in <module>
ImportError: cannot import name '_get_cpp_backtrace' from 'torch._C' (D:\Ultimate Vocal Remover\Ultimate Vocal Remover\torch\_C.cp39-win_amd64.pyd)
```
I updated by downloading the patch on the releases page. I am running Windows 10 on Intel CPU. I am a bit confused why the .pyd file says "amd". Could that be the issue? | open | 2024-02-16T12:04:10Z | 2024-05-21T04:48:52Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1170 | [] | SCSR-is-too-short-username | 1 |
BayesWitnesses/m2cgen | scikit-learn | 350 | Add support for Naive Bayes | Hi folks, should be possible support sklearn stacking and naives bayes? | closed | 2021-02-09T22:37:00Z | 2022-01-26T17:27:03Z | https://github.com/BayesWitnesses/m2cgen/issues/350 | [
"enhancement"
] | rspadim | 4 |
dask/dask | scikit-learn | 11,018 | `vindex` as outer indexer: memory and time performance | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
Emulating outer indexing via `vindex` + `np.ix_` appears to be much slower and more memory intensive than indexing twice (prohibitively so for very large arrays: for a 1,000,000x1,000,000 array, it tried to allocate 1.5TB of memory). I know this is basically stated in the docs, but maybe there is something to be done here? If not, feel free to close.
**Minimal Complete Verifiable Example**:
```python
%load_ext memory_profiler
import dask.array as da
import numpy as np
import scipy as sp
chunksize = 100
size = 10_000
n_points = 5000
X = da.random.poisson(15, (size, size), chunks = (chunksize, chunksize))
index_0 = np.random.randint(0, X.shape[0], n_points)
index_0.sort()
index_1 = np.random.randint(0, X.shape[1], n_points)
index_1.sort()
print('vindex timing:')
%timeit X.vindex[np.ix_(index_0, index_1)].compute()
print('vindex memory usage:')
%memit X.vindex[np.ix_(index_0, index_1)]
print('double-index timing:')
%timeit X[index_0, :][:, index_1].compute()
print('double-index memory usage:')
%memit X[index_0, :][:, index_1]
```
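A back-of-the-envelope estimate (my own arithmetic, not from the report or the dask docs) of why the `vindex`/`np.ix_` path is so memory-hungry: it effectively requests the dense `n_points x n_points` selection in one shot, whereas double indexing only ever materializes one axis of the selection at a time:

```python
def outer_index_bytes(n_points: int, itemsize: int = 8) -> int:
    # size of the dense result of indexing with np.ix_(index_0, index_1)
    return n_points * n_points * itemsize

print(outer_index_bytes(5_000) / 1e9, "GB")        # 0.2 GB
print(outer_index_bytes(1_000_000) / 1e12, "TB")   # 8.0 TB
```

For the 1e6 case the dense product alone is terabyte-scale, consistent in magnitude with the allocation failure described above.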
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.3.1
- Python version: 3.12
- Operating System: mac
- Install method (conda, pip, source): pip
| closed | 2024-03-23T20:41:48Z | 2025-01-02T17:00:05Z | https://github.com/dask/dask/issues/11018 | [
"array",
"needs triage"
] | ilan-gold | 1 |
sktime/sktime | scikit-learn | 7,855 | [DOC] Add Documentation for `_safe_import()` utility | #7702 adds a `_safe_import()` utility for isolation of soft dependencies. Earlier vendoring a new library in `sktime` or interfacing one required `_check_soft_dependencies()` to check if any soft dependency like `torch` or `transformers` is present in the environment, if it is present then it would import them or else create dummy classes and attributes. But this design was not extensible - for each new library interfaced in `sktime` one had to create new set of dummy classes and attributes.
The `_safe_import()` utility solves this redundancy by an extensible design. So it would be nice to have it documented. There is a page concerning dependencies in sktime: [https://www.sktime.net/en/latest/developer_guide/dependencies.html](https://www.sktime.net/en/latest/developer_guide/dependencies.html) and opinions would be appreciated if `_safe_import()` should be documented on a new page altogether or the dependencies page? | open | 2025-02-17T19:37:58Z | 2025-02-17T19:46:56Z | https://github.com/sktime/sktime/issues/7855 | [
"documentation"
] | jgyasu | 1 |
noirbizarre/flask-restplus | flask | 591 | Marshall fields from a session.query | I work a lot with geo-stuff, and often data is stored as binary geometry but converted to a JSON representation for the browser. This poses a problem for flask-restplus: I can't really declare my model, as one of the fields is based on a function, but if I write the model as a session.query object, I don't get an object with field names to marshal.
My session query below:
`db.session.query(Model.r_id,Model.r_width,Model.r_length,Model.wb_id,func.ST_AsGeoJson(func.ST_Transform(Model.geom,4326))).filter(Model.wb_id == str(wbid)).all()`
This gives an array of tuples, without any identifiers.
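As a stopgap, the positional tuples can be given names to marshal against. A stdlib sketch (field names taken from the query above; `geojson` is my own label for the `ST_AsGeoJson` column):

```python
from collections import namedtuple

FIELDS = ("r_id", "r_width", "r_length", "wb_id", "geojson")
Row = namedtuple("Row", FIELDS)

def named_rows(result):
    # wrap the positional tuples returned by session.query(...).all()
    # so marshalling code can address each field by name
    return [Row(*r) for r in result]

rows = named_rows([(1, 2.0, 3.0, "wb7", "{}")])
print(rows[0].geojson)
# {}
```

The field order in `FIELDS` must match the column order in the `session.query(...)` call.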
Is there a way of mapping field positions such as you get as the result of a `session.query` to a list of field names? Otherwise I have to use database views, which slightly defeats the object of an ORM. | open | 2019-02-11T16:04:57Z | 2019-03-27T09:44:11Z | https://github.com/noirbizarre/flask-restplus/issues/591 | [
"Needed: Feedback"
] | stev-0 | 1 |
pydantic/pydantic-ai | pydantic | 857 | 'OpenAIModel' object has no attribute 'client' | I am running a local Ollama instance and want to try an Ollama model, but when I try to run it, it returns an 'OpenAIModel' object has no attribute 'client' error.
```
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
class CityLocation(BaseModel):
city: str
country: str
ollama_model = OpenAIModel(model_name='llama2', base_url='http://127.0.0.1:11434')
agent = Agent(ollama_model, result_type=CityLocation)
result = agent.run_sync('Where were the olympics held in 2012?')
print(result.data)
#> city='London' country='United Kingdom'
print(result.usage())
```
This is the exact error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[85], line 9
6 ollama_model = OpenAIModel(model_name='llama3.2', base_url='http://127.0.0.1:11434')
7 agent = Agent(ollama_model, result_type=CityLocation)
----> 9 result = agent.run_sync('Where were the olympics held in 2012?')
File ~\AppData\Roaming\Python\Python311\site-packages\pydantic_ai\agent.py:432, in Agent.run_sync(self, user_prompt, result_type, message_history, model, deps, model_settings, usage_limits, usage, infer_name)
430 if infer_name and self.name is None:
431 self._infer_name(inspect.currentframe())
--> 432 return asyncio.get_event_loop().run_until_complete(
433 self.run(
434 user_prompt,
435 result_type=result_type,
436 message_history=message_history,
437 model=model,
438 deps=deps,
439 model_settings=model_settings,
440 usage_limits=usage_limits,
441 usage=usage,
442 infer_name=False,
443 )
444 )
File ~\AppData\Roaming\Python\Python311\site-packages\nest_asyncio.py:98, in _patch_loop.<locals>.run_until_complete(self, future)
95 if not f.done():
96 raise RuntimeError(
97 'Event loop stopped before Future completed.')
---> 98 return f.result()
File C:\Program Files\Python311\Lib\asyncio\futures.py:203, in Future.result(self)
201 self.__log_traceback = False
202 if self._exception is not None:
--> 203 raise self._exception.with_traceback(self._exception_tb)
204 return self._result
File C:\Program Files\Python311\Lib\asyncio\tasks.py:267, in Task.__step(***failed resolving arguments***)
263 try:
264 if exc is None:
265 # We use the `send` method directly, because coroutines
266 # don't have `__iter__` and `__next__` methods.
--> 267 result = coro.send(None)
268 else:
269 result = coro.throw(exc)
File ~\AppData\Roaming\Python\Python311\site-packages\pydantic_ai\agent.py:340, in Agent.run(self, user_prompt, message_history, model, deps, model_settings, usage_limits, usage, result_type, infer_name)
332 start_node = _agent_graph.UserPromptNode[AgentDepsT](
333 user_prompt=user_prompt,
334 system_prompts=self._system_prompts,
335 system_prompt_functions=self._system_prompt_functions,
336 system_prompt_dynamic_functions=self._system_prompt_dynamic_functions,
337 )
339 # Actually run
--> 340 end_result, _ = await graph.run(
341 start_node,
342 state=state,
343 deps=graph_deps,
344 infer_name=False,
345 )
347 # Build final run result
348 # We don't do any advanced checking if the data is actually from a final result or not
349 return result.RunResult(
350 state.message_history,
351 new_message_index,
(...)
354 state.usage,
355 )
File ~\AppData\Roaming\Python\Python311\site-packages\pydantic_graph\graph.py:187, in Graph.run(self, start_node, state, deps, infer_name)
185 next_node = start_node
186 while True:
--> 187 next_node = await self.next(next_node, history, state=state, deps=deps, infer_name=False)
188 if isinstance(next_node, End):
189 history.append(EndStep(result=next_node))
File ~\AppData\Roaming\Python\Python311\site-packages\pydantic_graph\graph.py:263, in Graph.next(self, node, history, state, deps, infer_name)
261 start_ts = _utils.now_utc()
262 start = perf_counter()
--> 263 next_node = await node.run(ctx)
264 duration = perf_counter() - start
266 history.append(
267 NodeStep(state=state, node=node, start_ts=start_ts, duration=duration, snapshot_state=self.snapshot_state)
268 )
File ~\AppData\Roaming\Python\Python311\site-packages\pydantic_ai\_agent_graph.py:249, in ModelRequestNode.run(self, ctx)
246 ctx.state.run_step += 1
248 with _logfire.span('preparing model and tools {run_step=}', run_step=ctx.state.run_step):
--> 249 agent_model = await _prepare_model(ctx)
251 # Actually make the model request
252 model_settings = merge_model_settings(ctx.deps.model_settings, None)
File ~\AppData\Roaming\Python\Python311\site-packages\pydantic_ai\_agent_graph.py:223, in _prepare_model(ctx)
220 await asyncio.gather(*map(add_tool, ctx.deps.function_tools.values()))
222 result_schema = ctx.deps.result_schema
--> 223 return await run_context.model.agent_model(
224 function_tools=function_tool_defs,
225 allow_text_result=_allow_text_result(result_schema),
226 result_tools=result_schema.tool_defs() if result_schema is not None else [],
227 )
File ~\AppData\Roaming\Python\Python311\site-packages\pydantic_ai\models\openai.py:132, in OpenAIModel.agent_model(self, function_tools, allow_text_result, result_tools)
129 if result_tools:
130 tools += [self._map_tool_definition(r) for r in result_tools]
131 return OpenAIAgentModel(
--> 132 self.client,
133 self.model_name,
134 allow_text_result,
135 tools,
136 self.system_prompt_role,
137 )
AttributeError: 'OpenAIModel' object has no attribute 'client'
``` | closed | 2025-02-06T09:22:37Z | 2025-02-07T03:29:09Z | https://github.com/pydantic/pydantic-ai/issues/857 | [] | edilberto-pajunar | 2 |
davidsandberg/facenet | tensorflow | 370 | cluster random images into folders | Hi
I have a set of random images in a folder. How can I cluster similar images into specific folders? I tried the LBP approach, but it did not solve the problem. Using facenet, please suggest how I can achieve this.
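Not facenet's own code — a minimal sketch of the usual approach (made-up data and a made-up distance threshold): compute one embedding vector per image with the facenet model, then greedily group images whose embeddings are close, and move each group into its own folder.

```python
import numpy as np

def greedy_cluster(embeddings, threshold=1.0):
    """Group row vectors whose L2 distance to a cluster's first member is < threshold."""
    clusters = []  # list of (representative vector, [indices])
    for i, e in enumerate(embeddings):
        for rep, idxs in clusters:
            if np.linalg.norm(e - rep) < threshold:
                idxs.append(i)
                break
        else:
            clusters.append((e.copy(), [i]))
    return [idxs for _, idxs in clusters]

# Toy stand-ins for facenet embeddings (in practice: run each image through
# the facenet model to get one 128-d vector per image).
rng = np.random.default_rng(0)
emb = np.vstack([
    rng.normal(0.0, 0.05, size=(3, 128)),   # three images of "person A"
    rng.normal(5.0, 0.05, size=(2, 128)),   # two images of "person B"
])
groups = greedy_cluster(emb, threshold=2.0)
print(groups)  # → [[0, 1, 2], [3, 4]]
```

Each group in `groups` can then be moved into its own folder with `shutil.move`. The threshold here is arbitrary; for real face embeddings, something like sklearn's DBSCAN on the distance matrix is a more robust choice.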
Thanks
vij | closed | 2017-07-12T10:12:56Z | 2017-12-04T07:25:21Z | https://github.com/davidsandberg/facenet/issues/370 | [] | myinzack | 7 |
pydata/bottleneck | numpy | 88 | Porting bottleneck to numpy 1.9 | Just a heads up that nansum now returns 0 for empty slices.
```
======================================================================
FAIL: Test nansum.
----------------------------------------------------------------------
Traceback (most recent call last):
File "X:\Python27-x64\lib\site-packages\nose\case.py", line 197, in runTest
self.test(*self.arg)
File "X:\Python27-x64\lib\site-packages\bottleneck\tests\func_test.py", line 80, in unit_maker
assert_array_equal(actual, desired, err_msg)
File "D:\Build\Test\numpy-build\numpy\testing\utils.py", line 734, in assert_array_equal
verbose=verbose, header='Arrays are not equal')
File "D:\Build\Test\numpy-build\numpy\testing\utils.py", line 623, in assert_array_compare
chk_same_position(x_isnan, y_isnan, hasval='nan')
File "D:\Build\Test\numpy-build\numpy\testing\utils.py", line 603, in chk_same_position
raise AssertionError(msg)
AssertionError:
Arrays are not equal
func nansum | input a24 (float32) | shape (0L,) | axis -1
Input array:
[]
x and y nan location mismatch:
x: array(nan, dtype=float32)
y: array(0.0, dtype=float32)
```
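For context, this matches NumPy's behavior from 1.9 onward, as this issue reports — `np.nansum` of an empty slice is 0 rather than NaN. A quick check:

```python
import numpy as np

empty = np.array([], dtype=np.float32)
result = np.nansum(empty)
print(result)  # → 0.0 (per this issue, releases before numpy 1.9 returned nan here)
```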
| closed | 2014-07-04T22:04:40Z | 2015-02-04T16:47:29Z | https://github.com/pydata/bottleneck/issues/88 | [] | charris | 7 |
chaoss/augur | data-visualization | 2,630 | Change Request Acceptance Ratio metric API | The canonical definition is here: https://chaoss.community/?p=3598 | open | 2023-11-30T18:05:34Z | 2023-11-30T18:20:26Z | https://github.com/chaoss/augur/issues/2630 | [
"API",
"first-timers-only"
] | sgoggins | 0 |
litestar-org/litestar | asyncio | 3,054 | Bug: Pydantic's `json_schema_extra` is not passed to the generated OpenAPI spec | ### Description
The generated OpenAPI schema is missing `model_config = ConfigDict(json_schema_extra=...)` and `Field(json_schema_extra=...)` in Pydantic models.
The `json_schema_extra` (and other fields?) should be copied over to the schema.
### MCVE
```python
import uvicorn
from litestar import Litestar, get
from litestar.openapi import ResponseSpec
from pydantic import BaseModel, ConfigDict, Field
class Payload(BaseModel):
model_config = ConfigDict(
title="Some label",
json_schema_extra={
"examples": [
{
"field": "VALUE1"
},
{
"field": "VALUE2"
}
],
"not": {
"type": "integer"
}
}
)
field: str = Field(default=..., json_schema_extra={"x-local-extension": True})
@get(
responses={
200: ResponseSpec(Payload, generate_examples=False)
}
)
async def hello() -> Payload:
pass
app = Litestar(
route_handlers=[hello],
)
uvicorn.run(app)
```
### Steps to reproduce
Run it.
Inspect the generated schema:
```json
{
"info": {
"title": "Litestar API",
"version": "1.0.0"
},
"openapi": "3.1.0",
"servers": [
{
"url": "/"
}
],
"paths": {
"/": {
"get": {
"summary": "Hello",
"operationId": "Hello",
"responses": {
"200": {
"description": "Additional response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Payload"
}
}
}
}
},
"deprecated": false
}
}
},
"components": {
"schemas": {
"Payload": {
"properties": {
"field": {
"type": "string"
}
},
"type": "object",
"required": [
"field"
],
"title": "Some label"
}
}
}
}
```
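For comparison (not part of the original report): Pydantic itself does emit `json_schema_extra` when asked for the schema directly, which suggests the keys are being dropped on the Litestar side. A minimal check, assuming Pydantic v2:

```python
from pydantic import BaseModel, ConfigDict, Field

class Payload(BaseModel):
    model_config = ConfigDict(
        title="Some label",
        json_schema_extra={"not": {"type": "integer"}},
    )
    field: str = Field(default=..., json_schema_extra={"x-local-extension": True})

schema = Payload.model_json_schema()
print(schema["not"])                                        # → {'type': 'integer'}
print(schema["properties"]["field"]["x-local-extension"])   # → True
```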
### Related
- It's possible to provide OpenAPI examples for input args via `Parameter(examples=[...])` but how can you do the same for response body? `ResponseSpec` doesn't provide `examples` field, only `generate_examples` flag...
- How can you generate examples for request body? There's no `generate_examples` flag for `Parameter`?
### Litestar Version
2.5.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-01-31T20:43:46Z | 2025-03-20T15:54:23Z | https://github.com/litestar-org/litestar/issues/3054 | [
"Bug :bug:"
] | tuukkamustonen | 4 |
LAION-AI/Open-Assistant | machine-learning | 3,144 | Curate SFT-9 dataset mixes | Iterate on the SFT-8 dataset mixes to create pretraining and final SFT mixes for SFT-9. This requires investigating the quality and usefulness of the datasets. Community input welcome below. See the `sft8_training` [branch](https://github.com/LAION-AI/Open-Assistant/tree/sft8_training) for the code state corresponding to the below SFT-8 configs.
<details>
<summary>SFT-8 pretraining mix</summary>
```
datasets:
- gpteacher_roleplay:
val_split: 0.05
- red_pajama:
fraction: 0.25
max_val_set: 1000
- wizardlm_70k:
val_split: 0.05
max_val_set: 500
- joke:
val_split: 0.05
- poem_instructions:
val_split: 0.025
- oa_stackexchange:
val_split: 0.05
fraction: 0.1
max_val_set: 1000
- tell_a_joke:
val_split: 0.05
max_val_set: 250
- webgpt:
val_split: 0.05
max_val_set: 250
- gpt4all:
val_split: 0.01
max_val_set: 1000
- alpaca_gpt4:
val_split: 0.025
max_val_set: 250
- code_alpaca:
val_split: 0.05
max_val_set: 250
- vicuna:
max_val_set: 250
- oig_file:
source_url: https://huggingface.co/datasets/laion/OIG/resolve/main/unified_chip2.jsonl
max_count: 10000
min_length: 250
val_split: 0.05
max_val_set: 250
- minimath:
val_split: 0.05
- humaneval_mbpp_codegen_qa:
val_split: 0.05
- humaneval_mbpp_testgen_qa:
val_split: 0.05
- grade_school_math_instructions:
val_split: 0.05
- recipes:
val_split: 0.05
- cmu_wiki_qa:
val_split: 0.05
- oa_wiki_qa_bart_10000row:
val_split: 0.05
max_val_set: 250
- prosocial_dialogue:
fraction: 0.1
max_val_set: 250
- explain_prosocial:
fraction: 0.075
max_val_set: 250
- soda:
fraction: 0.25
max_val_set: 1000
- oa_leet10k:
val_split: 0.05
max_val_set: 250
- dolly15k:
val_split: 0.05
max_val_set: 300
```
</details>
<details>
<summary>SFT-8 final SFT mix</summary>
```
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
input_file_path: 2023-05-06_OASST_labels.jsonl.gz
val_split: 0.05
- vicuna:
val_split: 0.05
max_val_set: 800
fraction: 0.4
- dolly15k:
val_split: 0.05
max_val_set: 300
- grade_school_math_instructions:
val_split: 0.05
- code_alpaca:
val_split: 0.05
max_val_set: 250
- red_pajama:
fraction: 0.05
max_val_set: 1000
- wizardlm_70k:
val_split: 0.05
max_val_set: 500
fraction: 0.4
- poem_instructions:
fraction: 0.5
val_split: 0.025
```
</details>
Leading on this: @0x22almostEvil
Some initial requests from the community include the removal or reduction/filtering of the `prosocial_dialogue` and `explain_prosocial` datasets from pretraining.
"research",
"ml",
"data"
] | olliestanley | 10 |
pyppeteer/pyppeteer | automation | 105 | SSL error while downloading chromium for the first time | While downloading Chromium for the first time, I got the following error:
`OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]`
I had to use [https://github.com/kiwi0fruit/pyppdf/blob/11d082f7a35cdac2ae3e7ffa7022c1d1e9747cd2/pyppdf/patch_pyppeteer/patch_pyppeteer.py#L59](https://github.com/kiwi0fruit/pyppdf/blob/11d082f7a35cdac2ae3e7ffa7022c1d1e9747cd2/pyppdf/patch_pyppeteer/patch_pyppeteer.py#L59) to solve my issue.
As seen in the above link, it uses HTTPS for the download, while pyppeteer uses HTTP. Can't HTTPS be used in pyppeteer as well to solve this issue?
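A sketch of the linked patch's idea (my assumption based on its description, not pyppeteer's actual code): fetch the Chromium archive over HTTPS with certificate verification enabled.

```python
import ssl

# A default SSL context verifies server certificates against the platform CA bundle.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True

# Hypothetical usage; pyppeteer builds the real URL from the Chromium revision:
# import urllib.request
# with urllib.request.urlopen("https://storage.googleapis.com/...", context=ctx) as r:
#     data = r.read()
```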
| open | 2020-05-12T09:59:37Z | 2020-08-07T10:27:24Z | https://github.com/pyppeteer/pyppeteer/issues/105 | [
"bug",
"fixed-in-2.1.1"
] | ravisumit33 | 5 |
keras-team/keras | deep-learning | 20,726 | keras.mixed_precision not working with TorchModuleWrapper | When using a torch model with TorchModuleWrapper, mixed precision doesn't work.
I guess that somewhere in the call of TorchModuleWrapper we are supposed to wrap the call to the torch model with
`with torch.cuda.amp.autocast():`
Here is some code that doesn't work:
```
import os
os.environ["KERAS_BACKEND"] = "torch"
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
import keras
keras.mixed_precision.set_global_policy("mixed_float16")
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor(),
)
train_dataloader = DataLoader(training_data, batch_size=64)
class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10)
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork().to("cuda")
inputs = keras.layers.Input(shape=(1, 28,28))
outputs = keras.layers.TorchModuleWrapper(model)(inputs)
keras_model = keras.models.Model(inputs,outputs)
keras_model.compile( optimizer=keras.optimizers.SGD(learning_rate=1e-3),loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
keras_model.fit(train_dataloader)
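# --- Not part of the original repro: a possible workaround sketch (my
# assumption, untested) is to subclass TorchModuleWrapper so the torch
# forward pass runs under autocast:
#
#   class AutocastWrapper(keras.layers.TorchModuleWrapper):
#       def call(self, *args, **kwargs):
#           with torch.autocast(device_type="cuda"):
#               return super().call(*args, **kwargs)
#
# and build the model with AutocastWrapper(model)(inputs) instead.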
``` | closed | 2025-01-05T14:43:35Z | 2025-02-06T02:01:25Z | https://github.com/keras-team/keras/issues/20726 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | yonigottesman | 4 |
babysor/MockingBird | pytorch | 757 | gpu换大的后碰到这个问题,是什么原因呢? | <string>:6: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
Traceback (most recent call last):
File "gen_audio_from_srt.py", line 430, in <module>
Path("vocoder/saved_models/pretrained/g_hifigan.pt"), fpath, gen_materials
File "gen_audio_from_srt.py", line 144, in generate_wav
gen_one_wav(synthesizer, embed, processed_texts, file_name, hint_txt)
File "gen_audio_from_srt.py", line 74, in gen_one_wav
generated_wav = encoder.preprocess_wav(generated_wav)
File "/home/evers/MyGithub/MockingBird/encoder/audio.py", line 46, in preprocess_wav
wav = normalize_volume(wav, audio_norm_target_dBFS, increase_only=True)
File "/home/evers/MyGithub/MockingBird/encoder/audio.py", line 115, in normalize_volume
if (dBFS_change < 0 and increase_only) or (dBFS_change > 0 and decrease_only):
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
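Not from MockingBird — a minimal reproduction of this class of error, plus my reading of it: `dBFS_change` has become a NumPy array instead of a scalar (likely because `generated_wav` ended up 2-D or as an object array, which the VisibleDeprecationWarning at the top also hints at):

```python
import numpy as np

dbfs_change = np.array([-3.0, -1.5])   # an array where a scalar was expected
try:
    # `and` needs bool(dbfs_change < 0), which is ambiguous for a 2-element array
    if dbfs_change < 0 and True:
        pass
except ValueError as err:
    print(err)  # → The truth value of an array with more than one element is ambiguous...

# No error once the value is an actual scalar:
scalar = float(dbfs_change.mean())
print(scalar < 0)  # → True
```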
| open | 2022-10-01T15:43:40Z | 2022-10-01T15:43:40Z | https://github.com/babysor/MockingBird/issues/757 | [] | everschen | 0 |
onnx/onnx | machine-learning | 6,149 | [Question] Where is `onnx-operators-ml.pb.h` | https://github.com/onnx/onnx/blob/b86cc54efce19530fb953e4b21f57e6b3888534c/onnx/onnx-operators_pb.h#L9 | closed | 2024-05-28T08:28:32Z | 2024-05-31T00:32:07Z | https://github.com/onnx/onnx/issues/6149 | [
"question"
] | AIYoungcino | 1 |
holoviz/panel | plotly | 7,119 | pixi run docs-build missing Webdriver | I followed https://holoviz-dev.github.io/panel/developer_guide/index.html#documentation to run
```
panel $ pixi run docs-build
```
which ran
```
✨ Pixi task (_docs-generate in docs): nbsite build --what=html --output=builtdocs --org holoviz --project-name panel
```
which gave this RuntimeError:
```
getting thumbnail code for /Users/cdeil/code/oss/panel/examples/reference/widgets/FileDropper.ipynb
Path exists True
Traceback (most recent call last):
File "/var/folders/6v/0_6nt0pj07x9xjhd8qzkyy700000gn/T/tmp6jv8j0iz", line 67, in <module>
from nbsite.gallery.thumbnailer import thumbnail;thumbnail(file_dropper, '/Users/cdeil/code/oss/panel/doc/reference/widgets/thumbnails/FileDropper')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cdeil/code/oss/panel/.pixi/envs/docs/lib/python3.11/site-packages/nbsite/gallery/thumbnailer.py", line 133, in thumbnail
obj.save(basename+'.png')
File "/Users/cdeil/code/oss/panel/panel/viewable.py", line 964, in save
return save(
^^^^^
File "/Users/cdeil/code/oss/panel/panel/io/save.py", line 270, in save
return save_png(
^^^^^^^^^
File "/Users/cdeil/code/oss/panel/panel/io/save.py", line 85, in save_png
state.webdriver = webdriver_control.create()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cdeil/code/oss/panel/.pixi/envs/docs/lib/python3.11/site-packages/bokeh/io/webdriver.py", line 180, in create
driver = self._create(kind, scale_factor=scale_factor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cdeil/code/oss/panel/.pixi/envs/docs/lib/python3.11/site-packages/bokeh/io/webdriver.py", line 198, in _create
raise RuntimeError("Neither firefox and geckodriver nor a variant of chromium browser and " \
RuntimeError: Neither firefox and geckodriver nor a variant of chromium browser and chromedriver are available on system PATH. You can install the former with 'conda install -c conda-forge firefox geckodriver'.
FileDropper thumbnail export failed
```
and same error for `examples/reference/chat/ChatStep.ipynb`
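For what it's worth, the fix suggested inside the traceback itself may be the quickest unblocker (quoting the error message; whether these packages should instead live in the `docs` environment spec is exactly the open question below):

```sh
# From the RuntimeError above — put a browser + driver on PATH for bokeh:
conda install -c conda-forge firefox geckodriver
```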
Is this a missing dependency in the `.pixi/envs/docs` spec? | open | 2024-08-10T18:22:27Z | 2024-08-24T12:03:53Z | https://github.com/holoviz/panel/issues/7119 | [] | cdeil | 6 |
postmanlabs/httpbin | api | 511 | arraybuffer inconsistencies | Here's my pseudo code request:
```
url: 'https://httpbin.org/anything',
responseType: 'arraybuffer',
body: new Uint8Array(10000),
method: 'POST',
mode: 'cors'
```
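A possible contributor to the size mismatch described below (my guess, not verified against httpbin's source): `/anything` echoes the request back as JSON, and a binary body is embedded base64-encoded (as a `data:` URI), so the echoed payload alone is ~4/3 the posted size before counting the rest of the JSON envelope:

```python
import base64

posted = bytes(10000)                 # stand-in for new Uint8Array(10000)
echoed = base64.b64encode(posted)
print(len(posted), len(echoed))       # → 10000 13336
print(len(echoed) / len(posted))      # → 1.3336
```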
The bug: I am receiving a 60715 bytes (1/6 average ratio) ArrayBuffer. | closed | 2018-09-17T13:50:52Z | 2018-09-19T14:24:27Z | https://github.com/postmanlabs/httpbin/issues/511 | [] | Mouvedia | 3 |
noirbizarre/flask-restplus | api | 188 | Serve Swagger UI as HTML, keeping JSON for everything else | I have a REST API where I set the response class' type to `application/json`. Unfortunately this sets the `Content-Type` for the Swagger UI, so the page doesn't render in browsers.
Is there a way to change the Swagger UI content type to HTML, while keeping JSON for the rest of the app? What am I missing here?
``` py
class JsonResponse(Response):
default_mimetype = 'application/json'
# bp is a blueprint: bp = Blueprint(bla bla, params...)
api = Api(bp, version='1.0', title='API', description='Simple API', doc='/doc/')
# default_mediatype='text/html' didn't help
app = Flask(__name__)
app.register_blueprint(bp)
app.response_class = JsonResponse
```
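To illustrate the mechanism (my reading, not a confirmed flask-restplus answer): `default_mimetype` on the response class applies to any view that returns a bare string — including the rendered Swagger UI template — which is why the browser receives the HTML as `application/json`. Since flask-restplus already sets `application/json` on its own API responses, dropping the custom response class may be enough. A self-contained demo of the mechanism:

```python
from flask import Flask, Response

class JsonResponse(Response):
    default_mimetype = 'application/json'

app = Flask(__name__)
app.response_class = JsonResponse

@app.route('/doc/')
def doc():
    # A rendered template is just a string, so it gets the default mimetype.
    return '<html><body>Swagger UI</body></html>'

with app.test_client() as client:
    resp = client.get('/doc/')
    print(resp.headers['Content-Type'])  # → application/json (why the UI fails to render)
```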
Thank you for your help.
| closed | 2016-08-02T16:09:52Z | 2016-08-04T09:09:10Z | https://github.com/noirbizarre/flask-restplus/issues/188 | [] | vincevargadev | 5 |
zihangdai/xlnet | nlp | 100 | What should i do to display the F1 score for my own dataset? | closed | 2019-07-02T10:39:02Z | 2019-07-02T14:43:00Z | https://github.com/zihangdai/xlnet/issues/100 | [] | bishalgaire | 0 | |
scrapy/scrapy | python | 6,717 | Scrapy Issues Warning for 'parse' Method in ActressListSpider: Generator Detection Problem |
### Description:
I am encountering a warning in Scrapy: it is unable to determine whether the `parse` method of my spider is a generator. The warning does not prevent the spider from functioning, but it does prevent Scrapy from properly identifying potential issues with my implementation.
### Steps to Reproduce:
1. I am using Scrapy to crawl a list of actresses from the following URL: https://www.mymovies.com
2. In my spider, I make requests to multiple pages, and for each response, I attempt to parse the page and continue scraping data.
3. As I iterate over pages, I receive a warning that Scrapy is unable to detect whether the `parse` method is a generator.

**Expected Behavior:** Scrapy should be able to properly detect whether the `parse` method is a generator.
### Actual Behavior:
I receive the following warning during scraping:

```
UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse".
```
### Log Output:
Here is the relevant portion of the log where the warning occurs:
```text
2025-03-10 17:35:45 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/407
2025-03-10 17:35:46 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/407 with status: 200
2025-03-10 17:35:46 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/407> (referer: https://www.mymovies.com/uncensored/actresses/406)
2025-03-10 17:35:46 [base.base_spider] INFO: Now parsing page 407
2025-03-10 17:35:46 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/408
2025-03-10 17:35:47 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/408 with status: 200
2025-03-10 17:35:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/408> (referer: https://www.mymovies.com/uncensored/actresses/407)
2025-03-10 17:35:48 [base.base_spider] INFO: Now parsing page 408
2025-03-10 17:35:48 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/409
2025-03-10 17:35:49 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/409 with status: 200
2025-03-10 17:35:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/409> (referer: https://www.mymovies.com/uncensored/actresses/408)
2025-03-10 17:35:49 [base.base_spider] INFO: Now parsing page 409
2025-03-10 17:35:49 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/410
2025-03-10 17:35:51 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/410 with status: 200
2025-03-10 17:35:51 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/410> (referer: https://www.mymovies.com/uncensored/actresses/409)
2025-03-10 17:35:51 [py.warnings] WARNING: /home/ubuntu/gggggg/spiders/spider/myvenv/lib/python3.12/site-packages/scrapy/core/scraper.py:208: UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse". Please, report this in the Scrapy issue tracker (https://github.com/scrapy/scrapy/issues), including the code of "ActressListSpider.parse"
warn_on_generator_with_return_value(spider, callback)
2025-03-10 17:35:51 [base.base_spider] INFO: Now parsing page 410
2025-03-10 17:35:51 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/411
2025-03-10 17:35:53 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/411 with status: 200
2025-03-10 17:35:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/411> (referer: https://www.mymovies.com/uncensored/actresses/410)
2025-03-10 17:35:53 [py.warnings] WARNING: /home/ubuntu/gggggg/spiders/spider/myvenv/lib/python3.12/site-packages/scrapy/core/scraper.py:208: UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse". Please, report this in the Scrapy issue tracker (https://github.com/scrapy/scrapy/issues), including the code of "ActressListSpider.parse"
warn_on_generator_with_return_value(spider, callback)
2025-03-10 17:35:53 [base.base_spider] INFO: Now parsing page 411
2025-03-10 17:35:53 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/412
2025-03-10 17:35:54 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/412 with status: 200
2025-03-10 17:35:54 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/412> (referer: https://www.mymovies.com/uncensored/actresses/411)
2025-03-10 17:35:54 [py.warnings] WARNING: /home/ubuntu/gggggg/spiders/spider/myvenv/lib/python3.12/site-packages/scrapy/core/scraper.py:208: UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse". Please, report this in the Scrapy issue tracker (https://github.com/scrapy/scrapy/issues), including the code of "ActressListSpider.parse"
warn_on_generator_with_return_value(spider, callback)
2025-03-10 17:35:54 [base.base_spider] INFO: Now parsing page 412
2025-03-10 17:35:54 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/413
2025-03-10 17:35:56 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/413 with status: 200
2025-03-10 17:35:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/413> (referer: https://www.mymovies.com/uncensored/actresses/412)
2025-03-10 17:35:56 [py.warnings] WARNING: /home/ubuntu/gggggg/spiders/spider/myvenv/lib/python3.12/site-packages/scrapy/core/scraper.py:208: UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse". Please, report this in the Scrapy issue tracker (https://github.com/scrapy/scrapy/issues), including the code of "ActressListSpider.parse"
warn_on_generator_with_return_value(spider, callback)
2025-03-10 17:35:56 [base.base_spider] INFO: Now parsing page 413
2025-03-10 17:35:56 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/414
2025-03-10 17:35:57 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/414 with status: 200
2025-03-10 17:35:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/414> (referer: https://www.mymovies.com/uncensored/actresses/413)
2025-03-10 17:35:57 [py.warnings] WARNING: /home/ubuntu/gggggg/spiders/spider/myvenv/lib/python3.12/site-packages/scrapy/core/scraper.py:208: UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse". Please, report this in the Scrapy issue tracker (https://github.com/scrapy/scrapy/issues), including the code of "ActressListSpider.parse"
warn_on_generator_with_return_value(spider, callback)
2025-03-10 17:35:57 [base.base_spider] INFO: Now parsing page 414
2025-03-10 17:35:58 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/415
2025-03-10 17:35:59 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/415 with status: 200
2025-03-10 17:35:59 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/415> (referer: https://www.mymovies.com/uncensored/actresses/414)
2025-03-10 17:35:59 [py.warnings] WARNING: /home/ubuntu/gggggg/spiders/spider/myvenv/lib/python3.12/site-packages/scrapy/core/scraper.py:208: UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse". Please, report this in the Scrapy issue tracker (https://github.com/scrapy/scrapy/issues), including the code of "ActressListSpider.parse"
warn_on_generator_with_return_value(spider, callback)
2025-03-10 17:35:59 [base.base_spider] INFO: Now parsing page 415
2025-03-10 17:35:59 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/416
2025-03-10 17:36:01 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/416 with status: 200
2025-03-10 17:36:01 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/416> (referer: https://www.mymovies.com/uncensored/actresses/415)
2025-03-10 17:36:01 [py.warnings] WARNING: /home/ubuntu/gggggg/spiders/spider/myvenv/lib/python3.12/site-packages/scrapy/core/scraper.py:208: UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse". Please, report this in the Scrapy issue tracker (https://github.com/scrapy/scrapy/issues), including the code of "ActressListSpider.parse"
warn_on_generator_with_return_value(spider, callback)
2025-03-10 17:36:01 [base.base_spider] INFO: Now parsing page 416
2025-03-10 17:36:01 [actresses_list] INFO: Sending request: https://www.mymovies.com/uncensored/actresses/417
2025-03-10 17:36:02 [actresses_list] INFO: Received response: https://www.mymovies.com/uncensored/actresses/417 with status: 200
2025-03-10 17:36:02 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mymovies.com/uncensored/actresses/417> (referer: https://www.mymovies.com/uncensored/actresses/416)
2025-03-10 17:36:02 [py.warnings] WARNING: /home/ubuntu/gggggg/spiders/spider/myvenv/lib/python3.12/site-packages/scrapy/core/scraper.py:208: UserWarning: Unable to determine whether or not "ActressListSpider.parse" is a generator with a return value. This will not prevent your code from working, but it prevents Scrapy from detecting potential issues in your implementation of "ActressListSpider.parse". Please, report this in the Scrapy issue tracker (https://github.com/scrapy/scrapy/issues), including the code of "ActressListSpider.parse"
warn_on_generator_with_return_value(spider, callback)
```
Here is the relevant code for the parse method in my spider:
```python
def start_requests(self):
if self.is_censored is False:
url = self.mymovies_base_url + "uncensored" + "/actresses/"
else:
url = self.mymovies_base_url + "actresses/"
url = url + str(self.page_num)
yield scrapy.Request(url, callback=self.parse, meta={"page_num": self.page_num,"is_censored":self.is_censored},dont_filter=True)
def parse(self, response):
page_num = response.meta.get("page_num", self.page_num)
is_censored = response.meta.get("is_censored", self.is_censored)
if is_censored is None:
is_censored = self.is_censored
if page_num is None:
page_num = self.page_num
if response.status == 200:
bs = BeautifulSoup(response.body, "html.parser")
self.log(f"Now parsing page {page_num}")
waterfall = bs.find(id="waterfall")
if waterfall:
boxs = bs.find_all("a", attrs={"class": "avatar-box text-center"})
if boxs:
for box in boxs:
link = self.get_link(box)
if link:
actresses_request_data = {"url": link}
self.server.lpush(
actress_detail_start_url_key,
json.dumps(actresses_request_data),
)
actresses_request_data = {
"url": link,
"is_censored": is_censored,
}
self.server.lpush(
actress_detail_censored_link_key,
json.dumps(actresses_request_data),
)
else:
self.log("No boxs found on this page.")
else:
self.log("No waterfall found on this page.")
        # Check whether there is a next page and follow it
next_page = self.get_next_page(bs)
if next_page:
next_page_num = page_num + 1
if is_censored is False:
url = self.mymovies_base_url + "uncensored" + "/actresses/"
else:
url = self.mymovies_base_url + "actresses/"
url = url + str(next_page_num)
yield scrapy.Request(
url, callback=self.parse, meta={"page_num": next_page_num}
)
else:
self.log("No next page, stopping crawl.")
self.crawler.engine.close_spider(self, "No next page")
```
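Some background (my understanding, not an official Scrapy answer): the warning comes from `warn_on_generator_with_return_value` — visible in the log above — which statically parses the callback's source to check whether a generator also `return`s a value; when the source can't be retrieved or parsed (decorators, partials, dynamically built methods), Scrapy falls back to this "unable to determine" message. The runtime situation it tries to guard against:

```python
import inspect

def good_parse(response):
    yield {"item": response}            # generator, no return value

def suspicious_parse(response):
    yield {"item": response}
    return "dropped"                    # a return value in a generator is silently lost

print(inspect.isgeneratorfunction(good_parse))        # → True
print(inspect.isgeneratorfunction(suspicious_parse))  # → True

items = list(suspicious_parse(None))
print(items)  # → [{'item': None}] — the returned "dropped" never appears as an item
```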
### Environment:
```
Scrapy version: 2.12
Python version: 3.12
Operating System:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble
```
### Additional Information:
- I am using scrapy.Request and yield to handle requests and responses.
- The warning appears consistently, and does not prevent the spider from scraping data, but it does raise concerns about Scrapy's detection of the generator.
| open | 2025-03-10T09:58:01Z | 2025-03-11T03:56:53Z | https://github.com/scrapy/scrapy/issues/6717 | [] | MajorTomMan | 5 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,309 | inference about cycleGAN | Thanks for your contribution. I have trained my own CycleGAN model according to the instructions. My question is: when I run inference on a dataset of 1000 images (testA), do I have to have 1000 testB images corresponding to testA? | closed | 2021-08-25T02:54:44Z | 2023-11-10T21:34:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1309 | [] | cena001plus | 2 |
tqdm/tqdm | jupyter | 764 | color printing not possible in jupyter notebook after importing tqdm? | Printing colored text in jupyter notebook doesn't seem to work after importing tqdm
Example:

tqdm 4.23.4
python 3.6.8
ipython 7.5.0
windows server 2012R2
Any suggestions? | closed | 2019-06-26T09:05:41Z | 2022-11-07T12:45:48Z | https://github.com/tqdm/tqdm/issues/764 | [
"need-feedback 📢",
"p2-bug-warning ⚠",
"submodule-notebook 📓"
] | Ruler26 | 3 |
widgetti/solara | flask | 144 | [Enhancement] Select widget missing keyword disabled | The Select and SelectMultiple widgets seem to be missing the `disabled` keyword.
Is it by design or still under implementation?
Is there any workaround, other than using ipyvuetify directly?
thanks! | closed | 2023-06-05T19:30:32Z | 2023-06-30T02:47:18Z | https://github.com/widgetti/solara/issues/144 | [] | lp9052 | 1 |
pytest-dev/pytest-django | pytest | 873 | Pytest scope='module' fixture does not delete model instance after the test module | I create a message instance in a fixture with scope='module', right in the test file. But when the tests reach another module, this message instance still exists in the database.
**in .../apps/dialogs/test/api/test_message.py**
```py
@pytest.fixture(scope='module')
def message_by_auth_user(django_db_setup, django_db_blocker,
message_factory: type,
user_factory: type,
user_with_auth: User) -> Message:
"""Return message by auth user."""
with django_db_blocker.unblock():
message = message_factory(written_by=user_with_auth) # Message object (1)
message_text = message.message # 'text_message_№_1'
return message
```
**in .../apps/users/test/api/test_users.py**
```py
@pytest.mark.django_db
def test_get_users_view_with_filter(bool_value: bool,
user_count_change: int,
filter_pattern: str,
api_auth_client: APIClient,
user_with_auth: User,
user_factory: type):
message_count = Message.objects.all().count() # 1
message = Message.objects.first() # Message object (1)
message_text = message.message # 'text_message_№_1'
```
After I replaced the `return` with a `yield` and manually deleted the object after the `yield`, everything works correctly. But shouldn't the test framework do this automatically, as it does in my other fixtures? For example, with scope='function' the object is deleted automatically after each test, without any `yield`.
**If the message instance is not deleted manually, it persists for the entire session, even with scope='module'. Why is this happening?**
```py
@pytest.fixture(scope='module')
def message_by_auth_user(django_db_setup, django_db_blocker,
message_factory: type,
user_factory: type,
user_with_auth: User) -> Message:
"""Return message by auth user."""
with django_db_blocker.unblock():
message = message_factory(written_by=user_with_auth)
yield message
        message.delete()  # Runs during fixture teardown, after the current test module finishes
```
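To illustrate the teardown mechanics, here is a minimal pytest-free sketch. Pytest drives a yield fixture as a generator: it advances it once to obtain the fixture value, runs the tests, and advances it again when the scope ends, which executes everything after `yield`. The `events` list and fixture name below are invented for the example:

```python
# Minimal model of how pytest runs a yield fixture's setup and teardown.
events = []

def message_fixture():
    events.append("setup")        # fixture body before yield
    yield "message-object"        # value handed to the tests
    events.append("teardown")     # runs only when the scope is finalized

gen = message_fixture()
value = next(gen)                 # pytest: run setup, capture fixture value
events.append(f"test uses {value}")
next(gen, None)                   # pytest: finalize at end of scope -> teardown
print(events)
```

The key point: nothing after `yield` runs until the fixture's scope is finalized, so any database object a module-scoped fixture creates outside the per-test transaction must be cleaned up in that teardown code.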
Why do function-scoped fixtures delete it automatically after each test? I expect a module-scoped fixture to behave the same way after each test module. | closed | 2020-10-05T04:31:05Z | 2020-10-16T18:50:46Z | https://github.com/pytest-dev/pytest-django/issues/873 | [] | MaximMukhametov | 1 |
SALib/SALib | numpy | 550 | Expand documentation for Sobol' analysis | Documentation with regard to usage and interpretation of Sobol' analysis should be expanded.
See issue raised in #549 as an example of what users may face.
Although this is a general issue across the SALib package, lets start with Sobol'. | open | 2022-12-16T07:19:48Z | 2023-04-10T04:29:58Z | https://github.com/SALib/SALib/issues/550 | [] | ConnectedSystems | 1 |
pandas-dev/pandas | pandas | 60,564 | BUG: The isna function returns False for NaN values in a column of type 'double [pyarrow]'. | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df_arrow = pd.DataFrame([[0, 0]], dtype="double[pyarrow]", columns=['a', 'b'])
df_arrow['c'] = df_arrow.a / df_arrow.b
df_arrow.isna()
```
### Issue Description
The isna function returns False for NaN values in the column `c`, which is of type `double [pyarrow]`. The output of reproducible example is:
```
a b c
0 False False False
```
It is clear that column `c` is 0/0, which is NaN, so it should be `True`.
Furthermore, if I assign the variable `n` to reference column `c` and call `pd.isna`, it returns `True`.
```
n = df_arrow.iloc[0,2]
pd.isna(n)
```
Output of this case is `True`.
### Expected Behavior
If I use numpy instead of pyarrow as the dtype backend, it works as expected.
Expected behavior is:
```
import pandas as pd
df_np = pd.DataFrame([[0, 0]], columns=['a', 'b'])
df_np['c'] = df_np.a / df_np.b
df_np.isna()
```
The output is:
```
a b c
0 False False True
```
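As a side note, a backend-agnostic way to detect an IEEE-754 NaN scalar — regardless of how a given dtype backend implements `isna` — is the fact that NaN is the only float value unequal to itself. This stdlib-only sketch is illustrative and does not use pandas:

```python
import math

nan_val = float("nan")        # IEEE-754 quiet NaN, e.g. the result of 0.0 / 0.0
assert nan_val != nan_val     # NaN is the only float unequal to itself
assert math.isnan(nan_val)    # stdlib check equivalent to the comparison above
print(math.isnan(nan_val))
```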
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.6
python-bits : 64
OS : Darwin
OS-release : 23.6.0
Version : Darwin Kernel Version 23.6.0: Fri Nov 15 15:12:37 PST 2024; root:xnu-10063.141.1.702.7~1/RELEASE_ARM64_T6030
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : zh_CN.UTF-8
LOCALE : zh_CN.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : 8.24.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.9.0
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.3
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
| closed | 2024-12-14T10:44:50Z | 2024-12-14T12:49:15Z | https://github.com/pandas-dev/pandas/issues/60564 | [
"Bug",
"Duplicate Report",
"Arrow",
"PDEP missing values"
] | zhengchl | 1 |
comfyanonymous/ComfyUI | pytorch | 6,907 | mat1 and mat2 must have the same dtype, but got Float and Half | ### Expected Behavior
The issue started after the automatic update of the ComfyUI Desktop app on my M4 Mac mini (16 GB).
### Actual Behavior
I have attached my workflow; the error is triggered in the KSampler node.
### Steps to Reproduce
Press Queue with the attached workflow loaded.
### Debug Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node ID:** 20
- **Node Type:** KSampler
- **Exception Type:** RuntimeError
- **Exception Message:** mat1 and mat2 must have the same dtype, but got Float and Half
## Stack Trace
.....
## System Information
- **ComfyUI Version:** 0.3.14
- **Arguments:** /Applications/ComfyUI.app/Contents/Resources/ComfyUI/main.py --user-directory /Volumes/External SSD/ai_gen/ComfyUI/user --input-directory /Volumes/External SSD/ai_gen/ComfyUI/input --output-directory /Volumes/External SSD/ai_gen/ComfyUI/output --front-end-root /Applications/ComfyUI.app/Contents/Resources/ComfyUI/web_custom_versions/desktop_app --base-directory /Volumes/External SSD/ai_gen/ComfyUI --extra-model-paths-config /Users/ozonostudio/Library/Application Support/ComfyUI/extra_models_config.yaml --listen 127.0.0.1 --port 8000
- **OS:** posix
- **Python Version:** 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 10:37:40) [Clang 14.0.6 ]
- **Embedded Python:** false
- **PyTorch Version:** 2.7.0.dev20250207
## Devices
- **Name:** cpu
- **Type:** cpu
- **VRAM Total:** 17179869184
- **VRAM Free:** 4505272320
- **Torch VRAM Total:** 17179869184
- **Torch VRAM Free:** 4505272320
## Logs
RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
2025-02-20T22:58:21.840941 - Prompt executed in 162.99 seconds
2025-02-20T23:00:40.077715 - got prompt
2025-02-20T23:03:08.961280 -
15%|█▌ | 3/20 [02:28<14:04, 49.69s/it]2025-02-20T23:03:14.044689 -
15%|█▌ | 3/20 [02:33<14:32, 51.30s/it]2025-02-20T23:03:14.044726 -
2025-02-20T23:03:14.051651 - !!! Exception during processing !!! mat1 and mat2 must have the same dtype, but got Float and Half
2025-02-20T23:03:14.053825 - Traceback (most recent call last):
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1539, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1506, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-impact-pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
raise e
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-impact-pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 45, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1109, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 999, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 984, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 952, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 935, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 714, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/k_diffusion/sampling.py", line 161, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 379, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 915, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 918, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 359, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 195, in calc_cond_batch
return executor.execute(model, conds, x_in, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 308, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_base.py", line 132, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_base.py", line 163, in _apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 831, in forward
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 873, in _forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 44, in forward_timestep_embed
x = layer(x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/attention.py", line 796, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-easy-use/py/modules/layer_diffuse/attension_sharing.py", line 252, in forward
return func(self, x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 254, in forward
return func(self, x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/attention.py", line 720, in forward
n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui_ipadapter_plus/CrossAttentionPatch.py", line 26, in __call__
out = out + callback(out, q, k, v, extra_options, **self.kwargs[i])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui_ipadapter_plus/CrossAttentionPatch.py", line 169, in ipadapter_attention
out_ip = optimized_attention(q, ip_k, ip_v, extra_options["n_heads"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/attention.py", line 218, in attention_sub_quad
hidden_states = efficient_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/sub_quadratic_attention.py", line 268, in efficient_dot_product_attention
compute_query_chunk_attn(
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/sub_quadratic_attention.py", line 159, in _get_attention_scores_no_kv_chunking
attn_scores = torch.baddbmm(
^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
2025-02-20T23:03:14.055227 - Prompt executed in 153.98 seconds
2025-02-20T23:26:55.842446 - got prompt
2025-02-20T23:28:36.200068 -
10%|█ | 2/20 [01:40<15:03, 50.21s/it]2025-02-20T23:28:40.966169 -
10%|█ | 2/20 [01:45<15:45, 52.52s/it]2025-02-20T23:28:40.966209 -
2025-02-20T23:28:40.971962 - !!! Exception during processing !!! mat1 and mat2 must have the same dtype, but got Float and Half
2025-02-20T23:28:40.973900 - Traceback (most recent call last):
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1539, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1506, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-impact-pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
raise e
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-impact-pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 45, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1109, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 999, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 984, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 952, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 935, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 714, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/k_diffusion/sampling.py", line 873, in sample_dpmpp_2m_sde_gpu
return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/k_diffusion/sampling.py", line 776, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 379, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 915, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 918, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 359, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 195, in calc_cond_batch
return executor.execute(model, conds, x_in, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 308, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_base.py", line 132, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_base.py", line 163, in _apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 831, in forward
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 873, in _forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 44, in forward_timestep_embed
x = layer(x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/attention.py", line 796, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-easy-use/py/modules/layer_diffuse/attension_sharing.py", line 252, in forward
return func(self, x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 254, in forward
return func(self, x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/attention.py", line 720, in forward
n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui_ipadapter_plus/CrossAttentionPatch.py", line 26, in __call__
out = out + callback(out, q, k, v, extra_options, **self.kwargs[i])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui_ipadapter_plus/CrossAttentionPatch.py", line 169, in ipadapter_attention
out_ip = optimized_attention(q, ip_k, ip_v, extra_options["n_heads"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/attention.py", line 218, in attention_sub_quad
hidden_states = efficient_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/sub_quadratic_attention.py", line 268, in efficient_dot_product_attention
compute_query_chunk_attn(
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/sub_quadratic_attention.py", line 159, in _get_attention_scores_no_kv_chunking
attn_scores = torch.baddbmm(
^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
2025-02-20T23:28:40.975393 - Prompt executed in 105.13 seconds
2025-02-20T23:47:24.414908 - got prompt
2025-02-20T23:49:05.805160 -
10%|█ | 2/20 [01:41<15:11, 50.62s/it]2025-02-20T23:49:09.882715 -
10%|█ | 2/20 [01:45<15:48, 52.71s/it]2025-02-20T23:49:09.882987 -
2025-02-20T23:49:09.890503 - !!! Exception during processing !!! mat1 and mat2 must have the same dtype, but got Float and Half
2025-02-20T23:49:09.892485 - Traceback (most recent call last):
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1539, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1506, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-impact-pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
raise e
File "/Volumes/External SSD/ai_gen/ComfyUI/custom_nodes/comfyui-impact-pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 45, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1109, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 999, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 984, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 952, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 935, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 714, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/External SSD/ai_gen/ComfyUI/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/k_diffusion/sampling.py", line 161, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 379, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 915, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 918, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/sub_quadratic_attention.py", line 268, in efficient_dot_product_attention
compute_query_chunk_attn(
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/modules/sub_quadratic_attention.py", line 159, in _get_attention_scores_no_kv_chunking
attn_scores = torch.baddbmm(
^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
2025-02-21T00:16:12.936048 - Prompt executed in 103.25 seconds
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
{"last_node_id":31,"last_link_id":53,"nodes":[{"id":5,"type":"ResizeMask","pos":[2252.79150390625,1290.3970947265625],"size":[315,194],"flags":{},"order":15,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":2},{"name":"width","type":"INT","widget":{"name":"width"},"link":5},{"name":"height","type":"INT","widget":{"name":"height"},"link":6}],"outputs":[{"name":"mask","type":"MASK","links":[8],"slot_index":0},{"name":"width","type":"INT","links":null},{"name":"height","type":"INT","links":null}],"properties":{"Node name for S&R":"ResizeMask"},"widgets_values":[512,512,false,"nearest-exact","disabled"]},{"id":8,"type":"PreviewImage","pos":[1865.9940185546875,861.6403198242188],"size":[360.0256652832031,389.6624755859375],"flags":{},"order":11,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":3}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":3,"type":"InspyrenetRembgAdvanced","pos":[1436.768310546875,980.4773559570312],"size":[315,102],"flags":{"collapsed":false},"order":5,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":1}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[3,4],"slot_index":0},{"name":"MASK","type":"MASK","links":null}],"properties":{"Node name for S&R":"InspyrenetRembgAdvanced"},"widgets_values":[0.5,"default"]},{"id":6,"type":"MaskToImage","pos":[2253.611328125,1540.0479736328125],"size":[264.5999755859375,26],"flags":{},"order":18,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":9}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[7,18],"slot_index":0}],"properties":{"Node name for S&R":"MaskToImage"},"widgets_values":[]},{"id":4,"type":"PreviewImage","pos":[1874.1805419921875,1369.4774169921875],"size":[341.52197265625,436.8957824707031],"flags":{},"order":19,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":7}],"outputs":[],"properties":{"Node name for 
S&R":"PreviewImage"},"widgets_values":[]},{"id":7,"type":"GrowMaskWithBlur","pos":[2259.293212890625,1653.7142333984375],"size":[315,246],"flags":{},"order":17,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":8}],"outputs":[{"name":"mask","type":"MASK","links":[9],"slot_index":0},{"name":"mask_inverted","type":"MASK","links":null}],"properties":{"Node name for S&R":"GrowMaskWithBlur"},"widgets_values":[0,0,true,false,0,1,1,false]},{"id":14,"type":"VAEEncode","pos":[3513.307373046875,678.025146484375],"size":[210,46],"flags":{},"order":14,"mode":0,"inputs":[{"name":"pixels","type":"IMAGE","link":10},{"name":"vae","type":"VAE","link":11}],"outputs":[{"name":"LATENT","type":"LATENT","links":[22],"slot_index":0}],"properties":{"Node name for S&R":"VAEEncode"},"widgets_values":[],"color":"#322","bgcolor":"#533"},{"id":18,"type":"VAEEncode","pos":[3511.443115234375,928.3284301757812],"size":[210,46],"flags":{},"order":20,"mode":0,"inputs":[{"name":"pixels","type":"IMAGE","link":18},{"name":"vae","type":"VAE","link":16}],"outputs":[{"name":"LATENT","type":"LATENT","links":[34],"slot_index":0}],"properties":{"Node name for S&R":"VAEEncode"},"widgets_values":[]},{"id":17,"type":"VAEEncode","pos":[3512.510986328125,816.9686279296875],"size":[210,46],"flags":{},"order":9,"mode":0,"inputs":[{"name":"pixels","type":"IMAGE","link":17},{"name":"vae","type":"VAE","link":15}],"outputs":[{"name":"LATENT","type":"LATENT","links":null}],"properties":{"Node name for S&R":"VAEEncode"},"widgets_values":[]},{"id":1,"type":"LoadImage","pos":[1306.9964599609375,1273.217041015625],"size":[534.8991088867188,823.6531372070312],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[1,35],"slot_index":0},{"name":"MASK","type":"MASK","links":[2],"slot_index":1}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["clipspace/clipspace-mask-8289040.199999999.png 
[input]","image"]},{"id":26,"type":"VAEDecode","pos":[4232.70166015625,831.2574462890625],"size":[210,46],"flags":{},"order":22,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":31},{"name":"vae","type":"VAE","link":32}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[33,36],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":28,"type":"Image Comparer (rgthree)","pos":[4213.4765625,1104.757080078125],"size":[568.3712158203125,891.6441650390625],"flags":{},"order":24,"mode":0,"inputs":[{"name":"image_a","type":"IMAGE","dir":3,"link":35},{"name":"image_b","type":"IMAGE","dir":3,"link":36}],"outputs":[],"properties":{"comparer_mode":"Slide"},"widgets_values":[[{"name":"A","selected":true,"url":"/api/view?filename=rgthree.compare._temp_spmze_00019_.png&type=temp&subfolder=&rand=0.7447383854027068"},{"name":"B","selected":true,"url":"/api/view?filename=rgthree.compare._temp_spmze_00020_.png&type=temp&subfolder=&rand=0.1303280799030726"}]]},{"id":27,"type":"SaveImage","pos":[4476.447265625,780.682373046875],"size":[315,270],"flags":{},"order":23,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":33}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI"]},{"id":13,"type":"LoadAndApplyICLightUnet","pos":[3048.758056640625,672.5655517578125],"size":[397.1794128417969,58],"flags":{},"order":6,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":51}],"outputs":[{"name":"MODEL","type":"MODEL","links":[27],"slot_index":0}],"properties":{"Node name for 
S&R":"LoadAndApplyICLightUnet"},"widgets_values":["iclight_sd15_fc_unet_ldm.safetensors"],"color":"#232","bgcolor":"#353"},{"id":11,"type":"CheckpointLoaderSimple","pos":[1904.5606689453125,687.0982666015625],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[51],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[52,53],"slot_index":1},{"name":"VAE","type":"VAE","links":[11,15,16,21,32],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["cyberrealistic_v70.safetensors"]},{"id":10,"type":"ImageResize+","pos":[2253.89306640625,981.3107299804688],"size":[315,218],"flags":{},"order":12,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":4}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[10],"slot_index":0},{"name":"width","type":"INT","links":[5],"slot_index":1},{"name":"height","type":"INT","links":[6],"slot_index":2}],"properties":{"Node name for S&R":"ImageResize+"},"widgets_values":[1024,1152,"nearest-exact","keep proportion","always",0]},{"id":12,"type":"LoadImage","pos":[2631.855712890625,1296.88330078125],"size":[381.43280029296875,487.89422607421875],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[17,30],"slot_index":0},{"name":"MASK","type":"MASK","links":null}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["264290.jpg","image"]},{"id":15,"type":"CLIPTextEncode","pos":[3041.706787109375,818.0760498046875],"size":[400,200],"flags":{},"order":7,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":52}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[19],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["beautiful girl, night club, detailed face, shadow from the lights behind, disco, dance 
club,"],"color":"#232","bgcolor":"#353"},{"id":16,"type":"CLIPTextEncode","pos":[3042.497802734375,1064.7763671875],"size":[400,200],"flags":{},"order":8,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":53}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[20],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["(two tails:1.2),FastNegativeV2,(bad-artist:1),(loli:1.2),(worst quality, low quality:1.4),(bad_prompt_version2:0.8),bad-hands-5,lowres,bad anatomy,bad hands,((text)),(watermark),error,missing fingers,extra digit,fewer digits,cropped,worst quality,low quality,normal quality,((username)),blurry,(extra limbs),bad-artist-anime,badhandv4,embedding:EasyNegative,ng_deepnegative_v1_75t,verybadimagenegative_v1.3,BadDream,(three hands:1.1),(three legs:1.1),(more than two hands:1.2),(more than two legs:1.2), easynegative,underware,panties,bra, underwear, swimsuite"],"color":"#322","bgcolor":"#533"},{"id":24,"type":"PrepImageForClipVision","pos":[3146.77490234375,1679.44775390625],"size":[315,106],"flags":{},"order":10,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":30}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[29],"slot_index":0}],"properties":{"Node name for S&R":"PrepImageForClipVision"},"widgets_values":["LANCZOS","center",0]},{"id":25,"type":"IPAdapterAdvanced","pos":[3510.432373046875,1677.543701171875],"size":[315,278],"flags":{},"order":13,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":27},{"name":"ipadapter","type":"IPADAPTER","link":26},{"name":"image","type":"IMAGE","link":29},{"name":"image_negative","type":"IMAGE","shape":7,"link":null},{"name":"attn_mask","type":"MASK","shape":7,"link":null},{"name":"clip_vision","type":"CLIP_VISION","shape":7,"link":28}],"outputs":[{"name":"MODEL","type":"MODEL","links":[25],"slot_index":0}],"properties":{"Node name for S&R":"IPAdapterAdvanced"},"widgets_values":[1,"linear","average",0.105,0.903,"V 
only"]},{"id":19,"type":"ICLightConditioning","pos":[3853.245361328125,663.0048217773438],"size":[342.5999755859375,138],"flags":{},"order":16,"mode":0,"inputs":[{"name":"positive","type":"CONDITIONING","link":19},{"name":"negative","type":"CONDITIONING","link":20},{"name":"vae","type":"VAE","link":21},{"name":"foreground","type":"LATENT","link":22},{"name":"opt_background","type":"LATENT","shape":7,"link":null}],"outputs":[{"name":"positive","type":"CONDITIONING","links":[23],"slot_index":0},{"name":"negative","type":"CONDITIONING","links":[24],"slot_index":1},{"name":"empty_latent","type":"LATENT","links":null}],"properties":{"Node name for S&R":"ICLightConditioning"},"widgets_values":[0.15]},{"id":21,"type":"CLIPVisionLoader","pos":[3143.29296875,1551.1495361328125],"size":[315,58],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"CLIP_VISION","type":"CLIP_VISION","links":[28],"slot_index":0}],"properties":{"Node name for S&R":"CLIPVisionLoader"},"widgets_values":["CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"]},{"id":23,"type":"IPAdapterModelLoader","pos":[3497.997802734375,1557.750244140625],"size":[315,58],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"IPADAPTER","type":"IPADAPTER","links":[26],"slot_index":0}],"properties":{"Node name for S&R":"IPAdapterModelLoader"},"widgets_values":["ip-adapter-plus_sd15.safetensors"]},{"id":20,"type":"KSampler","pos":[3865.73828125,850.4859008789062],"size":[319.6355285644531,730.2586059570312],"flags":{},"order":21,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":25},{"name":"positive","type":"CONDITIONING","link":23},{"name":"negative","type":"CONDITIONING","link":24},{"name":"latent_image","type":"LATENT","link":34}],"outputs":[{"name":"LATENT","type":"LATENT","links":[31],"slot_index":0}],"properties":{"Node name for 
S&R":"KSampler"},"widgets_values":[179694348073440,"randomize",20,3,"euler","normal",1]}],"links":[[1,1,0,3,0,"IMAGE"],[2,1,1,5,0,"MASK"],[3,3,0,8,0,"IMAGE"],[4,3,0,10,0,"IMAGE"],[5,10,1,5,1,"INT"],[6,10,2,5,2,"INT"],[7,6,0,4,0,"IMAGE"],[8,5,0,7,0,"MASK"],[9,7,0,6,0,"MASK"],[10,10,0,14,0,"IMAGE"],[11,11,2,14,1,"VAE"],[15,11,2,17,1,"VAE"],[16,11,2,18,1,"VAE"],[17,12,0,17,0,"IMAGE"],[18,6,0,18,0,"IMAGE"],[19,15,0,19,0,"CONDITIONING"],[20,16,0,19,1,"CONDITIONING"],[21,11,2,19,2,"VAE"],[22,14,0,19,3,"LATENT"],[23,19,0,20,1,"CONDITIONING"],[24,19,1,20,2,"CONDITIONING"],[25,25,0,20,0,"MODEL"],[26,23,0,25,1,"IPADAPTER"],[27,13,0,25,0,"MODEL"],[28,21,0,25,5,"CLIP_VISION"],[29,24,0,25,2,"IMAGE"],[30,12,0,24,0,"IMAGE"],[31,20,0,26,0,"LATENT"],[32,11,2,26,1,"VAE"],[33,26,0,27,0,"IMAGE"],[34,18,0,20,3,"LATENT"],[35,1,0,28,0,"IMAGE"],[36,26,0,28,1,"IMAGE"],[51,11,0,13,0,"MODEL"],[52,11,1,15,0,"CLIP"],[53,11,1,16,0,"CLIP"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7400249944258519,"offset":[-2597.3748816893626,-590.9194819863401]},"node_versions":{"comfyui-kjnodes":"1.0.5","comfy-core":"0.3.14","comfyui-inspyrenet-rembg":"87ac452ef1182e8f35f59b04010158d74dcefd06","rgthree-comfy":"1.0.0","comfyui-ic-light":"1.0.3","comfyui_essentials":"1.1.0","comfyui_ipadapter_plus":"b188a6cb39b512a9c6da7235b880af42c78ccd0d"},"ue_links":[]},"version":0.4}
## Additional Context
(Please add any additional context or steps to reproduce the error here)
```
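A note for readers hitting the repeated `RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half` in the logs above: the traceback shows `torch.baddbmm` receiving a Half query from the UNet and Float keys/values from the IPAdapter path. The sketch below is illustrative only, not ComfyUI or IPAdapter code, and the variable names are hypothetical; it just shows the usual remedy of picking one dtype and casting every operand before the fused matmul.

```python
# Illustrative sketch (not ComfyUI/IPAdapter code): a fused matmul fails when
# its operands carry different dtypes, so pick one dtype for all of them.

def unify_dtypes(*dtypes):
    """Pick the widest dtype among the operands (float32 wins over float16)."""
    rank = {"float16": 0, "bfloat16": 0, "float32": 1}
    return max(dtypes, key=lambda d: rank[d])

# dtypes taken from the traceback: the UNet query is Half, while the
# IPAdapter key/value projections came out as Float.
q_dtype, ip_k_dtype, ip_v_dtype = "float16", "float32", "float32"

target = unify_dtypes(q_dtype, ip_k_dtype, ip_v_dtype)
print(target)  # -> float32
```

In the real torch code the cast happens on tensors (e.g. `ip_k = ip_k.to(q.dtype)` before the attention call), or the mismatch is avoided entirely by loading the whole pipeline in a single precision.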
### Other
_No response_ | open | 2025-02-21T06:37:42Z | 2025-02-27T19:59:50Z | https://github.com/comfyanonymous/ComfyUI/issues/6907 | [
"Potential Bug"
] | ozonostudio | 1 |
onnx/onnx | tensorflow | 6,101 | output different between onnx and pytorch | # Ask a Question
### Question
When I try to convert LayerNorm to ONNX, I find that the outputs of the ONNX and PyTorch models differ slightly in precision. Here is my simple Python test code:
```python
import torch
import torch.nn as nn
import torch.onnx
import onnxruntime
import torch
import onnx
class SimpleModel(nn.Module):
def __init__(self, num_features):
super(SimpleModel, self).__init__()
self.layer_norm = nn.LayerNorm(num_features)
def forward(self, x):
x = self.layer_norm(x)
return x
def onnx_export(dummy_input, num_features):
model = SimpleModel(num_features)
model.eval()
torch.onnx.export(
model,
dummy_input,
"model_with_layernorm.onnx",
export_params=True,
opset_version=16,
do_constant_folding=True,
input_names=['input'],
output_names=['output'],
)
print("ONNX finish")
def result_test(dummy_input, num_features):
sm = SimpleModel(num_features)
sm.train()
model_out = sm(dummy_input)
onnx_path = '/home/mengyaohuang/python/model_with_layernorm.onnx'
print("model: {}".format(model_out))
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if torch.cuda.is_available() else [
'CPUExecutionProvider']
input_map_sdc = {'input': dummy_input.numpy()}
ort_session = onnxruntime.InferenceSession(onnx_path, providers=providers)
output = ort_session.run(None, input_map_sdc)
print("onnx: {}".format(output))
if __name__ == "__main__":
num_features = 10
dummy_input = torch.triu(torch.ones(2, num_features), diagonal=1)
A = torch.tensor([[-10240.355, -15141.355, -14749.948, -3194.9736, -13981.226, -20323.963, -16821.863, -23410.441, -7674.426, -4421.628],
[-10240.355, -15141.355, -14749.948, -3194.9736, -13981.226, -20323.963, -16821.863, -23410.441, -7674.426, -4421.628]])
dummy_input *= torch.tensor(100000)
B = torch.tensor([[-9999.212, -10000.221, -9999.805, -10000.484, -9999.577, -10000.39, -10000.327, -9999.456, -10000.324, -9999.744],
[-9999.581, -10000.797, -10000.234, -10000.455, -9999.41, -10000.46, -10000.814, -9999.265, -10000.886, -10000.231]])
# onnx_export(dummy_input, num_features)
result_test(B, num_features)
```
the output is:
```
model: tensor([[ 1.7345, -0.6264, 0.3472, -1.2434, 0.8797, -1.0218, -0.8755, 1.1631,
-0.8686, 0.4889],
[ 1.1136, -1.0294, -0.0380, -0.4270, 1.4148, -0.4356, -1.0604, 1.6713,
-1.1861, -0.0328]])
onnx: [array([[ 1.7370378 , -0.6239623 , 0.34969315, -1.2410678 , 0.88223237,
-1.019367 , -0.87309 , 1.1656438 , -0.86623335, 0.49139884],
[ 1.1136355 , -1.0292952 , -0.03786705, -0.42686492, 1.4148507 ,
-0.43547106, -1.0602773 , 1.6713139 , -1.1859272 , -0.03270336]],
dtype=float32)]
```
Can anybody help solve this issue?
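One likely explanation for the discrepancy: with inputs of magnitude around 1e4, float32 has an absolute resolution of only about 1e-3, so the subtraction `x - mean` inside LayerNorm loses most of its significant digits, and two runtimes that order their float32 operations differently can legitimately disagree in the third decimal place. (Re-instantiating `SimpleModel` inside `result_test` and calling `.train()` is harmless here, since `nn.LayerNorm` initializes its weight to 1 and bias to 0 deterministically and has no dropout.) The sketch below only simulates float32 by rounding every intermediate with `struct`; it is not the actual ONNX Runtime or PyTorch kernel, but it reproduces an error of the same order:

```python
import struct

def f32(x):
    """Round a Python float (float64) to the nearest float32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

row = [-9999.212, -10000.221, -9999.805, -10000.484, -9999.577,
       -10000.39, -10000.327, -9999.456, -10000.324, -9999.744]

def layer_norm(xs, cast=lambda v: v, eps=1e-5):
    xs = [cast(x) for x in xs]
    mean = cast(sum(xs) / len(xs))
    var = cast(sum(cast((x - mean) ** 2) for x in xs) / len(xs))
    denom = cast((var + eps) ** 0.5)
    return [cast((x - mean) / denom) for x in xs]

exact = layer_norm(row)            # float64 reference
f32ed = layer_norm(row, cast=f32)  # every intermediate rounded to float32

max_diff = max(abs(a - b) for a, b in zip(exact, f32ed))
print(f"max elementwise difference: {max_diff:.2e}")  # on the order of 1e-3
```

That matches the size of the mismatch shown in the outputs above, so this looks like expected float32 behavior for such large-magnitude inputs rather than a conversion bug; comparing with a tolerance (or normalizing the inputs first) should make the two runtimes agree.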
| closed | 2024-04-25T10:36:40Z | 2024-04-26T18:53:14Z | https://github.com/onnx/onnx/issues/6101 | [
"question"
] | MichaelH717 | 7 |
idealo/imagededup | computer-vision | 91 | ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (1, 224, 224) | phasher.encode_image(image_array=th2)
th2 is an image with shape (610, 1280), and encoding it raises the above error. | closed | 2020-02-06T14:46:16Z | 2020-10-19T15:57:35Z | https://github.com/idealo/imagededup/issues/91 | [
"bug",
"next release"
] | vyaslkv | 3 |
huggingface/text-generation-inference | nlp | 2,737 | Local installation: weight backbone.embeddings.weight does not exist (Mamba) | ### System Info
## System Specifications
2024-11-10T21:20:44.880890Z INFO text_generation_launcher: Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.80.1
Commit sha: 97f7a22f0b0f57edc840beaf152e7fd102ed8311
Docker label: N/A
nvidia-smi:
Sun Nov 10 21:20:43 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05 Driver Version: 550.127.05 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA L40S On | 00000000:9E:00.0 Off | 0 |
| N/A 26C P8 32W / 350W | 1MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA L40S On | 00000000:A0:00.0 Off | 0 |
| N/A 25C P8 32W / 350W | 1MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA L40S On | 00000000:A2:00.0 Off | 0 |
| N/A 27C P8 32W / 350W | 1MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA L40S On | 00000000:A4:00.0 Off | 0 |
| N/A 27C P8 31W / 350W | 1MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 4 NVIDIA L40S On | 00000000:C6:00.0 Off | 0 |
| N/A 26C P8 32W / 350W | 1MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 5 NVIDIA L40S On | 00000000:C8:00.0 Off | 0 |
| N/A 26C P8 30W / 350W | 1MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 6 NVIDIA L40S On | 00000000:CA:00.0 Off | 0 |
| N/A 29C P8 33W / 350W | 1MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 7 NVIDIA L40S On | 00000000:CC:00.0 Off | 0 |
| N/A 26C P8 30W / 350W | 1MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
## Reproducing Steps and Traceback
~/Desktop/Code/text-generation-inference/server$ SAFETENSORS_FAST_GPU=1 python text_generation_server/cli.py serve state-spaces/mamba-130m
2024-11-10 21:18:24.957 | INFO | text_generation_server.utils.import_utils:<module>:80 - Detected system cuda
/home/ubuntu/Desktop/Code/text-generation-inference/server/text_generation_server/utils/sgmv.py:18: UserWarning: Could not import SGMV kernel from Punica, falling back to loop.
warnings.warn("Could not import SGMV kernel from Punica, falling back to loop.")
Using prefix caching = True
Using Attention = flashinfer
Could not import Flash Attention enabled models: /opt/conda/envs/tgi/lib/python3.11/site-packages/moe_kernels/_moe_kernels_ops.cpython-311-x86_64-linux-gnu.so: undefined symbol: _ZNK3c105Error4whatEv
/opt/conda/envs/tgi/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py:658: UserWarning: You are using a Backend <class 'text_generation_server.utils.dist.FakeGroup'> as a ProcessGroup. This usage is deprecated since PyTorch 2.0. Please use a public API of PyTorch Distributed instead.
warnings.warn(
Error when initializing model
Traceback (most recent call last):
File "/home/ubuntu/Desktop/Code/text-generation-inference/server/text_generation_server/models/custom_modeling/mamba_modeling.py", line 213, in __init__
self.lm_head = SpeculativeHead.load(config, f"{prefix}.embeddings", weights)
File "/home/ubuntu/Desktop/Code/text-generation-inference/server/text_generation_server/layers/speculative.py", line 40, in load
lm_head = TensorParallelHead.load(config, prefix, weights)
File "/home/ubuntu/Desktop/Code/text-generation-inference/server/text_generation_server/layers/tensor_parallel.py", line 66, in load
weight = weights.get_tensor(f"{prefix}.weight")
File "/home/ubuntu/Desktop/Code/text-generation-inference/server/text_generation_server/utils/weights.py", line 213, in get_tensor
filename, tensor_name = self.get_filename(tensor_name)
File "/home/ubuntu/Desktop/Code/text-generation-inference/server/text_generation_server/utils/weights.py", line 192, in get_filename
raise RuntimeError(f"weight {tensor_name} does not exist")
RuntimeError: weight backbone.embeddings.weight does not exist
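For context, a sketch of the failure mode. This is not TGI code, and the checkpoint contents below are hypothetical: the loader asks the checkpoint for `backbone.embeddings.weight`, while original state-spaces checkpoints are commonly reported to store the embedding under the singular name `backbone.embedding.weight` (the plural form comes from the transformers-style Mamba layout). A name-alias fallback of the following shape would locate the tensor; in practice the simpler workaround is to serve a transformers-format conversion of the model.

```python
# Hypothetical checkpoint contents; real tensors replaced by a placeholder.
checkpoint = {"backbone.embedding.weight": "<embedding tensor>"}

def get_tensor(weights, name, aliases=()):
    """Mimic a weight lookup, but fall back to known alternate names."""
    for candidate in (name, *aliases):
        if candidate in weights:
            return candidate, weights[candidate]
    raise RuntimeError(f"weight {name} does not exist")

found_name, tensor = get_tensor(
    checkpoint,
    "backbone.embeddings.weight",            # what the loader asks for
    aliases=("backbone.embedding.weight",),  # reported state-spaces naming
)
print(found_name)  # backbone.embedding.weight
```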
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
SAFETENSORS_FAST_GPU=1 python text_generation_server/cli.py serve state-spaces/mamba-130m
### Expected behavior
Web server starting | closed | 2024-11-10T21:26:22Z | 2024-11-15T12:16:16Z | https://github.com/huggingface/text-generation-inference/issues/2737 | [] | mokeddembillel | 1 |
mars-project/mars | scikit-learn | 3,042 | [BUG] test_ownership_when_scale_in hang | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
When running test case `DEBUG_OSCAR=1 pytest -v -s mars/deploy/oscar/tests/test_ray.py::test_ownership_when_scale_in`, it hangs occasionally.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version: 3.7.9
2. The version of Mars you use: master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.




6. Minimized code to reproduce the error.
**Expected behavior**
`test_ownership_when_scale_in` should finish in less than 120 seconds
| open | 2022-05-17T12:34:04Z | 2022-05-17T14:05:02Z | https://github.com/mars-project/mars/issues/3042 | [] | chaokunyang | 3 |
MilesCranmer/PySR | scikit-learn | 682 | [BUG]: Hard crash on import from MacOS System Integrity Protection (SIP) | ### What happened?
After pip installing pysr into a virtual environment, making sure my PATH variable includes the bin directory, exporting LD_LIBRARY_PATH as specified in the GitHub README, and even removing the quarantine status for the environment, importing pysr still results in Python quitting.
The Julia version supports aarch64 (Apple silicon).
### Version
Any version of PySR
### Operating System
macOS
### Package Manager
pip
### Interface
Jupyter Notebook
### Relevant log output
```shell
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Process: Python [40891]
Path: /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/Resources/Python.app/Contents/MacOS/Python
Identifier: com.apple.python3
Version: 3.9.6 (3.9.6)
Build Info: python3-141000000000000~1415
Code Type: ARM-64 (Native)
Parent Process: python [40765]
Responsible: pycharm [40727]
User ID: 501
Date/Time: 2024-07-27 21:46:46.1280 -0700
OS Version: macOS 14.5 (23F79)
Report Version: 12
Anonymous UUID: 6F31D97B-2A3B-8D95-FA9E-B1FE5CB86DF1
Sleep/Wake UUID: 404515B4-A7B3-4531-A2F4-F7C17B16EC40
Time Awake Since Boot: 240000 seconds
Time Since Wake: 27250 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_GUARD (SIGKILL)
Exception Codes: GUARD_TYPE_MACH_PORT
Exception Codes: 0x0000000000012740, 0x0000000000000000
Termination Reason: Namespace GUARD, Code 2305843035917854528
```
### Extra Info
I tried all sorts of PySR and Julia versions; this seems to be independent of that. I'd prefer a solution that doesn't involve booting into RecoveryOS and disabling SIP, although this is what I have done in the meantime | closed | 2024-07-28T04:48:53Z | 2024-07-29T07:29:35Z | https://github.com/MilesCranmer/PySR/issues/682 | [
"bug"
] | ev-watson | 10 |
d2l-ai/d2l-en | pytorch | 2,134 | A mistake in seq2seq prediction implementation? | https://github.com/d2l-ai/d2l-en/blob/9e4fbb1e97f4e0b3919563073344368755fe205b/d2l/torch.py#L2996-L3030
**Bugs here:**
``` Python
for _ in range(num_steps):
    Y, dec_state = net.decoder(dec_X, dec_state)
```
As you can see here, dec_state is updated in every loop iteration. But it affects not only the hidden state of the RNN, but also the context vector at each step. (I don't know why Seq2SeqDecoder seems not to be implemented in [d2l/torch.py](https://github.com/d2l-ai/d2l-en/blob/9e4fbb1e97f4e0b3919563073344368755fe205b/d2l/torch.py).) In chapter 9.7.2, it says:
https://github.com/d2l-ai/d2l-en/blob/9e4fbb1e97f4e0b3919563073344368755fe205b/chapter_recurrent-modern/seq2seq.md?plain=1#L383-L411

And according to this graph, the correct approach should be to keep the context vector always constant at each time step.
(There is no problem with this method during training, because the target sentence is complete and the context vector is already broadcast at every time step.)
**My Solution (Not tested):**
Modify Seq2SeqDecoder:
``` Python
class Seq2SeqDecoder(d2l.Decoder):
    """The RNN decoder for sequence to sequence learning."""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.rnn = d2l.GRU(embed_size+num_hiddens, num_hiddens,
                           num_layers, dropout)
        self.dense = nn.LazyLinear(vocab_size)
        self.apply(init_seq2seq)

    def init_state(self, enc_outputs, *args):
        return enc_outputs[1]

    # Add a parameter here:
    def forward(self, X, enc_state, enc_final_layer_output_at_last_step):
        # X shape: (batch_size, num_steps)
        # embs shape: (num_steps, batch_size, embed_size)
        embs = self.embedding(d2l.astype(d2l.transpose(X), d2l.int32))
        # context shape: (batch_size, num_hiddens)
        # Broadcast context to (num_steps, batch_size, num_hiddens)
        context = enc_final_layer_output_at_last_step.repeat(embs.shape[0], 1, 1)
        # Concat at the feature dimension
        embs_and_context = d2l.concat((embs, context), -1)
        outputs, state = self.rnn(embs_and_context, enc_state)
        outputs = d2l.swapaxes(self.dense(outputs), 0, 1)
        # outputs shape: (batch_size, num_steps, vocab_size)
        # state shape: (num_layers, batch_size, num_hiddens)
        return outputs, state
```
Modify [EncoderDecoder](https://github.com/d2l-ai/d2l-en/blob/9e4fbb1e97f4e0b3919563073344368755fe205b/d2l/torch.py#L864-L868):
``` Python
def forward(self, enc_X, dec_X, *args):
    enc_outputs = self.encoder(enc_X, *args)
    dec_state = self.decoder.init_state(enc_outputs, *args)
    # Return decoder output only
    return self.decoder(dec_X, dec_state, enc_outputs[1][-1])[0]
```
Modify predict_seq2seq():
``` Python
for _ in range(num_steps):
    Y, dec_state = net.decoder(dec_X, dec_state, enc_outputs[1][-1])
``` | closed | 2022-05-16T09:12:58Z | 2022-12-14T04:24:45Z | https://github.com/d2l-ai/d2l-en/issues/2134 | [] | zhmou | 2 |
pydata/bottleneck | numpy | 162 | warning: self-comparison always evaluates to true | The build logs on Debian report the following warnings:
```
bottleneck/src/move.c: In function ‘move_rank_int64’:
bottleneck/src/move.c:2009:24: warning: self-comparison always evaluates to true [-Wtautological-compare]
if (aj == aj) {
^~
bottleneck/src/move.c: In function ‘move_rank_int32’:
bottleneck/src/move.c:2070:24: warning: self-comparison always evaluates to true [-Wtautological-compare]
if (aj == aj) {
```
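For context (my own note, not from the bottleneck sources): `aj == aj` is the standard NaN test for floating-point values, since NaN is the only value that compares unequal to itself. The warning fires because the same template-generated code is also compiled for integer types, where the comparison really is always true. The same idiom in Python:

```python
import math

x = float("nan")
# NaN is the only value that is not equal to itself,
# so a self-comparison doubles as a NaN test
nan_check = (x == x)
assert nan_check is False
assert math.isnan(x)

# for integers the same comparison is trivially always True,
# which is exactly what -Wtautological-compare points out
y = 42
assert (y == y) is True
```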
Is it intended? | closed | 2017-02-08T08:44:52Z | 2017-02-09T20:45:49Z | https://github.com/pydata/bottleneck/issues/162 | [] | ghisvail | 1 |
mwaskom/seaborn | matplotlib | 3,077 | Plan to reintroduce fitted models on plots (apart from `PolyFit`) | The functional API (`regplot`) had the option to draw logistic, robust or lowess regressions via statsmodels but the new object API only offers polynomial fit with `PolyFit`.
Is this an intentional choice or will these be added as extensions?
This gap feels odd since the related `Agg` and `Est` stats have been improved in terms of flexibility. | closed | 2022-10-12T09:40:17Z | 2022-10-13T10:44:20Z | https://github.com/mwaskom/seaborn/issues/3077 | [] | Rabeez | 0 |
axnsan12/drf-yasg | rest-api | 699 | swagger_serializer_method doesn't support swagger_schema_fields | I've tried setting the `serializer_or_field` to a field type that defines `swagger_schema_fields` on the Meta object, but it seems to be ignored entirely.
Notably, I think if this were supported it would provide a way to work around things like #684 and #685 by providing the new schema directly. | open | 2021-02-04T10:41:22Z | 2025-03-07T12:13:28Z | https://github.com/axnsan12/drf-yasg/issues/699 | [
"triage"
] | palfrey | 0 |
jina-ai/clip-as-service | pytorch | 43 | Dependency between sentences embeddings within request | I run this code :
```
bc = BertClient()
a = bc.encode(['hey you', 'hey you'])
b = bc.encode(['hey you'])
c = bc.encode(['hey you'])
```
---
If I compare `b` and `c`, they are the same:
`print((b == c).all())`
> True
This is expected behavior
---
**But why are `a[0]` and `a[1]` not the same?**
`print((a[0] == a[1]).all())`
> False
I would expect them to have the same embeddings.
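A side note from me (not part of the original question): before digging into the model it is worth ruling out pure floating-point noise, since embeddings that differ only in the last few bits fail an exact `==` comparison even though they are numerically identical for practical purposes. A tolerance-based check makes that distinction:

```python
import math

a = [0.10000000, 0.20000000]
b = [0.10000000, 0.20000001]  # differs only by tiny numerical noise

assert a != b  # exact comparison reports "not the same"
# but a tolerance-based comparison treats them as equal
assert all(math.isclose(x, y, rel_tol=1e-6) for x, y in zip(a, b))
```

If a tolerance-based comparison (e.g. `np.allclose(a[0], a[1])`) also reports a difference, then the mismatch is real and worth investigating, for example padding or batching effects.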
| closed | 2018-11-23T01:33:53Z | 2018-11-23T10:48:52Z | https://github.com/jina-ai/clip-as-service/issues/43 | [] | astariul | 1 |
iperov/DeepFaceLab | machine-learning | 5,491 | Is there a way to give settings by redirecting input? | I have been trying to automate the merging process by redirecting input for the settings, but I have encountered a loss of settings at the interactive y/n stage.
I get `EOFError: EOF when reading a line` before the interactive option input_bool.
My question is: is there a way to do these jobs in another way, or am I missing something?
update:
I found the reason https://github.com/iperov/DeepFaceLab/blob/9704c5d8f87bb991d8c5b075a9d39760a931ab01/models/ModelBase.py#L179
Is this really necessary? What happens when I remove this?
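For reference, here is my own minimal reproduction (not DeepFaceLab code) of why redirected input dies this way: every interactive prompt consumes one line from stdin, and once the stream is exhausted, `input()` raises exactly this `EOFError`. The redirected file therefore needs one answer per prompt, including every y/n question:

```python
import io
import sys

# simulate `python main.py < answers.txt` where answers.txt holds one line
sys.stdin = io.StringIO("y\n")

first = input()       # first prompt consumes the only available answer
assert first == "y"

try:
    input()           # second prompt finds the stream exhausted
    raised = False
except EOFError:      # -> "EOF when reading a line"
    raised = True
finally:
    sys.stdin = sys.__stdin__

assert raised
```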
| closed | 2022-03-08T02:27:34Z | 2022-03-17T01:50:23Z | https://github.com/iperov/DeepFaceLab/issues/5491 | [] | jinwonkim93 | 0 |
tox-dev/tox | automation | 2,535 | specifying processor architecture does not work reliably | When a specific processor architecture is requested which is not installed, tox implicitly falls back to another installed interpreter of the same version. I.e. if `envlist=py39-x86` is specified and only `python3.9-64` is installed on the system, then instead of printing an error (or skipping the environment if `skip_missing_interpreters = true` was specified) tox will implicitly use `python3.9-64`.
Tested on: Windows 10 (64bit), Python 3.10, tox 3.25.0 as well as 3.27.0 | open | 2022-11-12T22:32:03Z | 2023-06-16T17:11:32Z | https://github.com/tox-dev/tox/issues/2535 | [
"bug:normal",
"help:wanted"
] | mrh1997 | 2 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 947 | `NoneType` object has no attribute `setdefault` | I'm aware a previous ticket has been closed about this (#928), but it seems that the issue is still present with the latest versions of SQLAlchemy + Flask-SQLAlchemy
In order to make it work, I had to revert both libs to the following version:
```
SQLAlchemy==1.3.24
Flask-SQLAlchemy==2.4.4
```
The issue for me was located at `apply_driver_hacks()` on __init__.py at line 937 (on SA I believe)
```
if sa_url.drivername != 'mysql+gaerdbms':
    options.setdefault('pool_size', 10)  # <- this is the faulty one
```
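The traceback in the title reduces to exactly this (my own minimal reproduction): `options` arriving as `None` where a dict was expected:

```python
options = None  # what apply_driver_hacks() apparently received

try:
    options.setdefault("pool_size", 10)
    message = None
except AttributeError as exc:
    message = str(exc)

# this is the error from the issue title
assert "'NoneType' object has no attribute 'setdefault'" in message
```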
I'm posting here for both your information and to help out others if they need it: reverting to the above-mentioned versions fixes the problem.
Good luck! | closed | 2021-03-31T21:58:20Z | 2021-04-16T00:12:37Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/947 | [] | cnicodeme | 3 |
allenai/allennlp | nlp | 5,017 | A guide with the updated API | **Is your feature request related to a problem? Please describe.**
<!--- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
- As someone who is trying to get to know the library, I was looking at the documentation, which links to the guide [guide.allennlp.org](https://guide.allennlp.org/). It utilizes some APIs that seem to be deprecated in the later versions.
- It is sometimes a difficult task to try to look at the code given in `Setup` and `Source`.
**Describe the solution you'd like**
<!--- A clear and concise description of what you want to happen. -->
- Can a new guide be created which has the new API?
- Can a repo/gist/colab for the guide be created?
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
We can try to use the version used in the guide.
**Additional context**
Add any other context or screenshots about the feature request here.
Thanks! | closed | 2021-02-24T04:31:17Z | 2021-02-26T16:42:24Z | https://github.com/allenai/allennlp/issues/5017 | [
"Feature request"
] | ekdnam | 7 |
ansible/ansible | python | 84,850 | Ansible silently handles any exceptions raised in inventory plugin | ### Summary
We have a custom inventory plugin (it fetches the list of hosts & their data via an external service API). One would expect that when AnsibleParserError is raised inside the parse() method, it would be shown on stderr.
But looking at ansible-core, it silently handles (ANY!) exception and execution continues. I stumbled into this while debugging why the playbook was not executed on all our hosts. The plugin was failing and we were only getting a partial inventory, without any warning!
Related code is in ansible/inventory/manager.py
```
try:
    # FIXME in case plugin fails 1/2 way we have partial inventory
    plugin.parse(self._inventory, self._loader, source, cache=cache)
    try:
        plugin.update_cache_if_changed()
    except AttributeError:
        # some plugins might not implement caching
        pass
    parsed = True
    display.vvv('Parsed %s inventory source with %s plugin' % (source, plugin_name))
    break
except AnsibleParserError as e:
    display.debug('%s was not parsable by %s' % (source, plugin_name))
    tb = ''.join(traceback.format_tb(sys.exc_info()[2]))
    failures.append({'src': source, 'plugin': plugin_name, 'exc': e, 'tb': tb})
except Exception as e:
    display.debug('%s failed while attempting to parse %s' % (plugin_name, source))
    tb = ''.join(traceback.format_tb(sys.exc_info()[2]))
    failures.append({'src': source, 'plugin': plugin_name, 'exc': AnsibleError(e), 'tb': tb})
```
In case of a parser error, we would definitely want to stop Ansible and display a warning. Is there any other approach I am not aware of to do this?
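The collect-and-continue behaviour can be boiled down to this plain-Python sketch (hypothetical helper names, not the real Ansible API), which shows why a failing plugin yields a partial inventory instead of an error:

```python
def parse_all(sources, parse_one):
    """Mimics InventoryManager.parse_source: record failures, keep going."""
    inventory, failures = [], []
    for src in sources:
        try:
            inventory.append(parse_one(src))
        except Exception as exc:  # ANY exception is only collected
            failures.append({"src": src, "exc": exc})
    return inventory, failures

def fake_plugin(src):
    if src == "broken.yml":
        raise ValueError("plugin blew up")
    return {"hosts": [src]}

inventory, failures = parse_all(["a.yml", "broken.yml", "b.yml"], fake_plugin)
assert len(inventory) == 2   # partial inventory; the run silently continues
assert len(failures) == 1    # the error only lives in the failures list
```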
### Issue Type
Bug Report
### Component Name
inventory/manager.py
### Ansible Version
```console
$ ansible --version
2.15.8
```
### Configuration
```console
default/does not matter
```
### OS / Environment
any
### Steps to Reproduce
Trigger any exception in parse() method of official or custom inventory plugin
### Expected Results
Stop ansible execution and display error.
### Actual Results
```console
Silently ignored and continues to process next inventory and executes playbook.
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | open | 2025-03-18T11:20:18Z | 2025-03-18T18:45:49Z | https://github.com/ansible/ansible/issues/84850 | [
"bug",
"data_tagging"
] | eleksis | 7 |
geex-arts/django-jet | django | 97 | Change login.html name 'ADMIN SITE' | <body class=" login">
```
<div class="login-title">
<span class="bright">Admin</span> Site
</div>
<div class="login-container" id="content-main">
<div class="login-container-header">
Iniciar sesión
</div>
<div class="login-container-content">
<form action="/admin/login/?next=/admin/" method="post" class="login-form" id="login-form"><input type='hidden' name='csrfmiddlewaretoken' value='zfvLi5tbuH1NsXovHtVDbtvkje57t8la' />
```
| closed | 2016-08-09T23:39:58Z | 2016-08-19T09:00:37Z | https://github.com/geex-arts/django-jet/issues/97 | [] | sagoyanfisic | 5 |
minimaxir/textgenrnn | tensorflow | 157 | Migrate to TF 2.0/tf.keras | I had made textgenrnn with external Keras since native TF was missing features. Now that there is parity, I am OK with merging it back into native TF with TF 2.0 support. textgenrnn does not use much custom Keras code so it should be a relatively simple change; the concern is not breaking old models, which may be possible due to the `SavedModel` change.
TF 2.1 also has TPU/mixed precision support which will be very helpful for training performance. | closed | 2019-12-01T18:52:33Z | 2020-02-03T03:32:47Z | https://github.com/minimaxir/textgenrnn/issues/157 | [
"enhancement"
] | minimaxir | 5 |
InstaPy/InstaPy | automation | 5,797 | TypeError: document.getElementsByClassName(...)[0] is undefined | Hello everyone,
I have a problem that started today. Yesterday I was using InstaPy without any errors, but today, after it did some likes, it gives me the following message: "TypeError: document.getElementsByClassName(...)[0] is undefined". What should I do? Please, I am waiting for your support.
| closed | 2020-09-23T07:59:37Z | 2020-11-10T12:40:41Z | https://github.com/InstaPy/InstaPy/issues/5797 | [
"wontfix"
] | blackchayenne | 3 |
plotly/dash-core-components | dash | 273 | Nested Tabs | I have come across a behavior of the tabs component I can’t quite make sense of. A minimal example code is provided below.
```
import dash
import dash_core_components as dcc
import dash_html_components as html
app = dash.Dash()
app.layout = html.Div([
dcc.Tabs(
id='tabs-1',
value='tab-1',
children=[
dcc.Tab(
label='Tab 1',
value='tab-1',
children=[
html.Div("Content 1"),
dcc.Tabs(
id='tabs-2',
value='tab-1-1',
children=[
dcc.Tab(label='Tab 1-1', value='tab-1-1', children=["Content 1-1"]),
dcc.Tab(label='Tab 1-2', value='tab-1-2', children=["Content 1-2"])
]
)
]
),
dcc.Tab(
label='Tab 2',
value='tab-2',
children=[
html.Div("Content 2"),
dcc.Tabs(
id='tabs-3',
value='tab-2-1',
children=[
dcc.Tab(label='Tab 2-1', value='tab-2-1', children=["Content 2-1"]),
dcc.Tab(label='Tab 2-2', value='tab-2-2', children=["Content 2-2"])
]
)
]
),
dcc.Tab(label='Tab 3', value='tab-3', children=["Content 3"])
]
)
])
app.css.config.serve_locally = True
app.scripts.config.serve_locally = True
if __name__ == '__main__':
app.run_server(debug=True)
```
The main tab component has three tabs, two of which contain tabs components of their own and some other content. When starting the app, the first tab works as expected. When switching to the second tab, the content of the second tab is displayed, but the tabs component is not updated. The behavior can be “reset” by switching to the third tab. The content of the tab which is selected next (either tab 1 or tab 2) is displayed correctly, but then the behavior returns when switching between tabs 1 or 2. To mitigate this, one can either remove ONE of the contents `html.Div("Content 1")` or `html.Div("Content 2")`, or wrap either one of the sub-level tabs components in an additional Div. In these cases, it only works if the structure in tab 1 and tab 2 is not “parallel”, meaning that if both tab contents are removed, or both Tabs components are wrapped in a div, the behavior returns. Due to the dependence on the html structure, I was wondering if this has anything to do with the handling of callbacks in React, but since I have literally never worked with that, I am just guessing…
However, even when the example works, the selection state of the second level tabs is not retained, as I would have expected. I seemed to remember a comment somewhere, that states, the Tabs component controls visibility, but renders (right word here?) all contents at once.
Any insights on this are greatly appreciated. | closed | 2018-08-22T13:16:04Z | 2021-11-14T12:40:10Z | https://github.com/plotly/dash-core-components/issues/273 | [] | roeap | 6 |
polakowo/vectorbt | data-visualization | 401 | How to change column name generated by indicator.run()? | 
| closed | 2022-03-02T02:35:30Z | 2022-03-11T08:20:56Z | https://github.com/polakowo/vectorbt/issues/401 | [] | GF-Huang | 2 |
aiortc/aiortc | asyncio | 129 | RTX packet with empty payload causes DTLS to shutdown | Hi jlaine:
I opened a new issue for the dtls close issue in 2 minutes.
I never saw this issue on 0.9.13, and it happens on 0.9.18.
I added debug info in dtlstransport.py at line 480.
The error info is "FAILED unpack requires a buffer of 2 bytes"
Can you give some comments?
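For what it's worth, the message is consistent with `struct.unpack` being handed the empty payload where a 2-byte field (presumably the RTX original sequence number) was expected. A hypothetical reconstruction, not the actual aiortc code:

```python
import struct

payload = b""  # the RTX packet below arrived with a 0-byte payload

try:
    struct.unpack("!H", payload[:2])  # try to read a 2-byte field
    error = None
except struct.error as exc:
    error = str(exc)

# matches the "unpack requires a buffer of 2 bytes" in the log
assert error is not None and "2 bytes" in error
```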
I pasted the log here.
"DEBUG:rtp:receiver(video) < RtpPacket(seq=45129, ts=845146785, marker=1, payload=97, 229 bytes)
DEBUG:rtp:sender(video) > RtpPacket(seq=54920, ts=4185232608, marker=0, payload=97, 1300 bytes)
DEBUG:rtp:sender(video) > RtpPacket(seq=54921, ts=4185232608, marker=1, payload=97, 1125 bytes)
DEBUG:rtp:receiver(video) < RtcpSrPacket(ssrc=2859732783, sender_info=RtcpSenderInfo(ntp_timestamp=16136650649041527701, rtp_timestamp=845150655, packet_count=78213, octet_count=86404907), reports=[])
DEBUG:rtp:receiver(video) < RtpPacket(seq=45130, ts=845149665, marker=1, payload=97, 353 bytes)
DEBUG:rtp:receiver(video) > RtcpPsfbPacket(fmt=15, ssrc=3015268611, media_ssrc=0, fci=b'REMB\x02\x00\xaa_\xaat\x0f/\x8b\xdd;i')
DEBUG:rtp:sender(video) > RtpPacket(seq=54922, ts=4185235608, marker=0, payload=97, 1300 bytes)
DEBUG:rtp:sender(video) > RtpPacket(seq=54923, ts=4185235608, marker=1, payload=97, 327 bytes)
DEBUG:rtp:sender(video) > RtpPacket(seq=54924, ts=4185238608, marker=0, payload=97, 1300 bytes)
DEBUG:rtp:sender(video) > RtpPacket(seq=54925, ts=4185238608, marker=1, payload=97, 1006 bytes)
DEBUG:rtp:receiver(video) < RtpPacket(seq=45131, ts=845152905, marker=1, payload=97, 378 bytes)
DEBUG:rtp:receiver(video) > RtcpPsfbPacket(fmt=15, ssrc=3015268611, media_ssrc=0, fci=b'REMB\x02\x00\xa2u\xaat\x0f/\x8b\xdd;i')
DEBUG:rtp:sender(video) > RtpPacket(seq=54926, ts=4185241608, marker=0, payload=97, 1300 bytes)
DEBUG:rtp:sender(video) > RtpPacket(seq=54927, ts=4185241608, marker=1, payload=97, 354 bytes)
DEBUG:rtp:receiver(video) < RtpPacket(seq=45132, ts=845158575, marker=0, payload=97, 1034 bytes)
DEBUG:rtp:receiver(video) > RtcpPsfbPacket(fmt=15, ssrc=3015268611, media_ssrc=0, fci=b'REMB\x02\x00\xb6\x0f\xaat\x0f/\x8b\xdd;i')
DEBUG:rtp:sender(video) > RtpPacket(seq=54928, ts=4185244608, marker=0, payload=97, 1300 bytes)
DEBUG:rtp:sender(video) > RtpPacket(seq=54929, ts=4185244608, marker=1, payload=97, 1169 bytes)
DEBUG:rtp:receiver(video) < RtpPacket(seq=45133, ts=845158575, marker=0, payload=97, 1035 bytes)
DEBUG:rtp:receiver(video) < RtpPacket(seq=45134, ts=845158575, marker=1, payload=97, 1035 bytes)
DEBUG:rtp:receiver(video) < RtpPacket(seq=18620, ts=844931325, marker=1, payload=98, 404 bytes)
DEBUG:rtp:receiver(video) < RtpPacket(seq=18621, ts=845160285, marker=0, payload=98, 0 bytes)
FAILED unpack requires a buffer of 2 bytes
DEBUG:dtls:server - State.CONNECTED -> State.CLOSED
DEBUG:rtp:sender(video) > RtpPacket(seq=54930, ts=4185247608, marker=0, payload=97, 1300 bytes)
DEBUG:rtp:sender(video) - RTP finished
DEBUG:rtp:sender(video) > RtcpSrPacket(ssrc=3015268611, sender_info=RtcpSenderInfo(ntp_timestamp=16136650775727046999, rtp_timestamp=4185244608, packet_count=15038, octet_count=14827840), reports=[])
DEBUG:rtp:sender(video) > RtcpSdesPacket(chunks=[RtcpSourceInfo(ssrc=3015268611, items=[(1, b'{4359bd3b-e51f-4159-b5f0-81e0189c8501}')])])
DEBUG:ice:Connection(0) protocol(1) > ('192.168.12.46', 39400) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'L8U]\xd1\x05cT\x95\x9f\xb7i')
DEBUG:ice:Connection(0) protocol(1) < ('192.168.12.46', 39400) Message(message_method=Method.BINDING, message_class=Class.RESPONSE, transaction_id=b'L8U]\xd1\x05cT\x95\x9f\xb7i')
DEBUG:rtp:receiver(video) > RtcpRrPacket(ssrc=3015268611, reports=[RtcpReceiverInfo(ssrc=2859732783, fraction_lost=0, packets_lost=0, highest_sequence=45134, jitter=1183, lsr=3863283890, dlsr=59173), RtcpReceiverInfo(ssrc=2346531689, fraction_lost=0, packets_lost=0, highest_sequence=18621, jitter=29019, lsr=0, dlsr=0)])
DEBUG:rtp:sender(video) > RtcpSrPacket(ssrc=3015268611, sender_info=RtcpSenderInfo(ntp_timestamp=16136650775727046999, rtp_timestamp=4185244608, packet_count=15038, octet_count=14827840), reports=[])
" | closed | 2019-01-22T00:57:29Z | 2019-01-23T09:36:37Z | https://github.com/aiortc/aiortc/issues/129 | [] | zhiweny1122 | 7 |
pytest-dev/pytest-django | pytest | 581 | Add pytest to requirements.txt... | It would be a kindness to projects using pytest-django if pytest (which is a direct dependency for pytest-django) could be added to requirements.txt. Otherwise, pip / pipenv can't infer the dependency graph, and projects using pytest-django must themselves add pytest *before* pytest-django in their own dependencies. Thanks! Details in this PR: pytest-dev/pytest-django#579 | closed | 2018-02-11T02:38:55Z | 2018-04-14T13:45:25Z | https://github.com/pytest-dev/pytest-django/issues/581 | [] | ptressel | 2 |
jupyterlab/jupyter-ai | jupyter | 521 | Allow additional properties in AgentChatMessage | ### Problem
`AgentChatMessage` represents replies of LLM agents to users. Currently the model is limited in its ability to handle diverse types of responses beyond plain text, for example error messages (see https://github.com/jupyterlab/jupyter-ai/pull/513/commits/b7ef4e32bf30932129444770b5872e36a8c19b35 in #513) or multi-modal responses that might include images or video.
https://github.com/jupyterlab/jupyter-ai/blob/976f8b9303d198fb339f7b594d29e4cd879618a4/packages/jupyter-ai/jupyter_ai/models.py#L32-L38
### Proposed Solution
Potential options:
1. Option suggested by @3coins. Make `AgentChatMessage.body` a JSON object instead of a string. Expected format of the JSON data can be defined with Pydantic classes or JSON schemas (I prefer Pydantic classes).
```python
body: {
"text": "Some text message",
"error": {
"type": "APIAuthenticationError",
"message": "There was an issue with the API authentication."
},
"image": "image_url_if_applicable",
...
}
```
2. Add one additional field `options` that would be a JSON object and would contain all (expanding) additional options.
```python
class AgentChatMessage(BaseModel):
...
options: Dict[str, Any]
```
3. Add additional properties to `AgentChatMessage` 1-by-1 as was attempted in [this commit in #513 ](https://github.com/jupyterlab/jupyter-ai/commit/b7ef4e32bf30932129444770b5872e36a8c19b35#diff-d5e6ebdae0734547f381952e7199d8336f57d03c270a96f7ccb0b82dd163ff36R32-R39). Pros: straightforward approach. Cons: addition of further options would bloat the model.
```python
class AgentChatMessage(BaseModel):
...
error_type: str
another_option: ...
...
```
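For comparison, here is a dependency-free sketch of option 2, with standard-library dataclasses standing in for Pydantic purely to illustrate the shape:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class AgentChatMessage:
    body: str
    # expanding bag of extra options; schema left open on purpose
    options: Dict[str, Any] = field(default_factory=dict)

msg = AgentChatMessage(
    body="Some text message",
    options={"error": {"type": "APIAuthenticationError"}},
)
assert msg.options["error"]["type"] == "APIAuthenticationError"
assert AgentChatMessage(body="plain text").options == {}
```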
| open | 2023-12-18T18:32:32Z | 2023-12-18T22:51:14Z | https://github.com/jupyterlab/jupyter-ai/issues/521 | [
"enhancement"
] | andrii-i | 0 |
mwaskom/seaborn | matplotlib | 3,794 | Question about abstraction | https://github.com/mwaskom/seaborn/blob/b4e5f8d261d6d5524a00b7dd35e00a40e4855872/seaborn/distributions.py#L1449
Is there an architectural reason you don't expose the stats data? (i.e. something like `ax.p = p`)
Most academic publications want to see the numbers behind the plots. | closed | 2024-11-30T23:48:10Z | 2024-12-01T19:42:42Z | https://github.com/mwaskom/seaborn/issues/3794 | [] | refack | 1 |
OFA-Sys/Chinese-CLIP | nlp | 301 | Error when finetuning, and the traceback appears truncated so the failing thread cannot be located | (torch) ppop@DESKTOP-NMJBJQC:~/Chinese-CLIP$ sudo bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh ~/Chinese-CLIP/datapath
Loading vision model config from cn_clip/clip/model_configs/ViT-L-14.json
Loading text model config from cn_clip/clip/model_configs/RoBERTa-wwm-ext-base-chinese.json
2024-04-18,22:23:46 | INFO | Rank 0 | train LMDB file contains 35000 images and 105000 pairs.
2024-04-18,22:23:46 | INFO | Rank 0 | val LMDB file contains 7500 images and 22500 pairs.
2024-04-18,22:23:46 | INFO | Rank 0 | Params:
2024-04-18,22:23:46 | INFO | Rank 0 | accum_freq: 1
2024-04-18,22:23:46 | INFO | Rank 0 | aggregate: True
2024-04-18,22:23:46 | INFO | Rank 0 | batch_size: 128
2024-04-18,22:23:46 | INFO | Rank 0 | bert_weight_path: None
2024-04-18,22:23:46 | INFO | Rank 0 | beta1: 0.9
2024-04-18,22:23:46 | INFO | Rank 0 | beta2: 0.98
2024-04-18,22:23:46 | INFO | Rank 0 | checkpoint_path: /home/ppop/Chinese-CLIP/datapath/experiments/muge_finetune_vit-H-14_roberta-base_bs128_1gpu/checkpoints
2024-04-18,22:23:46 | INFO | Rank 0 | clip_weight_path: None
2024-04-18,22:23:46 | INFO | Rank 0 | context_length: 52
2024-04-18,22:23:46 | INFO | Rank 0 | debug: False
2024-04-18,22:23:46 | INFO | Rank 0 | device: cuda:0
2024-04-18,22:23:46 | INFO | Rank 0 | distllation: False
2024-04-18,22:23:46 | INFO | Rank 0 | eps: 1e-06
2024-04-18,22:23:46 | INFO | Rank 0 | freeze_vision: False
2024-04-18,22:23:46 | INFO | Rank 0 | gather_with_grad: False
2024-04-18,22:23:46 | INFO | Rank 0 | grad_checkpointing: False
2024-04-18,22:23:46 | INFO | Rank 0 | kd_loss_weight: 0.5
2024-04-18,22:23:46 | INFO | Rank 0 | local_device_rank: 0
2024-04-18,22:23:46 | INFO | Rank 0 | log_interval: 1
2024-04-18,22:23:46 | INFO | Rank 0 | log_level: 20
2024-04-18,22:23:46 | INFO | Rank 0 | log_path: /home/ppop/Chinese-CLIP/datapath/experiments/muge_finetune_vit-H-14_roberta-base_bs128_1gpu/out_2024-04-18-14-23-43.log
2024-04-18,22:23:46 | INFO | Rank 0 | logs: /home/ppop/Chinese-CLIP/datapath/experiments/
2024-04-18,22:23:46 | INFO | Rank 0 | lr: 5e-05
2024-04-18,22:23:46 | INFO | Rank 0 | mask_ratio: 0
2024-04-18,22:23:46 | INFO | Rank 0 | max_epochs: 3
2024-04-18,22:23:46 | INFO | Rank 0 | max_steps: 2463
2024-04-18,22:23:46 | INFO | Rank 0 | name: muge_finetune_vit-H-14_roberta-base_bs128_1gpu
2024-04-18,22:23:46 | INFO | Rank 0 | num_workers: 4
2024-04-18,22:23:46 | INFO | Rank 0 | precision: amp
2024-04-18,22:23:46 | INFO | Rank 0 | rank: 0
2024-04-18,22:23:46 | INFO | Rank 0 | report_training_batch_acc: True
2024-04-18,22:23:46 | INFO | Rank 0 | reset_data_offset: False
2024-04-18,22:23:46 | INFO | Rank 0 | reset_optimizer: False
2024-04-18,22:23:46 | INFO | Rank 0 | resume: /home/ppop/Chinese-CLIP/datapath/pretrained_weights/clip_cn_vit-l-14.pt
2024-04-18,22:23:46 | INFO | Rank 0 | save_epoch_frequency: 1
2024-04-18,22:23:46 | INFO | Rank 0 | save_step_frequency: 999999
2024-04-18,22:23:46 | INFO | Rank 0 | seed: 123
2024-04-18,22:23:46 | INFO | Rank 0 | skip_aggregate: False
2024-04-18,22:23:46 | INFO | Rank 0 | skip_scheduler: False
2024-04-18,22:23:46 | INFO | Rank 0 | teacher_model_name: None
2024-04-18,22:23:46 | INFO | Rank 0 | text_model: RoBERTa-wwm-ext-base-chinese
2024-04-18,22:23:46 | INFO | Rank 0 | train_data: /home/ppop/Chinese-CLIP/datapath/datasets/yyut/lmdb/train
2024-04-18,22:23:46 | INFO | Rank 0 | use_augment: False
2024-04-18,22:23:46 | INFO | Rank 0 | use_bn_sync: False
2024-04-18,22:23:46 | INFO | Rank 0 | use_flash_attention: False
2024-04-18,22:23:46 | INFO | Rank 0 | val_data: /home/ppop/Chinese-CLIP/datapath/datasets/yyut/lmdb/valid
2024-04-18,22:23:46 | INFO | Rank 0 | valid_batch_size: 128
2024-04-18,22:23:46 | INFO | Rank 0 | valid_epoch_interval: 1
2024-04-18,22:23:46 | INFO | Rank 0 | valid_num_workers: 1
2024-04-18,22:23:46 | INFO | Rank 0 | valid_step_interval: 150
2024-04-18,22:23:46 | INFO | Rank 0 | vision_model: ViT-L-14
2024-04-18,22:23:46 | INFO | Rank 0 | warmup: 100
2024-04-18,22:23:46 | INFO | Rank 0 | wd: 0.001
2024-04-18,22:23:46 | INFO | Rank 0 | world_size: 1
2024-04-18,22:23:46 | INFO | Rank 0 | Use GPU: 0 for training
2024-04-18,22:23:46 | INFO | Rank 0 | => begin to load checkpoint '/home/ppop/Chinese-CLIP/datapath/pretrained_weights/clip_cn_vit-l-14.pt'
2024-04-18,22:23:47 | INFO | Rank 0 | train LMDB file contains 35000 images and 105000 pairs.
2024-04-18,22:23:47 | INFO | Rank 0 | val LMDB file contains 7500 images and 22500 pairs.
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/ppop/miniconda3/envs/torch/lib/python3.8/threading.py", line 932, in _bootstrap_inner | open | 2024-04-18T14:26:33Z | 2024-05-14T06:05:03Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/301 | [] | wrtppp | 3 |