slackapi/python-slack-sdk | asyncio | 1,612 | websockets adapter fails to check the current session with error "'ClientConnection' object has no attribute 'closed'" | (Filling out the following details about bugs will help us solve your issue sooner.)
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The Slack SDK version
```
slack_bolt==1.21.3
slack_sdk==3.33.5
```
#### Python runtime version
```
Python 3.11.11
```
#### OS info
```
ProductName: macOS
ProductVersion: 15.1.1
BuildVersion: 24B91
Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
```
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1. Install async related packages via pip: `pip install slack_bolt==1.21.3 aiohttp==3.11.10 websockets==14.1`
2. Set `SLACK_BOT_TOKEN` and `SLACK_APP_TOKEN` env vars
3. Run the following app script:
```python
import os
import asyncio
from slack_bolt.async_app import AsyncApp
from slack_bolt.adapter.socket_mode.websockets import AsyncSocketModeHandler
# Initializes your app with your bot token and socket mode handler
app = AsyncApp(token=os.environ.get("SLACK_BOT_TOKEN"))
# Start your app
if __name__ == "__main__":
asyncio.run(AsyncSocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start_async())
```
4. Wait for 10 seconds
### Expected result:
See `⚡️ Bolt app is running!` on the terminal, then nothing else.
### Actual result:
See `⚡️ Bolt app is running!` on the terminal, then every 10 seconds see the following:
```
Failed to check the current session or reconnect to the server (error: AttributeError, message: 'ClientConnection' object has no attribute 'closed', session: s_123456789)
```
This happens because, at the following line, `session.closed` is no longer defined for `websockets>=14`; `session.state` should be used instead to check whether the session is closed.
https://github.com/slackapi/python-slack-sdk/blob/a7223d9852de8a4ef9156552d7c50a92ec92669e/slack_sdk/socket_mode/websockets/__init__.py#L120
Related websockets documentation:
* https://websockets.readthedocs.io/en/stable/reference/asyncio/client.html#websockets.asyncio.client.ClientConnection.state
* https://websockets.readthedocs.io/en/stable/reference/sansio/common.html#websockets.protocol.State
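A version-agnostic check could look like the sketch below. The attribute names come from the error message above and the linked docs; this is an illustration of the idea, not the SDK's actual patch:

```python
def is_session_closed(session) -> bool:
    # websockets < 14: legacy protocol objects expose a `closed` attribute.
    closed = getattr(session, "closed", None)
    if closed is not None:
        return bool(closed)
    # websockets >= 14: ClientConnection exposes `state` instead
    # (websockets.protocol.State.CLOSED when the session is closed).
    state = getattr(session, "state", None)
    return state is not None and state.name == "CLOSED"
```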
### Requirements
For general questions/issues about Slack API platform or its server-side, could you submit questions at https://my.slack.com/help/requests/new instead. :bow:
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-12-11T18:31:53Z | 2024-12-13T00:03:05Z | https://github.com/slackapi/python-slack-sdk/issues/1612 | [
"bug",
"Version: 3x",
"socket-mode",
"area:async"
] | ericli-splunk | 2 |
huggingface/datasets | pandas | 7,337 | One or several metadata.jsonl were found, but not in the same directory or in a parent directory of | ### Describe the bug
ImageFolder with metadata.jsonl fails. I downloaded liuhaotian/LLaVA-CC3M-Pretrain-595K locally from Hugging Face. Following the tutorial at https://huggingface.co/docs/datasets/image_dataset#image-captioning, I put only images.zip and a metadata.jsonl containing the annotations in the same folder. However, loading reports the error: One or several metadata.jsonl were found, but not in the same directory or in a parent directory of.
The data in my jsonl file is as follows:
> {"id": "GCC_train_002448550", "file_name": "GCC_train_002448550.jpg", "conversations": [{"from": "human", "value": "<image>\nProvide a brief description of the given image."}, {"from": "gpt", "value": "a view of a city , where the flyover was proposed to reduce the increasing traffic on thursday ."}]}
### Steps to reproduce the bug
```python
from datasets import load_dataset

image = load_dataset("imagefolder", data_dir="data/opensource_data")
```
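Per the error message, the imagefolder builder only picks up `metadata.jsonl` when it sits in the same directory as (or a parent directory of) the image files, so images still packed inside `images.zip` will not be matched. A small sanity-check sketch (the `data/opensource_data` path is the one from this report):

```python
import json
from pathlib import Path

def find_missing_images(data_dir: str) -> list[str]:
    """List file_name entries from metadata.jsonl that have no matching
    image file next to the metadata (zipped images will show up here)."""
    root = Path(data_dir)
    lines = (root / "metadata.jsonl").read_text().splitlines()
    rows = [json.loads(line) for line in lines if line.strip()]
    return [r["file_name"] for r in rows if not (root / r["file_name"]).exists()]
```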
### Expected behavior
success
### Environment info
datasets==3.2.0 | open | 2024-12-17T12:58:43Z | 2025-01-03T15:28:13Z | https://github.com/huggingface/datasets/issues/7337 | [] | mst272 | 1 |
aiortc/aiortc | asyncio | 1,190 | Consent to Send Failure using examples on Edge (Chrome) | I'm consistently getting Consent to Send failures after the video has been streaming over WebRTC for 10-20 seconds on Edge (Chrome). Regular Chrome and Safari (iPhone) work great.
### Steps to reproduce
1. Download or clone the examples directory.
2. Utilize the webcam example.
3. Execute `pip install aiortc aiohttp`.
4. Download a sample MP4 file, such as Big Buck Bunny.
5. Start the webserver using `python3 webcam.py --play-from ./big_buck_bunny_480p_30mb.mp4`.
6. Open a web browser and navigate to localhost:8080.
7. Choose to select or not select the Use Stun checkbox (either option results in an error).
8. After about 10 seconds, the following errors occur, terminating the connection.
```
INFO:aioice.ice:Connection(3) Check CandidatePair(('192.168.1.11', 46058) -> ('192.168.1.220', 57644)) State.FROZEN -> State.WAITING
INFO:aioice.ice:Connection(3) Check CandidatePair(('192.168.122.1', 40392) -> ('192.168.1.220', 57644)) State.FROZEN -> State.WAITING
INFO:aioice.ice:Connection(3) Check CandidatePair(('172.17.0.1', 59447) -> ('192.168.1.220', 57644)) State.FROZEN -> State.WAITING
INFO:aioice.ice:Connection(3) Check CandidatePair(('172.18.0.1', 54686) -> ('192.168.1.220', 57644)) State.FROZEN -> State.WAITING
INFO:aioice.ice:Connection(3) Check CandidatePair(('192.168.1.11', 46058) -> ('38.78.242.228', 57644)) State.FROZEN -> State.WAITING
Connection state is connecting
INFO:aioice.ice:Connection(3) Check CandidatePair(('192.168.1.11', 46058) -> ('192.168.1.220', 57644)) State.WAITING -> State.IN_PROGRESS
INFO:aioice.ice:Connection(3) Check CandidatePair(('192.168.122.1', 40392) -> ('192.168.1.220', 57644)) State.WAITING -> State.IN_PROGRESS
INFO:aioice.ice:Connection(3) Check CandidatePair(('172.17.0.1', 59447) -> ('192.168.1.220', 57644)) State.WAITING -> State.IN_PROGRESS
INFO:aioice.ice:Connection(3) Check CandidatePair(('172.18.0.1', 54686) -> ('192.168.1.220', 57644)) State.WAITING -> State.IN_PROGRESS
INFO:aioice.ice:Connection(3) Check CandidatePair(('192.168.1.11', 46058) -> ('38.78.242.228', 57644)) State.WAITING -> State.IN_PROGRESS
INFO:aioice.ice:Connection(3) Check CandidatePair(('192.168.122.1', 40392) -> ('38.78.242.228', 57644)) State.FROZEN -> State.IN_PROGRESS
INFO:aioice.ice:Connection(3) Check CandidatePair(('172.17.0.1', 59447) -> ('38.78.242.228', 57644)) State.FROZEN -> State.IN_PROGRESS
INFO:aioice.ice:Connection(3) Check CandidatePair(('172.18.0.1', 54686) -> ('38.78.242.228', 57644)) State.FROZEN -> State.IN_PROGRESS
INFO:aioice.ice:Connection(3) Check CandidatePair(('192.168.1.11', 46058) -> ('192.168.1.220', 57644)) State.IN_PROGRESS -> State.SUCCEEDED
INFO:aioice.ice:Connection(3) ICE completed
Connection state is connected
INFO:aioice.ice:Connection(3) Consent to send expired
Connection state is closed
```
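For context, WebRTC peers keep the path alive with periodic STUN consent checks (RFC 7675); when no fresh response arrives within the expiry window, the connection is torn down, which is what the `Consent to send expired` line reflects. A minimal sketch of that bookkeeping (the interval and expiry values here are illustrative, not aioice's actual constants):

```python
import time

CONSENT_INTERVAL = 5.0   # seconds between STUN consent requests (illustrative)
CONSENT_EXPIRY = 30.0    # tear down if no response for this long (illustrative)

class ConsentTracker:
    """Track the freshness of STUN consent responses on a candidate pair."""

    def __init__(self):
        self.last_response = time.monotonic()

    def on_stun_response(self):
        # Each valid STUN binding response renews consent.
        self.last_response = time.monotonic()

    def expired(self) -> bool:
        # No renewal within the window means consent has lapsed.
        return time.monotonic() - self.last_response > CONSENT_EXPIRY
```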
I've tried this on different computers and different Linux environments (WSL and native Ubuntu).
I have noted that this webcam example with file playback does not work in Firefox, with or without STUN, but that was already noted in the README.md for the example.
I'm mostly reporting this here for anyone else who has this issue with Edge (Chrome based) browsers. | closed | 2024-11-14T02:28:55Z | 2025-02-01T09:34:21Z | https://github.com/aiortc/aiortc/issues/1190 | [] | JoshuaHintze | 0 |
saleor/saleor | graphql | 17,280 | `mime-support` is not available now | `mime-support` is not available now
https://github.com/saleor/saleor/blob/438595033c608219a88583ae91d50f2e2415e558/Dockerfile#L34C3-L34C15
Please check https://wiki.debian.org/mime-support
> The mime-support package ([final version number: 3.66](http://snapshot.debian.org/package/mime-support/3.66/)) was split into [media-types](https://packages.debian.org/media-types) and [mailcap](https://packages.debian.org/mailcap) during the Bullseye release cycle. | open | 2025-01-21T17:27:47Z | 2025-01-21T17:27:47Z | https://github.com/saleor/saleor/issues/17280 | [] | http600 | 0 |
erdewit/ib_insync | asyncio | 92 | Assertation error on connection | I run ib_insync on an anaconda installation (python 3.6) on Debian.
```
from ib_insync import *
ib = IB()
ib.connect('ip_here', portno, clientId=12)
```
I then get the following error:
```
--------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-10-88bbb3c3d188> in <module>()
1 ib = IB()
----> 2 ib.connect('35.205.54.121', 4001, clientId=12)
~/anaconda3/lib/python3.6/site-packages/ib_insync/ib.py in connect(self, host, port, clientId, timeout)
200 This method is blocking.
201 """
--> 202 self._run(self.connectAsync(host, port, clientId, timeout))
203 return self
204
~/anaconda3/lib/python3.6/site-packages/ib_insync/ib.py in _run(self, *awaitables)
234
235 def _run(self, *awaitables):
--> 236 return util.run(*awaitables, timeout=self.RequestTimeout)
237
238 def waitOnUpdate(self, timeout: float=0) -> True:
~/anaconda3/lib/python3.6/site-packages/ib_insync/util.py in run(timeout, *awaitables)
245 if timeout:
246 future = asyncio.wait_for(future, timeout)
--> 247 result = syncAwait(future)
248 return result
249
~/anaconda3/lib/python3.6/site-packages/ib_insync/util.py in syncAwait(future)
362 result = _syncAwaitQt(future)
363 else:
--> 364 result = _syncAwaitAsyncio(future)
365 return result
366
~/anaconda3/lib/python3.6/site-packages/ib_insync/util.py in _syncAwaitAsyncio(future)
368 def _syncAwaitAsyncio(future):
369 assert asyncio.Task is asyncio.tasks._PyTask, \
--> 370 'To allow nested event loops, use util.patchAsyncio()'
371 loop = asyncio.get_event_loop()
372 task = asyncio.tasks.ensure_future(future)
AssertionError: To allow nested event loops, use util.patchAsyncio()
```
Any ideas why this happens? | closed | 2018-08-16T13:37:39Z | 2018-08-17T07:34:53Z | https://github.com/erdewit/ib_insync/issues/92 | [] | tfrojd | 2 |
roboflow/supervision | pytorch | 1,451 | How does line zone get triggered? | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi, I am trying to understand how the line zone gets triggered.
Let's say I set `triggering_anchors` to BOTTOM_LEFT. Does the xy of the bottom-left corner of the object's detection box have to cross the line exactly to trigger the line zone? For example, does the bottom-left corner need to be one pixel below the line in frame 1, on the line in frame 2, and one pixel above the line in frame 3 to trigger it?
Or is there a buffer zone? For example, if the bottom-left corner is a few pixels BELOW the line in frame 1 and a few pixels ABOVE the line in frame 2, will it trigger the line?
Sorry I am aware how confusing my question sounds...
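For what it's worth, tripwire counters are commonly implemented as a sign-of-side test between consecutive frames rather than pixel-exact contact with the line: the anchor only has to end up on the other side, so skipping several pixels between frames still triggers. Whether supervision's `LineZone` does exactly this should be verified against its source; the sketch below just illustrates the idea:

```python
import numpy as np

def side_of_line(p, a, b):
    # Sign of the 2D cross product: which side of segment a->b point p is on.
    return np.sign((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]))

def crossed(prev_pt, cur_pt, a, b):
    # A crossing is a change of side between consecutive frames; the anchor
    # never has to land exactly on the line.
    s_prev, s_cur = side_of_line(prev_pt, a, b), side_of_line(cur_pt, a, b)
    return s_prev != 0 and s_cur != 0 and s_prev != s_cur
```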
### Additional
_No response_ | closed | 2024-08-15T11:17:47Z | 2024-08-15T11:25:06Z | https://github.com/roboflow/supervision/issues/1451 | [
"question"
] | abichoi | 0 |
pyqtgraph/pyqtgraph | numpy | 2,272 | png export issue | <!-- In the following, please describe your issue in detail! -->
I was trying to run the demo to export a .png file, but the exported picture was incomplete.
<!-- If some of the sections do not apply, just remove them. -->
### Short description
<!-- This should summarize the issue. -->


### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
```python
import pyqtgraph as pg
import numpy as np
```
### Expected behavior
<!-- What should happen? -->
### Real behavior
<!-- What happens? -->
```
An error occurred?
Post the full traceback inside these 'code fences'!
```
### Tested environment(s)
* PyQtGraph version: <!-- output of pyqtgraph.__version__ -->
* Qt Python binding: <!-- output of pyqtgraph.Qt.VERSION_INFO -->
* Python version:
* NumPy version: <!-- output of numpy.__version__ -->
* Operating system:
* Installation method: <!-- e.g. pip, conda, system packages, ... -->
### Additional context
| open | 2022-04-26T10:08:35Z | 2022-05-04T16:13:15Z | https://github.com/pyqtgraph/pyqtgraph/issues/2272 | [
"exporters"
] | owenbearPython | 1 |
huggingface/transformers | pytorch | 36,145 | Problems with Training ModernBERT | ### System Info
- `transformers` version: 4.48.3
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.12.9
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Parallel (I'm not sure; I'm using a single GPU on a single machine)
- Using GPU in script?: Yes
- GPU type: NVIDIA GeForce RTX 2060
I have also tried installing `python3.12-dev` in response to the following initial error message (included with the code snippet later)
```
/usr/include/python3.12/pyconfig.h:3:12: fatal error: x86_64-linux-gnu/python3.12/pyconfig.h: No such file or directory
# include <x86_64-linux-gnu/python3.12/pyconfig.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
```
but the error persists.
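As a quick diagnostic, one can check whether CPython's development headers are visible where `sysconfig` expects them; on Debian/Ubuntu, the `x86_64-linux-gnu/python3.12/pyconfig.h` path in the error is the multiarch location that `python3.12-dev` (via its `libpython3.12-dev` dependency, to the best of my knowledge; verify with `dpkg -S pyconfig.h`) is supposed to provide. A small sketch:

```python
import sysconfig
from pathlib import Path

def python_headers_present() -> bool:
    """Check that Python.h (and the pyconfig.h it includes) exist under the
    include directory the interpreter itself reports."""
    include_dir = Path(sysconfig.get_paths()["include"])
    return (include_dir / "Python.h").exists()

print(sysconfig.get_paths()["include"], python_headers_present())
```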
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am essentially replicating the code from the following:
- https://github.com/di37/ner-electrical-engineering-finetuning/blob/main/notebooks/01_data_tokenization.ipynb
- https://github.com/di37/ner-electrical-engineering-finetuning/blob/main/notebooks/02_finetuning.ipynb
- See also: https://blog.cubed.run/automating-electrical-engineering-text-analysis-with-named-entity-recognition-ner-part-1-babd2df422d8
The following is the code for setting up the training process:
```python
import os
from transformers import BertTokenizerFast
from datasets import load_dataset
# from utilities import MODEL_ID, DATASET_ID, OUTPUT_DATASET_PATH
DATASET_ID = "disham993/ElectricalNER"
MODEL_ID = "answerdotai/ModernBERT-large"
LOGS = "logs"
OUTPUT_DATASET_PATH = os.path.join(
"data", "tokenized_electrical_ner_modernbert"
) # "data"
OUTPUT_DIR = "models"
MODEL_PATH = os.path.join(OUTPUT_DIR, MODEL_ID)
OUTPUT_MODEL = os.path.join(OUTPUT_DIR, f"electrical-ner-{MODEL_ID.split('/')[-1]}")
EVAL_STRATEGY = "epoch"
LEARNING_RATE = 1e-5
PER_DEVICE_TRAIN_BATCH_SIZE = 64
PER_DEVICE_EVAL_BATCH_SIZE = 64
NUM_TRAIN_EPOCHS = 5
WEIGHT_DECAY = 0.01
LOCAL_MODELS = {
"google-bert/bert-base-uncased": "electrical-ner-bert-base-uncased",
"distilbert/distilbert-base-uncased": "electrical-ner-distilbert-base-uncased",
"google-bert/bert-large-uncased": "electrical-ner-bert-large-uncased",
"answerdotai/ModernBERT-base": "electrical-ner-ModernBERT-base",
"answerdotai/ModernBERT-large": "electrical-ner-ModernBERT-large",
}
ONLINE_MODELS = {
"google-bert/bert-base-uncased": "disham993/electrical-ner-bert-base",
"distilbert/distilbert-base-uncased": "disham993/electrical-ner-distilbert-base",
"google-bert/bert-large-uncased": "disham993/electrical-ner-bert-large",
"answerdotai/ModernBERT-base": "disham993/electrical-ner-ModernBERT-base",
"answerdotai/ModernBERT-large": "disham993/electrical-ner-ModernBERT-large",
}
electrical_ner_dataset = load_dataset(DATASET_ID, trust_remote_code=True)
print(electrical_ner_dataset)
from datasets import DatasetDict
shrunk_train = electrical_ner_dataset['train'].select(range(10))
shrunk_valid = electrical_ner_dataset['validation'].select(range(5))
shrunk_test = electrical_ner_dataset['test'].select(range(5))
electrical_ner_dataset = DatasetDict({
'train': shrunk_train,
'validation': shrunk_valid,
'test': shrunk_test
})
electrical_ner_dataset.shape
tokenizer = BertTokenizerFast.from_pretrained(MODEL_ID)
def tokenize_and_align_labels(examples, label_all_tokens=True):
"""
Function to tokenize and align labels with respect to the tokens. This function is specifically designed for
Named Entity Recognition (NER) tasks where alignment of the labels is necessary after tokenization.
Parameters:
examples (dict): A dictionary containing the tokens and the corresponding NER tags.
- "tokens": list of words in a sentence.
- "ner_tags": list of corresponding entity tags for each word.
label_all_tokens (bool): A flag to indicate whether all tokens should have labels.
If False, only the first token of a word will have a label,
the other tokens (subwords) corresponding to the same word will be assigned -100.
Returns:
tokenized_inputs (dict): A dictionary containing the tokenized inputs and the corresponding labels aligned with the tokens.
"""
tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
labels = []
for i, label in enumerate(examples["ner_tags"]):
word_ids = tokenized_inputs.word_ids(batch_index=i)
# word_ids() => Return a list mapping the tokens
# to their actual word in the initial sentence.
# It Returns a list indicating the word corresponding to each token.
previous_word_idx = None
label_ids = []
# Special tokens like `<s>` and `<\s>` are originally mapped to None
# We need to set the label to -100 so they are automatically ignored in the loss function.
for word_idx in word_ids:
if word_idx is None:
# set –100 as the label for these special tokens
label_ids.append(-100)
# For the other tokens in a word, we set the label to either the current label or -100, depending on
# the label_all_tokens flag.
elif word_idx != previous_word_idx:
# if current word_idx is != prev then its the most regular case
# and add the corresponding token
label_ids.append(label[word_idx])
else:
# to take care of sub-words which have the same word_idx
# set -100 as well for them, but only if label_all_tokens == False
label_ids.append(label[word_idx] if label_all_tokens else -100)
# mask the subword representations after the first subword
previous_word_idx = word_idx
labels.append(label_ids)
tokenized_inputs["labels"] = labels
return tokenized_inputs
tokenized_datasets = electrical_ner_dataset.map(tokenize_and_align_labels, batched=True)
tokenized_electrical_ner_dataset = tokenized_datasets
import os
import numpy as np
from transformers import AutoTokenizer
from transformers import DataCollatorForTokenClassification
from transformers import AutoModelForTokenClassification
from datasets import load_from_disk
from transformers import TrainingArguments, Trainer
import evaluate
import json
import pandas as pd
label_list= tokenized_electrical_ner_dataset["train"].features["ner_tags"].feature.names
num_labels = len(label_list)
print(f"Labels: {label_list}")
print(f"Number of labels: {num_labels}")
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID, num_labels=num_labels)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
args = TrainingArguments(
output_dir=MODEL_PATH,
eval_strategy=EVAL_STRATEGY,
learning_rate=LEARNING_RATE,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=NUM_TRAIN_EPOCHS,
weight_decay=WEIGHT_DECAY,
push_to_hub=False
)
data_collator = DataCollatorForTokenClassification(tokenizer)
def compute_metrics(eval_preds):
"""
Function to compute the evaluation metrics for Named Entity Recognition (NER) tasks.
The function computes precision, recall, F1 score and accuracy.
Parameters:
eval_preds (tuple): A tuple containing the predicted logits and the true labels.
Returns:
A dictionary containing the precision, recall, F1 score and accuracy.
"""
pred_logits, labels = eval_preds
pred_logits = np.argmax(pred_logits, axis=2)
# the logits and the probabilities are in the same order,
# so we don’t need to apply the softmax
# We remove all the values where the label is -100
predictions = [
[label_list[eval_preds] for (eval_preds, l) in zip(prediction, label) if l != -100]
for prediction, label in zip(pred_logits, labels)
]
true_labels = [
[label_list[l] for (eval_preds, l) in zip(prediction, label) if l != -100]
for prediction, label in zip(pred_logits, labels)
]
metric = evaluate.load("seqeval")
results = metric.compute(predictions=predictions, references=true_labels)
return {
"precision": results["overall_precision"],
"recall": results["overall_recall"],
"f1": results["overall_f1"],
"accuracy": results["overall_accuracy"],
}
trainer = Trainer(
model,
args,
train_dataset=tokenized_electrical_ner_dataset["train"],
eval_dataset=tokenized_electrical_ner_dataset["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
trainer.train()
```
```python
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'PreTrainedTokenizerFast'.
The class this function is called from is 'BertTokenizerFast'.
Map: 100%|█████████████████████████████| 10/10 [00:00<00:00, 1220.41 examples/s]
Map: 100%|████████████████████████████████| 5/5 [00:00<00:00, 818.40 examples/s]
Map: 100%|███████████████████████████████| 5/5 [00:00<00:00, 1246.82 examples/s]
Some weights of ModernBertForTokenClassification were not initialized from the model checkpoint at answerdotai/ModernBERT-large and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/hyunjong/Documents/Development/Python/trouver_personal_playground/playgrounds/ml_model_training_playground/modernbert_training_error.py:183: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `Trainer.__init__`. Use `processing_class` instead.
trainer = Trainer(
0%| | 0/50 [00:00<?, ?it/s]In file included from /usr/include/python3.12/Python.h:12:0,
from /tmp/tmpjbmobkir/main.c:5:
/usr/include/python3.12/pyconfig.h:3:12: fatal error: x86_64-linux-gnu/python3.12/pyconfig.h: No such file or directory
# include <x86_64-linux-gnu/python3.12/pyconfig.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Traceback (most recent call last):
File "/home/hyunjong/Documents/Development/Python/trouver_personal_playground/playgrounds/ml_model_training_playground/modernbert_training_error.py", line 193, in <module>
trainer.train()
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/transformers/trainer.py", line 2171, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/transformers/trainer.py", line 2531, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/transformers/trainer.py", line 3675, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/transformers/trainer.py", line 3731, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1349, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 958, in forward
hidden_states = self.embeddings(input_ids=input_ids, inputs_embeds=inputs_embeds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 217, in forward
self.compiled_embeddings(input_ids)
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 678, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1044, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2027, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2033, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1968, in codegen
self.scheduler.codegen()
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py", line 3477, in codegen
return self._codegen()
^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/scheduler.py", line 3554, in _codegen
self.get_backend(device).codegen_node(node)
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/codegen/cuda_combined_scheduling.py", line 80, in codegen_node
return self._triton_scheduling.codegen_node(node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 1219, in codegen_node
return self.codegen_node_schedule(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/codegen/simd.py", line 1263, in codegen_node_schedule
src_code = kernel.codegen_kernel()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/codegen/triton.py", line 3154, in codegen_kernel
**self.inductor_meta_common(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/_inductor/codegen/triton.py", line 3013, in inductor_meta_common
"backend_hash": torch.utils._triton.triton_hash_with_backend(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/utils/_triton.py", line 111, in triton_hash_with_backend
backend = triton_backend()
^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/torch/utils/_triton.py", line 103, in triton_backend
target = driver.active.get_current_target()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 23, in __getattr__
self._initialize_obj()
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 20, in _initialize_obj
self._obj = self._init_fn()
^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/runtime/driver.py", line 9, in _create_driver
return actives[0]()
^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 450, in __init__
self.utils = CudaUtils() # TODO: make static
^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 80, in __init__
mod = compile_module_from_src(Path(os.path.join(dirname, "driver.c")).read_text(), "cuda_utils")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/backends/nvidia/driver.py", line 57, in compile_module_from_src
so = _build(name, src_path, tmpdir, library_dirs(), include_dir, libraries)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/runtime/build.py", line 50, in _build
ret = subprocess.check_call(cc_cmd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/subprocess.py", line 415, in check_call
raise CalledProcessError(retcode, cmd)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CalledProcessError: Command '['/home/hyunjong/anaconda3/bin/x86_64-conda-linux-gnu-cc', '/tmp/tmpjbmobkir/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmpjbmobkir/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/backends/nvidia/lib', '-L/lib/x86_64-linux-gnu', '-L/lib/i386-linux-gnu', '-I/home/hyunjong/Documents/Development/Python/trouver_py312_venv/lib/python3.12/site-packages/triton/backends/nvidia/include', '-I/tmp/tmpjbmobkir', '-I/usr/include/python3.12']' returned non-zero exit status 1.
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
0%| | 0/50 [00:01<?, ?it/s]
```
### Expected behavior
For the training process to happen. | closed | 2025-02-12T04:04:50Z | 2025-02-14T04:21:22Z | https://github.com/huggingface/transformers/issues/36145 | [
"bug"
] | hyunjongkimmath | 4 |
noirbizarre/flask-restplus | flask | 498 | api.model doesn't accept null values | flask-restplus version: 0.11.0
The input model doesn't accept null values from the payload. Looking into the flask-restplus code, I found that the jsonschema definition type is not designed for that; right now it just accepts the single data type, i.e., either integer or string, whereas it should have been `[<datatype>, "null"]`. Attached is the code where the change is needed. Please help with that change, or let me know if I can push it myself (I am new to the Python world).

| open | 2018-07-23T15:43:10Z | 2018-07-23T15:43:10Z | https://github.com/noirbizarre/flask-restplus/issues/498 | [] | boyaps | 0 |
ymcui/Chinese-BERT-wwm | nlp | 54 | 有roberta large版本的下载地址吗 | @ymcui | closed | 2019-10-14T09:27:47Z | 2019-10-15T00:26:09Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/54 | [] | xiongma | 2 |
kornia/kornia | computer-vision | 2,813 | Improve the `ImageSequential` docs | now that i see this one, i think it's worth somewhere to update the docs properly explaining when to use AuggmentationSequential vs ImageSequential
_Originally posted by @edgarriba in https://github.com/kornia/kornia/pull/2799#discussion_r1488008322_
worth discussing the advantages, examples, etc. of each API
| open | 2024-02-23T00:53:33Z | 2024-02-28T13:35:26Z | https://github.com/kornia/kornia/issues/2813 | [
"help wanted",
"good first issue",
"docs :books:"
] | johnnv1 | 1 |
marcomusy/vedo | numpy | 198 | k3d: TraitError: colors has wrong size: 4000000 (1000000 required) | Copying and pasting the example https://github.com/marcomusy/vedo/blob/master/examples/basic/manypoints.py does not work (is it supposed to work in a Jupyter notebook?). By the way, does the k3d backend support transparency?
```
"""Colorize a large cloud of 1M points by passing
colors and transparencies in the format (R,G,B,A)
"""
from vedo import *
import numpy as np
import time
settings.renderPointsAsSpheres = False
settings.pointSmoothing = False
settings.xtitle = 'red axis'
settings.ytitle = 'green axis'
settings.ztitle = 'blue*alpha axis'
N = 1000000
pts = np.random.rand(N, 3)
RGB = pts * 255
Alpha = pts[:, 2] * 255
RGBA = np.c_[RGB, Alpha] # concatenate
print("clock starts")
t0 = time.time()
# passing c in format (R,G,B,A) is ~50x faster
pts = Points(pts, r=2, c=RGBA) #fast
#pts = Points(pts, r=2, c=pts, alpha=pts[:, 2]) #slow
t1 = time.time()
print("-> elapsed time:", t1-t0, "seconds for N:", N)
show(pts, __doc__, axes=True)
```
into a jupyter notebook produces the error
```
clock starts
-> elapsed time: 1.2373969554901123 seconds for N: 1000000
---------------------------------------------------------------------------
TraitError Traceback (most recent call last)
<ipython-input-1-2c67a1dbfaf3> in <module>
29 print("-> elapsed time:", t1-t0, "seconds for N:", N)
30
---> 31 show(pts, __doc__, axes=True)
~/.virtualenvs/aiida/lib/python3.7/site-packages/vedo/plotter.py in show(*actors, **options)
310 bg2=bg2,
311 axes=axes,
--> 312 q=q,
313 )
314
~/.virtualenvs/aiida/lib/python3.7/site-packages/vedo/plotter.py in show(self, *actors, **options)
1550 #########################################################################
1551 if settings.notebookBackend and settings.notebookBackend != "panel" and settings.notebookBackend != "2d":
-> 1552 return backends.getNotebookBackend(actors2show, zoom, viewup)
1553 #########################################################################
1554
~/.virtualenvs/aiida/lib/python3.7/site-packages/vedo/backends.py in getNotebookBackend(actors2show, zoom, viewup)
197 shader="3d",
198 point_size=iap.GetPointSize()*sqsize/800,
--> 199 name=name,
200 #compression_level=9,
201 )
~/.virtualenvs/aiida/lib/python3.7/site-packages/k3d/factory.py in points(positions, colors, color, point_size, shader, opacity, opacities, name, compression_level, mesh_detail, **kwargs)
262 opacity=opacity, opacities=opacities,
263 mesh_detail=mesh_detail, name=name,
--> 264 compression_level=compression_level),
265 **kwargs
266 )
~/.virtualenvs/aiida/lib/python3.7/site-packages/k3d/objects.py in __init__(self, **kwargs)
439
440 def __init__(self, **kwargs):
--> 441 super(Points, self).__init__(**kwargs)
442
443 self.set_trait('type', 'Points')
~/.virtualenvs/aiida/lib/python3.7/site-packages/k3d/objects.py in __init__(self, **kwargs)
79 self.id = id(self)
80
---> 81 super(Drawable, self).__init__(**kwargs)
82
83 def __iter__(self):
~/.virtualenvs/aiida/lib/python3.7/site-packages/ipywidgets/widgets/widget.py in __init__(self, **kwargs)
410 """Public constructor"""
411 self._model_id = kwargs.pop('model_id', None)
--> 412 super(Widget, self).__init__(**kwargs)
413
414 Widget._call_widget_constructed(self)
~/.virtualenvs/aiida/lib/python3.7/site-packages/traitlets/traitlets.py in __init__(self, *args, **kwargs)
998 else:
999 # passthrough args that don't set traits to super
-> 1000 super_kwargs[key] = value
1001 try:
1002 super(HasTraits, self).__init__(*super_args, **super_kwargs)
~/miniconda3/lib/python3.7/contextlib.py in __exit__(self, type, value, traceback)
117 if type is None:
118 try:
--> 119 next(self.gen)
120 except StopIteration:
121 return False
~/.virtualenvs/aiida/lib/python3.7/site-packages/traitlets/traitlets.py in hold_trait_notifications(self)
1120 self._trait_values.pop(name)
1121 cache = {}
-> 1122 raise e
1123 finally:
1124 self._cross_validation_lock = False
~/.virtualenvs/aiida/lib/python3.7/site-packages/traitlets/traitlets.py in hold_trait_notifications(self)
1106 for name in list(cache.keys()):
1107 trait = getattr(self.__class__, name)
-> 1108 value = trait._cross_validate(self, getattr(self, name))
1109 self.set_trait(name, value)
1110 except TraitError as e:
~/.virtualenvs/aiida/lib/python3.7/site-packages/traitlets/traitlets.py in _cross_validate(self, obj, value)
597 if self.name in obj._trait_validators:
598 proposal = Bunch({'trait': self, 'value': value, 'owner': obj})
--> 599 value = obj._trait_validators[self.name](obj, proposal)
600 elif hasattr(obj, '_%s_validate' % self.name):
601 meth_name = '_%s_validate' % self.name
~/.virtualenvs/aiida/lib/python3.7/site-packages/traitlets/traitlets.py in __call__(self, *args, **kwargs)
905 """Pass `*args` and `**kwargs` to the handler's function if it exists."""
906 if hasattr(self, 'func'):
--> 907 return self.func(*args, **kwargs)
908 else:
909 return self._init_call(*args, **kwargs)
~/.virtualenvs/aiida/lib/python3.7/site-packages/k3d/objects.py in _validate_colors(self, proposal)
451 actual = proposal['value'].size
452 if actual != 0 and required != actual:
--> 453 raise TraitError('colors has wrong size: %s (%s required)' % (actual, required))
454 return proposal['value']
455
TraitError: colors has wrong size: 4000000 (1000000 required)
```
installed k3d package:
K3D-2.9.0-py2.py3-none-any.whl
ipydatawidgets-4.0.1-py2.py3-none-any.whl
vedo package:
```
$ pip show vedo
Name: vedo
Version: 2020.3.4
.
.
.
```
python 3.7.3 | open | 2020-08-24T14:41:14Z | 2020-11-11T06:47:57Z | https://github.com/marcomusy/vedo/issues/198 | [] | rikigigi | 2 |
QuivrHQ/quivr | api | 3,015 | Supabase local creation of new users not working | New version of supabase. Need to update config.toml | closed | 2024-08-16T09:06:23Z | 2024-08-16T09:07:13Z | https://github.com/QuivrHQ/quivr/issues/3015 | [] | StanGirard | 1 |
fastapi/sqlmodel | sqlalchemy | 202 | Showing Warning | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from sqlmodel import Field, Session, SQLModel, create_engine, select #
class Hero(SQLModel, table=True): #
id: Optional[int] = Field(default=None, primary_key=True)
name: str
secret_name: str
age: Optional[int] = None
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True) #
def create_db_and_tables():
SQLModel.metadata.create_all(engine) #
def create_heroes():
hero_1 = Hero(name="Deadpond", secret_name="Dive Wilson") #
hero_2 = Hero(name="Spider-Boy", secret_name="Pedro Parqueador")
hero_3 = Hero(name="Rusty-Man", secret_name="Tommy Sharp", age=48)
with Session(engine) as session: #
session.add(hero_1)
session.add(hero_2)
session.add(hero_3)
session.commit()
def select_heroes():
with Session(engine) as session: #
statement = select(Hero) #
results = session.exec(statement) #
for hero in results: #
print(hero) #
#
def main():
create_db_and_tables()
create_heroes()
select_heroes() #
if __name__ == "__main__":
main()
```
### Description
```python
select_heroes() # issue
```
When selecting from the database, a warning is shown.
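For context, SQLAlchemy 1.4 emits this `SAWarning` when a `Select` subclass does not set the `inherit_cache` class attribute; with sqlmodel releases of that era, a commonly cited workaround was to set `SelectOfScalar.inherit_cache = True` (imported from `sqlmodel.sql.expression`). Below is a minimal plain-Python sketch of the mechanism only; it is illustrative, not SQLAlchemy's real implementation:

```python
# Conceptual sketch: the warning fires because a subclass does not declare
# `inherit_cache`, so the caching layer refuses to reuse the parent's
# compiled-SQL cache key.
class Base:
    inherit_cache = None  # SQLAlchemy checks this class attribute

class SelectOfScalar(Base):  # sqlmodel's subclass leaves it unset
    pass

def can_cache(cls):
    return getattr(cls, "inherit_cache", None) is True

print(can_cache(SelectOfScalar))     # False -> SAWarning is emitted
SelectOfScalar.inherit_cache = True  # the documented opt-in
print(can_cache(SelectOfScalar))     # True  -> warning goes away
```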
### Operating System
Linux
### Operating System Details
```shell
system=Linux
node=Abhijith
release=5.11.0-43-generic
version=#47~20.04.2-Ubuntu SMP Mon Dec 13 11:06:56 UTC 2021
machine=x86_64
processor=x86_64
```
### SQLModel Version
0.0.5
### Python Version
Python 3.8.10
### Additional Context
```shell
SAWarning: Class SelectOfScalar will not make use of SQL compilation caching as it does not set the 'inherit_cache' attribute to ``True``. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this object can make use of the cache key generated by the superclass. Alternatively, this attribute may be set to False which will disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
results = super().execute(
2021-12-24 08:34:27,069 INFO sqlalchemy.engine.Engine SELECT hero.id, hero.name, hero.secret_name, hero.age
FROM hero
2021-12-24 08:34:27,074 INFO sqlalchemy.engine.Engine [no key 0.00552s] ()
name='Deadpond' age=None secret_name='Dive Wilson' id=1
name='Spider-Boy' age=None secret_name='Pedro Parqueador' id=2
name='Rusty-Man' age=48 secret_name='Tommy Sharp' id=3
2021-12-24 08:34:27,084 INFO sqlalchemy.engine.Engine ROLLBACK
``` | closed | 2021-12-24T03:07:54Z | 2022-01-02T05:22:21Z | https://github.com/fastapi/sqlmodel/issues/202 | [
"question"
] | abhint | 2 |
serengil/deepface | deep-learning | 923 | Use an interface for basemodels and detectors | We have many different base models, detectors, and extended models. We can define an interface and inherit it in the existing models. This will help maintainers add new models in the future, because the required methods can be found in that interface easily. | closed | 2023-12-17T15:16:42Z | 2024-01-20T20:37:35Z | https://github.com/serengil/deepface/issues/923 | [
"enhancement"
] | serengil | 1 |
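A minimal sketch of the interface proposed in the issue above, using `abc`; the class and method names here are illustrative assumptions, not deepface's actual API:

```python
from abc import ABC, abstractmethod

# Hypothetical base interface for face-recognition models; new models
# subclass it and implement the abstract methods.
class FacialRecognitionBase(ABC):
    @abstractmethod
    def load_model(self):
        """Build and return the underlying network."""

    @abstractmethod
    def find_embeddings(self, img):
        """Return the embedding vector for a face image."""

class VGGFaceClient(FacialRecognitionBase):
    def load_model(self):
        return "loaded-weights"

    def find_embeddings(self, img):
        return [0.0] * 128  # dummy 128-d embedding

model = VGGFaceClient()
print(len(model.find_embeddings(img=None)))  # 128
```

With this pattern, the abstract base class documents the required methods in one place, and Python refuses to instantiate a model that forgot to implement one of them.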
joerick/pyinstrument | django | 76 | Can't see any data regarding any raised exception | Thanks for this lib.
I tried raising an exception manually to see whether it would show up in the trace result, but it doesn't show anything about that exception; it just says an error occurred. Am I missing something?
Here's the configuration I'm using:
```python
from flask import Flask, g, make_response
from pyinstrument import Profiler

app = Flask(__name__)

@app.before_request
def before_request():
    g.profiler = Profiler()
    g.profiler.start()

@app.after_request
def after_request(response):
    if not hasattr(g, "profiler"):
        return response
    g.profiler.stop()
    output_html = g.profiler.output_html()
    return make_response(output_html)
``` | closed | 2019-12-19T07:44:13Z | 2021-04-03T22:27:21Z | https://github.com/joerick/pyinstrument/issues/76 | [] | AbdoDabbas | 2 |
PokeAPI/pokeapi | graphql | 584 | Add encounter condition to encounter details | It would be useful for sorting and to avoid querying the encounter-condition-value

| closed | 2021-03-07T13:33:53Z | 2021-03-08T19:04:43Z | https://github.com/PokeAPI/pokeapi/issues/584 | [] | SimplyBLGDev | 1 |
jina-ai/clip-as-service | pytorch | 239 | BERT for Information Retrieval (Feature Request) | Hi @hanxiao
Thanks for sharing this amazing work! I am really amazed by this super-scalable architecture! I did query-to-related-query similarity, and it works quite well with a custom-built BERT model for my content corpus (which is much different from the wiki corpus). I am now trying to see if I can use BERT for query-to-document similarity (based on the similarity between an NLP query and a paragraph of a document). I was planning to encode the sentences within a paragraph of a document, along with some metadata, do `tf.reduce_sum`, and then compare that with the query vector for similarity. What are your thoughts?
I am assuming that passing all the sentences in the paragraph at once to the bert client's encode function is a bad idea, and that a dot product between the vectors of the individual sentences in a paragraph might be better. If the latter is better, what are your thoughts on adding this to the server side of your service? That could keep the client-side code lightweight (without TensorFlow) by passing a different separator than ||| from the client function while sending all the sentences.
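As a rough illustration of the pooling idea described above, here is a plain-Python sketch (toy 3-dimensional vectors stand in for real BERT embeddings; this is not bert-as-service code):

```python
import math

# Pool per-sentence embeddings into one paragraph vector, then score it
# against the query vector with cosine similarity.
sentence_vecs = [
    [0.2, 0.1, 0.7],
    [0.0, 0.5, 0.5],
    [0.3, 0.3, 0.4],
]
query_vec = [0.1, 0.4, 0.5]

# element-wise sum over sentences (the tf.reduce_sum idea)
paragraph_vec = [sum(dim) for dim in zip(*sentence_vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

score = cosine(query_vec, paragraph_vec)
print(round(score, 3))  # close to 1.0: the query aligns with this paragraph
```

Scoring each sentence vector against the query individually and taking the maximum or mean is the alternative mentioned above; both fits are a few lines on top of this sketch.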
**Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
| open | 2019-02-18T03:56:27Z | 2020-02-14T02:25:13Z | https://github.com/jina-ai/clip-as-service/issues/239 | [] | sujithjoseph | 2 |
Avaiga/taipy | data-visualization | 1,739 | Get geolocation from clicking on a map | ### Description
The goal would be to get the lon/lat of a point when clicking anywhere on a map.
This was asked by a user [here](https://discord.com/channels/1125797687476887563/1279625158394511360/1280344356066431037)
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
- [ ] Check if a new demo could be provided based on this, or if legacy demos could be benefit from it.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-09-03T08:02:02Z | 2024-09-05T07:47:22Z | https://github.com/Avaiga/taipy/issues/1739 | [
"🖰 GUI",
"🟩 Priority: Low",
"✨New feature"
] | FlorianJacta | 3 |
aiortc/aiortc | asyncio | 1,188 | Getting Bundled Media in the SessionDescription to Work with Pion | We're trying to get `aiortc` talking with our [Pion](https://github.com/pion/webrtc) WebRTC API. We get errors from Pion because `aiortc` constructs different `ice-ufrag` and `ice-pwd` for every m-line, even though a bundle is configured.
The solution is to have consistent `ice-ufrag` and `ice-pwd` for each media/application line in the SDP.
Any help to achieve this in `aiortc` would be greatly appreciated.
We looked into monkey-patching the [SDP construction](https://github.com/aiortc/aiortc/blob/main/src/aiortc/sdp.py) inside the [RTCPeerConnection](https://github.com/aiortc/aiortc/blob/main/src/aiortc/rtcpeerconnection.py) and [RTCIceTransport](https://github.com/aiortc/aiortc/blob/main/src/aiortc/rtcicetransport.py) modules.
We can confirm that changing the `__str__` representation with consistent `ice-ufrag` and `ice-pwd` values inside SDP makes Pion accept the local offer from the client. However, this doesn't change the actual offer object.
The call chain that constructs SDP is: `RTCPPeerConnection.create_offer(...)` > `create_media_description_for_transceiver(...)` & `create_media_description_for_sctp(...)` -> `add_transport_description(...)` > `RTCIceGatherer.getLocalParameters()`
How could we modify the `media.ice` assignment to produce consistent `ice-ufrag` and `ice-pwd` entries?
```python
def add_transport_description(
media: sdp.MediaDescription, dtlsTransport: RTCDtlsTransport) -> None:
....
# This sets `ice-ufrag` and `ice-pwd` attributes on each media section; each time with different params.
media.ice = iceGatherer.getLocalParameters()
...
```
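One possible direction, sketched below with stand-in classes (this is hypothetical and not aiortc's internals): cache the ICE parameters per bundle, so every m-line in the same bundle receives the identical credential pair that Pion expects.

```python
import secrets

# Hypothetical sketch: mint ICE credentials once per bundle and hand the
# same pair to every media section in that bundle.
class IceParameters:
    def __init__(self):
        self.usernameFragment = secrets.token_urlsafe(4)   # ice-ufrag
        self.password = secrets.token_urlsafe(16)          # ice-pwd

_bundle_credentials = {}

def ice_parameters_for(bundle_id):
    # every m-line in the same bundle reuses the cached credentials
    return _bundle_credentials.setdefault(bundle_id, IceParameters())

audio = ice_parameters_for("BUNDLE-0")
application = ice_parameters_for("BUNDLE-0")
print(audio.usernameFragment == application.usernameFragment)  # True
```

In `add_transport_description`, the analogous change would be to assign `media.ice` from such a shared cache instead of calling `iceGatherer.getLocalParameters()` fresh for each section.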
**SDP Example**
```
v=0
o=- 3938608397 3938608397 IN IP4 0.0.0.0
s=-
t=0 0
a=group:BUNDLE 0 1
a=msid-semantic:WMS *
m=audio 61547 UDP/TLS/RTP/SAVPF 96 0 8
c=IN IP6 2a02:a03f:63c3:ab01:8dd:b522:c44e:4c4a
a=sendrecv
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
a=extmap:2 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=mid:0
a=msid:cb8e4359-9b62-4c55-9e67-c6aafa14cb5e 073e433b-4499-4470-87b9-c823855577b4
a=rtcp:9 IN IP4 0.0.0.0
a=rtcp-mux
a=ssrc:2665274437 cname:2492e33b-7366-49f3-8761-5f1fa9c2c67c
a=rtpmap:96 opus/48000/2
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=candidate:3d9fb1c05760a5aec3bb63582c0ee75f 1 udp 2130706431 2a02:a03f:63c3:ab01:8dd:b522:c44e:4c4a 61547 typ host
a=candidate:b3f8102527b52eb84bbce9e970243c72 1 udp 2130706431 2a02:a03f:63c3:ab01:19ae:93db:2f2:7c29 50461 typ host
a=candidate:7b995085fe42009d6857ed9f99301acb 1 udp 2130706431 192.168.XXX.XX 61487 typ host
a=candidate:e5bcfe9c71647ac306da9adfe5d02ca8 1 udp 2130706431 192.168.XX.XXX 60962 typ host
a=candidate:70277552984fbd952854c62500e75a48 1 udp 2130706431 fd82:6bd2:bee9:8cd7:892:8881:c79a:3fa9 61877 typ host
a=candidate:bf46076bc5b044080f7d76470e336d40 1 udp 16777215 216.39.253.22 59791 typ relay raddr 192.168.129.25 rport 63566
a=end-of-candidates
a=ice-ufrag:SOc1
a=ice-pwd:3G6gWKQO58pVo3tdBZnxgN
a=fingerprint:sha-256 3E:EE:9E:55:XX:CF:9C:9F:04:68:EB:XX:09:B7:DF:28:69:XX:58:80:8C:42:A1:1E:1C:XX:7A:C8:ED:37:D5:XX
a=setup:actpass
m=application 51010 DTLS/SCTP 5000
c=IN IP6 2a02:a03f:63c3:ab01:8dd:b522:c44e:4c4a
a=mid:1
a=sctpmap:5000 webrtc-datachannel 65535
a=max-message-size:65536
a=candidate:3d9fb1c05760a5aec3bb63582c0ee75f 1 udp 2130706431 2a02:a03f:63c3:ab01:8dd:b522:c44e:4c4a 51010 typ host
a=candidate:b3f8102527b52eb84bbce9e970243c72 1 udp 2130706431 2a02:a03f:63c3:ab01:19ae:93db:2f2:7c29 57433 typ host
a=candidate:7b995085fe42009d6857ed9f99301acb 1 udp 2130706431 192.168.XXX.XX 64485 typ host
a=candidate:e5bcfe9c71647ac306da9adfe5d02ca8 1 udp 2130706431 192.168.XX.XX 53464 typ host
a=candidate:70277552984fbd952854c62500e75a48 1 udp 2130706431 fd82:6bd2:bee9:8cd7:892:8881:c79a:3fa9 57665 typ host
a=candidate:bf46076bc5b044080f7d76470e336d40 1 udp 16777215 216.XX.XXX.XX 37545 typ relay raddr 192.168.129.25 rport 54827
a=end-of-candidates
a=ice-ufrag:1drh
a=ice-pwd:u6bgmyXf3HzFaNq7blYQZ3
a=fingerprint:sha-256 3E:EE:9E:55:D0:XX:9C:9F:04:68:EB:0D:09:B7:DF:XX:69:78:58:80:8C:42:A1:1E:1C:XX:7A:C8:ED:37:D5:XX
a=setup:actpass
```
| open | 2024-11-10T14:22:20Z | 2025-01-31T03:37:02Z | https://github.com/aiortc/aiortc/issues/1188 | [] | quintenrosseel | 3 |
tfranzel/drf-spectacular | rest-api | 459 | Private endpoints or hide schema? | Is there any way to hide the schema generation for a view all together? I know you can set exclude=True for specific http methods, but can it be done for an entire url? | closed | 2021-07-15T18:34:28Z | 2021-07-15T19:00:24Z | https://github.com/tfranzel/drf-spectacular/issues/459 | [] | li-darren | 4 |
zappa/Zappa | django | 489 | [Migrated] Refactor Let's Encrypt implementation to use available packages [proposed code] | Originally from: https://github.com/Miserlou/Zappa/issues/1300 by [rgov](https://github.com/rgov)
The Let's Encrypt integration works by invoking the `openssl` command line tool, creating various temporary files, and communicating with the Let's Encrypt certificate authority API directly.
The Python package that Let's Encrypt's `certbot` itself uses is called [`acme`](https://github.com/certbot/certbot/tree/master/acme) and it handles the network protocol. Additionally, the [`cryptography`](https://cryptography.io) package offers functions for generate keys, certificate requests, etc. in-process without invoking a subprocess. Both of these packages are also well-tested and actively developed.
Therefore I would recommend switching to use them in place of the current implementation.
I've made [a gist](https://gist.github.com/rgov/fb97a9585fa18549851d810b1045f0a4) which creates a simple wrapper around the basic functionality I think that's needed:
- `load_private_key` deserializes a PEM private key
- `generate_private_key` generates an asymmetric key pair (2048-bit RSA)
- `generate_csr` creates a certificate signing request for a set of domains
- `get_certificate` communicates with Let's Encrypt to retrieve the certificate
While not necessarily everything you need (perhaps you'd need to serialize out the certificate as a PEM file as well), it should be a good start to improving the implementation.
(The example code may not work without applying [this change](https://github.com/certbot/josepy/pull/5) to the `josepy` package that I proposed.) | closed | 2021-02-20T09:43:23Z | 2024-04-13T16:36:18Z | https://github.com/zappa/Zappa/issues/489 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
unionai-oss/pandera | pandas | 1,205 | Static type hint error on class pandera DataFrame | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
- [x] (optional) I have confirmed this bug exists on the master branch of pandera.
Currently, the type hint for the `from_records` static method of the pandera `DataFrame` class in `pandera.typing.pandas` is as follows:
```python
@staticmethod
def from_records( # type: ignore
schema: T,
data: Union[ # type: ignore
np.ndarray, List[Tuple[Any, ...]], Dict[Any, Any], pd.DataFrame
],
**kwargs,
) -> "DataFrame[T]":
```
giving pyright the following error:
Expression of type "DataFrame[Type[Schema]]" cannot be assigned to return type "DataFrame[Schema]"
when:
```python
class Schema(SchemaModel):
timestamp: Series[datetime]
DataFrame.from_records(
schema=Schema, data={"data": []}
)
```
because dataframe models must not be instantiated, while the type hint of `schema` suggests that the value must be an instance of a generic model schema.
#### Environment:
- python: 3.11
- pandera: 0.15.1
- pyright: 1.1.311
#### Expected code
```python
from typing import Type
...
@staticmethod
def from_records( # type: ignore
schema: Type[T],
data: Union[ # type: ignore
np.ndarray, List[Tuple[Any, ...]], Dict[Any, Any], pd.DataFrame
],
**kwargs,
) -> "DataFrame[T]":
"""
Convert structured or record ndarray to pandera-validated DataFrame.
Creates a DataFrame object from a structured ndarray, sequence of tuples
or dicts, or DataFrame.
See :doc:`pandas:reference/api/pandas.DataFrame.from_records` for
more details.
"""
schema = schema.to_schema() # type: ignore[attr-defined]
schema_index = schema.index.names if schema.index is not None else None
if "index" not in kwargs:
kwargs["index"] = schema_index
return DataFrame[T](
pd.DataFrame.from_records(data=data, **kwargs,)[
schema.columns.keys()
] # set the column order according to schema
)
```
Notice the `schema: Type[T]` annotation.
"bug"
] | manel-ab | 3 |
openapi-generators/openapi-python-client | rest-api | 721 | TypeError when model default value is a list | **Describe the bug**
When generating models for an API where a default argument is a list of enums, a TypeError is raised [on this line](https://github.com/openapi-generators/openapi-python-client/blob/main/openapi_python_client/parser/properties/__init__.py#L463):
```python
return f"{prop.class_info.name}.{inverse_values[default]}"
```
A possible fix could be to explicitly check for this case. Below is a proposed fix, but I'm not familiar enough with the codebase to know whether this is a correct fix. However, I was able to parse the API docs correctly with this fix in place:
```python
if type(default) is list:
    return [f"{prop.class_info.name}.{inverse_values[d]}" for d in default]
else:
    return f"{prop.class_info.name}.{inverse_values[default]}"
```
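
A self-contained version of the proposed check, with hypothetical names (`render_default`, `class_name`, `inverse_values`) standing in for the generator's internals, and `isinstance` in place of the `type(...) is list` comparison:

```python
def render_default(class_name, inverse_values, default):
    # `default` may be a single enum value or, as in the reported spec, a list.
    if isinstance(default, list):
        return [f"{class_name}.{inverse_values[d]}" for d in default]
    return f"{class_name}.{inverse_values[default]}"

inverse_values = {"bad": "BAD", "ok": "OK"}
print(render_default("MyEnum", inverse_values, "ok"))           # MyEnum.OK
print(render_default("MyEnum", inverse_values, ["bad", "ok"]))  # ['MyEnum.BAD', 'MyEnum.OK']
```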
**To Reproduce**
Steps to reproduce the behavior:
1. Run `openapi-python-client generate --url https://sand-docs.ilevelsolutions.eu/openapi.yaml`
2. See error
**Expected behavior**
The API docs should be able to be parsed
**OpenAPI Spec File**
https://sand-docs.ilevelsolutions.eu/openapi.yaml
**Desktop (please complete the following information):**
- OS: macOS 13.1
- Python Version: 3.10
- openapi-python-client version: 0.13.1
| open | 2023-01-23T15:51:03Z | 2023-01-23T15:51:03Z | https://github.com/openapi-generators/openapi-python-client/issues/721 | [
"🐞bug"
] | maxbergmark | 0 |
mljar/mercury | jupyter | 27 | Error when uploading file | I'm getting this error when uploading a file through the widget
`Forbidden: /api/v1/fp/process/`
holoviz/panel | jupyter | 6,887 | `pn.widgets.Tabulator`: Hover effect not disabled with `selectable=False` | https://github.com/holoviz/panel/blob/6f63f81c827d197a6f367f86fa1eff98b257c116/panel/models/tabulator.ts#L657C69-L657C72
Why NaN? It seems that it doesn't let `selectable=False` as in
```
pn.widgets.Tabulator(df, show_index=False, disabled=True, selectable=False)
```
properly apply to the Tabulator JavaScript component.
#### Description of expected behavior and the observed behavior
Expected:
- `selectable=False` disables selection and the hover effect on Tabulator table rows
Actual:
- selection is disabled, but not the hover effect
Tabulator CSS
```
@media (hover: hover) and (pointer: fine) {
.tabulator-row.tabulator-selectable:hover {
background-color: #bbb;
cursor: pointer;
}
}
```
still applies, indicating that the `tabulator-selectable` CSS class is still present.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import pandas as pd
import panel as pn
pn.extension('tabulator')
df = pd.DataFrame([{'a': 'b'}])
pn.widgets.Tabulator(df, show_index=False, disabled=True, selectable=False)
```
#### Stack traceback and/or browser JavaScript console output
—
#### Screenshots or screencasts of the bug in action

#### Ideas for the fix
```
const selectable = this.model.select_mode;
```
in place of https://github.com/holoviz/panel/blob/6f63f81c827d197a6f367f86fa1eff98b257c116/panel/models/tabulator.ts#L657C69-L657C72 would solve my problem (but would probably break something related to the "toggle" select mode).
| open | 2024-06-01T20:52:14Z | 2025-02-20T15:04:43Z | https://github.com/holoviz/panel/issues/6887 | [
"component: tabulator"
] | earshinov | 0 |
stanfordnlp/stanza | nlp | 729 | [QUESTION] Converting CoreNLP ParseTree to nltk.Tree | How do I convert the constituency parseTree generated from the code below to [`nltk.Tree`](https://www.nltk.org/_modules/nltk/tree.html)?
```python
from stanza.server import CoreNLPClient
with CoreNLPClient(annotators=["parse"], timeout=30000, memory="16G") as client:
ann = client.annotate("Is there such a thing as x ray glasses")
sentence = ann.sentence[0]
constituency_parse = sentence.parseTree
``` | closed | 2021-06-24T13:07:20Z | 2021-06-24T14:55:29Z | https://github.com/stanfordnlp/stanza/issues/729 | [
"question"
] | hardianlawi | 3 |
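
One route to an answer for the stanza question above (hedged: it assumes the protobuf `ParseTree` exposes a `value` label and repeated `child` subtrees, as stanza's CoreNLP protos do) is to serialize the tree to a Penn Treebank bracketed string and hand that to `nltk.Tree.fromstring`. A self-contained sketch with a stand-in node class:

```python
class Node:
    """Stand-in for CoreNLP's ParseTree protobuf (fields: value, child)."""
    def __init__(self, value, child=()):
        self.value, self.child = value, list(child)

def to_bracketed(tree):
    """Render a ParseTree-like object as a PTB bracketed string."""
    if not tree.child:                                  # leaf: a token
        return tree.value
    inner = " ".join(to_bracketed(c) for c in tree.child)
    return f"({tree.value} {inner})"

tree = Node("ROOT", [Node("NP", [Node("DT", [Node("a")]),
                                 Node("NN", [Node("thing")])])])
print(to_bracketed(tree))  # (ROOT (NP (DT a) (NN thing)))
```

The resulting string can then be parsed with `nltk.Tree.fromstring(...)` to obtain an `nltk.Tree`.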
keras-team/keras | pytorch | 20,103 | Module not found errors 3.4.1 | Not useful. After I updated my libraries in Anaconda, all of my Keras code started throwing dramatic errors. I can import Keras, but cannot use it! I re-installed, but the situation is the same. I don't know how the dependencies or methods changed, but you should consider how people are using these. Now I have to install a previous version, but which one?
ModuleNotFoundError: No module named 'keras.layers'
ModuleNotFoundError: No module named 'keras.optimizers'
ModuleNotFoundError: No module named 'keras.regularizers'
ModuleNotFoundError: No module named 'tensorflow.keras'
| closed | 2024-08-09T11:35:27Z | 2025-01-17T01:59:25Z | https://github.com/keras-team/keras/issues/20103 | [
"type:support",
"stat:awaiting response from contributor",
"stale"
] | O-Memis | 14 |
chatopera/Synonyms | nlp | 142 | Does this library need an internet connection to work? | ## Overview
Hello. For data-security reasons, does this library need to stay connected to the internet to work? Or, once the lexicon files have been downloaded, can it run offline?
<!-- For other related matters, or to reach us through other channels: https://www.chatopera.com/mail.html -->
| open | 2024-05-17T10:31:02Z | 2024-05-17T10:31:43Z | https://github.com/chatopera/Synonyms/issues/142 | [] | Dengshunge | 1 |
lepture/authlib | flask | 218 | Potential compliance-fix issue with Zoom refresh token headers | **Is your feature request related to a problem? Please describe.**
The Zoom refresh token process requires a header like so:
https://marketplace.zoom.us/docs/guides/auth/oauth#refreshing
```
Authorization | The string "Basic" with your Client ID and Client Secret with a colon : in between, Base64 Encoded. For example, Client_ID:Client_Secret
```
Is this something that needs to be fixed in a compliance fix?
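
For context, the header Zoom describes is ordinary HTTP Basic authentication; a stdlib sketch of constructing it (the credentials below are placeholders):

```python
import base64

client_id, client_secret = "CLIENT_ID", "CLIENT_SECRET"  # placeholders
token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
print(headers["Authorization"])
```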
**Describe the solution you'd like**
If this is a non-standard issue:
https://docs.authlib.org/en/latest/client/oauth2.html#compliance-fix-for-non-standard
https://docs.authlib.org/en/latest/client/frameworks.html#compliance-fix-for-oauth-2-0
Currently we have:
- access_token_response: invoked before token parsing.
- refresh_token_response: invoked before refresh token parsing.
- protected_request: invoked before making a request.
Could we add:
- refresh_token_request: invoked before refresh token request.
**Describe alternatives you've considered**
Using protected_request and then checking if it's the URL for the refresh token. And then adding the headers. But I'm not sure if that would work. | closed | 2020-04-20T20:14:49Z | 2020-05-07T12:55:21Z | https://github.com/lepture/authlib/issues/218 | [] | wgwz | 4 |
saulpw/visidata | pandas | 1,714 | Replay of command log aborts erratically | **Small description**
When replaying a (probably fairly long) command log file against a (probably fairly extensive) data set, VisiData encounters errors sporadically and non-reproducibly, triggering the replay to be aborted. Interestingly, this does not seem to be a problem if the same command log file is run in batch mode (with the `-b` command-line switch).
Every once in a while, the replay _does_ run through completely.
**Expected result**
A reliable and reproducable replay of the command log.
**Actual result with screenshot**
The actual errors vary from call to call, but are generally of the nature `no sheet named XXX` or `no "YYY" column`. A screenshot and stack trace of an example run are attached.
**Steps to reproduce with sample data and a .vd**
See attached files (I had to zip up the command log file because `.vd` files are not accepted for upload); replaying the command log with
```
vd -p Sample_command_log.vd
```
is unreliable, while replaying it with
```
vd -b -p Sample_command_log.vd -o Sample_data.tsv
```
works just fine.
**Additional context**
Using VisiData version 2.11 at the moment, though this problem has been around ever since I started using VisiData about a year ago. Also, this problem exists on all of the machines I have tried so far (four different sets of hardware).
[Sample_command_log.vd.zip](https://github.com/saulpw/visidata/files/10568236/Sample_command_log.vd.zip)
[Sample_data.csv](https://github.com/saulpw/visidata/files/10568188/Sample_data.csv)
[Sample_run_-_Screenshot](https://user-images.githubusercontent.com/6197517/216308497-2b5ebce6-6077-4e9d-9b8b-bda445ed2a78.png)
[Sample_run_-_Stack_trace.txt](https://github.com/saulpw/visidata/files/10568197/Sample_run_-_Stack_trace.txt)
| closed | 2023-02-02T11:11:54Z | 2023-03-03T07:11:09Z | https://github.com/saulpw/visidata/issues/1714 | [
"bug",
"fixed"
] | tdussa | 9 |
proplot-dev/proplot | data-visualization | 261 | Linear ticks for LogLocator | ### Description
The official tutorial adds the ticks for LogLocator like this: `xlocator='log', xminorlocator='logminor'`.
But, if I manually set the ticks to linear one, it doesn't work well.
### Steps to reproduce
```python
import proplot as plot

fig, axs = plot.subplots(share=0)
axs.format(xlim=(1, 18), xlocator='log', xminorlocator=1, xtickminor=True)
```
**Expected behavior**:
The minor ticks should follow what we set in the format.

See the matplotlib code below.
**Actual behavior**:

### Equivalent steps in matplotlib
```python
import matplotlib.pyplot as plt
from matplotlib.ticker import LinearLocator

fig, ax = plt.subplots()
ax.plot([1, 10, 15], [1, 2, 3])
ax.set_xscale('log')
ax.xaxis.set_minor_locator(LinearLocator(17))
```
### Proplot version
0.6.4 | closed | 2021-07-13T08:48:38Z | 2021-07-13T21:47:31Z | https://github.com/proplot-dev/proplot/issues/261 | [
"already fixed"
] | zxdawn | 1 |
MaartenGr/BERTopic | nlp | 1,768 | how to get ctf-idf formula inputs for top-10 topic words? | Hello Maarten,
Thank you for creating and maintaining BERTopic, it is an incredibly useful tool in my current work!
I want to ask whether it is possible to obtain the input components of the c-TF-IDF formula for each of the top 10 words returned per topic. That is, I would like the actual values of tf_{x,c}, f_{x}, and A for each of the top-10 topic words. Below, for completeness, is the motivation for why I need these inputs, in case you are wondering or can offer an easier solution that does not require tf_{x,c}.
I fit a topic model on a smaller, representative news dataset, and I want to use the top-10 words per topic to search another, very large news database for similar articles, so I can track each topic's coverage intensity over time. Running the topic model on this larger database is infeasible because of its huge size, and I do not have access to the full text anyway; I can only submit text queries and count results over time ranges. Since not every document in a topic cluster contains each of the top-10 words, I want to randomly draw smaller groups (e.g. tuples or triples) from the top-10 words with probabilities proportional to their frequencies tf_{x,c}, join the words within a group with AND, join several such groups with OR, and submit this query to retrieve articles that contain all words from at least one random group. | open | 2024-01-23T18:18:52Z | 2024-01-26T15:22:22Z | https://github.com/MaartenGr/BERTopic/issues/1768 | [] | vpolkovn | 1 |
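For reference, BERTopic's class-based TF-IDF weights each word roughly as W_{x,c} = tf_{x,c} · log(1 + A / f_x), where tf_{x,c} is the word's frequency in class c, f_x its frequency across all classes, and A the average number of words per class. A toy sketch of recovering those inputs from per-topic word counts (the counts below are made up, standing in for what BERTopic's vectorizer produces):

```python
from math import log

# Toy per-topic word counts -- hypothetical numbers standing in for the
# per-class counts BERTopic's vectorizer produces.
topics = {
    "sports":   {"match": 8, "goal": 5, "vote": 1},
    "politics": {"vote": 9, "match": 1, "law": 4},
}

A = sum(sum(c.values()) for c in topics.values()) / len(topics)  # avg words per class
f = {}                                                           # f_x across all classes
for counts in topics.values():
    for word, n in counts.items():
        f[word] = f.get(word, 0) + n

scores = {
    (topic, word): tf_xc * log(1 + A / f[word])                  # c-TF-IDF weight
    for topic, counts in topics.items()
    for word, tf_xc in counts.items()
}
print(A, f["match"], round(scores[("sports", "match")], 3))
```

Obtaining the real counts would mean extracting the per-topic document-term counts from the fitted vectorizer, which this sketch does not attempt.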
hankcs/HanLP | nlp | 718 | Problem training on the latest Chinese wiki | <!--
Notes and the version number are required; issues without them will not be answered. If you would like a prompt reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
    - [Home page docs](https://github.com/hankcs/HanLP)
    - [wiki](https://github.com/hankcs/HanLP/wiki)
    - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that the open-source community is a voluntary community of enthusiasts that assumes no responsibilities or obligations. I will be polite and thank everyone who helps me.
* [x] I put an x between the brackets to confirm the items above.
## Version
<!-- For release builds, state the jar file name without its extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is: 1.5.2
The version I am using is: 1.5.2
<!-- The items above are required; feel free to elaborate below -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it is to be solved -->
Training word2vec on the latest zh wiki throws an ArrayIndexOutOfBoundsException.
## Reproducing the problem
[hadoop@LOCAL-202-89 new]$ java -cp hanlp-portable-1.5.2.jar com.hankcs.hanlp.mining.word2vec.Train -input zhwiki-latest-pages-articles.xml.simplified -output zhwiki.txt
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -3
at com.hankcs.hanlp.mining.word2vec.TextFileCorpus.reduceVocab(TextFileCorpus.java:69)
at com.hankcs.hanlp.mining.word2vec.TextFileCorpus.learnVocab(TextFileCorpus.java:142)
at com.hankcs.hanlp.mining.word2vec.Word2VecTraining.trainModel(Word2VecTraining.java:326)
at com.hankcs.hanlp.mining.word2vec.Train.execute(Train.java:33)
at com.hankcs.hanlp.mining.word2vec.Train.main(Train.java:38) | closed | 2017-12-18T11:57:06Z | 2017-12-19T00:29:48Z | https://github.com/hankcs/HanLP/issues/718 | [
"bug"
] | zhengzhuangjie | 1 |
d2l-ai/d2l-en | pytorch | 2,478 | Chapter 15.4. Pretraining word2vec: AttributeError: Can't pickle local object 'load_data_ptb.<locals>.PTBDataset' | AttributeError: Can't pickle local object 'load_data_ptb.<locals>.PTBDataset'

can anyone help with this error? | open | 2023-04-30T20:01:53Z | 2023-07-12T03:00:55Z | https://github.com/d2l-ai/d2l-en/issues/2478 | [] | keyuchen21 | 2 |
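This error typically arises because `DataLoader` worker processes must pickle the dataset object, and `load_data_ptb` defines its `PTBDataset` class inside the function body; classes defined inside functions are "local objects" that pickle cannot serialize by name. A dependency-free sketch of the root cause (the class name here is hypothetical):

```python
import pickle

def make_dataset():
    # A class defined inside a function is a "local object": pickle stores
    # classes by importable name, and this one has none.
    class PTBDatasetSketch:
        pass
    return PTBDatasetSketch()

error = None
try:
    pickle.dumps(make_dataset())
except (AttributeError, pickle.PicklingError) as exc:
    error = exc
print(error)  # Can't pickle local object 'make_dataset.<locals>.PTBDatasetSketch'
```

Reported workarounds for this class of error include setting `num_workers=0` on the DataLoader or moving the dataset class to module level.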
zappa/Zappa | flask | 420 | [Migrated] API Gateway caching and query parameters | Originally from: https://github.com/Miserlou/Zappa/issues/1081 by [tspecht](https://github.com/tspecht)
I'm currently trying to configure caching on the API Gateway side to reduce the load on my database. The endpoint I'm trying to configure uses a query parameter to pass in the term the user typed into the search bar. I already enabled caching on the APG side but unfortunately found out that by default it ignores the query parameters when building the cache key. While I don't really get why that is a sensible default for AWS, I was wondering what would be the best way to configure it properly using Zappa. I already found one example for `serverless` (http://theburningmonk.com/2016/04/serverless-enable-caching-on-query-string-parameters-in-api-gateway/) but am wondering what I need to put for `CacheKeyParameters` into my CF template?
Any ideas? | closed | 2021-02-20T08:32:40Z | 2024-04-13T15:37:38Z | https://github.com/zappa/Zappa/issues/420 | [
"help wanted",
"hacktoberfest",
"no-activity",
"auto-closed"
] | jneves | 2 |
wkentaro/labelme | deep-learning | 1,300 | When I click on Create AI-polygon the program crashes and flashes back | ### Provide environment information
I'm using version v5.3.0a0 (Labelme.exe) in releases on windows, which should be able to run standalone without the python environment
### What OS are you using?
Windows 10 22H2 19042.3086
### Describe the Bug
When I click on Create AI-polygon the program crashes and flashes back. But I don't get any error message, how can I troubleshoot and solve this problem?

### Expected Behavior
SAM generates AI annotations, as in [wkentaro/labelme#1262](https://github.com/wkentaro/labelme/pull/1262)
### To Reproduce
_No response_ | open | 2023-07-12T04:58:28Z | 2024-11-06T17:32:45Z | https://github.com/wkentaro/labelme/issues/1300 | [
"issue::bug"
] | jaycecd | 15 |
jumpserver/jumpserver | django | 14,384 | [Bug] SQL query results in the table cannot be copied, which makes follow-up queries very inconvenient | ### Product version
v4.3.0
### Edition
- [X] Community
- [ ] Enterprise
- [ ] Enterprise trial
### Installation method
- [ ] Online install (one-line command)
- [X] Offline package
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] From source
### Environment
1. OS: Rocky Linux release 9.2 (Blue Onyx)
2. Kernel: 5.14.0-284.30.1.el9_2.x86_64
3. Initially installed v4.1.0, then upgraded offline step by step to v4.3.0 with the jmsctl.sh upgrade script
4. JumpServer connects to a self-hosted community MySQL 5.6 and an Alibaba Cloud RDS MySQL 8.0
5. JumpServer is accessed with the Edge and Chrome browsers
### 🐛 Bug description

Data returned by a SQL query cannot be copied, which makes follow-up queries extremely inconvenient.
Switching browsers does not help.
The fault persists when connecting to different MySQL assets.
### Steps to reproduce
v4.1.0 did not have this problem; after upgrading to v4.3.0, data in the table can no longer be copied.
### Expected result
Please help fix this; which version will include the fix?
### Additional information
_No response_
### Attempted solutions
_No response_
"🐛 Bug",
"📦 z~release:Version TBD",
"📝 Recorded"
] | czwHNB | 2 |
pytest-dev/pytest-html | pytest | 55 | IDE specific auto-generated files need to be in gitignore. | In the recent times many IDEs have become popular for Python development. Among them are Jetbrain Community's [Pycharm](https://www.jetbrains.com/pycharm/) and [PyDev](http://www.pydev.org/), an IDE extended from Eclipse for Python development.
Upon opening a project within, these IDEs generate certain files and directories which are specific to them, and are not project specific.
These files and directories need not be pushed on Github with the main source. So they must be included in _.gitignore_ file.
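For illustration, a typical set of entries covering the IDEs mentioned (the exact list depends on the tools in use):

```
# PyCharm / IntelliJ-based IDEs
.idea/
*.iml

# PyDev / Eclipse
.project
.pydevproject
.settings/
```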
| closed | 2016-07-03T03:53:51Z | 2016-07-04T10:39:21Z | https://github.com/pytest-dev/pytest-html/issues/55 | [] | kdexd | 1 |
plotly/dash-table | dash | 411 | Pagination - move current_page out of pagination_settings? | `current_page` is more state than configuration - it would be nice to be able to both create `pagination_settings` without involving `current_page`, and to subscribe to `current_page` in a callback independent of `pagination_settings`. I might also move `pagination_mode` to be `pagination_settings.mode` while we're at it... | closed | 2019-04-21T20:20:53Z | 2019-06-25T14:26:04Z | https://github.com/plotly/dash-table/issues/411 | [] | alexcjohnson | 2 |
huggingface/pytorch-image-models | pytorch | 2,355 | [BUG] mobilenetv4_conv RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation | **Describe the bug**
I tried to train a model with 'mobilenetv4_conv_large.e600_r384_in1k' as the backbone and got this error. Other models train without any problems.
```
/home/xxxxxx/.local/lib/python3.12/site-packages/torch/autograd/graph.py:825: UserWarning: Error detected in ReluBackward0. Traceback of forward call that caused the error:
File "/home/xxxxxx/goods-recognition-v3/scripts/train_classes_v11.py", line 2210, in <module>
main(0, world_size, ddp_init_file, args)
File "/home/xxxxxx/goods-recognition-v3/scripts/train_classes_v11.py", line 407, in main
train_acc, train_CE_loss, train_FL_loss, train_SmLbCE_loss, train_time = train(train_iterator, model, model_ema, criterion, optimizer, epoch, enable_OHEM, enable_CosineAnnealingLR, steps_per_epoch, accumulation_steps, rank, world_size, scaler)
File "/home/xxxxxx/goods-recognition-v3/scripts/train_classes_v11.py", line 544, in train
outputs = model(images)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxxxx/goods-recognition-v3/scripts/train_classes_v11.py", line 903, in forward
x = self.features(x)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/timm/models/mobilenetv3.py", line 273, in forward
x = self.forward_head(x)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/timm/models/mobilenetv3.py", line 262, in forward_head
x = self.norm_head(x)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/timm/layers/norm_act.py", line 127, in forward
x = self.act(x)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/activation.py", line 133, in forward
return F.relu(input, inplace=self.inplace)
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/nn/functional.py", line 1702, in relu
result = torch.relu_(input)
(Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:110.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "/home/xxxxxx/goods-recognition-v3/scripts/train_classes_v11.py", line 2210, in <module>
main(0, world_size, ddp_init_file, args)
File "/home/xxxxxx/goods-recognition-v3/scripts/train_classes_v11.py", line 407, in main
train_acc, train_CE_loss, train_FL_loss, train_SmLbCE_loss, train_time = train(train_iterator, model, model_ema, criterion, optimizer, epoch, enable_OHEM, enable_CosineAnnealingLR, steps_per_epoch, accumulation_steps, rank, world_size, scaler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxxxxx/goods-recognition-v3/scripts/train_classes_v11.py", line 562, in train
scaler.scale(loss).backward()
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/home/xxxxxx/.local/lib/python3.12/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [80, 1280, 1, 1]], which is output 0 of ReluBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
my model:
```
model_name = 'mobilenetv4_conv_large.e600_r384_in1k'
class ClassModel(torch.nn.Module):
def __init__(self, num_classes, model_name):
super(ClassModel, self).__init__()
self.features = timm.create_model(model_name, pretrained=True, num_classes=0, drop_path_rate=0.2) # https://rwightman.github.io/pytorch-image-models/feature_extraction/
self.drop_rate = 0.6
self.linear1 = torch.nn.Linear(self.features.head_hidden_size, num_classes, bias=True)
def forward(self, x):
x = self.features(x)
x = torch.nn.functional.dropout(x, p=self.drop_rate, inplace=True, training=self.training)
x = self.linear1(x)
return x
def finetuning(self, enable):
for param in self.features.parameters():
param.requires_grad = enable
```
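
The failure appears to come from mutating, in place, a tensor that an earlier operation saved for its backward pass; in the model above, the backbone's final in-place ReLU output is then modified by `F.dropout(..., inplace=True)`. A minimal sketch of that failure mode (assuming PyTorch is available; this is an illustration, not the reported model):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.relu(x)   # ReluBackward0 saves its output for the backward pass
y.mul_(0.5)         # an in-place edit bumps y's autograd version counter
error = None
try:
    y.sum().backward()
except RuntimeError as exc:
    error = exc
print(error)        # "...modified by an inplace operation..."
```

If this is indeed the cause, passing `inplace=False` to the dropout (or cloning the backbone output first) would be the usual way to avoid the version-counter conflict.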
**Desktop (please complete the following information):**
- OS: Ubuntu 24.04
- timm 1.0.12
- torch 2.5.1+cu124
- torchvision 0.20.1+cu124
| closed | 2024-12-04T09:24:04Z | 2024-12-05T05:29:34Z | https://github.com/huggingface/pytorch-image-models/issues/2355 | [
"bug"
] | MichaelMonashev | 3 |
ageitgey/face_recognition | python | 1,130 | dlib is using GPU but the cnn model is still taking too much, something is off somewhere on my setup. | * face_recognition version: 1.2.3
* Python version: 3.6.9
* Operating System:
Ubuntu 18.04.4 LTS
Kernel Version: 4.9.140-tegra
CUDA 10.2.89
### Description
Hi.
I've been using the face_recognition library for some time under jetson nano Jetpack 4.3, having a performance of around 500 ms per frame on a 1280 x 720 image using the cnn model, with CUDA support on dlib, and everything working great.
Last night I decided to try Jetpack 4.4 and everything went fine until I saw the performance of the running process. It was around 2000 ms per frame, with the very same setup as before.
The first thing I suspected was that dlib, for some reason, may have not been compiled with CUDA support, but no, that was not the problem, as you can see below.
```
>>> import face_recognition
>>> face_recognition.__version__
'1.2.3'
>>> import dlib
>>> dlib.DLIB_USE_CUDA
True
>>> dlib.cuda.get_num_devices()
1
>>> dlib.__version__
'19.19.0'
>>>
```
Not only that, using jtop I can verify that when running the model GPU usage jumps to almost 100% instantly, meaning that the GPU is actually being used. However, the time it takes to process every frame is around 2 full seconds, a lot compared to the 500 ms I was getting just yesterday when running on Jetpack 4.3.
I've run out of ideas on where to look for the problem.
Any ideas?
Thank you. | open | 2020-05-01T15:23:27Z | 2020-06-16T08:14:09Z | https://github.com/ageitgey/face_recognition/issues/1130 | [] | drakorg | 5 |
plotly/dash | jupyter | 2,877 | [Feature Request] Add `outputs` and `outputs_list` to `window.dash_clientside.callback_context` | For Dash's client-side callbacks, adding `outputs` and `outputs_list` to `window.dash_clientside.callback_context` would improve flexibility in many scenarios. Currently, only the following information is available in client-side callbacks:

```python
import dash
from dash import html
from dash.dependencies import Input, Output
app = dash.Dash(__name__)
app.layout = html.Div(
[html.Button("trigger", id="trigger-demo"), html.Pre(id="output-demo")],
style={"padding": 50},
)
app.clientside_callback(
"""(n_clicks) => {
return JSON.stringify(Object.keys(window.dash_clientside.callback_context));
}""",
Output("output-demo", "children"),
Input("trigger-demo", "n_clicks"),
prevent_initial_call=True,
)
if __name__ == '__main__':
app.run(debug=True)
``` | closed | 2024-06-06T08:41:49Z | 2024-06-13T18:44:27Z | https://github.com/plotly/dash/issues/2877 | [
"good first issue"
] | CNFeffery | 1 |
iperov/DeepFaceLab | deep-learning | 842 | Don't run 4) data_src faceset extract and 5) data_dst faceset extract | THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
Successfully run 4) data_src faceset extract.bat
## Actual behavior
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 1050 Ti
[0] Which GPU indexes to choose? :
0
[wf] Face type ( f/wf/head ?:help ) :
wf
[0] Max number of faces from image ( ?:help ) :
0
[512] Image size ( 256-2048 ?:help ) : 256
256
[90] Jpeg quality ( 1-100 ?:help ) :
90
[n] Write debug images to aligned_debug? ( y/n ) :
n
Extracting faces...
Error while subprocess initialization: Traceback (most recent call last):
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 62, in _subprocess_run
self.on_initialize(client_dict)
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Extractor.py", line 68, in on_initialize
nn.initialize (device_config)
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\nn.py", line 109, in initialize
nn.tf_sess = tf.Session(config=nn.tf_sess_config)
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1551, in __init__
super(Session, self).__init__(target, graph, config=config)
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 676, in __init__
self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 45, in process_extract
force_gpu_idxs = [ int(x) for x in arguments.force_gpu_idxs.split(',') ] if arguments.force_gpu_idxs is not None else None,
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Extractor.py", line 853, in main
device_config=device_config).run()
File "C:\Users\Daniel\Desktop\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 210, in run
raise Exception ( "Unable to start subprocesses." )
Exception: Unable to start subprocesses.
Presione una tecla para continuar . . .
## Steps to reproduce
My specs are: Intel Pentium G4400 and a GTX 1050 TI. The weird thing is that when I select the CPU it works (very slow, but it works), the problem is when I select the GPU (GTX 1050 TI)
## Other relevant information
- Windows 10 Pro 1903
- Windows Binary = DeepFaceLab_NVIDIA_build_07_18_2020.exe | closed | 2020-07-28T02:54:04Z | 2020-07-29T05:10:39Z | https://github.com/iperov/DeepFaceLab/issues/842 | [] | Danielfoit | 3 |
unionai-oss/pandera | pandas | 930 | Best way to integrate parsing logic | ### Question
What is the best way to integrate data cleaning/parsing logic with Pandera? I have included an example use case and my current solution below, but looking for feedback on other approaches etc.
### Scenario
Let's say I have a dataframe, like this:
```
name, phone_number
"user1", "+11231231234"
"user2", "(123)-123 1234"
"user3", 1231231234
```
I want to:
1. parse/process/clean the data
2. validate that my processing worked
I can do # 2 using pandera by creating a schema with checks. To do # 1 I would simply use pandas operations. However, if the logic remains totally separate, I may end up having duplicate code, such as column names etc.
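As a concrete illustration of step 1, a stdlib-only sketch of a phone-number parser for the sample rows above (the normalization rule, keeping the last ten digits, is an assumption made for the example):

```python
import re

def parse_phone_numbers(raw):
    """Normalize assorted phone formats to the bare 10-digit national number."""
    digits = re.sub(r"\D", "", str(raw))   # strip every non-digit character
    return digits[-10:]

samples = ["+11231231234", "(123)-123 1234", 1231231234]
print([parse_phone_numbers(s) for s in samples])
# all three rows normalize to '1231231234'
```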
What I ended up doing was creating a custom `Column` class that allowed me to store parser functions next to a column:
```
from typing import Any, Callable, List, Optional

import pandas as pd
import pandera as pa


def default_parser(series: pd.Series) -> pd.Series:
    return series


class Column(pa.Column):
    def __init__(
        self,
        *args: Any,
        parsers: Optional[List[Callable[..., Any]]] = None,
        **kwargs: Any
    ) -> None:
        self.parsers = [default_parser]
        if parsers:
            self.parsers = parsers
        super().__init__(*args, **kwargs)

    def parse(self, series: Any) -> pd.Series:
        for column_parser in self.parsers:
            series = column_parser(series)
        return series
```
I then have the option to specify parsers for each column:
```
...
"phone_number": Column(
str, nullable=True, parsers=[parse_phone_numbers]
),
...
```
I can then optionally call the parsers before running validation:
```
for column in schema_columns:
series: pd.Series = dataframe[column]
dataframe[column] = schema_columns[column].parse(series)
schema.validate(dataframe)
```
I like this approach as it means I can store my expected schema next to operations required to achieve that state, which had been especially useful when dealing with files with 50+ column names. Technically I could just have processing logic and use tests on mock data, but by integrating with pandera I can validate/test against real data each time to ensure no edge cases were missed by a new unexpected data format (from a csv, for example).
It is also essentially a more advanced/custom version of the `coerce` option, which adds a parser to each column, only it does simple type conversion rather than cleaning etc.
Is there a better way of doing this? If not, is this something that would be considered for Pandera? I have seen in previous issues mentions that Pandera should be strictly for validation, not processing, but in this case Pandera isn't doing any processing, it's just providing a place to plugin custom processing functions.
| open | 2022-08-31T02:36:04Z | 2022-08-31T02:36:04Z | https://github.com/unionai-oss/pandera/issues/930 | [
"question"
] | pwithams | 0 |
gradio-app/gradio | data-visualization | 10,731 | The reply text rendered by the streaming chatbot page is incomplete | ### Describe the bug
The reply text rendered by the streaming chatbot page is incomplete: content from `response_message` in `yield response_message, state` is missing.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
yield response_message, state
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Name: gradio
Version: 5.20.0
```
### Severity
Blocking usage of gradio | closed | 2025-03-05T07:11:51Z | 2025-03-07T14:27:22Z | https://github.com/gradio-app/gradio/issues/10731 | [
"bug",
"needs repro"
] | wuxianyess | 8 |
ploomber/ploomber | jupyter | 172 | Jupyter extension improvements | * Better error log messages (task does not exist, dag failed to initialize, etc) in notebook and console
* Option to parse spec on file load vs jupyter start
| closed | 2020-07-06T13:46:28Z | 2020-07-08T06:18:25Z | https://github.com/ploomber/ploomber/issues/172 | [] | edublancas | 2 |
huggingface/datasets | tensorflow | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import Features, ClassLabel
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
features
````
which is expected to output (as stated in the documentation):
````
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
````
but it generates the following
````
{'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)}
````
If my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored:
https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975
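That matches the standard dataclass pattern for init-only arguments. A simplified stdlib sketch of the mechanism (my assumption about why the repr differs, not the actual `ClassLabel` code):

```python
from dataclasses import dataclass, InitVar

@dataclass
class Label:
    # Init-only pseudo-field: consumed in __post_init__, never stored,
    # and therefore excluded from the generated __repr__.
    num_classes: InitVar[int] = None
    names: list = None

    def __post_init__(self, num_classes):
        if self.names is None and num_classes is not None:
            self.names = [str(i) for i in range(num_classes)]
```

`repr(Label(3, ['bad', 'ok', 'good']))` shows only `names`, mirroring the output the documentation example actually produces.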
I would like to work on this issue if this is something needed 😄
| closed | 2024-08-28T12:27:48Z | 2024-12-06T11:32:02Z | https://github.com/huggingface/datasets/issues/7129 | [] | sergiopaniego | 0 |
fastapi/sqlmodel | pydantic | 466 | Alembic migration generated always set enum nullable to true | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
import enum
from sqlmodel import SQLModel, Field, UniqueConstraint, Column, Enum
class Contract_Status(str, enum.Enum):
CREATED = "Created"
COMPLETED = "Completed"
DEPOSITED = "Deposited"
CANCELLED = "Cancelled"
class Contract(SQLModel, table = True):
__table_args__ = (UniqueConstraint("ctrt_id"),)
ctrt_id: str = Field(primary_key = True, nullable = False)
status: Contract_Status = Field(sa_column = Column(Enum(Contract_Status, values_callable = lambda enum: [e.value for e in enum])), nullable = False)
```
### Description
1. Create Contract Model
2. Create migration using alembic
`alembic revision --autogenerate -m "MIGRATION MESSAGE"`
3. Migration generated, however, nullable for enum is always set to True
Note: I have tested the same thing with a plain SQLAlchemy definition and it works there, but not with SQLModel.
```
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('contract',
sa.Column('status', sa.Enum('Created', 'Completed', 'Deposited', 'Cancelled', name='contract_status'), nullable=True),
sa.Column('ctrt_id', sqlmodel.sql.sqltypes.AutoString(), nullable=False),
sa.PrimaryKeyConstraint('ctrt_id'),
sa.UniqueConstraint('ctrt_id')
)
# ### end Alembic commands ###
```
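My assumption about the mechanism, sketched below in plain Python (not SQLModel code): when an explicit `sa_column` is supplied, it is used wholesale and the sibling `nullable=` keyword on `Field(...)` never reaches the column, so the column keeps its default `nullable=True`.

```python
def field(sa_column=None, nullable=None):
    # Sketch: an explicit column definition wins wholesale; the
    # sibling nullable kwarg is silently ignored in that branch.
    if sa_column is not None:
        return dict(sa_column)
    return {"nullable": bool(nullable)}

col = field(sa_column={"type": "enum", "nullable": True}, nullable=False)
# col["nullable"] stays True, mirroring the generated migration
```

If that is the cause, passing `nullable=False` inside the `Column(...)` itself, rather than on `Field(...)`, should produce the expected migration.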
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
3.10.5
### Additional Context
_No response_ | closed | 2022-10-11T03:35:15Z | 2023-11-09T00:10:17Z | https://github.com/fastapi/sqlmodel/issues/466 | [
"question",
"answered",
"investigate"
] | yixiongngvsys | 4 |
Evil0ctal/Douyin_TikTok_Download_API | api | 23 | International TikTok downloads are 720p; is it possible to download 1080p? | International TikTok downloads are 720p; is it possible to download 1080p? | closed | 2022-05-05T16:50:45Z | 2022-11-09T21:10:24Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/23 | [
"Fixed"
] | EddyN8 | 13 |
nerfstudio-project/nerfstudio | computer-vision | 2,663 | How can I enter the camera view after rotating a camera | After I rotate a camera using the rotate widget, how can I then enter that rotated camera view? | open | 2023-12-09T21:41:30Z | 2023-12-09T21:41:30Z | https://github.com/nerfstudio-project/nerfstudio/issues/2663 | [] | ecations | 0 |
plotly/dash | data-science | 2,567 | [BUG] Pinning werkzeug<2.3.0 causing issues with file watcher for hot reloader. |
**Describe your context**
- replace the result of `pip list | grep dash` below
```
dash 2.10.2
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-daq 0.5.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Windows 11/WSL2
- Browser: edge
**Describe the bug**
The app gets caught in an endless restart loop, making the hot-reloading feature unusable
related issue in watchdog repo: https://github.com/gorakhargosh/watchdog/issues/967
logs show:
```
...
DEBUG:watchdog.observers.inotify_buffer:in-event <InotifyEvent: src_path=b'/opt/conda/envs/pa_venv/lib/python3.11/idlelib/idle_test', wd=40, mask=IN_ISDIR|IN_OPEN, cookie=0, name=''>
DEBUG:watchdog.observers.inotify_buffer:in-event <InotifyEvent: src_path=b'/opt/conda/envs/pa_venv/lib/python3.11/idlelib/idle_test/__pycache__', wd=40, mask=IN_ISDIR|IN_OPEN, cookie=0, name='__pycache__'>
DEBUG:watchdog.observers.inotify_buffer:in-event <InotifyEvent: src_path=b'/opt/conda/envs/pa_venv/lib/python3.11/idlelib/idle_test/__pycache__', wd=42, mask=IN_ISDIR|IN_OPEN, cookie=0, name=''>
DEBUG:watchdog.observers.inotify_buffer:in-event <InotifyEvent: src_path=b'/opt/conda/envs/pa_venv/lib/python3.11/idlelib/__pycache__', wd=2, mask=IN_ISDIR|IN_OPEN, cookie=0, name='__pycache__'>
DEBUG:watchdog.observers.inotify_buffer:in-event <InotifyEvent: src_path=b'/opt/conda/envs/pa_venv/lib/python3.11/idlelib/__pycache__', wd=41, mask=IN_ISDIR|IN_OPEN, cookie=0, name=''>
...
```
and on and on for every non-module file in the project directory
According to the issue above, it appears that werkzeug==2.3.0 resolved this problem, but dash currently pins werkzeug to <2.3.0.
If there is an outstanding issue motivating the pin, I would be happy to try to help that along.
Thanks!!! | closed | 2023-06-16T10:54:53Z | 2024-07-25T13:16:15Z | https://github.com/plotly/dash/issues/2567 | [] | schlich | 4 |
python-restx/flask-restx | api | 191 | Custom swagger filename | Currently, when trying to use multiple blueprints with different restx Apis but with the same url_prefix, restx will default to serving the same swagger.json file (since it lives at the same path under the prefix). This makes e.g. Api(bp1, doc='/docs/internal') and Api(bp2, doc='/docs/external') show the same swagger.json file, with only the first blueprint registered to the Flask application being shown (but both doc routes still work).
I would like to be able to specify the swagger.json filename myself when creating the Api, so that I can have multiple docs for different blueprints under the same url_prefix scope.
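A plain-Python sketch of the collision (the route table and names are illustrative, not flask-restx internals): both Api objects register their spec under `url_prefix + '/swagger.json'`, and only the first registration wins.

```python
routes = {}

def register_spec(url_prefix, api_name, spec):
    path = f"{url_prefix}/swagger.json"          # filename is not configurable
    routes.setdefault(path, (api_name, spec))    # first registration wins

register_spec("/api", "internal", {"title": "Internal API"})
register_spec("/api", "external", {"title": "External API"})
# Both docs pages now fetch routes["/api/swagger.json"],
# i.e. the internal spec.
```

A configurable filename, e.g. a hypothetical `Api(bp, spec_url='/swagger-internal.json')`, would give each blueprint its own key.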
| open | 2020-08-06T09:31:31Z | 2021-09-08T21:19:09Z | https://github.com/python-restx/flask-restx/issues/191 | [
"enhancement"
] | ProgHaj | 5 |
jonra1993/fastapi-alembic-sqlmodel-async | sqlalchemy | 79 | New routes not reflecting in docs | Hi, I need to create new routes to add various utilities, for example `/calculate_formula_1`. Even though I am adding routes in `app.py` and creating a new endpoint, it is not reflected in the docs. | closed | 2023-08-11T01:43:19Z | 2023-08-11T05:42:58Z | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/79 | [] | ranjeetds | 0
gradio-app/gradio | machine-learning | 10,856 | Save the history to a json file or a database and load it after restart gradio, using a chatinterface with save_history=True | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
I created a chatbot program using gradio's ChatInterface component. I added the parameter save_history=True and it works fine: it adds the options to create a new conversation, clear the current conversation, etc.
But when I restart the application, all the conversation history is cleared and it starts empty again.
Is there a way to save all conversation history to a file or a database in order to restore it when the application is restarted?
**Describe the solution you'd like**
Save all conversation history to a file or a database in order to restore it when the application is restarted.
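As a stopgap while waiting for built-in persistence, a minimal sketch (not part of the gradio API; the file name is arbitrary) of saving and restoring history as JSON:

```python
import json
import os

HISTORY_FILE = "chat_history.json"

def save_history(history):
    # history: list of [user_message, bot_message] pairs
    with open(HISTORY_FILE, "w", encoding="utf-8") as f:
        json.dump(history, f, ensure_ascii=False)

def load_history():
    if not os.path.exists(HISTORY_FILE):
        return []  # first run: start empty
    with open(HISTORY_FILE, encoding="utf-8") as f:
        return json.load(f)
```

A real app would call save_history from the chat callback and seed the interface with load_history() on startup.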
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2025-03-21T18:05:26Z | 2025-03-23T21:30:46Z | https://github.com/gradio-app/gradio/issues/10856 | [] | clebermarq | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,280 | raise^H^H^H^H^H^H automatically proxy when column is reused in new values | ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10278
| closed | 2023-08-25T14:39:33Z | 2023-08-30T15:02:03Z | https://github.com/sqlalchemy/sqlalchemy/issues/10280 | [
"bug",
"sql",
"near-term release"
] | zzzeek | 2 |
eriklindernoren/ML-From-Scratch | machine-learning | 102 | Project dependencies may have API risk issues | Hi, In **ML-From-Scratch**, inappropriate dependency versioning constraints can cause risks.
Below are the dependencies and version constraints that the project is using
```
matplotlib
numpy
sklearn
pandas
cvxopt
scipy
progressbar33
terminaltables
gym
```
The version constraint **==** will introduce the risk of dependency conflicts because the scope of dependencies is too strict.
The version constraints **No Upper Bound** and **\*** introduce the risk of missing-API errors, because the latest version of a dependency may remove some APIs.
After further analysis, in this project,
The version constraint of dependency **numpy** can be changed to *>=1.8.0,<=1.23.0rc3*.
The version constraint of dependency **pandas** can be changed to *>=0.4.0,<=1.2.5*.
The above modification suggestions reduce dependency conflicts as much as possible,
and allow the latest versions to be used wherever possible without triggering API errors in the project.
The current project invokes all of the following methods.
<details><summary>The calling methods from the numpy</summary>
<pre>numpy.linalg.eigh
numpy.linalg.eig
numpy.linalg.svd
numpy.linalg.norm
numpy.linalg.det
numpy.linalg.inv
numpy.linalg.pinv
</pre>
</details>
<details><summary>The calling methods from the pandas</summary>
<pre>pandas.read_csv
</pre>
</details>
<details><summary>The calling methods from the all methods</summary>
<pre>mlfromscratch.supervised_learning.LassoRegression.predict
w.T.dot
self.layer_input.T.dot
progressbar.Percentage
mlfromscratch.reinforcement_learning.DeepQNetwork
self.W.reshape
numpy.concatenate
KNN
model_builder
matplotlib.pyplot.legend
GAN.train
layer.backward_pass
mlfromscratch.utils.polynomial_features.dot
mlfromscratch.utils.to_categorical.astype
numpy.linalg.pinv
NotImplementedError
self.U_opt.update
self._determine_frequent_itemsets
self._mutate
NaiveBayes.fit
mlfromscratch.unsupervised_learning.KMeans
mlfromscratch.supervised_learning.NaiveBayes.fit
numpy.argsort
self._transaction_contains_items
sample_predictions.astype
l1_l2_regularization
sets.append
self.output_shape
self._calculate_centroids
self._get_frequent_items
mlfromscratch.supervised_learning.XGBoostRegressionTree.fit
accum_grad.transpose.reshape
mlfromscratch.supervised_learning.PolynomialRidgeRegression.predict
y_pred.append
mlfromscratch.supervised_learning.BayesianRegression.predict
mlfromscratch.deep_learning.NeuralNetwork.add
NeuralNetwork.add
self.layers.append
self._insert_tree
numpy.random.shuffle
self.combined.train_on_batch
mlfromscratch.supervised_learning.BayesianRegression.fit
self.output_activation
ClassificationTree.fit
sklearn.datasets.load_digits
mlfromscratch.supervised_learning.LassoRegression
numpy.mean
cutoff.i.parent2.layers.w0.copy
self.train_on_batch
grad_wrt_state.T.dot
self._transform
mlfromscratch.supervised_learning.XGBoost.predict
mlfromscratch.supervised_learning.KNN.predict
self._converged
self.GradientBoostingClassifier.super.__init__
self.omega0.X_X.np.linalg.pinv.dot
reversed
gym.make
accum_grad.transpose.ravel
mlfromscratch.utils.calculate_covariance_matrix
mlfromscratch.unsupervised_learning.PCA
y.np.array.astype
self.hidden_activation.gradient
self.sigmoid
numpy.split
sigmoid.dot
logging.basicConfig
self.memory.pop
self._get_frequent_itemsets
X.diag_gradient.X.T.dot.dot.np.linalg.pinv.dot
tmp_y2.astype
mlfromscratch.deep_learning.layers.Reshape
Autoencoder.train
self._construct_tree
self.ElasticNet.super.predict
diff.any
max
self._sample
mlfromscratch.deep_learning.NeuralNetwork
isinstance
self.LassoRegression.super.predict
mean.sample.T.dot
self.predict_value
diag_gradient.X.T.dot.dot
self.activation_func
tmp_y1.astype
os.path.dirname
j.i.axs.axis
neighbor_labels.astype
numpy.expand_dims
X.T.dot.dot
dqn.model.summary
batch.sum
numpy.mean.append
self._init_random_centroids
self._calculate_likelihood
grad_func
mlfromscratch.utils.make_diagonal
self._build_tree
matplotlib.pyplot.get_cmap
self.activation_func.gradient
self.build_encoder
mlfromscratch.reinforcement_learning.DeepQNetwork.play
mlfromscratch.supervised_learning.decision_tree.RegressionTree
mlfromscratch.supervised_learning.ParticleSwarmOptimizedNN
SupportVectorMachine
posteriors.append
LogisticLoss
hidden_output.T.dot
self.omega0.dot
gen_mult_ser
grad_wrt_state.dot.dot
mlfromscratch.supervised_learning.SupportVectorMachine.predict
self.model.train_on_batch
math.ceil
V.dot
sigmoid
self._forward_pass
layer.forward_pass
numpy.array
numpy.random.randint
numpy.linalg.norm
self.LinearRegression.super.__init__
self._calculate_support
numpy.round
mlfromscratch.deep_learning.NeuralNetwork.test_on_batch
LDA
numpy.power
matplotlib.pyplot.figure.add_subplot
LDA.fit
self.test_on_batch
mlfromscratch.supervised_learning.Adaboost
self.PolynomialRidgeRegression.super.__init__
mlfromscratch.utils.make_diagonal.dot
determine_padding
layer.initialize
self.save_imgs
mlfromscratch.supervised_learning.LinearRegression.fit
mlfromscratch.utils.data_operation.accuracy_score
self._split
self.find_frequent_itemsets
min
model.evolve.test_on_batch
numpy.repeat
self.__call__
self.build_discriminator
print
numpy.linalg.inv
mlfromscratch.supervised_learning.RandomForest.fit
X.reshape.repeat
LDA.predict
numpy.linspace
mlfromscratch.unsupervised_learning.PAM.predict
l2_regularization
self._calculate_fitness.append
RandomForest
enumerate
mlfromscratch.utils.divide_on_feature
mlfromscratch.utils.batch_iterator
mlfromscratch.utils.data_manipulation.train_test_split
X.resp.sum
mlfromscratch.supervised_learning.NaiveBayes.predict
mlfromscratch.utils.train_test_split
self._pool_forward
math.floor
sklearn.datasets.make_classification
layer.parameters
self.LassoRegression.super.__init__
numpy.clip
f.read.split
Perceptron.predict
mlfromscratch.unsupervised_learning.Apriori
calculate_std_dev
self.env.render
log2
numpy.identity
Perceptron.fit
X.reshape.dot
mlfromscratch.supervised_learning.SupportVectorMachine.fit
self._expectation
self.build_generator
tree.predict
numpy.tile
filter_width.filter_height.channels.np.arange.np.repeat.reshape
self.LassoRegression.super.fit
mlfromscratch.unsupervised_learning.PCA.transform
numpy.atleast_2d
self.discriminator.summary
batch_errors.append
self._backward_pass
numpy.linalg.det
self.ElasticNet.super.fit
layer.set_input_shape
mlfromscratch.deep_learning.layers.Conv2D
self._vote
self.sigmoid.gradient
mlfromscratch.utils.calculate_entropy
numpy.sum
self.model.predict
mlfromscratch.unsupervised_learning.PAM
individual.test_on_batch
index_combinations
progressbar.Bar
col.mean
numpy.ones
numpy.arange
cvxopt.solvers.qp
NaiveBayes
self.letters.index
t.accum_grad.T.dot
mlfromscratch.unsupervised_learning.FPGrowth.find_frequent_itemsets
cutoff.i.parent1.layers.W.copy
mlfromscratch.supervised_learning.GradientBoostingRegressor.fit
matplotlib.pyplot.title
range
mlfromscratch.reinforcement_learning.DeepQNetwork.train
matplotlib.pyplot.figure
mlfromscratch.supervised_learning.ElasticNet.fit
GradientBoostingClassifier.fit
numpy.insert
mlfromscratch.supervised_learning.Adaboost.predict
os.path.join
self.hidden_activation
self.PolynomialRidgeRegression.super.predict
mlfromscratch.deep_learning.loss_functions.CrossEntropy
self._expand_cluster
mlfromscratch.supervised_learning.PolynomialRidgeRegression.fit
mlfromscratch.utils.k_fold_cross_validation_sets
matplotlib.pyplot.axis
self._calculate_scatter_matrices
X.dot
frequent.append
self._get_non_medoids
mlfromscratch.supervised_learning.BayesianRegression
self.w0_opt.update
mlfromscratch.deep_learning.layers.ZeroPadding2D
mlfromscratch.deep_learning.activation_functions.Sigmoid
X.std
DCGAN.train
sklearn.datasets.load_iris
numpy.exp
self.regularization.grad
mlfromscratch.unsupervised_learning.DBSCAN.predict
Perceptron
self.XGBoostRegressionTree.super.fit
x.strip.replace
sigmoid.sum
rules.append
matplotlib.pyplot.close
Adaboost.predict
col.var
KNN.predict
matplotlib.pyplot.ylabel
self._build_model
mlfromscratch.unsupervised_learning.RBM
self.cmap
numpy.amax
S.np.linalg.pinv.V.dot.dot
self._calculate_prior
float
mlfromscratch.supervised_learning.XGBoost.fit
mlfromscratch.supervised_learning.LDA.predict
matplotlib.pyplot.subplots
mlfromscratch.supervised_learning.KNN
numpy.random.random_sample
self.multivariate_gaussian
image_to_column
numpy.bincount
matplotlib.pyplot.suptitle
mlfromscratch.supervised_learning.ClassificationTree.fit
self.env.close
numpy.random.normal
column_to_image
sklearn.datasets.make_moons
items.sort
mu_n.T.dot
sample_predictions.astype.np.bincount.argmax
activation_function
get_im2col_indices
self.kernel
X1.mean
cutoff.i.parent2.layers.W.copy
mlfromscratch.deep_learning.optimizers.Adam
self.print_tree
X_test.reshape.reshape
mlfromscratch.supervised_learning.GradientBoostingClassifier.fit
y_pred.np.sign.flatten
mlfromscratch.supervised_learning.MultiClassLDA
XGBoost.fit
pow
shuffle_data
self.W_opt.update
numpy.empty
conditional_database.append
cols.transpose.reshape
self._closest_centroid
XGBoost.predict
mlfromscratch.supervised_learning.GradientBoostingRegressor.predict
self._inherit_weights
accum_grad.reshape.transpose
mlfromscratch.utils.data_operation.calculate_covariance_matrix
mlfromscratch.utils.accuracy_score
self.transform
numpy.ravel
numpy.argmax
X_train.reshape.reshape
self._generate_candidates
terminaltables.AsciiTable
numpy.random.uniform
mnist.data.reshape
self.LinearRegression.super.fit
X.T.X_X.np.linalg.pinv.dot.dot
split_func
self._leaf_value_calculation
mlfromscratch.deep_learning.layers.Activation
mlfromscratch.utils.calculate_variance
mlfromscratch.supervised_learning.LogisticRegression.predict
self.param.X.dot.self.sigmoid.np.round.astype
self.loss.hess
mlfromscratch.supervised_learning.LinearRegression
mlfromscratch.supervised_learning.GradientBoostingRegressor
numpy.pad.transpose
i.self.parameters.append
progressbar.ProgressBar
model.evolve.add
self.w_updt.any
j.i.axs.imshow
self.autoencoder.train_on_batch
X.reshape.reshape
x.strip
self.PolynomialRegression.super.__init__
numpy.percentile
mlfromscratch.supervised_learning.LogisticRegression
numpy.tile.reshape
numpy.log
self._calculate_cost
mlfromscratch.supervised_learning.MultiClassLDA.plot_in_2d
covar.np.linalg.pinv.mean.sample.T.dot.dot
mlfromscratch.deep_learning.NeuralNetwork.summary
mlfromscratch.reinforcement_learning.DeepQNetwork.set_model
numpy.divide
numpy.var
numpy.add.at
noise.self.generator.predict.reshape
self.autoencoder.summary
t.accum_grad.dot
numpy.random.seed
t.X.dot
mlfromscratch.utils.polynomial_features
mlfromscratch.utils.to_categorical
mlfromscratch.supervised_learning.RegressionTree.predict
math.sqrt
model.velocity.append
numpy.random.rand
MultilayerPerceptron.fit
mlfromscratch.supervised_learning.XGBoostRegressionTree
y_pred.y.self.loss.hess.sum
mlfromscratch.supervised_learning.RegressionTree.fit
self._create_clusters
mlfromscratch.unsupervised_learning.Apriori.find_frequent_itemsets
super
layer.output_shape
mlfromscratch.supervised_learning.XGBoost
X.T.X.diag_gradient.X.T.dot.dot.np.linalg.pinv.dot.dot
i.parent.layers.W.copy
self.loss_function.gradient
mlfromscratch.deep_learning.layers.BatchNormalization
self.trees.append
self.W_col.T.dot
numpy.roll
self._initialize_weights
self.V_opt.update
i.self.trees.predict
mlfromscratch.unsupervised_learning.Apriori.generate_rules
slice
self._get_likelihoods
activation.activation_functions
SW.np.linalg.inv.dot
numpy.sign
mlfromscratch.supervised_learning.ClassificationTree
self.discriminator.train_on_batch
math.log
str
self._memorize
mlfromscratch.utils.euclidean_distance
NeuralNetwork.predict
self._get_non_medoids.append
model.evolve.predict
numpy.atleast_2d.reshape
numpy.argmax.any
hasattr
SupportVectorMachine.predict
self.X_col.T.accum_grad.dot.reshape
numpy.zeros
scipy.stats.chi2.rvs
self._initialize_population
DCGAN
XGBoost
SupportVectorMachine.fit
y_pred.y.dot
mlfromscratch.supervised_learning.LinearRegression.predict
self.RegressionTree.super.fit
self._update_weights
mlfromscratch.supervised_learning.ClassificationTree.predict
mlfromscratch.supervised_learning.Adaboost.fit
X.mean
NaiveBayes.predict
mlfromscratch.utils.standardize
mlfromscratch.supervised_learning.LDA
self.beta_opt.update
numpy.repeat.reshape
numpy.full
format
batch.dot
self.autoencoder.predict
self._determine_prefixes
os.path.abspath
mlfromscratch.deep_learning.loss_functions.SquareLoss
DecisionStump
self.autoencoder.layers.extend
self.population.append
scipy.stats.multivariate_normal.rvs
list
f.read
self.loss.gradient.dot
sklearn.datasets.make_regression
self._classify
cluster.append
sklearn.datasets.make_blobs
numpy.multiply
model
self._closest_medoid
numpy.shape
self.bar
self.RidgeRegression.super.__init__
total_mean._mean.dot
self._pool_backward
mlfromscratch.supervised_learning.GradientBoostingClassifier.predict
self.output_activation.gradient
numpy.unique
tuple
accum_grad.reshape.reshape
self.parameters.append
numpy.outer
mlfromscratch.utils.get_random_subsets
Adaboost.fit
self.activation.gradient
numpy.zeros_like
len
self._gain
numpy.array_equal
X.diag_gradient.dot.dot
self._get_frequent_items.index
S.np.linalg.pinv.V.dot.dot.dot
zip
y.shape.X.shape.model_builder.summary
self.clusters.append
self._has_infrequent_itemsets
self._init_random_medoids.copy
t.self.states.dot
self.reconstruct
self.env.step
self.size.X.repeat.repeat
MultilayerPerceptron
self.responsibility.argmax
loss
mlfromscratch.unsupervised_learning.DBSCAN
sample.dot
numpy.atleast_1d
sklearn.datasets.fetch_mldata
X.mean.X.T.dot
mlfromscratch.utils.normalize.dot
self.fit
itertools.combinations_with_replacement
mlfromscratch.deep_learning.layers.RNN
self._init_random_gaussians
centroid_i.clusters.append
self.model_builder
set
self._draw_scaled_inv_chi_sq
l1_regularization
self.ElasticNet.super.__init__
self.clfs.append
self.PolynomialRegression.super.predict
fig.savefig
sum
Y.mean
mlfromscratch.supervised_learning.LDA.fit
accum_grad.reshape.dot
setuptools.setup
LogisticRegression.predict
mlfromscratch.supervised_learning.LassoRegression.fit
self.initialize_weights
self.memory.append
mlfromscratch.supervised_learning.decision_tree.RegressionTree.predict
X_X.np.linalg.pinv.dot
X_col.np.argmax.flatten
cmap
cutoff.i.parent1.layers.w0.copy
copy.copy
self._crossover
layer.layer_name
y_pred.y.self.loss.gradient.y.sum
name.activation_functions
self.hidden_activation.dot
self.GradientBoostingRegressor.super.__init__
numpy.pad.reshape
omega_n.dot
GradientBoostingClassifier.predict
numpy.bincount.argmax
mlfromscratch.supervised_learning.Perceptron.fit
numpy.inner
mlfromscratch.supervised_learning.GradientBoostingClassifier
join
matplotlib.pyplot.scatter
matplotlib.pyplot.show
self._rules_from_itemset
matplotlib.pyplot.xlabel
MultilayerPerceptron.predict
transaction.sort
mlfromscratch.supervised_learning.PolynomialRidgeRegression
mlfromscratch.supervised_learning.LogisticRegression.fit
numpy.diag
itertools.combinations
self._sample.dot
medoid_i.clusters.append
numpy.append
mlfromscratch.deep_learning.NeuralNetwork.predict
self.env.reset
X.T.dot
LogisticRegression
random.sample
self.generator.predict
self.responsibilities.append
mlfromscratch.utils.Plot.plot_in_2d
setuptools.find_packages
self._get_cluster_labels
Adaboost
mlfromscratch.supervised_learning.RegressionTree
self.PolynomialRegression.super.fit
self.log_func
mlfromscratch.unsupervised_learning.RBM.fit
Autoencoder
self.build_decoder
mlfromscratch.unsupervised_learning.GeneticAlgorithm.run
self.responsibility.sum
self.loss_function.acc
numpy.linalg.svd
mlfromscratch.deep_learning.layers.UpSampling2D
eigenvalues.argsort
mlfromscratch.supervised_learning.RandomForest
numpy.pad
mlfromscratch.unsupervised_learning.FPGrowth
mlfromscratch.supervised_learning.Neuroevolution
mlfromscratch.supervised_learning.RandomForest.predict
self._maximization
self._get_itemset_key
cols.transpose.transpose
pandas.read_csv
self._construct_training_set
self.training_reconstructions.append
numpy.linalg.eig
numpy.argmax.astype
math.exp
self.progressbar
table_data.append
mlfromscratch.deep_learning.layers.Dropout
numpy.expand_dims.dot
numpy.max
self._get_neighbors
NeuralNetwork.fit
mlfromscratch.supervised_learning.Perceptron
self.training_errors.append
RandomForest.predict
cvxopt.matrix
cov_tot.np.linalg.pinv.dot
matplotlib.pyplot.plot
X.T.X_sq_reg_inv.dot.dot
mlfromscratch.deep_learning.layers.Flatten
mlfromscratch.unsupervised_learning.GaussianMixtureModel
self.generator.summary
int
fig.add_subplot.scatter
numpy.where
self.GradientBoostingClassifier.super.fit
mlfromscratch.unsupervised_learning.GaussianMixtureModel.predict
FPTreeNode
neighbors.append
self.visited_samples.append
self.activation
self.regularization
cnt.gen_imgs.reshape
RandomForest.fit
mlfromscratch.supervised_learning.Perceptron.predict
DecisionNode
self._initialize
i.parent.layers.w0.copy
self.loss.gradient
mlfromscratch.supervised_learning.ElasticNet.predict
self.omega0.self.mu0.T.dot.dot
self.errors.append
self.freq_itemsets.append
main
abs
self._generate_candidates.append
self.combined.layers.extend
self.W_col.dot
i.self.trees.fit
mlfromscratch.supervised_learning.SupportVectorMachine
numpy.random.random
self.ClassificationTree.super.fit
GradientBoostingClassifier
x.startswith
GAN
items.append
batch_error.append
codecs.open
self.discriminator.set_trainable
ClassificationTree.predict
Rule
self.loss_function.loss
self._init_random_medoids
numpy.random.binomial
self._initialize_parameters
mlfromscratch.supervised_learning.XGBoostRegressionTree.predict
self.mu0.T.dot
self.layers.output_shape
mlfromscratch.utils.mean_squared_error
self._impurity_calculation
mean.X.T.dot
numpy.prod
model.evolve.evolve
LogisticRegression.fit
calculate_variance
y.T.dot
numpy.sqrt
mlfromscratch.deep_learning.layers.Dense
mlfromscratch.unsupervised_learning.KMeans.predict
numpy.vstack
class_distr.append
mlfromscratch.deep_learning.NeuralNetwork.fit
negative_visible.T.dot
mlfromscratch.utils.data_operation.mean_squared_error
numpy.expand_dims.sum
numpy.random.choice
self.gamma_opt.update
progressbar.ETA
mlfromscratch.utils.Plot
X.astype.astype
mlfromscratch.supervised_learning.NaiveBayes
NeuralNetwork
mlfromscratch.unsupervised_learning.GeneticAlgorithm
ClassificationTree
mlfromscratch.utils.normalize
self.PolynomialRidgeRegression.super.fit
batch.T.dot
self._is_prefix
mlfromscratch.deep_learning.activation_functions.Softmax
X2.mean
math.pow
subsets.append
imgs.self.autoencoder.predict.reshape
numpy.linalg.eigh
mlfromscratch.supervised_learning.ElasticNet
y.reshape
self._calculate_fitness
self._select_action
</pre>
</details>
@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much. | open | 2022-10-26T02:13:37Z | 2022-10-26T02:13:37Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/102 | [] | PyDeps | 0 |
ageitgey/face_recognition | machine-learning | 1,547 | Windows installation | * face_recognition version: 1.3.0
* Python version: 3.12
* Operating System: windows 11
I am trying to build a face recognition application. I successfully installed the opencv-python, dlib, and face_recognition libraries, but when I tried to run the file I was asked to install `face_recognition_models`. I did that too, yet I am repeatedly asked to do the same:
(venv) PS D:\sathish\face_recognition> `python main.py`
Please install `face_recognition_models` with this command before using `face_recognition`:
pip install git+https://github.com/ageitgey/face_recognition_models
If I run `pip show face_recognition`, it shows the information below:
Name: face-recognition
Version: 1.3.0
Summary: Recognize faces from Python or from the command line
Home-page: https://github.com/ageitgey/face_recognition
Author: Adam Geitgey
Author-email: ageitgey@gmail.com
License: MIT license
Location: D:\sathish\face_recognition\venv\Lib\site-packages
Requires: Click, dlib, face-recognition-models, numpy, Pillow
Required-by:
Why am I repeatedly asked to do the installation? Please suggest a solution.
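For what it's worth, the message comes from an import-time check; here is a sketch of the kind of guard involved (an assumption based on the printed message, not the actual library code). Because the check runs in the active interpreter, seeing it repeatedly usually means `face_recognition_models` was installed into a different environment than the one running `main.py`.

```python
import importlib

def models_available():
    # Mirror of the guard: if the models package cannot be imported
    # from *this* interpreter, print the install hint and report failure.
    try:
        importlib.import_module("face_recognition_models")
        return True
    except ImportError:
        print("Please install `face_recognition_models` with: "
              "pip install git+https://github.com/ageitgey/face_recognition_models")
        return False
```

Comparing `python -m pip show face_recognition_models` inside the activated venv against the interpreter that actually runs the script usually pinpoints the mismatch.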
Thanks... | open | 2024-01-07T06:51:57Z | 2025-03-13T13:13:21Z | https://github.com/ageitgey/face_recognition/issues/1547 | [] | SATHISHK108 | 12 |
plotly/dash | data-visualization | 2,711 | [BUG] Import error with typing_extensions == 4.9.0 | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
python 3.9.7
Linux
typing_extensions == 4.9.0
```
dash 2.14.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
When importing dash, an error was raised related to typing_extensions (cannot import name 'NotRequired' from 'typing_extensions').
Falling back to the lowest version of typing_extensions that dash permits (4.1.1) fixed the bug.
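For context, as far as I can tell `NotRequired` was added to typing_extensions in the 4.x line and to the stdlib `typing` module in Python 3.11. A common compatibility guard looks like the sketch below (an illustration of the pattern, not dash's actual code; the final `None` fallback is only there to keep the sketch importable anywhere):

```python
try:
    from typing import NotRequired  # Python 3.11+
except ImportError:
    try:
        from typing_extensions import NotRequired  # older interpreters
    except ImportError:
        NotRequired = None  # neither source available in this environment
```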
| closed | 2023-12-14T10:36:38Z | 2023-12-14T16:04:35Z | https://github.com/plotly/dash/issues/2711 | [] | abetatos | 3 |
scikit-image/scikit-image | computer-vision | 7,087 | TypeError: No matching signature found in watershed | Hi,
I am a new user and I encountered a problem with the package's watershed function. Here are the errors:
```
TypeError Traceback (most recent call last)
Input In [1], in <cell line: 52>()
49 markers[resized_im >= upper_cutoff] = 2
50 print (markers)
---> 52 segmentation = watershed(elevation_map, markers)
53 plt.imshow(segmentation)
54 plt.title('Segmented tissues')
File ~/.local/lib/python3.9/site-packages/skimage/segmentation/_watershed.py:215, in watershed(image, markers, connectivity, offset, mask, compactness, watershed_line)
212 marker_locations = np.flatnonzero(output)
213 image_strides = np.array(image.strides, dtype=np.intp) // image.itemsize
--> 215 _watershed_cy.watershed_raveled(image.ravel(),
216 marker_locations, flat_neighborhood,
217 mask, image_strides, compactness,
218 output.ravel(),
219 watershed_line)
221 output = crop(output, pad_width, copy=True)
223 return output
File _watershed_cy.pyx:70, in skimage.segmentation._watershed_cy.__pyx_fused_cpdef()
TypeError: No matching signature found
```
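The "No matching signature found" error comes from Cython fused-type dispatch: the compiled watershed routine only accepts the dtypes it was built for, so an unsupported input dtype fails at dispatch time rather than with a clearer dtype message. The usual fix, as an assumption, is casting the inputs, e.g. `watershed(elevation_map.astype(np.float64), markers.astype(np.int32))`. A stdlib analogue of the dispatch mechanism (not skimage code):

```python
from functools import singledispatch

@singledispatch
def process(values):
    # No overload registered for this input type
    raise TypeError("No matching signature found")

@process.register
def _(values: int):
    return values + 1

@process.register
def _(values: float):
    return values + 0.5
```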
Could you please help with the bug?
Thank you very much!
| closed | 2023-08-07T05:11:48Z | 2024-02-21T13:05:53Z | https://github.com/scikit-image/scikit-image/issues/7087 | [
":people_hugging: Support"
] | weibei43 | 7 |
pytest-dev/pytest-mock | pytest | 252 | When pytest-mock installed "assert_not_called" has an exception when assertion fails | # Description
Hi,
I am trying to use MagicMock's `assert_not_called` method. When I have `pytest-mock` installed and the assertion fails, I see this message: `During handling of the above exception, another exception occurred:`
This might be undesired because it produces too much output when trying to find out why my test failed.
## Example Code
```python
# Contents of tests/test_something.py
from unittest.mock import MagicMock
def test_assert_not_called_with_pytest():
magic_mock = MagicMock()
magic_mock.hello(1)
magic_mock.hello.assert_not_called()
```
### Example run without pytest-mock installed
```
============================= test session starts ==============================
platform darwin -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /private/var/folders/6g/63d9pqp17ksf50rrs6qw65br0000gp/T/tmp.KtOTIK0R
collected 1 item
tests/test_something.py F [100%]
=================================== FAILURES ===================================
______________________ test_assert_not_called_with_pytest ______________________
def test_assert_not_called_with_pytest():
magic_mock = MagicMock()
magic_mock.hello(1)
> magic_mock.hello.assert_not_called()
tests/test_something.py:6:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <MagicMock name='mock.hello' id='4355535152'>
def assert_not_called(self):
"""assert that the mock was never called.
"""
if self.call_count != 0:
msg = ("Expected '%s' to not have been called. Called %s times.%s"
% (self._mock_name or 'mock',
self.call_count,
self._calls_repr()))
> raise AssertionError(msg)
E AssertionError: Expected 'hello' to not have been called. Called 1 times.
E Calls: [call(1)].
/Users/jnguyen/.pyenv/versions/3.9.5/lib/python3.9/unittest/mock.py:868: AssertionError
=========================== short test summary info ============================
FAILED tests/test_something.py::test_assert_not_called_with_pytest - Assertio...
============================== 1 failed in 0.10s ===============================
```
### Example run with pytest-mock installed
```
➜ python -m pytest tests
============================= test session starts ==============================
platform darwin -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /private/var/folders/6g/63d9pqp17ksf50rrs6qw65br0000gp/T/tmp.KtOTIK0R
plugins: mock-3.6.1
collected 1 item
tests/test_something.py F [100%]
=================================== FAILURES ===================================
______________________ test_assert_not_called_with_pytest ______________________
__wrapped_mock_method__ = <function NonCallableMock.assert_not_called at 0x10421da60>
args = (<MagicMock name='mock.hello' id='4365369984'>,), kwargs = {}
__tracebackhide__ = True
msg = "Expected 'hello' to not have been called. Called 1 times.\nCalls: [call(1)].\n\npytest introspection follows:\n\nArgs:\nassert (1,) == ()\n Left contains one more item: 1\n Use -v to get the full diff"
__mock_self = <MagicMock name='mock.hello' id='4365369984'>, actual_args = (1,)
actual_kwargs = {}
introspection = '\nArgs:\nassert (1,) == ()\n Left contains one more item: 1\n Use -v to get the full diff'
@py_assert2 = (), @py_assert1 = None
@py_format4 = '(1,) == ()\n~Left contains one more item: 1\n~Use -v to get the full diff'
def assert_wrapper(
__wrapped_mock_method__: Callable[..., Any], *args: Any, **kwargs: Any
) -> None:
__tracebackhide__ = True
try:
> __wrapped_mock_method__(*args, **kwargs)
venv/lib/python3.9/site-packages/pytest_mock/plugin.py:414:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <MagicMock name='mock.hello' id='4365369984'>
def assert_not_called(self):
"""assert that the mock was never called.
"""
if self.call_count != 0:
msg = ("Expected '%s' to not have been called. Called %s times.%s"
% (self._mock_name or 'mock',
self.call_count,
self._calls_repr()))
> raise AssertionError(msg)
E AssertionError: Expected 'hello' to not have been called. Called 1 times.
E Calls: [call(1)].
/Users/jnguyen/.pyenv/versions/3.9.5/lib/python3.9/unittest/mock.py:868: AssertionError
During handling of the above exception, another exception occurred:
def test_assert_not_called_with_pytest():
magic_mock = MagicMock()
magic_mock.hello(1)
> magic_mock.hello.assert_not_called()
E AssertionError: Expected 'hello' to not have been called. Called 1 times.
E Calls: [call(1)].
E
E pytest introspection follows:
E
E Args:
E assert (1,) == ()
E Left contains one more item: 1
E Use -v to get the full diff
tests/test_something.py:6: AssertionError
=========================== short test summary info ============================
FAILED tests/test_something.py::test_assert_not_called_with_pytest - Assertio...
============================== 1 failed in 0.09s ===============================
``` | closed | 2021-08-09T18:43:00Z | 2022-01-28T12:31:35Z | https://github.com/pytest-dev/pytest-mock/issues/252 | [
"enhancement",
"help wanted"
] | ecs-jnguyen | 5 |
pyg-team/pytorch_geometric | deep-learning | 9,709 | add_random_edge triggers type error | ### 🐛 Describe the bug
Hi,
```python
import torch
from torch_geometric.utils import add_random_edge
edge_index = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4],
[1, 2, 3, 4, 0, 2, 3, 4, 0, 1, 4, 0, 1, 4, 0, 1, 2, 3]])
edge_index, added_edges = add_random_edge(edge_index, p=0.05, force_undirected=True, num_nodes=5)
```
And the error goes:
```bash
Traceback (most recent call last):
File "/workspaces/mygcl/test.py", line 7, in <module>
edge_index, added_edges = add_random_edge(edge_index, p=0.05, force_undirected=True, num_nodes=5)
File "/usr/local/lib/python3.9/dist-packages/torch_geometric/utils/augmentation.py", line 230, in add_random_edge
edge_index_to_add = negative_sampling(
File "/usr/local/lib/python3.9/dist-packages/torch_geometric/utils/_negative_sampling.py", line 114, in negative_sampling
return vector_to_edge_index(neg_idx, size, bipartite, force_undirected)
File "/usr/local/lib/python3.9/dist-packages/torch_geometric/utils/_negative_sampling.py", line 377, in vector_to_edge_index
col = offset[row].add_(idx) % num_nodes
RuntimeError: result type Float can't be cast to the desired output type Long
```
It seems that no edges end up being added, which is okay. But some modification is probably needed in `negative_sampling` to prevent the runtime type error.
### Versions
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 23357 100 23357 0 0 82052 0 --:--:-- --:--:-- --:--:-- 82242
Collecting environment information...
PyTorch version: 2.1.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.20 (main, Sep 7 2024, 18:35:25) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Ti Laptop GPU
Nvidia driver version: 560.94
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 3
BogoMIPS: 5376.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.0
[pip3] torch==2.1.0+cu118
[pip3] torch-cluster==1.6.3+pt21cu118
[pip3] torch-geometric==2.6.1
[pip3] torch-scatter==2.1.2+pt21cu118
[pip3] torch-sparse==0.6.18+pt21cu118
[pip3] torch-spline-conv==1.2.2+pt21cu118
[pip3] torchaudio==2.1.0+cu118
[pip3] torchvision==0.16.0+cu118
[pip3] triton==2.1.0
[conda] Could not collect | open | 2024-10-14T19:37:36Z | 2024-10-14T19:37:36Z | https://github.com/pyg-team/pytorch_geometric/issues/9709 | [
"bug"
] | Zero-Yi | 0 |
miguelgrinberg/python-socketio | asyncio | 831 | python-socketio sio.emit missing / automatically disconnects | **Describe the bug**
I am calling emit with JSON data, but it automatically disconnects me:
sio.emit('createC', json)
error: Disconnected from "myservername:port"
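One thing worth checking before digging further: the data passed to `emit` must be JSON-serializable, because the Socket.IO packet encoder serializes it before sending. If the `json` variable in `sio.emit('createC', json)` is accidentally the `json` module itself (or contains non-serializable objects such as sets or custom classes), encoding can fail and the connection drops. A quick stdlib sanity check, with a hypothetical payload since the real one isn't shown:

```python
import json

payload = {"name": "room1", "size": 4}  # hypothetical createC payload
json.dumps(payload)  # succeeds: the payload is safe to emit

bad_payload = {"module": json}  # e.g. accidentally passing the json module itself
try:
    json.dumps(bad_payload)
except TypeError as exc:
    print("not serializable:", exc)
```

If `json.dumps` raises a `TypeError` on your payload, that would explain the disconnect.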
**Logs**
Dec 15 18:42:04 b1 python3[13906]: AQkJ_fz9NXpxSkucAAAA: Received packet PONG data
Dec 15 18:42:29 b1 python3[13906]: AQkJ_fz9NXpxSkucAAAA: Sending packet PING data None
Dec 15 18:42:29 b1 python3[13906]: AQkJ_fz9NXpxSkucAAAA: Received packet PONG data
Dec 15 18:42:48 b1 python3[13906]: hGTfoVkyW7PQsWvWAAAC: Sending packet OPEN data {'sid': 'hGTfoVkyW7PQsWvWAAAC', 'upgrades': [], 'pingTimeout': 20000, 'pingInterval': >
Dec 15 18:42:48 b1 python3[13906]: hGTfoVkyW7PQsWvWAAAC: Received request to upgrade to websocket
Dec 15 18:42:48 b1 python3[13906]: hGTfoVkyW7PQsWvWAAAC: Upgrade to websocket successful
Dec 15 18:42:48 b1 python3[13906]: hGTfoVkyW7PQsWvWAAAC: Received packet MESSAGE data 0
Dec 15 18:42:48 b1 python3[13906]: hGTfoVkyW7PQsWvWAAAC: Sending packet MESSAGE data 0{"sid":"mfYNSxjMOPko8lKsAAAD"}
Dec 15 18:42:54 b1 python3[13906]: AQkJ_fz9NXpxSkucAAAA: Sending packet PING data None
Dec 15 18:42:54 b1 python3[13906]: AQkJ_fz9NXpxSkucAAAA: Received packet PONG data
| closed | 2021-12-16T00:44:15Z | 2022-07-16T11:11:15Z | https://github.com/miguelgrinberg/python-socketio/issues/831 | [
"question"
] | skorvus | 2 |
flairNLP/flair | nlp | 3,434 | [Question]: | ### Question
I am working on a Sequence Labelling problem using the [FLAIR module](https://github.com/flairNLP/flair).
I have dummy e-commerce data with 3 different types of entities and each entity has approx ~1K sub-entities. Training Data (size ~200K) is synthetically created with a combination of 3K labels.
I tried to validate the FLAIR Sequence Labelling with a Query Classification model (with 3K labels). The FLAIR model (F1-score: 60%) seriously underperforms than Classification model (F1-score: 80%).
I am reluctant to develop a Sequence Labelling module because I expect the Sequence Labeller to detect and propose new entities as well.
Can you help me understand where I could go wrong and what other models I could try? | open | 2024-03-27T09:16:17Z | 2024-03-27T09:16:17Z | https://github.com/flairNLP/flair/issues/3434 | [
"question"
] | keshavgarg139 | 0 |
polarsource/polar | fastapi | 4,789 | Customer portal is empty | ### Description
<!-- A brief description with a link to the page on the site where you found the issue. -->
When I open a customer portal, it just shows an empty page
### Current Behavior
<!-- A brief description of the current behavior of the issue. -->
- Go to the Customers page
- Click on a customer
- Click Generate Customer Portal Link
- Copy and open in a new tab or current tab
- Page is empty, I can't do anything
### Expected Behavior
<!-- A brief description of what you expected to happen. -->
- I should be able to view my transactions and cancel my subscription
### Screenshots
<!-- Add screenshots, if applicable, to help explain your problem. -->

Video: https://www.awesomescreenshot.com/video/35226934?key=6b53773ed9b47a086bd8163e0b9c7e78
### Environment:
- Operating System: N/A
- Browser (if applicable): [e.g., Chrome, Firefox, Safari]
---
<!-- Thank you for contributing to Polar! We appreciate your help in improving it. -->
<!-- Questions: [Discord Server](https://discord.com/invite/Pnhfz3UThd). --> | closed | 2025-01-06T03:12:06Z | 2025-01-06T12:48:45Z | https://github.com/polarsource/polar/issues/4789 | [
"bug"
] | rotimi-best | 2 |
deepfakes/faceswap | deep-learning | 1,281 | Error occured during extraction | *Note: For general usage questions and help, please use either our [FaceSwap Forum](https://faceswap.dev/forum)
or [FaceSwap Discord server](https://discord.gg/FC54sYg). General usage questions are liable to be closed without
response.*
**Crash reports MUST be included when reporting bugs.**
**Describe the bug**
I placed Trump.mp4 in faceswap/src and set the output dir to faceswap/faces. Then I pressed the extract button and this error occurred:
```bash
Loading...
Setting Faceswap backend to NVIDIA
11/07/2022 09:03:33 INFO Log level set to: INFO
11/07/2022 09:03:35 INFO Loading Detect from S3Fd plugin...
11/07/2022 09:03:35 INFO Loading Align from Fan plugin...
11/07/2022 09:03:35 INFO Downloading model: 'face-alignment-network_2d4_keras' from: https://github.com/deepfakes-models/faceswap-models/releases/download/v13.2/face-alignment-network_2d4_keras_v2.zip
Exception in thread Thread-3:
Traceback (most recent call last):
File "C:\Users\Don Oh\anaconda3\envs\faceswap\lib\threading.py", line 950, in _bootstrap_inner
self.run()
File "C:\Users\Don Oh\anaconda3\envs\faceswap\lib\threading.py", line 888, in run
self._target(*self._args, **self._kwargs)
File "D:\faceswap\lib\gui\wrapper.py", line 269, in read_stderr
output = self.process.stderr.readline()
UnicodeDecodeError: 'cp949' codec can't decode byte 0xe2 in position 19: illegal multibyte sequence
11/07/2022 09:03:46 INFO Extracting: 'face-alignment-network_2d4_keras'
```
This shows that the error occurred in faceswap\lib\gui\wrapper.py, line 269, which has a problem with encoding. I can't find the line where the encoding is set. If I could find it, I could change it to utf-8 or something that doesn't trigger the error. Can you let me know where the encoding setting is? Please be as specific as possible.
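For anyone hitting the same thing: the exception suggests the child process's stderr pipe is being decoded with the Windows locale codec (cp949 here) instead of UTF-8. I don't know the exact code in `wrapper.py`, but assuming the child is created with `subprocess.Popen`, a sketch of forcing UTF-8 decoding looks like this (the spawned command is purely illustrative):

```python
import subprocess
import sys

# Spawn a child that writes raw UTF-8 bytes to stderr (the sparkles emoji,
# bytes 0xE2 0x9C 0xA8 -- note 0xE2 is exactly the byte cp949 chokes on).
child_code = "import sys; sys.stderr.buffer.write(b'\\xe2\\x9c\\xa8 done')"
process = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stderr=subprocess.PIPE,
    encoding="utf-8",   # decode the pipe as UTF-8 instead of the locale default
    errors="replace",   # never raise UnicodeDecodeError on unexpected bytes
)
output = process.stderr.readline()
process.wait()
print(output)  # → ✨ done
```

Without the `encoding`/`errors` arguments, `readline()` decodes with the locale's default codec, which is where the `'cp949' codec can't decode byte 0xe2` error comes from.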
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on extract
3. Scroll down to '....'
4. See error
**Expected behavior**
Let me know where I can find the encoding setting in the project.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows 10 Home 21H2
- Python Version: 3.9
- Conda Version: 22.9.0
- Commit ID: X
**Additional context**
Add any other context about the problem here.
**Crash Report**
There is an error log in the root of the faceswap folder. I'm not sure it's the crash report, because there's no file named "crash report".
```bash
11/06/2022 21:04:37 MainProcess MainThread logger log_setup INFO Log level set to: INFO
11/06/2022 21:04:40 MainProcess MainThread plugin_loader _import INFO Loading Detect from S3Fd plugin...
11/06/2022 21:04:40 MainProcess MainThread utils _download_model INFO Downloading model: 's3fd_keras' from: https://github.com/deepfakes-models/faceswap-models/releases/download/v11.2/s3fd_keras_v2.zip
11/06/2022 21:04:43 MainProcess MainThread utils _unzip_model INFO Extracting: 's3fd_keras'
11/06/2022 21:04:43 MainProcess MainThread plugin_loader _import INFO Loading Align from Fan plugin...
11/06/2022 21:04:43 MainProcess MainThread utils _download_model INFO Downloading model: 'face-alignment-network_2d4_keras' from: https://github.com/deepfakes-models/faceswap-models/releases/download/v13.2/face-alignment-network_2d4_keras_v2.zip
11/06/2022 21:05:17 MainProcess MainThread utils _download_model WARNING Error downloading model ([Errno 22] Invalid argument). Retrying 2 of 6...
11/06/2022 21:05:17 MainProcess MainThread utils _download_model WARNING Error downloading model ([Errno 22] Invalid argument). Retrying 3 of 6...
11/06/2022 21:05:17 MainProcess MainThread utils _download_model WARNING Error downloading model ([Errno 22] Invalid argument). Retrying 4 of 6...
11/06/2022 21:05:18 MainProcess MainThread utils _download_model WARNING Error downloading model ([Errno 22] Invalid argument). Retrying 5 of 6...
11/06/2022 21:05:18 MainProcess MainThread utils _download_model WARNING Error downloading model ([Errno 22] Invalid argument). Retrying 6 of 6...
11/06/2022 21:05:18 MainProcess MainThread utils _download_model ERROR Failed to download model. Exiting. (Error: '[Errno 22] Invalid argument', URL: 'https://github.com/deepfakes-models/faceswap-models/releases/download/v13.2/face-alignment-network_2d4_keras_v2.zip')
11/06/2022 21:05:18 MainProcess MainThread utils _download_model INFO You can try running again to resume the download.
11/06/2022 21:05:18 MainProcess MainThread utils _download_model INFO Alternatively, you can manually download the model from: https://github.com/deepfakes-models/faceswap-models/releases/download/v13.2/face-alignment-network_2d4_keras_v2.zip and unzip the contents to: D:\faceswap\.fs_cache
``` | closed | 2022-11-07T00:23:51Z | 2022-11-10T13:15:17Z | https://github.com/deepfakes/faceswap/issues/1281 | [] | DonOhhhh | 1 |
jacobgil/pytorch-grad-cam | computer-vision | 322 | batch size for cam results | For batch size N > 1, I used to set targets = [ClassifierOutputTarget(cls)] * N to generate the grayscale_cam with output [N, H, W]
But not sure why, I tried again today and it doesn't work anymore. the grayscale_cam output is always [1, H, W]
The input tensor is [N, C, H, W]
Could you please check?
In `compute_cam_per_layer`:

```python
activations_list = [a.cpu().data.numpy()
                    for a in self.activations_and_grads.activations]
grads_list = [g.cpu().data.numpy()
              for g in self.activations_and_grads.gradients]
target_size = self.get_target_width_height(input_tensor)

cam_per_target_layer = []
# Loop over the saliency image from every layer
for i in range(len(self.target_layers)):
    target_layer = self.target_layers[i]
    layer_activations = None
    layer_grads = None
    if i < len(activations_list):
        layer_activations = activations_list[i]
    if i < len(grads_list):
        layer_grads = grads_list[i]

    cam = self.get_cam_image(input_tensor,
                             target_layer,
                             targets,
                             layer_activations,
                             layer_grads,
                             eigen_smooth)
    cam = np.maximum(cam, 0)
```
Even though `grads_list` and `activations_list` have length N, `layer_grads` only contains 1 grad because my `target_layers` has size 1; thus `get_cam_image` always returns 1 cam instead of N.
| closed | 2022-08-30T03:57:31Z | 2023-02-27T01:17:51Z | https://github.com/jacobgil/pytorch-grad-cam/issues/322 | [] | YangjiaqiDig | 9 |
mitmproxy/mitmproxy | python | 6,467 | Mitmproxy with authentication does not work with Maven | #### Problem Description
I have configured mitmproxy in my localhost with credentials in ~/.mitmproxy/config.yaml file. When I try to use maven deployment with the mitmproxy set-up. It hangs.
Maven CLI command
```
./mvn deploy:deploy-file -Durl=https://maven.pkg.github.com/Thevakumar-Luheerathan/module-ballerina-c2c -DrepositoryId=repo1 -Dfile=./pat-0.1.0.bala -DgroupId=luheerathan -DartifactId=pat -Dversion=0.1.124 -Dpackaging=bala -s ./settings.xml -X
```
settings.xml
```
<settings>
  <profiles>
    <profile>
      <id>my-repo-profile</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <!-- Repository using the defined server ID -->
        <repository>
          <id>repo1</id>
          <url>https://maven.pkg.github.com/Thevakumar-Luheerathan/module-ballerina-c2c</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>my-repo-profile</activeProfile>
  </activeProfiles>
  <servers>
    <server>
      <id>repo1</id>
      <username>Thevakumar-Luheerathan</username>
      <password>ghp_nA**************************************sdmLs</password>
    </server>
  </servers>
  <proxies>
    <proxy>
      <id>example-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>127.0.0.1</host>
      <port>8080</port>
      <username>luhee</username>
      <password>modi</password>
    </proxy>
  </proxies>
</settings>
```
When I remove the credentials and try again, it works fine. (The same deployment also works fine with a Squid proxy.)
#### Steps to reproduce the behavior:
1. Configure Mitmproxy with credentials and start it.
2. Use above maven deployment command to deploy any artifact to a https repository.
#### System Information
Mitmproxy: 10.0.0
Python: 3.12.0
OpenSSL: OpenSSL 3.1.4 24 Oct 2023
Platform: macOS-13.5.2-arm64-arm-64bit
| open | 2023-11-07T07:40:47Z | 2023-11-08T14:15:12Z | https://github.com/mitmproxy/mitmproxy/issues/6467 | [
"kind/triage"
] | Thevakumar-Luheerathan | 1 |
yt-dlp/yt-dlp | python | 11,786 | How to get all video links from Facebook page and Instagram? | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Hello Admin. How can I get all video links from a Facebook page or an Instagram profile? For YouTube and TikTok, videos download automatically by simply entering the profile URL, but for Facebook and Instagram I have to fetch each video link myself. Please help!
Ex. https://www.facebook.com/ChaseDrama123
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | closed | 2024-12-11T03:37:04Z | 2024-12-15T04:04:42Z | https://github.com/yt-dlp/yt-dlp/issues/11786 | [
"question"
] | seaklin83546 | 1 |
huggingface/transformers | tensorflow | 36,277 | The output tensor's data type is not torch.long when the input text is empty. | ### System Info
- `transformers` version: 4.48.1
- Platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
- Python version: 3.12.8
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.10.2 (cpu)
- Jax version: 0.5.0
- JaxLib version: 0.5.0
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA GeForce RTX 3060 Ti
### Who can help?
@ArthurZucker and @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The output tensor's data type is not torch.long when the input text is empty.
```python
t = tokenizer('', return_tensors='pt')
print(t['input_ids'].dtype)
# torch.float32
```
### Expected behavior
```python
t = tokenizer('', return_tensors='pt')
print(t['input_ids'].dtype)
# torch.int64
``` | open | 2025-02-19T09:43:29Z | 2025-03-04T14:54:19Z | https://github.com/huggingface/transformers/issues/36277 | [
"bug"
] | wangzhen0518 | 7 |
miguelgrinberg/flasky | flask | 111 | A problem when use "git push" | Hello, Miguel.
When I use "git push heroku master", the following problem happens:
```
Total 504 (delta 274), reused 501 (delta 273)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Using set buildpack heroku/python
remote:
remote: ! Push rejected, failed to detect set buildpack heroku/python
remote: More info: https://devcenter.heroku.com/articles/buildpacks#detection-failure
remote:
remote: Verifying deploy....
remote:
remote: ! Push rejected to flaskycn.
remote:
To https://git.heroku.com/flaskycn.git
! [remote rejected] master -> master (pre-receive hook declined)
```
| closed | 2016-01-28T16:45:45Z | 2017-01-23T10:08:50Z | https://github.com/miguelgrinberg/flasky/issues/111 | [
"question"
] | xpleaf | 8 |
robotframework/robotframework | automation | 5,032 | Collections: No default value shown in documentation for `Get/Pop From Dictionary` | When Checking the Documentation of keywords which uses `NOT_SET` as default , It shows "mandatory argument missing" which is wrong.
Though the functionality of the keyword is working fine.
https://robotframework.slack.com/archives/C3C28F9DF/p1705692610282559

| closed | 2024-01-22T06:27:46Z | 2024-06-04T14:08:48Z | https://github.com/robotframework/robotframework/issues/5032 | [
"bug",
"priority: low",
"rc 1"
] | A1K2V3 | 3 |
ExpDev07/coronavirus-tracker-api | rest-api | 52 | Thank you for this API! | It's really [nice](https://github.com/mugetsu/covid-tracker) API and hopefully you can find a more up-to-date sources better than the current one, thanks @ExpDev07 ❤️ | open | 2020-03-16T07:06:39Z | 2020-03-21T13:38:15Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/52 | [
"feedback"
] | mugetsu | 1 |
indico/indico | sqlalchemy | 6,706 | Configurable event types | **Is your feature request related to a problem? Please describe.**
Our organization supports only two types of events.
**Describe the solution you'd like**
Make available event types configurable.
**Describe alternatives you've considered**
Currently we patch the Indico installation, since the types are hard coded into several different places. | open | 2025-01-22T07:12:15Z | 2025-01-22T08:11:31Z | https://github.com/indico/indico/issues/6706 | [
"enhancement"
] | Reis-A-CIT | 2 |
agronholm/anyio | asyncio | 863 | Quadratic traceback-growth in TaskGroup nesting-level on asyncio backend | ### Things to check first
- [x] I have searched the existing issues and didn't find my bug already reported there
- [x] I have checked that my bug is still present in the latest release
### AnyIO version
4.8.0
### Python version
3.11
### What happened?
Since Python implicitly adds a context linking the current exception to the new one when raising while exiting a context manager due to an unhandled exception, and task groups always wrap errors in their own `BaseExceptionGroup`, the resulting traceback grows quadratically with the level of nested task groups due to the repeated "During handling of the above..." sections.
This seems to be straightforward to fix by simply explicitly raising from None in https://github.com/agronholm/anyio/blob/897d69aca35ff38ad230ee65f63186e420677f25/src/anyio/_backends/_asyncio.py#L767-L769
It looks like we already have added the exception to the group so breaking the context should be ok without loss of information.
### How can we reproduce the bug?
Simple example with 5 levels
```python
import anyio
async def main():
async with(
anyio.create_task_group(),
anyio.create_task_group(),
anyio.create_task_group(),
anyio.create_task_group(),
anyio.create_task_group(),
):
raise Exception("Something went wrong")
if __name__ == "__main__":
anyio.run(main)
```
Results in
```
Traceback (most recent call last):
File "/home/tobias/Projects/anyio/test_tb.py", line 12, in main
raise Exception("Something went wrong")
Exception: Something went wrong
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 12, in main
| raise Exception("Something went wrong")
| Exception: Something went wrong
+------------------------------------
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 12, in main
| raise Exception("Something went wrong")
| Exception: Something went wrong
+------------------------------------
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 12, in main
| raise Exception("Something went wrong")
| Exception: Something went wrong
+------------------------------------
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 12, in main
| raise Exception("Something went wrong")
| Exception: Something went wrong
+------------------------------------
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 15, in <module>
| anyio.run(main)
| File "/home/tobias/Projects/anyio/src/anyio/_core/_eventloop.py", line 74, in run
| return async_backend.run(func, args, {}, backend_options)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 2306, in run
| return runner.run(wrapper())
| ^^^^^^^^^^^^^^^^^^^^^
| File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
| return self._loop.run_until_complete(task)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/usr/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
| return future.result()
| ^^^^^^^^^^^^^^^
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 2294, in wrapper
| return await func(*args)
| ^^^^^^^^^^^^^^^^^
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 5, in main
| async with(
| File "/home/tobias/Projects/anyio/src/anyio/_backends/_asyncio.py", line 771, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/tobias/Projects/anyio/test_tb.py", line 12, in main
| raise Exception("Something went wrong")
| Exception: Something went wrong
+------------------------------------
```
| closed | 2025-01-28T16:02:18Z | 2025-01-29T19:39:44Z | https://github.com/agronholm/anyio/issues/863 | [
"bug"
] | tapetersen | 0 |
ray-project/ray | tensorflow | 51,010 | [distributed debugger] exception in regular remote worker function leading to access violation when debugger connects | ### What happened + What you expected to happen
In my setup we are using a Ray Serve deployment with FastAPI and Starlette middleware.
To test the distributed debugger, I created a class that has static member functions decorated with `@ray.remote`; it is not an actor.
The deployment has a pair of endpoints. One of them calls a function that pauses via the `breakpoint()` function (and also has red-dot breakpoints enabled within VS Code). The second endpoint calls a function that immediately throws an exception.
The deployment calls these remote-decorated member functions, which means they run as tasks rather than as methods on an actor.
The breakpoint properly pauses and allows the debugger to connect. A similar setup using an actor works for both the breakpoint and the exception.
However, the function that throws an exception and is not inside an actor leads to an access violation as soon as the debugger connects.
From the stack trace, it looks like pydevd is trying to build a list of frames from the traceback when the access violation occurs.
### Versions / Dependencies
Python 3.11
Ray[default,serve]==2.42.1
Windows 11
### Reproduction script
I am not allowed to share corporate code.
If needed, I can try to do this on my personal computer after hours.
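Since the corporate code can't be shared, here is a rough pseudocode sketch of the shape of the setup described above (all names are hypothetical, it uses module-level tasks rather than the original static methods, which behave the same way as tasks rather than actors, and it is not a verified reproducer):

```
# pseudocode sketch, requires ray[serve]; names are hypothetical
import ray
from ray import serve
from fastapi import FastAPI

app = FastAPI()

@ray.remote
def task_with_breakpoint():
    breakpoint()  # debugger attaches and pauses fine here
    return "ok"

@ray.remote
def task_with_exception():
    raise RuntimeError("boom")  # access violation once the debugger attaches

@serve.deployment
@serve.ingress(app)
class Api:
    @app.get("/break")
    async def hit_breakpoint(self):
        return await task_with_breakpoint.remote()

    @app.get("/raise")
    async def hit_exception(self):
        return await task_with_exception.remote()
```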
### Issue Severity
Medium: It is a significant difficulty but I can work around it. | open | 2025-03-01T00:58:27Z | 2025-03-06T20:31:48Z | https://github.com/ray-project/ray/issues/51010 | [
"bug",
"triage",
"serve"
] | kotoroshinoto | 1 |
Esri/arcgis-python-api | jupyter | 1,475 | [Question]: How do I load a previously generated model in TextClassifier? | I have used `arcgis.learn.text` to import `TextClassifier` in order to create a machine-learning model. Now I want to use the same model in Streamlit to build an interface for reuse and for displaying the predictions.
Code for the app I am creating:
```
import streamlit as st
import os
from arcgis.learn.text import TextClassifier, SequenceToSequence
import pickle
with st.sidebar:
    st.image('https://www.attomdata.com/wp-content/uploads/2021/05/ATTOM-main-full-1000.jpg')
    st.title("AutoAttom")
    st.info("This project application will help in text classification and sequence to sequence labelling")
```
# Text Classifier Section
```
st.title("Text Classifier")
user_input = st.text_input(""" """)
if user_input:
    model_folder = os.path.join('models', 'text-classifier')
    print(os.listdir(model_folder))
    model = TextClassifier.load(model_folder, name_or_path=model_folder)
    st.write(model.predict(user_input))
```
I am getting the error as follows:
```
Traceback (most recent call last):
File "d:\python projects\attom\text2seq\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "D:\Python Projects\ATTOM\app.py", line 17, in <module>
model = TextClassifier.load(model_folder, name_or_path=model_folder)
File "d:\python projects\attom\text2seq\lib\site-packages\arcgis\learn\text\_text_classifier.py", line 298, in load
return super().load(name_or_path)
TypeError: super(type, obj): obj must be an instance or subtype of type
```
My models folder tree directory is as follows:
```
D:.
├───seq2seq_unfrozen8E_bleu_88
│ │ model_metrics.html
│ │ seq2seq_unfrozen8E_bleu_88.emd
│ │ seq2seq_unfrozen8E_bleu_88.pth
│ │
│ └───ModelCharacteristics
│ loss_graph.png
│ sample_results.html
│
└───text-classifier
│ model_metrics.html
│ text-classifier.dlpk
│ text-classifier.emd
│ text-classifier.pth
│
└───ModelCharacteristics
loss_graph.png
sample_results.html
```
All the solutions I have seen ask me to recreate the model in code and then load it. Is there a way to load the model directly from the previously generated files?
This is a question rather than a bug report.
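One hedged possibility (an assumption based on the loading pattern of other `arcgis.learn` models; please verify against the arcgis documentation): models saved to an `.emd`/`.dlpk` are typically re-loaded through a `from_model` classmethod pointed at the `.emd` file, rather than via `load()`:

```
# unverified sketch; check the arcgis.learn docs for the exact signature
from arcgis.learn.text import TextClassifier

emd_path = os.path.join('models', 'text-classifier', 'text-classifier.emd')
model = TextClassifier.from_model(emd_path)
st.write(model.predict(user_input))
```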
| closed | 2023-02-27T09:56:30Z | 2023-04-21T08:24:02Z | https://github.com/Esri/arcgis-python-api/issues/1475 | [
"bug",
"question",
"learn"
] | Daremitsu1 | 4 |
voila-dashboards/voila | jupyter | 903 | Uncaught ReferenceError: IPython is not defined | Hi,
when I try to render a simple notebook:
```
import IPython.display
IPython.display.HTML('''
<script type="text/javascript">
IPython.notebook.kernel.execute("foo=11")
</script>
''')
```
in the JS console Voila comes back with:

```
Uncaught ReferenceError: IPython is not defined
```

`voila --version` reports 0.2.10. I am starting Voila with:

`voila notebooks/ --enable_nbextensions=True --debug` | closed | 2021-06-12T19:35:02Z | 2021-09-13T18:07:17Z | https://github.com/voila-dashboards/voila/issues/903 | [] | VitoKovacic | 3 |
ageitgey/face_recognition | machine-learning | 679 | cv with knn | How do you combine KNN with CV? | open | 2018-11-18T07:50:45Z | 2018-11-18T15:10:19Z | https://github.com/ageitgey/face_recognition/issues/679 | [] | nhangox22 | 1 |
supabase/supabase-py | fastapi | 57 | Current upload does not support inclusion of mime-type | Our current upload/update methods do not include the mime-type. As such, when we upload photos to storage and download them again they don't render properly.
The current fix was proposed by John on the Discord channel. We should integrate it so that users can download and use photos.
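As a small stdlib-only aside before the proposed snippet (my sketch, not part of the fix itself): the content type can usually be guessed from the filename rather than hard-coded:

```python
import mimetypes

def guess_content_type(filename: str) -> str:
    # Fall back to a generic binary type when the extension is unknown
    ctype, _ = mimetypes.guess_type(filename)
    return ctype or "application/octet-stream"

print(guess_content_type("party-parrot.gif"))  # → image/gif
```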
```
import requests
from requests_toolbelt import MultipartEncoder  # builds the multipart body with an explicit content type

multipart_data = MultipartEncoder(
    fields={
        "file": (
            "party-parrot.gif",
            open("./out/party-parrot.gif", 'rb'),
            "image/gif"
        )
    })
formHeaders = {
    "Content-Type": multipart_data.content_type,
}
headers = dict(supabase._get_auth_headers(), **formHeaders)
response = requests.post(
    url=request_url,  # the storage object endpoint from the surrounding context
    headers=headers,
    data=multipart_data,
)
``` | closed | 2021-10-09T23:53:49Z | 2021-10-30T21:28:39Z | https://github.com/supabase/supabase-py/issues/57 | [
"bug",
"good first issue",
"hacktoberfest"
] | J0 | 7 |
iperov/DeepFaceLab | machine-learning | 5,629 | Deep fake | open | 2023-02-24T01:50:14Z | 2023-06-08T20:03:38Z | https://github.com/iperov/DeepFaceLab/issues/5629 | [] | Johny-tech-creator | 4 | |
PokeAPI/pokeapi | api | 347 | Documentation API endpoints missing trailing forward slash | #### api/v2/pokemon/{id or name} should be api/v2/pokemon/{id or name}/
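As a client-side mitigation sketch (not a change to PokeAPI itself), a helper that normalizes the trailing slash before making the request:

```python
def normalize_endpoint(url: str) -> str:
    # Ensure exactly one trailing slash so the API doesn't need to redirect
    return url.rstrip("/") + "/"

print(normalize_endpoint("https://pokeapi.co/api/v2/pokemon/ditto"))
# → https://pokeapi.co/api/v2/pokemon/ditto/
```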
Some HTTP clients don't follow redirects by default, so this could potentially cause issues. It's probably better if the user requests the correct endpoint (with the trailing slash) in the first place instead of relying on redirects. | closed | 2018-08-29T03:24:45Z | 2018-09-22T03:02:17Z | https://github.com/PokeAPI/pokeapi/issues/347 | [] | tien | 2 |
onnx/onnx | machine-learning | 6,100 | ERROR: Could not build wheels for onnx which use PEP 517 and cannot be installed directly | # Bug Report
### Is the issue related to model conversion?
No. The issue occurs when installing onnx with pip: running `pip install onnx` fails.
### Describe the bug
When I try to install onnx with pip by running `pip install onnx==1.14.1`, the build fails.
It prints:
```
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting onnx==1.14.1
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/8f/71/1543d8dad6a26df1da8953653ebdbedacea9f1a5bcd023fe10f8c5f66d63/onnx-1.14.1.tar.gz (11.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing wheel metadata ... done
Requirement already satisfied: typing-extensions>=3.6.2.1 in d:\anaconda3\envs\veighna_studio_py36\lib\site-packages (from onnx==1.14.1) (4.7.1)
Collecting protobuf>=3.20.2
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/27/82/986065ef305c0989c99d8ef3f29e58a03fac6e64bb2c36ffe64500cc6955/protobuf-4.21.0-py3-none-any.whl (291 kB)
Requirement already satisfied: numpy in d:\anaconda3\envs\veighna_studio_py36\lib\site-packages (from onnx==1.14.1) (1.23.1)
WARNING: The candidate selected for download or install is a yanked version: 'protobuf' candidate (version 4.21.0 at https://pypi.tuna.tsinghua.edu.cn/packages/27/82/986065ef305c0989c99d8ef3f29e58a03fac6e64bb2c36ffe64500cc6955/protobuf-4.21.0-py3-none-any.whl#sha256=4e78116673ba04e01e563f6a9cca2c72db0be8a3e1629094816357e81cc39d36 (from https://pypi.tuna.tsinghua.edu.cn/simple/protobuf/))
Reason for being yanked: Required python version not configured correctly (https://github.com/protocolbuffers/protobuf/issues/10076)
Building wheels for collected packages: onnx
Building wheel for onnx (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: 'D:\Anaconda3\envs\veighna_studio_py36\python.exe' 'D:\Anaconda3\envs\veighna_studio_py36\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\ADMINI~1\AppData\Local\Temp\tmp6y0imhlb'
cwd: C:\Users\ADMINI~1\AppData\Local\Temp\pip-install-j6mlkoam\onnx_37f776d24b5240fc9852b7cb21079ee1
Complete output (79 lines):
fatal: not a git repository (or any of the parent directories): .git
running bdist_wheel
running build
running build_py
running create_version
running cmake_build
Using cmake args: ['D:\\Cmake\\bin\\cmake.exe', '-DPYTHON_INCLUDE_DIR=D:\\Anaconda3\\envs\\veighna_studio_py36\\include', '-DPYTHON_EXECUTABLE=D:\\Anaconda3\\envs\\veighna_studio_py36\\python.exe', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cp36-win_amd64.pyd', '-DCMAKE_BUILD_TYPE=Release', '-DPY_VERSION=3.6', '-A', 'x64', '-T', 'host=x64', '-DONNX_ML=1', 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\pip-install-j6mlkoam\\onnx_37f776d24b5240fc9852b7cb21079ee1']
-- Building for: MinGW Makefiles
CMake Deprecation Warning at CMakeLists.txt:2 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
CMake Error at CMakeLists.txt:17 (project):
Generator
MinGW Makefiles
does not support platform specification, but platform
x64
was specified.
CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
File "D:\Anaconda3\envs\veighna_studio_py36\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 349, in <module>
main()
File "D:\Anaconda3\envs\veighna_studio_py36\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 331, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "D:\Anaconda3\envs\veighna_studio_py36\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 249, in build_wheel
metadata_directory)
File "C:\Users\ADMINI~1\AppData\Local\Temp\pip-build-env-na3v66kp\overlay\Lib\site-packages\setuptools\build_meta.py", line 231, in build_wheel
wheel_directory, config_settings)
File "C:\Users\ADMINI~1\AppData\Local\Temp\pip-build-env-na3v66kp\overlay\Lib\site-packages\setuptools\build_meta.py", line 215, in _build_with_temp_dir
self.run_setup()
File "C:\Users\ADMINI~1\AppData\Local\Temp\pip-build-env-na3v66kp\overlay\Lib\site-packages\setuptools\build_meta.py", line 268, in run_setup
self).run_setup(setup_script=setup_script)
File "C:\Users\ADMINI~1\AppData\Local\Temp\pip-build-env-na3v66kp\overlay\Lib\site-packages\setuptools\build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 365, in <module>
"backend-test-tools = onnx.backend.test.cmd_tools:main",
File "C:\Users\ADMINI~1\AppData\Local\Temp\pip-build-env-na3v66kp\overlay\Lib\site-packages\setuptools\__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\ADMINI~1\AppData\Local\Temp\pip-build-env-na3v66kp\overlay\Lib\site-packages\wheel\bdist_wheel.py", line 299, in run
self.run_command('build')
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\command\build.py", line 135, in run
self.run_command(cmd_name)
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "setup.py", line 236, in run
self.run_command("cmake_build")
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "D:\Anaconda3\envs\veighna_studio_py36\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "setup.py", line 222, in run
subprocess.check_call(cmake_args)
File "D:\Anaconda3\envs\veighna_studio_py36\lib\subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['D:\\Cmake\\bin\\cmake.exe', '-DPYTHON_INCLUDE_DIR=D:\\Anaconda3\\envs\\veighna_studio_py36\\include', '-DPYTHON_EXECUTABLE=D:\\Anaconda3\\envs\\veighna_studio_py36\\python.exe', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cp36-win_amd64.pyd', '-DCMAKE_BUILD_TYPE=Release', '-DPY_VERSION=3.6', '-A', 'x64', '-T', 'host=x64', '-DONNX_ML=1', 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\pip-install-j6mlkoam\\onnx_37f776d24b5240fc9852b7cb21079ee1']' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for onnx
Failed to build onnx
ERROR: Could not build wheels for onnx which use PEP 517 and cannot be installed directly
```
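The log above shows CMake selecting the MinGW Makefiles generator while onnx's build passes Visual Studio-only flags (`-A x64`, `-T host=x64`), and no C/C++ compiler is found. One possible workaround (an assumption, not verified on this exact setup) is to force an installed Visual Studio generator before building:

```
:: Windows cmd sketch; assumes Visual Studio Build Tools are installed
set CMAKE_GENERATOR=Visual Studio 17 2022
pip install onnx==1.14.1
```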
### System information
- OS Platform and Distribution: Windows 11
- ONNX version: 1.14.1
- Python version: 3.6.17
- GCC/Compiler version (if compiling from source): 11.2.0
- CMake version: 3.27.1
- Protobuf version: 3.19.6
- Visual Studio version (if applicable): not used; using PyCharm instead
- PyCharm version: 2023.1.4 | closed | 2024-04-25T09:49:28Z | 2024-04-28T18:11:55Z | https://github.com/onnx/onnx/issues/6100 | [
"question",
"topic: build"
] | qd986692950 | 2 |
huggingface/transformers | python | 36,317 | DS3 zero3_save_16bit_model is not compatible with resume_from_checkpoint | ### System Info
- `transformers` version: 4.48.3
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@muellerzr
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. DS 3 Training with `zero3_save_16bit_model=True`
2. Trainer saves checkpoints with 16bit safetensor format (`model-xx-of-xx.safetensors`)
3. When trying to resume with `resume_from_checkpoint`, the Trainer calls `deepspeed_load_checkpoint` (see https://github.com/huggingface/transformers/blob/e18f233f6c8cba029324e2868fb68abdaf6badf3/src/transformers/trainer.py#L2391).
4. `deepspeed_load_checkpoint` only supports the DeepSpeed checkpoint format (`bf16_zero_pp_rank_0_xx.pt`), so resuming fails.
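The format mismatch in steps 2-4 could in principle be detected by checking which files a checkpoint directory contains. A rough stdlib sketch (file names are illustrative, based on the two formats mentioned above; not actual transformers code):

```python
import os
import tempfile

def is_deepspeed_checkpoint(checkpoint_dir: str) -> bool:
    # Heuristic: DeepSpeed ZeRO checkpoints contain per-rank "*zero_pp_rank*.pt"
    # state files, while zero3_save_16bit_model exports model-*.safetensors shards.
    for _root, _dirs, files in os.walk(checkpoint_dir):
        if any("zero_pp_rank" in name and name.endswith(".pt") for name in files):
            return True
    return False

# Demonstrate on two throwaway layouts
ds_dir = tempfile.mkdtemp()
open(os.path.join(ds_dir, "bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt"), "w").close()
st_dir = tempfile.mkdtemp()
open(os.path.join(st_dir, "model-00001-of-00002.safetensors"), "w").close()
print(is_deepspeed_checkpoint(ds_dir), is_deepspeed_checkpoint(st_dir))  # → True False
```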
### Expected behavior
Trainer should detect the checkpoint type when using `resume_from_checkpoint`. | open | 2025-02-21T05:56:30Z | 2025-03-23T08:03:25Z | https://github.com/huggingface/transformers/issues/36317 | [
"bug"
] | starmpcc | 1 |
ansible/awx | automation | 15,124 | RFE: Implement Maximum Execution Limit for Scheduled Jobs | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
New Feature
### Feature Summary
**Context:**
The current version of AWX allows users to schedule job executions, but it does not offer a way to automatically disable these schedules after a certain number of successful executions. This enhancement proposes adding a feature to limit the maximum number of executions for a schedule. For example, a user could set a schedule to run a job three times every day, but after a total of nine successful executions, the schedule should automatically disable itself. This feature would be particularly useful in managing resources and ensuring that tasks do not run indefinitely.
Consider a scenario where schedules are dynamically generated to perform specific checks a few times a day over several days. After the desired number of checks, it would be beneficial for the schedule to deactivate automatically.
Schedules in AWX function similarly to a distributed cron job. By implementing this feature, it would be akin to having a distributed version of the "at" command, enhancing the flexibility and control over task executions in AWX.
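The proposed behavior can be sketched with a toy model (the field names here are illustrative, not AWX's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Schedule:
    enabled: bool = True
    successful_runs: int = 0
    max_executions: int = 0  # 0 means unlimited, matching today's behavior

def record_success(schedule: Schedule) -> None:
    # Count the successful run, then auto-disable once the cap is reached
    schedule.successful_runs += 1
    if schedule.max_executions and schedule.successful_runs >= schedule.max_executions:
        schedule.enabled = False

nightly = Schedule(max_executions=9)  # e.g. 3 runs/day over 3 days
for _ in range(9):
    record_success(nightly)
print(nightly.enabled)  # → False
```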
**Use Case:**
This feature would be beneficial in scenarios where a task is required to run only a limited number of times, such as:
- Temporary projects or jobs that are only relevant for a certain period or a specific number of executions.
- Compliance or policy requirements that mandate certain tasks not exceed a specified number of runs.
- Testing environments where jobs are needed for a finite number of runs to validate behavior under controlled repetitions.
**Impact:**
- Positive: Enhances control over job execution, prevents resource wastage, and improves manageability.
- Negative: Slight increase in the complexity of the scheduling interface and additional validation required to manage the execution count.
### Select the relevant components
- [X] UI
- [X] API
- [ ] Docs
- [X] Collection
- [ ] CLI
- [ ] Other
### Steps to reproduce
RFE
### Current results
RFE
### Suggested feature result
RFE
### Additional information
_No response_ | closed | 2024-04-22T12:22:55Z | 2024-04-22T12:23:25Z | https://github.com/ansible/awx/issues/15124 | [
"type:enhancement",
"component:api",
"component:ui",
"component:awx_collection",
"needs_triage",
"community"
] | demystifyingk8s | 0 |
reloadware/reloadium | pandas | 181 | [Feature Request] Frame Transaction | A frame transaction ensures that mutable data gets reverted when a frame is reloaded or dropped. This is useful when you have a function that mutates a list or dictionary and you need to reload it after making changes.
```py
my_list = [1,2]
def addition(target_list):
    target_list.append(3)
    print("Reload here")
addition(my_list)
```
In this example, if you reload after appending an item to the list, the appended value still exists in the outer frame.
One approach that comes to mind is dumping the frame state every time we enter a stack frame, e.g. using pickle. Then, if we reload that frame, we can just restore its saved state. But this approach has problems with circular references:
```py
class A:
    def __init__(self):
        self.b = None

class B:
    def __init__(self):
        self.a = None
a = A()
b = B()
a.b = b
b.a = a
```
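As a rough sketch of the snapshot-and-restore idea, using `copy.deepcopy` instead of pickle (deepcopy keeps a memo of already-copied objects, so the circular example above also round-trips without infinite recursion):

```python
import copy

my_list = [1, 2]

def addition(target_list):
    # Snapshot the mutable argument on frame entry
    snapshot = copy.deepcopy(target_list)
    target_list.append(3)
    # Simulate reloading/dropping this frame: roll the mutation back
    target_list[:] = snapshot

addition(my_list)
print(my_list)  # → [1, 2]

# deepcopy's memo handles the circular case from the second snippet:
class Node:
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a
clone = copy.deepcopy(a)
print(clone.other.other is clone)  # → True
```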
| open | 2024-02-19T20:25:51Z | 2024-02-19T20:25:51Z | https://github.com/reloadware/reloadium/issues/181 | [] | MeGaNeKoS | 0 |
aminalaee/sqladmin | asyncio | 380 | get_model_objects is not using list_query (export data) | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Currently the exported data is using the generic `select()` query, and not the `list_query`.
See https://github.com/aminalaee/sqladmin/blob/main/sqladmin/models.py#L816
### Steps to reproduce the bug
The following unit test currently fails
```
@pytest.mark.asyncio
async def test_get_model_objects() -> None:
    batman = User(name="batman")
    bruce = User(name="bruce wayne")
    superman = User(name="superman")
    session.add(batman)
    session.add(bruce)
    session.add(superman)
    session.commit()
    session.refresh(batman)
    session.refresh(bruce)
    session.refresh(superman)

    class HerosAdmin(ModelView, model=User):
        async_engine = False
        sessionmaker = LocalSession
        list_query = select(User).filter(User.name.endswith("man"))

    view = HerosAdmin()

    hero_ids = {batman.id, superman.id}
    heros = await view.get_model_objects()
    assert len(heros) == 2
    assert {hero.id for hero in heros} == hero_ids
```
### Expected behavior
The view should use `list_query`
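A rough sketch of the expected change (pseudocode against the `get_model_objects` method linked above, not the actual sqladmin source):

```
# pseudocode sketch
async def get_model_objects(self, limit: int = 0):
    stmt = self.list_query          # instead of a bare select(self.model)
    if limit > 0:
        stmt = stmt.limit(limit)
    ...  # execute stmt with the session as before
```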
### Actual behavior
The view is not using `list_query`
### Debugging material
_No response_
### Environment
ubuntu / python / sqladmin 0.7.0
### Additional context
_No response_ | closed | 2022-11-16T11:31:57Z | 2022-11-17T11:31:54Z | https://github.com/aminalaee/sqladmin/issues/380 | [] | villqrd | 1 |
tox-dev/tox | automation | 3,045 | referenced deps from an environment defined by an optional combinator are not pulled in | ## Issue
Suppose we have two environments defined by the below:
```
[testenv:lint{,-ci}]
deps =
    flake8
    flake8-print
    flake8-black
    ci: flake8-junit-report
commands =
    !ci: flake8
    ci: flake8 --output-file flake8.txt --exit-zero
    ci: flake8_junit flake8.txt flake8_junit.xml
```
When we run these environments explicitly, the dependencies of each are handled appropriately -- i.e.
`tox r -e lint` --> installs flake8, flake8-print, flake8-black
`tox r -e lint-ci` --> installs flake8, flake8-print, flake8-black, flake8-junit-report
However, when we refer to these dependencies in another environment, such as with:
```
[testenv:safety]
deps =
    {[tox]requires}
    {[testenv]deps}
    {[testenv:lint-ci]deps}
    safety
commands =
    safety check
```
Only the deps not prefixed with `ci:` are pulled into the safety env when it is run. This happens regardless of how we refer to the environment -- `{[testenv:lint]deps}` also does not pull in the `flake8-junit-report` dependency.
Workaround seems to be to additionally define the optional combinator in the calling environment, i.e. define `[testenv:safety{,-ci}]` , then when we run `tox r -e safety-ci` -- it will pull in the `ci` prefix definitions in the referenced dependencies (but conversely, those defined with `!ci` would not be pulled in, even if we use the reference `{[testenv:lint]deps}`).
This workaround is undesirable, as there is intended to be only one such "safety" environment; the expectation is that providing the correct environment name in the reference (`{[testenv:lint-ci]deps}`) pulls in the deps defined by the referenced environment, not those selected by the calling environment's factors.
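For reference, the workaround described above looks roughly like this (a sketch; it forces the calling environment to carry the same `ci` factor):

```
[testenv:safety{,-ci}]
deps =
    {[tox]requires}
    {[testenv]deps}
    {[testenv:lint-ci]deps}
    safety
commands =
    safety check
```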
## Environment
Provide at least:
- OS: Ubuntu18, 20
- tox: 3.21, 3.28, 4.6.2
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
(safety) ubuntu:~/GitHub/NewTemp/CommonPy$ pip list
Package Version
------------------ ---------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
bandit 1.7.5
black 22.12.0
boto3 1.20.11
botocore 1.23.11
certifi 2023.5.7
cffi 1.15.1
charset-normalizer 3.1.0
click 8.1.3
cryptography 41.0.1
distlib 0.3.6
docker 4.4.4
dparse 0.6.2
exceptiongroup 1.1.1
filelock 3.12.2
flake8 6.0.0
flake8-black 0.3.6
flake8-isort 6.0.0
flake8-print 5.0.0
frozenlist 1.3.3
gitdb 4.0.10
GitPython 3.1.31
idna 3.4
iniconfig 2.0.0
isort 5.12.0
Jinja2 3.1.2
jinja2-cli 0.8.2
jmespath 0.10.0
markdown-it-py 3.0.0
MarkupSafe 2.1.3
marshmallow 3.14.1
mccabe 0.7.0
mdurl 0.1.2
moto 2.2.16
multidict 6.0.4
mypy-extensions 1.0.0
packaging 21.3
pathspec 0.11.1
pbr 5.11.1
pip 22.0.4
pipdeptree 2.9.3
platformdirs 3.6.0
pluggy 1.0.0
psycopg2-binary 2.9.2
py 1.11.0
pycodestyle 2.10.0
pycparser 2.21
pyflakes 3.0.1
Pygments 2.15.1
PyMySQL 1.0.2
pyparsing 3.1.0
pytest 7.3.2
python-dateutil 2.8.2
pytz 2023.3
PyYAML 5.4.1
requests 2.31.0
responses 0.23.1
rich 13.4.2
ruamel.yaml 0.17.32
ruamel.yaml.clib 0.2.7
s3transfer 0.5.2
safety 2.3.5
setuptools 68.0.0
six 1.16.0
slackclient 2.9.3
smmap 5.0.0
sort-requirements 1.3.0
stevedore 5.1.0
toml 0.10.2
tomli 2.0.1
tox 3.28.0
tox-docker 1.7.0
tox-venv 0.4.0
types-PyYAML 6.0.12.10
typing_extensions 4.6.3
urllib3 1.26.16
virtualenv 20.23.1
websocket-client 1.6.0
Werkzeug 2.3.6
wheel 0.40.0
xmltodict 0.13.0
yarl 1.9.2
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
ubuntu:~/GitHub/NewTemp/CommonPy$ tox r -vve safety
using tox.ini: /home/ubuntu/GitHub/NewTemp/CommonPy/tox.ini (pid 27634)
removing /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/log
python3.8 (/usr/bin/python3.8) is {'executable': '/usr/bin/python3.8', 'implementation': 'CPython', 'version_info': [3, 8, 16, 'final', 0], 'version': '3.8.16 (default, Dec 7 2022, 01:12:13) \n[GCC 7.5.0]', 'is_64': True, 'sysplatform': 'linux', 'os_sep': '/', 'extra_version_info': None}
safety uses /usr/bin/python3.8
unit uses /usr/bin/python3.8
coverage uses /usr/bin/python3.8
build uses /usr/bin/python3.8
dev uses /usr/bin/python3.8
release uses /usr/bin/python3.8
using tox-3.28.0 from /home/ubuntu/.local/lib/python3.6/site-packages/tox/__init__.py (pid 27634)
skipping sdist step
safety start: getenv /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety
safety reusing: /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety
safety finish: getenv /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety after 0.03 seconds
safety start: finishvenv
safety finish: finishvenv after 0.01 seconds
safety start: envreport
setting PATH=/home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin:/home/ubuntu/GitHub/.vscode-server/bin/252e5463d60e63238250799aef7375787f68b4ee/bin/remote-cli:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
[27757] /home/ubuntu/GitHub/NewTemp/CommonPy$ /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin/python -m pip freeze >.tox/safety/log/safety-3.log
safety finish: envreport after 0.42 seconds
safety installed: bandit==1.7.5,black==22.12.0,commonpy==3.2.2,certifi==2023.5.7,charset-normalizer==3.1.0,click==8.1.3,distlib==0.3.6,docker==4.4.4,dparse==0.6.2,exceptiongroup==1.1.1,filelock==3.12.2,flake8==6.0.0,flake8-black==0.3.6,flake8-isort==6.0.0,flake8-print==5.0.0,gitdb==4.0.10,GitPython==3.1.31,idna==3.4,iniconfig==2.0.0,isort==5.12.0,Jinja2==3.1.2,jinja2-cli==0.8.2,markdown-it-py==3.0.0,MarkupSafe==2.1.3,mccabe==0.7.0,mdurl==0.1.2,mypy-extensions==1.0.0,packaging==21.3,pathspec==0.11.1,pbr==5.11.1,pipdeptree==2.9.3,platformdirs==3.6.0,pluggy==1.0.0,py==1.11.0,pycodestyle==2.10.0,pyflakes==3.0.1,Pygments==2.15.1,pyparsing==3.1.0,pytest==7.3.2,PyYAML==6.0,requests==2.31.0,rich==13.4.2,ruamel.yaml==0.17.32,ruamel.yaml.clib==0.2.7,safety==2.3.5,six==1.16.0,smmap==5.0.0,sort-requirements==1.3.0,stevedore==5.1.0,toml==0.10.2,tomli==2.0.1,tox==3.28.0,tox-docker==1.7.0,tox-venv==0.4.0,typing_extensions==4.6.3,urllib3==1.26.16,virtualenv==20.23.1,websocket-client==1.6.0
removing /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/tmp
safety start: run-test-pre
safety run-test-pre: PYTHONHASHSEED='2925510828'
safety run-test-pre: commands[0] | /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin/python -m pip install -r requirements.txt
setting PATH=/home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin:/home/ubuntu/GitHub/.vscode-server/bin/252e5463d60e63238250799aef7375787f68b4ee/bin/remote-cli:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
[27759] /home/ubuntu/GitHub/NewTemp/CommonPy$ /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin/python -m pip install -r requirements.txt
...
Installing collected packages: types-PyYAML, pytz, xmltodict, werkzeug, PyYAML, python-dateutil, PyMySQL, pycparser, psycopg2-binary, multidict, marshmallow, jmespath, frozenlist, attrs, async-timeout, yarl, responses, cffi, botocore, aiosignal, s3transfer, cryptography, aiohttp, slackclient, boto3, moto
Attempting uninstall: PyYAML
Found existing installation: PyYAML 6.0
Uninstalling PyYAML-6.0:
Successfully uninstalled PyYAML-6.0
Successfully installed PyMySQL-1.0.2 PyYAML-5.4.1 aiohttp-3.8.4 aiosignal-1.3.1 async-timeout-4.0.2 attrs-23.1.0 boto3-1.20.11 botocore-1.23.11 cffi-1.15.1 cryptography-41.0.1 frozenlist-1.3.3 jmespath-0.10.0 marshmallow-3.14.1 moto-2.2.16 multidict-6.0.4 psycopg2-binary-2.9.2 pycparser-2.21 python-dateutil-2.8.2 pytz-2023.3 responses-0.23.1 s3transfer-0.5.2 slackclient-2.9.3 types-PyYAML-6.0.12.10 werkzeug-2.3.6 xmltodict-0.13.0 yarl-1.9.2
WARNING: You are using pip version 22.0.4; however, version 23.1.2 is available.
You should consider upgrading via the '/home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin/python -m pip install --upgrade pip' command.
safety finish: run-test-pre after 21.28 seconds
safety start: run-test
safety run-test: commands[0]....
```
</details>
## Minimal example
<!-- If possible, provide a minimal reproducer for the issue. -->
As defined above, along with some others in the tox `requires` section (which shouldn't be impactful, but I can provide the full tox.ini if needed), we can see `flake8-junit-report` is not listed to be installed:
```console
tox -e safety
safety create: /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety
safety installdeps: urllib3<2, tox>=3.21.0,<4, tox-venv, tox-docker<2, setuptools>=65.5.1, wheel, pip>=21.3.1, flake8, flake8-print, flake8-black, flake8, flake8-isort, sort-requirements, bandit, setuptools>=65.5.1, wheel, pip>=21.3.1, pytest, jinja2-cli[yaml], pipdeptree, safety
```
| open | 2023-06-19T21:26:22Z | 2024-03-05T22:16:15Z | https://github.com/tox-dev/tox/issues/3045 | [
"bug:minor",
"help:wanted"
] | hans2520 | 1 |
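For readers without the full tox.ini: in tox 3, the top-level `requires` list provisions tox's own environment, while per-env `deps` are what gets installed into each testenv — which is why a package that only appears under `requires` can be missing from an env's `installdeps` line. A minimal sketch of the layout being described (section contents are assumptions, since the full file isn't shown). The check below reproduces the report's symptom as a plain string test against the logged `installdeps` line:

```python
# The `installdeps` line logged by `tox -e safety` above, and the package the
# reporter expected to see in it (per the tox `requires` section).
installdeps_log = (
    "urllib3<2, tox>=3.21.0,<4, tox-venv, tox-docker<2, setuptools>=65.5.1, "
    "wheel, pip>=21.3.1, flake8, flake8-print, flake8-black, flake8, "
    "flake8-isort, sort-requirements, bandit, pytest, jinja2-cli[yaml], "
    "pipdeptree, safety"
)
expected = "flake8-junit-report"

installed = [spec.strip() for spec in installdeps_log.split(",")]
missing = expected not in installed
print(missing)  # → True: the package never makes it into the env's deps
```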
hootnot/oanda-api-v20 | rest-api | 208 | Extend the pricing endpoints | 2 new endpoints were introduced
- /v3/accounts/{accountID}/candles/latest
- /v3/accounts/{accountID}/instruments/{instrument}/candles
The request classes for those endpoints will be added to oandapyv20.endpoints.pricing | open | 2023-12-11T10:54:02Z | 2023-12-11T14:53:08Z | https://github.com/hootnot/oanda-api-v20/issues/208 | [
"enhancement"
] | hootnot | 0 |
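Until the new request classes land in `oandapyv20.endpoints.pricing`, the two endpoint paths named above can be sketched as plain URL builders — the function names below are mine and are not the eventual SDK API:

```python
def latest_candles_path(account_id: str) -> str:
    """Path for the new account-level latest-candles endpoint."""
    return f"/v3/accounts/{account_id}/candles/latest"

def instrument_candles_path(account_id: str, instrument: str) -> str:
    """Path for the new account-level per-instrument candles endpoint."""
    return f"/v3/accounts/{account_id}/instruments/{instrument}/candles"

print(instrument_candles_path("001-011-1234567-001", "EUR_USD"))
# → /v3/accounts/001-011-1234567-001/instruments/EUR_USD/candles
```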
frappe/frappe | rest-api | 31,451 | Workflow bug | - Creating any workflow, then choosing the workflow builder
- the builder shows errors in stakeholders | open | 2025-02-27T16:18:36Z | 2025-02-27T16:18:36Z | https://github.com/frappe/frappe/issues/31451 | [
"bug"
] | m-aglan | 0 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 133 | chatbot outputs array of numbers ? | ```
> hi
[[-0.05387566 -2.2077456 -0.1335546 ... -0.3521466 0.15176542
0.4837527 ]]
> how are you?
[[-0.05443239 -2.2417731 -0.1325449 ... -0.35882115 0.15338464
0.49410236]]
> say something
[[-0.05385711 -2.2218084 -0.13375734 ... -0.3538314 0.15226512
0.48862678]]
``` | open | 2018-08-26T20:34:47Z | 2019-12-10T06:49:13Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/133 | [] | Marwan-Mostafa7 | 3 |
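The arrays above look like raw decoder logits rather than decoded words — the usual fix is to take the argmax over the vocabulary dimension and map the resulting ids back through the vocabulary. A minimal sketch with a toy vocabulary (the real vocab file and tensor shapes depend on the assignment code):

```python
def decode(logits_rows, id2word):
    """Map each row of per-token logits to the most likely vocabulary word."""
    words = []
    for row in logits_rows:
        best = max(range(len(row)), key=lambda i: row[i])  # argmax over the vocab axis
        words.append(id2word[best])
    return " ".join(words)

# Toy example: 3-word vocabulary, two decoder output steps.
id2word = {0: "<pad>", 1: "hello", 2: "there"}
print(decode([[0.1, 2.3, 0.5], [0.0, 0.2, 1.9]], id2word))  # → hello there
```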
skypilot-org/skypilot | data-science | 4,952 | [Dependency] SkyPilot installed with `uv venv` does not work correctly with gcloud installed with wget on MacOS | To reproduce:
1. `uv venv ~/sky-env --python 3.10 --seed`
2. `source ~/sky-env/bin/activate`
3. `wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-darwin-arm.tar.gz; extract google-cloud-cli-darwin-arm.tar.gz; ./google-cloud-sdk/install.sh`
4. `sky check gcp` shows: `gcloud --version` failed | open | 2025-03-14T02:14:06Z | 2025-03-17T21:43:23Z | https://github.com/skypilot-org/skypilot/issues/4952 | [] | Michaelvll | 1 |
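One quick way to see which `gcloud` binary the SkyPilot process actually resolves inside the `uv` venv — a hypothetical diagnostic I'm adding, not something from the report:

```python
import shutil
from typing import Optional

def find_cli(name: str) -> Optional[str]:
    """Return the full path of the first matching executable on PATH, or None."""
    return shutil.which(name)

# e.g. find_cli("gcloud") should point at .../google-cloud-sdk/bin/gcloud;
# if it resolves somewhere else (or to None) inside the venv, that explains
# `gcloud --version` failing during `sky check gcp`.
```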
gee-community/geemap | streamlit | 610 | Output point locations from geemap.extract_values_to_points doesn’t match input | I am using geemap.extract_values_to_points to extract pixel values under points from a Shapefile (EPSG: 4326) and outputting to a Shapefile. This is the raster image I’m using:
ee.Image("projects/soilgrids-isric/clay_mean")
When I checked the results in QGIS, the points in the output Shapefile did not align with the input Shapefile. They were displaced ~200m and therefore extract data from a neighboring pixel. To fix that I specified scale = 10, projection = 'EPSG:4326'. This moved the output points so they were nearly identical; however, the values for the points correspond to the raster value of the original point location. In other words, the location is (more or less) correct, but the data value is from the neighboring pixel.
I attached the Shapefile I'm using.
I’m running this on Ubuntu 20.04 with version 0.8.18 of geemap
[points_carbon_sequestration_gisel.zip](https://github.com/giswqs/geemap/files/6946758/points_carbon_sequestration_gisel.zip)
| closed | 2021-08-06T16:19:51Z | 2022-05-28T17:28:40Z | https://github.com/gee-community/geemap/issues/610 | [
"bug"
] | nedhorning | 5 |
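The ~200 m displacement above is consistent with sampling a neighboring cell of a coarse grid (SoilGrids layers are nominally 250 m resolution). A toy computation, independent of Earth Engine, showing how a point near a pixel edge flips into the next pixel when its coordinates shift by less than one pixel:

```python
def pixel_index(coord: float, origin: float, scale: float) -> int:
    """Column (or row) index of the grid cell containing a coordinate."""
    return int((coord - origin) // scale)

# 250 m pixels: a point at 240 m sits in pixel 0,
# but a ~200 m displacement lands it in pixel 1.
print(pixel_index(240.0, origin=0.0, scale=250.0))          # → 0
print(pixel_index(240.0 + 200.0, origin=0.0, scale=250.0))  # → 1
```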
Nekmo/amazon-dash | dash | 114 | First press does not work | Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with amazon-dash)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
### Guideline for bug reports
You can delete this section if your report is not a bug
* amazon-dash version:
* Python version:
* Pip & Setuptools version:
* Operating System:
How to get your version:
```
amazon-dash --version
python --version
pip --version
easy_install --version
```
- [ ] The `pip install` or `setup install` command has been completed without errors
- [ ] The `python -m amazon_dash.install` command has been completed without errors
- [ ] The `amazon-dash discovery` command works without errors
- [ ] I have created/edited the configuration file
- [ ] *Amazon-dash service* or `amazon-dash --debug`
I was wondering if anyone else has noticed a bug in which the first press of the button each day does not work. Instead, I receive an email about choosing a product from Amazon Replenishment. After that it works fine; then I'll see the issue again the next day.
Thoughts?
| closed | 2019-01-12T17:57:26Z | 2019-05-15T18:17:54Z | https://github.com/Nekmo/amazon-dash/issues/114 | [] | mcgurdan | 7 |
pinry/pinry | django | 293 | Official Apple M1 Arm Processor Support | I know some form of Arm support was [recently added](https://github.com/pinry/pinry/pull/248) but I think it only supports the Raspberry Pi platform.
I got a warning (not a blocking error) when I ran this on the new M1-processor MacBook Pro. The app still launches, so it isn't a huge deal, but I thought I'd mention it so it can be addressed eventually.
```
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
cbcba37ce4b23d998748d2ee1529f961483631e88cc022f9616f724bfb7a515c
```
Looking at the other PR, I don't actually see anything that adds Arm support in it but I'm not too familiar with this codebase so I could be missing something. Otherwise I'd make an attempt to add it.
Thanks for the cool app! | closed | 2021-08-11T22:50:04Z | 2021-09-02T03:15:07Z | https://github.com/pinry/pinry/issues/293 | [] | dakotahp | 1 |
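Until a multi-arch image is published, the platform warning above can be silenced by pinning the service to amd64 so Docker runs it under emulation — a compose sketch, where the service and image names are assumptions:

```yaml
services:
  pinry:
    image: getpinry/pinry
    platform: linux/amd64  # force amd64 emulation on Apple Silicon
```

`platform` is a standard Compose key; removing it once a native `linux/arm64` image exists restores native performance.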
gradio-app/gradio | deep-learning | 10,267 | gradio 5.0 unable to load javascript file | ### Describe the bug
If I provide JavaScript code in a variable, it is executed perfectly well, but when I put the same code in a file "app.js" and pass the file path to the `js` parameter of `Blocks`, it doesn't work. I have added the code in the reproduction below; if the same code is put in a file, the block is unable to execute it.
It was working fine in version 4. Now I am upgrading to 5.0.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
login_page_js = """
() => {
//handle launch
let reload = false;
let gradioURL = new URL(window.location.href);
if(
!gradioURL.searchParams.has('__theme') ||
(gradioURL.searchParams.has('__theme') && gradioURL.searchParams.get('__theme') !== 'dark')
) {
gradioURL.searchParams.delete('__theme');
gradioURL.searchParams.set('__theme', 'dark');
reload = true;
}
if(reload) {
window.location.replace(gradioURL.href);
}
}
"""
with gr.Blocks(
js = login_page_js
) as login_page:
gr.Button("Sign in with Microsoft", elem_classes="icon-button" ,link="/login")
if __name__ == "__main__":
login_page.launch()
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Ubuntu 22.04
```
### Severity
I can work around it | open | 2024-12-30T15:09:28Z | 2024-12-30T16:19:48Z | https://github.com/gradio-app/gradio/issues/10267 | [
"bug"
] | git-hamza | 2 |
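Since the report confirms that passing the JavaScript as a string still works in 5.0, a straightforward workaround is to read the file in Python and pass its contents (the `load_js` helper name is mine, not a Gradio API):

```python
from pathlib import Path

def load_js(path: str) -> str:
    """Read a JavaScript file so its contents can be passed as a string to gr.Blocks(js=...)."""
    return Path(path).read_text(encoding="utf-8")

# Usage sketch:
# with gr.Blocks(js=load_js("app.js")) as login_page:
#     ...
```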
gradio-app/gradio | deep-learning | 10,605 | Automatically adjust the page | ### Describe the bug
Does Gradio support automatic page adjustment for different devices, such as mobile phones and computers?
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Does Gradio support automatic page adjustment for different devices, such as mobile phones and computers?
```
### Severity
I can work around it | closed | 2025-02-17T13:57:40Z | 2025-02-18T01:07:12Z | https://github.com/gradio-app/gradio/issues/10605 | [
"bug"
] | nvliajia | 1 |
hatchet-dev/hatchet | fastapi | 474 | SSL error: Failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:7 | I am running from docker-compose.yml and got this error while trying to run the worker:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "src/worker/main.py", line 5, in start
worker.register_workflow(ProcessingWorkflow()) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "hatchet_sdk/worker.py", line 347, in register_workflow
self.client.admin.put_workflow(workflow.get_name(), workflow.get_create_opts())
File "hatchet_sdk/clients/admin.py", line 69, in put_workflow
raise ValueError(f"Could not put workflow: {e}")
ValueError: Could not put workflow: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:7077: Ssl handshake failed: SSL_ERROR_SSL: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER"
debug_error_string = "UNKNOWN:Error received from peer {created_time:"2024-05-09T14:28:34.759853+03:00", grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:7077: Ssl handshake failed: SSL_ERROR_SSL: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER"}"
```
| closed | 2024-05-09T13:00:11Z | 2024-05-09T13:40:54Z | https://github.com/hatchet-dev/hatchet/issues/474 | [] | simjak | 1 |
aidlearning/AidLearning-FrameWork | jupyter | 105 | My computer can't connect via SSH even though I uploaded my id_rsa and id_rsa.pub |


The IP address and the username are all right.
I don't know why that is.
| closed | 2020-05-14T02:57:33Z | 2020-05-15T13:24:27Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/105 | [] | QYHcrossover | 2 |
keras-team/keras | pytorch | 20,603 | Request for multi backend support for timeseries data loading | Hi,
I wonder if it would be possible to implement the keras.utils.timeseries_dataset_from_array() method for other backends (e.g. JAX)?
It would be nice not to have to add a TF dependency just because of this module.
https://github.com/keras-team/keras/blob/v3.7.0/keras/src/utils/timeseries_dataset_utils.py#L7 | closed | 2024-12-06T08:35:40Z | 2025-01-21T07:02:07Z | https://github.com/keras-team/keras/issues/20603 | [
"type:support",
"stat:awaiting response from contributor"
] | linomi | 4 |
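Until there's a multi-backend implementation, the core of `timeseries_dataset_from_array` — sliding windows with a stride and sampling rate — is easy to reproduce in plain Python over any array-like (a simplified sketch, not the full Keras signature; targets and batching are omitted):

```python
def sliding_windows(data, sequence_length, sequence_stride=1, sampling_rate=1):
    """Yield fixed-length windows from a sequence, mimicking the core of
    keras.utils.timeseries_dataset_from_array (simplified)."""
    span = (sequence_length - 1) * sampling_rate + 1  # raw samples covered per window
    for start in range(0, len(data) - span + 1, sequence_stride):
        yield [data[start + i * sampling_rate] for i in range(sequence_length)]

print(list(sliding_windows([0, 1, 2, 3, 4], sequence_length=3)))
# → [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
```

The resulting windows can be fed to any backend (JAX included) by stacking them into that backend's array type.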