| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
shibing624/text2vec | nlp | 152 | loss function | I am trying to understand the loss function:
```python
def calc_loss(self, y_true, y_pred):
    """
    Compute the cosine loss within a batch using matrix operations
    """
    y_true = y_true[::2]
    norms = (y_pred ** 2).sum(axis=1, keepdims=True) ** 0.5
    y_pred = y_pred / norms
    y_pred = torch.sum(y_pred[::2] * y_pred[1::2], dim=1) * 20
    y_pred = y_pred[:, None] - y_pred[None, :]
    y_true = y_true[:, None] < y_true[None, :]
    y_true = y_true.float()
    y_pred = y_pred - (1 - y_true) * 1e12
    y_pred = y_pred.view(-1)
    y_pred = torch.cat((torch.tensor([0]).float().to(self.device), y_pred), dim=0)
    return torch.logsumexp(y_pred, dim=0)
```
1. Why are we taking alternate values from the true labels?
2. Why are we taking the dot product between alternate rows of `y_pred`?
If possible, can you share a link to the documentation or the paper for this? Thanks.
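For context, the slicing appears to assume a particular batch layout: the two sentences of each pair are interleaved row by row, and the pair's shared label is duplicated on both rows. A minimal sketch of that assumed layout (the variable names are mine, not from the library):

```python
# Hypothetical batch layout (my assumption, inferred from the slicing, not
# stated by the library): embeddings are interleaved as
# [sent1_of_pair1, sent2_of_pair1, sent1_of_pair2, sent2_of_pair2, ...],
# and each pair's label is duplicated on both of its rows.
batch = ["a1", "b1", "a2", "b2", "a3", "b3"]  # stands in for rows of y_pred
labels = [1, 1, 0, 0, 1, 1]                   # stands in for rows of y_true

firsts = batch[::2]        # first sentence of each pair: ['a1', 'a2', 'a3']
seconds = batch[1::2]      # its partner:                 ['b1', 'b2', 'b3']
pair_labels = labels[::2]  # one label per pair:          [1, 0, 1]

print(firsts, seconds, pair_labels)
```

Under that assumption, `torch.sum(y_pred[::2] * y_pred[1::2], dim=1)` is the cosine similarity of each pair (the rows are normalized just before), and `y_true[::2]` keeps one label per pair to match.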
| open | 2024-05-20T09:47:26Z | 2024-05-20T13:55:18Z | https://github.com/shibing624/text2vec/issues/152 | [
"question"
] | riyajatar37003 | 4 |
biolab/orange3 | pandas | 6,715 | Cannot select radio buttons in Hierarchical Clustering | In Hierarchical Clustering, selecting None (in Pruning) or Manual (in Selection) only works in a narrow area under the radio button and its text (the red area in the image).

OS: Windows 10
Orange: 3.36.2
| closed | 2024-01-24T11:48:08Z | 2024-01-24T15:47:23Z | https://github.com/biolab/orange3/issues/6715 | [
"bug report"
] | processo | 2 |
reloadware/reloadium | flask | 15 | Issues with exceptions | **Describe the bug**
Reloadium does not correctly handle methods that raise exceptions.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a module with the following content
```python
def bar():
    raise Exception('Some exception')


def foo():
    try:
        bar()
    except Exception as e:
        pass


foo()
pass
```
2. Place a breakpoint at the end of the module.
3. Debug the code using reloadium
**Expected behavior**
Application stops at the set breakpoint
**Actual behavior**
The message "**An exception occurred during reloading current frame. Fix your changes and save to reload**" appears. Reloadium waits for the user to fix the `bar()` method.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows
- OS version: 10
- Reloadium package version: 0.8.7
- PyCharm plugin version: 0.8.1
- Editor: PyCharm
- Run mode: Debug
**Additional context**
No problems will appear if you catch the exception in the method where the exception occurs. The following snippet will work:
```python
def bar():
    try:
        raise Exception('Some exception')
    except Exception:
        pass
```
Previous versions of Reloadium handled such situations without any problems. | closed | 2022-05-23T04:40:48Z | 2022-05-25T16:55:17Z | https://github.com/reloadware/reloadium/issues/15 | [] | BusHero | 1 |
chaoss/augur | data-visualization | 3,041 | Add ability to view and manage api keys via the frontend | - Add "Worker Oauth Keys" section to the admin dashboard
- Display keys in a table
- Use the same table layout and formatting that is used for current frontend tables (i.e. pagination, common theme, etc.)
- Make table filterable and sortable by each column
- Only display relevant columns: Key ID, Key platform, Token (hidden by default, click to show)
- Add delete button (with confirmation) to remove a key from the worker_oauth table and unpublish from the key orchestrator
- Display invalid keys in a table separately from the rest, with the same formatting as the primary table
- Keys must be gathered from the `KeyPublisher` interface, and joined on the data from the worker_oauth table in order to determine the IDs of the invalid keys at runtime.
- Additionally, any keys in the worker_oauth table that were not loaded into the key orchestrator interface at startup are also considered invalid.
- All requisite endpoints must be protected with `@admin_required`
- Add form to insert new keys
1. Add new keys to worker_oauth table
2. Check that the provided key is valid before inserting (via `is_bad_api_key()` in `GithubApiKeyHandler` or `GitlabApiKeyHandler`)
3. Publish new keys with the KeyPublisher so they are available for new collection requests
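The invalid-key determination described above could be sketched roughly like this (the function and variable names are hypothetical, not augur's actual interfaces):

```python
# Rough sketch of the invalid-key split described above: a key stored in the
# worker_oauth table but not loaded into the key orchestrator at startup is
# treated as invalid. Names here are hypothetical, not augur's real code.
def split_valid_invalid(db_keys, published_keys):
    published = set(published_keys)
    valid = [k for k in db_keys if k in published]
    invalid = [k for k in db_keys if k not in published]
    return valid, invalid

valid, invalid = split_valid_invalid(
    db_keys=["k1", "k2", "k3"],   # rows from the worker_oauth table
    published_keys=["k1", "k3"],  # keys loaded into the key orchestrator
)
print(valid, invalid)  # → ['k1', 'k3'] ['k2']
```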
Prerequisites
---
The admin dashboard is being implemented in the [admin-changes](/chaoss/augur/tree/admin-changes) branch. All changes to the dashboard must be based on this branch.
This issue is dependent upon the merging of #3058, as that adds the INVALIDATE commands to the orchestration API and the KeyClient. | open | 2025-03-05T02:13:53Z | 2025-03-21T11:14:49Z | https://github.com/chaoss/augur/issues/3041 | [] | ABrain7710 | 4 |
jacobgil/pytorch-grad-cam | computer-vision | 121 | What if model has multiple outputs? | Hi, I was working on a model which has 2 outputs (a tuple of length 2). For this model, the Grad-CAM library gave the following error.
```
Traceback (most recent call last):
File "cont_bach.py", line 1514, in <module>
generate_gradcam_vis(full_model, testloader, mean, std)
File "cont_bach.py", line 1501, in generate_gradcam_vis
grayscale_cam = cam(input_tensor=input_tensor, target_category=target_category)
File "/home/abhiraj/.conda/envs/clam/lib/python3.7/site-packages/pytorch_grad_cam/base_cam.py", line 129, in __call__
target_category, eigen_smooth)
File "/home/abhiraj/.conda/envs/clam/lib/python3.7/site-packages/pytorch_grad_cam/base_cam.py", line 70, in forward
loss = self.get_loss(output, target_category)
File "/home/abhiraj/.conda/envs/clam/lib/python3.7/site-packages/pytorch_grad_cam/base_cam.py", line 37, in get_loss
loss = loss + output[i, target_category[i]]
TypeError: tuple indices must be integers or slices, not tuple
```
To work around this problem, I changed the following line
https://github.com/jacobgil/pytorch-grad-cam/blob/8842f19525fd9c74120c2ee66c31f0963c6c43b8/pytorch_grad_cam/activations_and_gradients.py#L35
to this
```python
if type(self.model(x)) is tuple:  # Manual change here for multiple outputs of the model
    return self.model(x)[1]
else:
    return self.model(x)
```
This worked for me, but **can support for models with multiple outputs be added to the code**?
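If this were upstreamed, one small refinement over the patch above would be to run the forward pass only once. A hedged sketch (not the library's actual code; which tuple element holds the logits is an assumption):

```python
# Hedged sketch, not pytorch-grad-cam's actual code: run the forward pass
# once and unpack, instead of calling self.model(x) twice as in the patch
# above. The index of the logits within the output tuple is an assumption.
class MultiOutputForward:
    def __init__(self, model, output_index=1):
        self.model = model
        self.output_index = output_index

    def __call__(self, x):
        out = self.model(x)  # single forward pass
        if isinstance(out, tuple):
            return out[self.output_index]
        return out

# Toy stand-in for a model that returns two outputs.
toy_model = lambda x: (x * 2, x + 1)
print(MultiOutputForward(toy_model)(3))  # → 4
```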
Thanks for the great library! | closed | 2021-08-23T18:18:05Z | 2021-09-09T14:48:44Z | https://github.com/jacobgil/pytorch-grad-cam/issues/121 | [] | ASKanse | 2 |
psf/black | python | 4,403 | Black's treatment of trailing commas depends on previous statements. | **Describe the bug**
Black's treatment of trailing commas depends on previous statements.
**To Reproduce**
If `file0.py` is
```python
print(
*[],
"Once",
"there",
"were",
"four",
"children",
"whose",
"names",
"were",
"Peter,",
"Susan,",
"Edmund",
"and",
"Lucy."
)
```
running
```bash
black --diff file0.py
```
produces
```
All done! ✨ 🍰 ✨
1 file would be left unchanged.
```
If `file1.py` is
```python
f""
print(
*[],
"Once",
"there",
"were",
"four",
"children",
"whose",
"names",
"were",
"Peter,",
"Susan,",
"Edmund",
"and",
"Lucy."
)
```
running
```bash
black --diff file1.py
```
produces
```
--- file1.py 2024-07-13 22:34:48.876534+00:00
+++ file1.py 2024-07-13 22:35:00.795624+00:00
@@ -11,7 +11,7 @@
"were",
"Peter,",
"Susan,",
"Edmund",
"and",
- "Lucy."
+ "Lucy.",
)
would reformat file1.py
All done! ✨ 🍰 ✨
1 file would be reformatted.
```
**Expected behavior**
The previous statement `f""` does not affect the trailing comma after `"Lucy."`.
**Environment**
- Black's version: 24.4.2
- OS and Python version: Ubuntu 24.04, Python 3.12.3
The same behaviour is exhibited by `https://black.vercel.app/?version=main`. | closed | 2024-07-13T22:36:17Z | 2024-07-15T10:04:38Z | https://github.com/psf/black/issues/4403 | [
"T: bug"
] | JohnADawson | 2 |
d2l-ai/d2l-en | tensorflow | 1,778 | Unify hyperparameters of all frameworks in DCGAN | https://github.com/d2l-ai/d2l-en/blob/master/chapter_generative-adversarial-networks/dcgan.md
Currently the TF implementation (https://github.com/d2l-ai/d2l-en/pull/1760/files) uses a different set of hyperparameters:
```python
#@tab mxnet, pytorch
latent_dim, lr, num_epochs = 100, 0.005, 20
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)
```

```python
#@tab tensorflow
latent_dim, lr, num_epochs = 100, 0.0005, 40
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)
```
Increasing `num_epochs` to 40 doubles the execution time in TF. Let's unify hyperparameters across all the frameworks. | open | 2021-06-08T00:35:07Z | 2023-10-31T14:20:55Z | https://github.com/d2l-ai/d2l-en/issues/1778 | [
"tensorflow-adapt-track"
] | astonzhang | 3 |
coqui-ai/TTS | python | 2,690 | [Bug] Fine Tune YourTTS with Around 100 Audio Samples | ### Describe the bug
@Edresson I want to fine-tune YourTTS with around 100 audio samples. However, the current results are not very good. I have attached my train_yourtts file.
Could you please provide me with some suggestions? Thank you.
### To Reproduce
```python
import os
import torch
from trainer import Trainer, TrainerArgs
from TTS.bin.compute_embeddings import compute_embeddings
from TTS.bin.resample import resample_files
from TTS.config.shared_configs import BaseDatasetConfig
from TTS.tts.configs.vits_config import VitsConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.vits import CharactersConfig, Vits, VitsArgs, VitsAudioConfig
from TTS.utils.downloaders import download_vctk
torch.set_num_threads(24)
# pylint: disable=W0105
"""
This recipe replicates the first experiment proposed in the YourTTS paper (https://arxiv.org/abs/2112.02418).
YourTTS model is based on the VITS model however it uses external speaker embeddings extracted from a pre-trained speaker encoder and has small architecture changes.
In addition, YourTTS can be trained in multilingual data, however, this recipe replicates the single language training using the VCTK dataset.
If you are interested in multilingual training, we have commented on parameters on the VitsArgs class instance that should be enabled for multilingual training.
In addition, you will need to add the extra datasets following the VCTK as an example.
"""
CURRENT_PATH = os.path.dirname(os.path.abspath(__file__))
# Name of the run for the Trainer
RUN_NAME = "YourTTS-EN-VCTK-FT"
# Path where you want to save the models outputs (configs, checkpoints and tensorboard logs)
OUT_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "YourTTS_FT") # "/raid/coqui/Checkpoints/original-YourTTS/"
# If you want to do transfer learning and speedup your training you can set here the path to the original YourTTS model
RESTORE_PATH = "/root/.local/share/tts/tts_models--multilingual--multi-dataset--your_tts/model_file.pth" # "/root/.local/share/tts/tts_models--multilingual--multi-dataset--your_tts/model_file.pth"
# This parameter is useful to debug, it skips the training epochs and just does the evaluation and produces the test sentences
SKIP_TRAIN_EPOCH = False
# Set here the batch size to be used in training and evaluation
BATCH_SIZE = 32
# Training Sampling rate and the target sampling rate for resampling the downloaded dataset (Note: If you change this you might need to redownload the dataset !!)
# Note: If you add new datasets, please make sure that the dataset sampling rate and this parameter are matching, otherwise resample your audios
SAMPLE_RATE = 16000
# Max audio length in seconds to be used in training (every audio bigger than it will be ignored)
MAX_AUDIO_LEN_IN_SECONDS = 10
### Download VCTK dataset
VCTK_DOWNLOAD_PATH = os.path.join(CURRENT_PATH, "VCTK")
# Define the number of threads used during the audio resampling
#NUM_RESAMPLE_THREADS = 10
# Check if VCTK dataset is not already downloaded, if not download it
#if not os.path.exists(VCTK_DOWNLOAD_PATH):
#print(">>> Downloading VCTK dataset:")
#download_vctk(VCTK_DOWNLOAD_PATH)
#resample_files(VCTK_DOWNLOAD_PATH, SAMPLE_RATE, file_ext="flac", n_jobs=NUM_RESAMPLE_THREADS)
# init configs
# dataset config for one of the pre-defined datasets
vctk_config = BaseDatasetConfig(
formatter="ljspeech", meta_file_train="metadata.txt", language="en-us", path="./MyTTSDataset")
#vctk_config = BaseDatasetConfig(
# formatter="vctk",
# dataset_name="vctk",
# meta_file_train="",
# meta_file_val="",
# path=VCTK_DOWNLOAD_PATH,
# language="en",
# ignored_speakers=[
# "p261",
# "p225",
# "p294",
# "p347",
# "p238",
# "p234",
# "p248",
# "p335",
# "p245",
# "p326",
# "p302",
# ], # Ignore the test speakers to full replicate the paper experiment
#)
# Add here all datasets configs, in our case we just want to train with the VCTK dataset then we need to add just VCTK. Note: If you want to add new datasets, just add them here and it will automatically compute the speaker embeddings (d-vectors) for this new dataset :)
DATASETS_CONFIG_LIST = [vctk_config]
### Extract speaker embeddings
SPEAKER_ENCODER_CHECKPOINT_PATH = (
"https://github.com/coqui-ai/TTS/releases/download/speaker_encoder_model/model_se.pth.tar"
)
SPEAKER_ENCODER_CONFIG_PATH = "https://github.com/coqui-ai/TTS/releases/download/speaker_encoder_model/config_se.json"
D_VECTOR_FILES = [] # List of speaker embeddings/d-vectors to be used during the training
# Iterates all the dataset configs checking if the speakers embeddings are already computated, if not compute it
for dataset_conf in DATASETS_CONFIG_LIST:
# Check if the embeddings weren't already computed, if not compute it
embeddings_file = os.path.join(dataset_conf.path, "speakers.pth")
if not os.path.isfile(embeddings_file):
print(f">>> Computing the speaker embeddings for the {dataset_conf.dataset_name} dataset")
compute_embeddings(
SPEAKER_ENCODER_CHECKPOINT_PATH,
SPEAKER_ENCODER_CONFIG_PATH,
embeddings_file,
old_spakers_file=None,
config_dataset_path=None,
formatter_name=dataset_conf.formatter,
dataset_name=dataset_conf.dataset_name,
dataset_path=dataset_conf.path,
meta_file_train=dataset_conf.meta_file_train,
meta_file_val=dataset_conf.meta_file_val,
disable_cuda=False,
no_eval=False,
)
D_VECTOR_FILES.append(embeddings_file)
# Audio config used in training.
audio_config = VitsAudioConfig(
sample_rate=SAMPLE_RATE,
hop_length=256,
win_length=1024,
fft_size=1024,
mel_fmin=0.0,
mel_fmax=None,
num_mels=80,
)
# Init VITSArgs setting the arguments that are needed for the YourTTS model
model_args = VitsArgs(
d_vector_file=D_VECTOR_FILES,
use_d_vector_file=True,
d_vector_dim=512,
num_layers_text_encoder=10,
speaker_encoder_model_path=SPEAKER_ENCODER_CHECKPOINT_PATH,
speaker_encoder_config_path=SPEAKER_ENCODER_CONFIG_PATH,
resblock_type_decoder="2", # In the paper, we accidentally trained the YourTTS using ResNet blocks type 2, if you like you can use the ResNet blocks type 1 like the VITS model
# Useful parameters to enable the Speaker Consistency Loss (SCL) described in the paper
# use_speaker_encoder_as_loss=True,
# Useful parameters to enable multilingual training
use_language_embedding=True,
embedded_language_dim=4,
)
# General training config, here you can change the batch size and others useful parameters
config = VitsConfig(
lr=0.00001,
output_path=OUT_PATH,
model_args=model_args,
run_name=RUN_NAME,
project_name="YourTTS",
run_description="""
- Original YourTTS trained using VCTK dataset
""",
dashboard_logger="tensorboard",
logger_uri=None,
audio=audio_config,
batch_size=BATCH_SIZE,
batch_group_size=48,
eval_batch_size=BATCH_SIZE,
num_loader_workers=8,
eval_split_max_size=256,
print_step=50,
plot_step=100,
log_model_step=1000,
save_step=5000,
save_n_checkpoints=2,
save_checkpoints=True,
target_loss="loss_1",
print_eval=False,
use_phonemes=False,
phonemizer="espeak",
phoneme_language="en",
compute_input_seq_cache=True,
add_blank=True,
text_cleaner="multilingual_cleaners",
characters=CharactersConfig(
characters_class="TTS.tts.models.vits.VitsCharacters",
pad="_",
eos="&",
bos="*",
blank=None,
characters="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz\u00af\u00b7\u00df\u00e0\u00e1\u00e2\u00e3\u00e4\u00e6\u00e7\u00e8\u00e9\u00ea\u00eb\u00ec\u00ed\u00ee\u00ef\u00f1\u00f2\u00f3\u00f4\u00f5\u00f6\u00f9\u00fa\u00fb\u00fc\u00ff\u0101\u0105\u0107\u0113\u0119\u011b\u012b\u0131\u0142\u0144\u014d\u0151\u0153\u015b\u016b\u0171\u017a\u017c\u01ce\u01d0\u01d2\u01d4\u0430\u0431\u0432\u0433\u0434\u0435\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\u0451\u0454\u0456\u0457\u0491\u2013!'(),-.:;? ",
punctuations="!'(),-.:;? ",
phonemes="",
is_unique=True,
is_sorted=True,
),
phoneme_cache_path=None,
precompute_num_workers=12,
start_by_longest=True,
datasets=DATASETS_CONFIG_LIST,
cudnn_benchmark=False,
max_audio_len=SAMPLE_RATE * MAX_AUDIO_LEN_IN_SECONDS,
mixed_precision=False,
test_sentences=[
[
"It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
#"VCTK_p277",
"ljspeech",
None,
"en-us",
],
[
"Be a voice, not an echo.",
#"VCTK_p239",
"ljspeech",
None,
"en-us",
],
[
"I'm sorry Dave. I'm afraid I can't do that.",
#"VCTK_p258",
"ljspeech",
None,
"en-us",
],
[
"This cake is great. It's so delicious and moist.",
#"VCTK_p244",
"ljspeech",
None,
"en-us",
],
[
"Prior to November 22, 1963.",
#"VCTK_p305",
"ljspeech",
None,
"en-us",
],
],
# Enable the weighted sampler
use_weighted_sampler=True,
# Ensures that all speakers are seen in the training batch equally no matter how many samples each speaker has
weighted_sampler_attrs={"speaker_name": 1.0},
weighted_sampler_multipliers={},
# It defines the Speaker Consistency Loss (SCL) α to 9 like the paper
speaker_encoder_loss_alpha=9.0,
)
# Load all the datasets samples and split traning and evaluation sets
#train_samples, eval_samples = load_tts_samples(
# config.datasets,
# eval_split=True,
# eval_split_max_size=config.eval_split_max_size,
# eval_split_size=config.eval_split_size,
#)
train_samples, eval_samples = load_tts_samples(
config.datasets,
eval_split=True,
#eval_split_max_size=config.eval_split_max_size,
#eval_split_size=config.eval_split_size,
eval_split_size=32,
)
# Init the model
model = Vits.init_from_config(config)
# Init the trainer and 🚀
trainer = Trainer(
TrainerArgs(restore_path=RESTORE_PATH, skip_train_epoch=SKIP_TRAIN_EPOCH),
config,
output_path=OUT_PATH,
model=model,
train_samples=train_samples,
eval_samples=eval_samples,
)
trainer.fit()
```
### Expected behavior
Better quality on TTS for new identity.
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A10G"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "1.13.1",
"TTS": "0.14.3",
"numpy": "1.21.6"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
""
],
"processor": "x86_64",
"python": "3.7.0",
"version": "#40~20.04.1-Ubuntu SMP Mon Apr 24 00:21:13 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2023-06-19T18:14:39Z | 2023-06-25T08:04:50Z | https://github.com/coqui-ai/TTS/issues/2690 | [] | ZhichaoWang970201 | 3 |
lucidrains/vit-pytorch | computer-vision | 138 | Little doubts about the 'hard' and 'soft' distillation | Hi, Phil:
I noticed your implementation of the `hard` and `soft` distillation loss.
https://github.com/lucidrains/vit-pytorch/blob/e5324242be61bcbf433e129e914aa4b4fa1a79a0/vit_pytorch/distill.py#L142-L151
But, compared to what facebook published:
https://github.com/facebookresearch/deit/blob/e6b10b554d17c25c083eda5d5d7505608c6981f8/losses.py#L50-L61
I found that in their hard-distillation loss they use the student's `distillation logits` as input instead of the `student logits`. I wonder what the difference is here.
Best
| closed | 2021-08-12T11:48:34Z | 2021-08-12T15:41:55Z | https://github.com/lucidrains/vit-pytorch/issues/138 | [] | CiaoHe | 1 |
deepfakes/faceswap | machine-learning | 1,016 | How can I uninstall faceswap and all the libraries it installed/extracted on my system (amounting to almost 9 GB)? | There is no uninstaller available | closed | 2020-05-01T13:35:36Z | 2020-05-01T14:43:55Z | https://github.com/deepfakes/faceswap/issues/1016 | [] | skd1993 | 0 |
xinntao/Real-ESRGAN | pytorch | 38 | Training time | Hello and thank you for your great work!
You trained ESRNet for 1,000K iterations and ESRGAN for 400K iterations. I was wondering how long training took in your case with 4 V100 GPUs?
I am training with 2 RTX 3090 GPUs, and training only ESRNet shows 10 days :confused:. My training dataset also includes the FFHQ dataset (i.e. DIV2K + Flickr2K + FFHQ). Maybe training on FFHQ improves results on human faces.
Thank you. | open | 2021-08-17T06:19:15Z | 2022-08-05T12:08:53Z | https://github.com/xinntao/Real-ESRGAN/issues/38 | [] | cs20162004 | 17 |
alteryx/featuretools | scikit-learn | 2,688 | fix release notes version for 1.3.0 release | closed | 2024-02-26T17:01:26Z | 2024-02-26T17:23:14Z | https://github.com/alteryx/featuretools/issues/2688 | [] | tamargrey | 0 | |
httpie/cli | python | 852 | .netrc not honored if auth-type is used | Once `--auth-type` switch is used, `.netrc` is not honored. Authentication details must be provided via `--auth` switch.
Details:
```
# http --debug --auth-type=basic example.org
HTTPie 0.9.8
Requests 2.19.1
Pygments 2.2.0
Python 2.7.13 (default, Sep 26 2018, 18:42:22)
[GCC 6.3.0 20170516]
/usr/bin/python
Linux 4.9.0-9-amd64
<Environment {
"colors": 8,
"config": {
"__meta__": {
"about": "u'HTTPie configuration file'",
"help": "u'https://httpie.org/docs#config'",
"httpie": "u'0.9.8'"
},
"default_options": "[]"
},
"config_dir": "/root/.httpie",
"is_windows": false,
"stderr": "<open file '<stderr>', mode 'w' at 0x7fd6490091e0>",
"stderr_isatty": true,
"stdin": "<open file '<stdin>', mode 'r' at 0x7fd6490090c0>",
"stdin_encoding": "UTF-8",
"stdin_isatty": true,
"stdout": "<open file '<stdout>', mode 'w' at 0x7fd649009150>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: --auth required
``` | closed | 2020-02-14T10:00:45Z | 2020-06-16T09:14:10Z | https://github.com/httpie/cli/issues/852 | [
"bug",
"help wanted"
] | pszlazak | 6 |
dgtlmoon/changedetection.io | web-scraping | 2,275 | Notifications not respecting filters | **Describe the bug**
I keep receiving notifications for changes that should be filtered out.
**Version**
v0.45.16
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new watch with a discord notifications and add filters
2. Wait for a change of the website that includes something in the filters
3. Receive the notification
My instance is self-hosted so I don't have a link to share.
**Expected behavior**
I would expect the change to be filtered out and a notification not be triggered.
**Screenshots**
<img width="908" alt="image" src="https://github.com/dgtlmoon/changedetection.io/assets/63423828/1c28bba8-5b19-4cf8-8735-c258da950710">
<img width="725" alt="image" src="https://github.com/dgtlmoon/changedetection.io/assets/63423828/1e39dc9a-823a-41f6-85f8-508c043523f7">
<img width="547" alt="image" src="https://github.com/dgtlmoon/changedetection.io/assets/63423828/f507c698-ecd7-4293-abbe-13db9a1f91a5">
<img width="674" alt="image" src="https://github.com/dgtlmoon/changedetection.io/assets/63423828/2c51df36-75db-4572-846e-976145e858cc">
**Additional context**
Not sure if it's a bug or if I'm understanding something incorrectly.
I have also tried to use filters without the regex pattern. | closed | 2024-03-25T13:24:55Z | 2024-03-27T01:24:20Z | https://github.com/dgtlmoon/changedetection.io/issues/2275 | [
"triage"
] | guipace | 2 |
alpacahq/alpaca-trade-api-python | rest-api | 434 | add 'qty' and 'percentage' arguments to close_position() | The function alpaca_trade_api.rest.close_position() only accepts the 'symbol' argument. Since fractional shares are now available, the 'qty' and 'percentage' options should also be implemented.
I am happy to implement it if this enhancement is acceptable. | closed | 2021-05-24T16:49:16Z | 2021-06-21T04:54:04Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/434 | [] | batmaxx | 1 |
iperov/DeepFaceLab | deep-learning | 5,653 | How to swap a picture to a picture | I want to swap a face from one picture onto another picture. What should I do? | closed | 2023-03-30T07:09:04Z | 2023-06-09T05:53:11Z | https://github.com/iperov/DeepFaceLab/issues/5653 | [] | awsdecvr | 2 |
PedroBern/django-graphql-auth | graphql | 69 | User partial update? | Is there a good way to partially update the user?
I have my `UPDATE_MUTATION_FIELDS` defined like this:
```python
'UPDATE_MUTATION_FIELDS': ['email', 'username', 'nickname', 'profile_pic',
'first_name', 'last_name', 'date_of_birth'],
```
I want the client to able to update those fields one by one.
But when I run a query such as:
```graphql
mutation {
updateAccount(username: "foo-bar"){
errors
success
}
}
```
Only the `username` field gets a value; all the other fields are set to an empty string or `None`.
What is a good way to achieve it? | open | 2020-09-26T14:32:29Z | 2020-09-26T14:32:48Z | https://github.com/PedroBern/django-graphql-auth/issues/69 | [] | bloodwithmilk25 | 0 |
Kanaries/pygwalker | matplotlib | 456 | [DEV-688] [BUG] pygwalker with snowflake import error | ImportError: cannot import name 'string_types' from 'sqlalchemy.util.compat' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sqlalchemy/util/compat.py)
<sub>[DEV-688](https://linear.app/kanaries/issue/DEV-688/[bug]-pygwalker-with-snowflake-import-error)</sub> | closed | 2024-03-03T15:23:52Z | 2024-03-05T06:51:04Z | https://github.com/Kanaries/pygwalker/issues/456 | [
"bug"
] | ObservedObserver | 1 |
iperov/DeepFaceLab | machine-learning | 941 | Error using Xseg trainer | **Please help, i have no idea what this means, bad installation perhaps??**
**I thought tensorflow was already included**
**Following error when using Xseg trainer:**
Running trainer.
Model first run.
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 1080 Ti
[0] Which GPU indexes to choose? : 0
0
[h] Face type ( h/mf/f/wf/head ?:help ) :
h
[4] Batch_size ( 2-16 ?:help ) : 2
2
Traceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 115, in _main
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\initializers\__init__.py", line 2, in <module>
from tensorflow.python.ops import init_ops
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Traceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 115, in _main
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\initializers\__init__.py", line 2, in <module>
from tensorflow.python.ops import init_ops
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The paging file is too small for this operation to complete.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 115, in _main
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\__init__.py", line 1, in <module>
from .nn import nn
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\nn.py", line 26, in <module>
from core.interact import interact as io
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\interact\__init__.py", line 1, in <module>
from .interact import interact
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\interact\interact.py", line 9, in <module>
import cv2
File "C:\Deepfake\DeepFaceLab\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\cv2\__init__.py", line 3, in <module>
from .cv2 import *
ImportError: DLL load failed: The paging file is too small for this operation to complete.
[... the same ImportError traceback chain ("DLL load failed: The paging file is too small for this operation to complete." followed by "Failed to load the native TensorFlow runtime. See https://www.tensorflow.org/install/errors") repeats here once per additional spawned worker process; the remainder of the paste is those identical tracebacks interleaved line by line across the concurrent processes ...]
Teemu/pytest-sugar | pytest | 207 | Markdown on PyPI | When I go to the [pytest-sugar page on PyPI](https://pypi.org/project/pytest-sugar), I see that markdown is not rendered. The version there is 0.9.4 and if [there](https://github.com/Teemu/pytest-sugar/blob/master/pytest_sugar.py#L31) were no other changes, this version was introduced with 92ae9dee - the latest commit.
I'm uncertain why markdown is not rendered, though. I have a similar setup for my packages and they look fine. Does anybody here have an idea? | closed | 2020-08-23T07:37:06Z | 2022-11-09T11:39:58Z | https://github.com/Teemu/pytest-sugar/issues/207 | [
"dont-know"
] | MartinThoma | 7 |
holoviz/colorcet | plotly | 10 | ECCN number for colorcet | hi Team,
could you p[lease help us with the ECCN number of the software colorcet version.
If you do not have your software classified with an ECCN, please kindly answer the following questions so that we may self-assess:
|   | NO | YES |
| --- | --- | --- |
| Does the Software perform any encryption or utilize any encryption processes? |   |   |
| If the answer is YES to the above, please indicate if the encryption is coded into the application or separately called (such as using SSL) |   |   |
| If the answer is YES to the above, please indicate what function(s) the cryptography/encryption serves |   |   |
| A, Copyright protection purposes (Includes using a license key/code) |   |   |
| B, User authentication purposes |   |   |
| C, A core part of the functionality such as to encrypt databases |   |   |
| D, To encrypt communications between the software and a host system? |   |   |
Regards,
Kriti Bhatnagar
Software analyst
New Products & Complex Team
EMIT | IT OPS | CES | WDS | SAM
HCL Technologies Limited
(CIN: L74140DL1991PLC046369)
10th Floor, ODC-IV, Software Tower 6, Sector 126
Noida SEZ, Uttar Pradesh – 201301, India
Phone: +1-4088093746 (ext.4144395)
Email:- kriti.bhatnagar@exxonmobil.com
for
ExxonMobil Global Services Company
22777 Springwoods Village Parkway
Spring, TX 77389
United States of America
| closed | 2018-05-14T07:58:00Z | 2018-08-24T13:40:03Z | https://github.com/holoviz/colorcet/issues/10 | [] | kbhatna1 | 1 |
kymatio/kymatio | numpy | 758 | Preallocation for speed | I found a x1.7 speed gain via simple reused preallocation:
```python
U_1_c0 = zeros_like(U_0_hat)
for n1 in range(len(psi1)):
    cdgmm(U_0_hat, psi1[n1][0], out=U_1_c0)
    U_1_hat = subsample_fourier(U_1_c0, 2**K1)
    # etc
```
Runtime: `901ms -> 540ms`. This can repeat for other basic ops to speed up the entire pipeline.
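For readers unfamiliar with the pattern, the same buffer reuse in plain NumPy looks like this (toy example; names are mine):

```python
import numpy as np

def filtered_sums(x, filters):
    """Multiply x by each filter, reusing one preallocated buffer."""
    buf = np.empty_like(x)             # allocated once, before the loop
    sums = []
    for f in filters:
        np.multiply(x, f, out=buf)     # in-place write: no per-iteration allocation
        sums.append(float(buf.sum()))  # consume before the next overwrite
    return sums
```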
The caveat is [breaking differentiability](https://stackoverflow.com/q/68043831/10133797) for torch. Workarounds include:
- A) `differentiable=True` kwarg
- B) `try-except` to detect differentiable context at runtime and set internal flag
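Option B boils down to a cheap runtime probe before taking the in-place path. A duck-typed sketch of what that check could look like (attribute names follow torch, but the helper is written so it also runs without torch installed; real torch code would consult `torch.is_grad_enabled()` and `tensor.requires_grad`):

```python
def inplace_safe(x):
    """Return True when writing into a reused buffer cannot break autograd.

    Plain arrays (e.g. NumPy) have neither attribute, so they are always
    safe; a tensor tracked by an autograd engine is not.
    """
    tracked = getattr(x, "requires_grad", False) or getattr(x, "grad_fn", None)
    return not tracked
```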
@lostanlen raises (correct me if I'm wrong) two concerns: 1) the added kwarg; 2) increased testing complexity, requiring additional branches akin to additional backends. My response:
1. Non-differentiability is a common use case and a kwarg is a small price to pay for 50%+ overall speedup. Worst case, there's B)
2. Only tests that explicitly exercise differentiability are affected, and there's only one in all of Kymatio: `test_differentiability_scattering()` in `test_torch_scattering1d.py`. All that's needed is setting a kwarg for this one test (or nothing with B) - and easy enough if there are more tests. This does not require additional testing branches, workflows, etc
- Can add a `test_outputs_agree()` but that's again simple
What are your takes @janden @eickenberg ?
| closed | 2021-07-01T22:50:22Z | 2022-05-30T15:20:35Z | https://github.com/kymatio/kymatio/issues/758 | [] | OverLordGoldDragon | 6 |
piskvorky/gensim | machine-learning | 3,302 | don't install *.c *.cpp *.pxd *.pyx files | #### Problem description
I am installing gensim with `setup.py install`. I expected that this would only install needed files. I noticed that this installs files such as `*.c`, `*.cpp`, `*.pxd`, and `*.pyx` that aren't needed when importing and using the gensim Python modules.
#### Steps/code/corpus to reproduce
These are the files that are installed but should not be. They are source code and are not used by the gensim Python modules, so should not get installed by `setup.py install` or in the wheels or elsewhere.
```
gensim/_matutils.c: C source, ASCII text
gensim/_matutils.pyx: a /usr/bin/env cython script, ASCII text executable
gensim/corpora/_mmreader.c: C source, ASCII text
gensim/corpora/_mmreader.pyx: Python script, ASCII text executable
gensim/models/doc2vec_corpusfile.cpp: C source, ASCII text
gensim/models/doc2vec_corpusfile.pyx: a /usr/bin/env cython script, ASCII text executable
gensim/models/doc2vec_inner.cpp: C source, ASCII text
gensim/models/doc2vec_inner.pxd: a /usr/bin/env cython script, ASCII text executable
gensim/models/doc2vec_inner.pyx: a /usr/bin/env cython script, ASCII text executable
gensim/models/fast_line_sentence.h: C++ source, ASCII text
gensim/models/fasttext_corpusfile.cpp: C source, ASCII text
gensim/models/fasttext_corpusfile.pyx: a /usr/bin/env cython script, ASCII text executable
gensim/models/fasttext_inner.c: C source, ASCII text
gensim/models/fasttext_inner.pxd: a /usr/bin/env cython script, ASCII text executable
gensim/models/fasttext_inner.pyx: a /usr/bin/env cython script, ASCII text executable
gensim/models/nmf_pgd.c: C source, ASCII text
gensim/models/nmf_pgd.pyx: Python script, ASCII text executable
gensim/models/stdint_wrapper.h: C source, ASCII text
gensim/models/voidptr.h: C source, ASCII text
gensim/models/word2vec_corpusfile.cpp: C source, ASCII text
gensim/models/word2vec_corpusfile.pxd: ASCII text
gensim/models/word2vec_corpusfile.pyx: a /usr/bin/env cython script, ASCII text executable
gensim/models/word2vec_inner.c: C source, ASCII text
gensim/models/word2vec_inner.pxd: ASCII text
gensim/models/word2vec_inner.pyx: a /usr/bin/env cython script, ASCII text executable
gensim/similarities/fastss.c: C source, ASCII text
```
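A listing like the one above can be generated programmatically; a small sketch for auditing any installed package tree (helper name is mine):

```python
import pathlib

SOURCE_SUFFIXES = {".c", ".cpp", ".pyx", ".pxd", ".h"}

def source_leftovers(root):
    """Return build-time source files found under an installed package tree."""
    root = pathlib.Path(root)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.suffix in SOURCE_SUFFIXES
    )
```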
These files are also in the official wheels gensim distributes on PyPI:
```
$ wget https://files.pythonhosted.org/packages/06/66/e875156aca2edf0416a8739894dc97b05429ebfa4ada934774361fbf25c7/gensim-4.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
$ unzip -l gensim-4.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl | grep -v test_data | grep -v \\.py$ | grep -v cpython-39 | grep -v dist-info
Archive: gensim-4.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
Length Date Time Name
--------- ---------- ----- ----
1042135 2021-09-16 23:04 gensim/_matutils.c
8966 2021-09-16 23:03 gensim/_matutils.pyx
10796 2021-09-16 23:03 gensim/models/fasttext_corpusfile.pyx
527 2021-09-16 23:03 gensim/models/stdint_wrapper.h
31383 2021-09-16 23:03 gensim/models/doc2vec_inner.pyx
448071 2021-09-16 23:04 gensim/models/doc2vec_corpusfile.cpp
542851 2021-09-16 23:04 gensim/models/fasttext_inner.c
334069 2021-09-16 23:04 gensim/models/fasttext_corpusfile.cpp
310 2021-09-16 23:03 gensim/models/voidptr.h
38384 2021-09-16 23:03 gensim/models/word2vec_inner.pyx
3621 2021-09-16 23:03 gensim/models/doc2vec_inner.pxd
780658 2021-09-16 23:04 gensim/models/nmf_pgd.c
1842 2021-09-16 23:03 gensim/models/nmf_pgd.pyx
17740 2021-09-16 23:03 gensim/models/word2vec_corpusfile.pyx
24392 2021-09-16 23:03 gensim/models/fasttext_inner.pyx
590839 2021-09-16 23:04 gensim/models/doc2vec_inner.cpp
2166 2021-09-16 23:03 gensim/models/word2vec_corpusfile.pxd
24883 2021-09-16 23:03 gensim/models/doc2vec_corpusfile.pyx
606169 2021-09-16 23:04 gensim/models/word2vec_inner.c
1200 2021-09-16 23:03 gensim/models/fast_line_sentence.h
5310 2021-09-16 23:03 gensim/models/word2vec_inner.pxd
630097 2021-09-16 23:04 gensim/models/word2vec_corpusfile.cpp
4873 2021-09-16 23:03 gensim/models/fasttext_inner.pxd
7365 2021-09-16 23:03 gensim/corpora/_mmreader.pyx
448463 2021-09-16 23:04 gensim/corpora/_mmreader.c
335535 2021-09-16 23:04 gensim/similarities/fastss.c
```
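On the packaging side, setuptools has a knob for keeping such files out of built distributions. A hedged sketch of what the relevant `setup()` arguments could look like (illustrative values; gensim's real setup.py is more involved):

```python
from setuptools import find_packages

def setup_kwargs():
    """Keyword arguments for setup() that keep Cython/C sources out of
    wheels and installs; only the compiled extensions are needed at runtime."""
    return {
        "name": "example",
        "version": "0.1.0",
        "packages": find_packages(),
        "exclude_package_data": {"": ["*.c", "*.cpp", "*.pyx", "*.pxd", "*.h"]},
    }
```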
#### Versions
```
Linux-5.16.0-2-amd64-x86_64-with-glibc2.33
Python 3.9.10 (main, Feb 22 2022, 13:54:07)
[GCC 11.2.0]
Bits 64
NumPy 1.21.5
SciPy 1.7.3
gensim 4.1.2
FAST_VERSION 1
``` | open | 2022-02-28T08:12:20Z | 2022-04-02T06:09:12Z | https://github.com/piskvorky/gensim/issues/3302 | [
"bug",
"reach HIGH",
"impact LOW",
"housekeeping"
] | pabs3 | 7 |
python-restx/flask-restx | flask | 33 | Exception data is returned instead of custom error handle data | Hi,
I'm having an issue with the error handler, which does not return the correct data to the client. Here's a simple test case that currently fails:
### **Code**
```python
def test_errorhandler_for_custom_exception_with_data(self, app, client):
    api = restx.Api(app)

    class CustomException(RuntimeError):
        data = "Foo Bar"

    @api.route('/test/', endpoint='test')
    class TestResource(restx.Resource):
        def get(self):
            raise CustomException('error')

    @api.errorhandler(CustomException)
    def handle_custom_exception(error):
        return {'message': str(error), 'test': 'value'}, 400

    response = client.get('/test/')
    assert response.status_code == 400
    assert response.content_type == 'application/json'

    data = json.loads(response.data.decode('utf8'))
    assert data == {
        'message': 'error',
        'test': 'value',
    }
```
```
E AssertionError: assert 'Foo Bar' == {'message': 'error', 'test': 'value'}
```
### **Repro Steps**
1. Register an error handler
2. Raise an exception having the attribute **data** (such as Marshmallow ValidationError)
3. The client gets back the value of the data attribute of the exception, not the one from the error handler
### **Expected Behavior**
Should return to the client the return value of the error handler
### **Actual Behavior**
Output the exception attribute instead
### **Environment**
- Python version: 3.6 and 3.7
- Flask version: 1.1.1
- Flask-RESTX version: 0.1.0
### **Additional Context**
The issue appears when the exception has an attribute **data**, because in **api.py** line 655 (*handle_error*) there's a ```data = getattr(e, 'data', default_data)```, so the handler returns the **data** attribute of the exception **e** instead of the custom handler data located in **default_data**
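That precedence can be reproduced in isolation, without Flask at all; a minimal sketch (`default_data` here stands in for the handler's return value):

```python
# Minimal sketch of the precedence problem described above.
# `default_data` stands in for what the custom error handler returned.
default_data = {'message': 'error', 'test': 'value'}

class CustomException(RuntimeError):
    data = "Foo Bar"  # e.g. Marshmallow's ValidationError also carries a `data` attribute

e = CustomException('error')

# Mirrors the line in api.py: the exception's own `data` attribute
# shadows the handler's return value whenever it exists.
data = getattr(e, 'data', default_data)
print(data)  # → Foo Bar
```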
Thanks!
| open | 2020-02-06T14:23:00Z | 2020-02-12T17:43:59Z | https://github.com/python-restx/flask-restx/issues/33 | [
"bug"
] | AchilleAsh | 2 |
huggingface/transformers | tensorflow | 36,846 | Tansfomers_model | ### Model description
nothing
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | open | 2025-03-20T10:09:23Z | 2025-03-20T11:17:19Z | https://github.com/huggingface/transformers/issues/36846 | [
"New model"
] | abdul-muhmin | 1 |
ml-tooling/opyrator | fastapi | 5 | Finalize pex export capabilities | **Feature description:**
Finalize capabilities to export an Opyrator to a PEX file.
[PEX](https://github.com/pantsbuild/pex) is a tool to create self-contained executable Python environments that contain all relevant python dependencies.
The export can be executed via command line:
```bash
opyrator export my_opyrator:hello_world --format=pex my-opyrator.pex
``` | closed | 2021-04-19T10:03:51Z | 2021-11-02T02:12:12Z | https://github.com/ml-tooling/opyrator/issues/5 | [
"feature",
"stale"
] | lukasmasuch | 2 |
falconry/falcon | api | 1,607 | Errors in response serialization are not handled | I found this when testing @kgriffs' ASGI branch, but this is an issue in all (recent) Falcon version, including the (at the time of writing) stable 2.0.
Errors in response serialization are not handled, for example:
* Missing media handler for the given [response content-type](https://falcon.readthedocs.io/en/stable/api/request_and_response.html#falcon.Response.content_type) results in an [HTTPUnsupportedMediaType](https://falcon.readthedocs.io/en/stable/api/errors.html#falcon.HTTPUnsupportedMediaType).
* Any exceptions raised in [media serialization](https://falcon.readthedocs.io/en/stable/api/media.html#falcon.media.BaseHandler.serialize).
This is somewhat more important now with the recently merged https://github.com/falconry/falcon/issues/1507. The user might expect that at least the generic handler catches the cases above. They would instead be propagated to the WSGI server.
"bug"
] | vytas7 | 5 |
KaiyangZhou/deep-person-reid | computer-vision | 210 | Plot ranked images on a single figure | closed | 2019-07-24T10:43:29Z | 2019-10-22T21:40:45Z | https://github.com/KaiyangZhou/deep-person-reid/issues/210 | [
"new_feature"
] | KaiyangZhou | 3 | |
PrefectHQ/prefect | data-science | 17,143 | Flow run labels missing from Kubernetes job | ### Bug summary
We're running a self-hosted Prefect on EKS, deployed via the official Helm charts. We want to add custom labels to our Kubernetes jobs, to allow for easier identification of flow run jobs in our analytics and metrics tools. However, when specifying `labels={"example": "example"}` in `create_flow_run_from_deployment`, the labels do not appear in the resulting Kubernetes job.
#### Expected behavior
When I run:
```py
#!/usr/bin/env python
from prefect import get_client
with get_client(sync_client=True) as client:
deployment = client.read_deployment_by_name("my-flow/my-deployment")
flow_run = client.create_flow_run_from_deployment(
deployment_id=deployment.id,
labels={"example": "example"},
)
```
I expect the resulting job’s metadata to include `example=example` under `metadata.labels`.
#### Actual behavior
Inspecting the job shows that the label `example=example` is not present:
```
$ kubectl get jobs --show-labels
NAME STATUS COMPLETIONS DURATION AGE LABELS
aloof-tench-k7l6s Complete 1/1 15s 36s prefect.io/deployment-id=b4e7151f-de9e-4c00-9424-61bc3b838f1c,prefect.io/deployment-name=my-deployment,prefect.io/deployment-updated=2025-02-13t16-30-44.632196z,prefect.io/flow-id=8046cc02-d1ca-4022-8720-1ee7f579df64,prefect.io/flow-name=my-flow,prefect.io/flow-run-id=8bc3e898-1826-4ff3-84f4-b9dc4df01d8d,prefect.io/flow-run-name=aloof-tench,prefect.io/version=3.2.1
```
#### Temporary solution
Passing the labels through the `job_variables` parameter instead:
```py
#!/usr/bin/env python
from prefect import get_client
with get_client(sync_client=True) as client:
deployment = client.read_deployment_by_name("my-flow/my-deployment")
flow_run = client.create_flow_run_from_deployment(
deployment_id=deployment.id,
job_variables={"labels": {"example": "example"}},
)
```
does apply the label, but it may override any default labels set in `job_variables`, if they're not replicated here.
```
$ kubectl get jobs --show-labels
NAME STATUS COMPLETIONS DURATION AGE LABELS
accurate-petrel-gq7z7 Running 0/1 12s 12s example=example,...
```
#### Proposed solution
In my opinion, it would be ideal if Prefect merged top-level `flow_run.labels` with the job configuration labels. In particular, I believe merging `flow_run.labels` into this dict would resolve the issue: https://github.com/PrefectHQ/prefect/blob/cd03e6f3c1de7e85d9edd6ca568de7223d6b205e/src/prefect/workers/base.py#L253
This change would allow clients to apply additional flow run labels on the infrastructure, in addition to default ones set in `job_variables` on deployments/job templates.
### Version info
```Text
Version: 3.2.1
API version: 0.8.4
Python version: 3.11.6
Git commit: f8b15dfb
Built: Mon, Feb 10, 2025 3:20 PM
OS/Arch: linux/x86_64
Profile: local
Server type: server
Pydantic version: 2.10.6
Integrations:
prefect-kubernetes: 0.5.3
```
### Additional context
_No response_ | open | 2025-02-14T17:24:02Z | 2025-02-14T17:24:17Z | https://github.com/PrefectHQ/prefect/issues/17143 | [
"bug"
] | janesch97 | 0 |
holoviz/panel | matplotlib | 7,219 | build-docs fails because of missing xserver | I was trying to build the docs by running `build-docs`. I get
```bash
Successfully converted examples/gallery/streaming_videostream.ipynb to pyodide-worker target and wrote output to streaming_videostream.html.
/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/site-packages/pyvista/plotting/plotter.py:159: UserWarning:
This system does not appear to be running an xserver.
PyVista will likely segfault when rendering.
Try starting a virtual frame buffer with xvfb, or using
``pyvista.start_xvfb()``
warnings.warn(
2024-09-01 04:06:18.005 ( 1.726s) [ 7F029815D740]vtkXOpenGLRenderWindow.:456 ERR| vtkXOpenGLRenderWindow (0x5579fc8fd490): bad X server connection. DISPLAY=
ERROR:root:bad X server connection. DISPLAY=
Traceback (most recent call last):
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/bin/panel", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/jovyan/repos/private/panel/panel/command/__init__.py", line 101, in main
ret = Convert(parser).invoke(args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/command/convert.py", line 113, in invoke
convert_apps(
File "/home/jovyan/repos/private/panel/panel/io/convert.py", line 583, in convert_apps
files = _convert_process_pool(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/panel/io/convert.py", line 483, in _convert_process_pool
result = future.result()
^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/.pixi/envs/docs/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
```
I'm running on Linux inside a Docker container on a JupyterHub. Probably something needs to be installed or configured, but what?
A solution or workaround for me to be able to build the docs and work with them would be highly appreciated. | closed | 2024-09-01T04:14:49Z | 2024-09-09T10:32:49Z | https://github.com/holoviz/panel/issues/7219 | [
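For what it's worth, the PyVista warning in the traceback points at a missing virtual framebuffer; a hedged sketch of what I would try (Debian-based container assumed, package names are guesses):

```shell
# Install a virtual X server inside the container (assumes apt and root access).
apt-get update && apt-get install -y xvfb
# Run the docs build under it so VTK gets a DISPLAY to render into.
xvfb-run -a build-docs
```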
"type: docs"
] | MarcSkovMadsen | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 411 | Add wrapper for self supervised loss | A common use case is to have ```embeddings``` and ```ref_emb``` be augmented versions of each other. For most losses right now you have to create labels to indicate which ```embeddings``` correspond with which ```ref_emb```. A wrapper that does this for the user would be nice.
Something like:
```python
loss_fn = SelfSupervisedWrapper(TripletMarginLoss())
loss = loss_fn(embeddings, ref_emb)
``` | closed | 2022-01-01T04:45:23Z | 2023-01-30T00:10:22Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/411 | [
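Under the hood, such a wrapper could just assign matching labels to paired rows; a pure-Python sketch of the idea (the wrapper call shape is an assumption, not the current API):

```python
def self_supervised_labels(batch_size):
    # Row i of `embeddings` and row i of `ref_emb` get the same label i,
    # so any label-based loss treats each augmented pair as a positive
    # and every other row as a negative.
    return list(range(batch_size))

labels = self_supervised_labels(4)
print(labels)  # → [0, 1, 2, 3]
# A wrapper could then delegate, roughly:
# loss = inner_loss(embeddings, labels, ref_emb=ref_emb, ref_labels=labels)
```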
"enhancement"
] | KevinMusgrave | 3 |
Integuru-AI/Integuru | automation | 21 | style: Unstructured website | Why is the website so unstructured and looking like just a simple HTML file?? | closed | 2024-11-02T07:02:32Z | 2024-11-04T14:58:41Z | https://github.com/Integuru-AI/Integuru/issues/21 | [] | PredictiveManish | 1 |
replicate/cog | tensorflow | 1,235 | Cog build fails on Ubuntu | Ubuntu : 22.04.2 LTS
Docker : 20.10.21, build 20.10.21-0ubuntu1~22.04.3
Cog : cog version 0.8.3 (built 2023-07-27T21:48:28Z)
GPU : false
Failed to build the getting-started Python environment from `cog init`.
Output is :-
```
abdullah@abdullah-HP-EliteBook-840-G4:~/Desktop/Github/cog-getting-started$ cog run python
Building Docker image from environment in cog.yaml...
unknown flag: --file
See 'docker --help'.
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Options:
--config string Location of client config files (default
"/home/abdullah/.docker")
-c, --context string Name of the context to use to connect to the
daemon (overrides DOCKER_HOST env var and
default context set with "docker context use")
-D, --debug Enable debug mode
-H, --host list Daemon socket(s) to connect to
-l, --log-level string Set the logging level
("debug"|"info"|"warn"|"error"|"fatal")
(default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default
"/home/abdullah/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default
"/home/abdullah/.docker/cert.pem")
--tlskey string Path to TLS key file (default
"/home/abdullah/.docker/key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
Management Commands:
builder Manage builds
config Manage Docker configs
container Manage containers
context Manage contexts
image Manage images
manifest Manage Docker image manifests and manifest lists
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes
Commands:
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
Run 'docker COMMAND --help' for more information on a command.
To get more help with docker, check out our guides at https://docs.docker.com/go/guides/
ⅹ Failed to build Docker image: exit status 125
```
| closed | 2023-07-29T15:19:55Z | 2024-01-22T22:45:22Z | https://github.com/replicate/cog/issues/1235 | [] | AbdullahMakhdoom | 13 |
amidaware/tacticalrmm | django | 1,719 | Fresh install cannot finish - right denied core_coresettings table | **Server Info (please complete the following information):**
- OS: [Debian 12]
- Browser: [nothing]
- RMM Version (nothing because the installation failed):
**Installation Method:**
- [ x] Standard
- [ ] Docker
**Describe the bug**
when I install tactical rmm with a self-signed certificate (./install --insecure) the script gets stuck with this message (Mesh Central not ready yet...).
When I ran the troubleshooting script, it told me that nats, nats-api and mesh were not running. I then used the systemctl status command to check what was happening. I noticed that the configuration file (nats-rmm.conf) didn't exist, so I ran the following command, which solved the problem (/rmm/api/env/bin/python /rmm/api/tacticalrmm/manage.py reload_nats) and I was able to start nats. As for nats-api, it just wasn't started.
Then I looked at mesh and did the following command as indicated in the documentation (/rmm/api/env/bin/python /rmm/api/tacticalrmm/manage.py check_mesh). This is where it gets tricky: I get the error I put in the attachment file
[tactical_rmm_django_error.txt](https://github.com/amidaware/tacticalrmm/files/13788650/tactical_rmm_django_error.txt)
**To Reproduce**
Steps to reproduce the behavior:
1. Launch the install script with --insecure
2. Do the configuration
3. See this message "Mesh Central not ready yet..."
**Expected behavior**
The result of this command (/rmm/api/env/bin/python /rmm/api/tacticalrmm/manage.py check_mesh) as shown in the documentation
**Additional context**
I just want to install tactical rmm. I just have ssh on the server.
It is a fresh install
| closed | 2023-12-28T18:27:20Z | 2023-12-28T19:37:50Z | https://github.com/amidaware/tacticalrmm/issues/1719 | [] | nightwolf-1 | 0 |
flairNLP/flair | pytorch | 3,380 | [Bug]: Model double sizes after training. Ho to make FP16 for prediction? | ### Describe the bug
I have flair in production but the problem is that it is working quite slowly. I was trying ONNX, but unfortunately it doesn't work with deberta-v3. I am still investigating; maybe I am doing something wrong.
But today's problem is that I came across a thing that doubles my model size after training. It is a common thing for DeBERTa (https://discuss.huggingface.co/t/why-is-uploaded-model-twice-the-size-of-actual-model/18782/7) because it was trained with mixed precision and should be initialized as:
`model = AutoModelForSequenceClassification.from_pretrained(model_name, torch_dtype=torch.float16).to(device)`
with `torch_dtype=torch.float16`, but flair doesn't do so. So I am facing the issue that the Hugging Face `microsoft/deberta-v3-small` weighs **286 MB** while the same model fine-tuned with flair on a NER task weighs **574 MB**.
So my question is:
1) How do I get the model in its normal size, with fp16 training and prediction?
2) And maybe you have some advice on how I can get the maximum speed out of the model?
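For question (1), the doubling is purely the weight dtype (fp32 stores 4 bytes per parameter, fp16 stores 2); a minimal numpy sketch standing in for the torch checkpoint tensors:

```python
import numpy as np

# Stand-in for a model weight matrix saved in a checkpoint.
w32 = np.zeros((1000, 1000), dtype=np.float32)
w16 = w32.astype(np.float16)  # what torch_dtype=torch.float16 would give

print(w32.nbytes, w16.nbytes)  # → 4000000 2000000  (exactly half)
```

So casting the fine-tuned model to half precision before saving (e.g. `model.half()` in torch) should bring the checkpoint back to roughly the Hugging Face size; whether flair exposes a switch for this is exactly my open question.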
### To Reproduce
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "3" #,2,3"
os.environ['TRANSFORMERS_CACHE'] = '/home/user/cache'
os.environ['TRANSFORMERS_OFFLINE'] = "1"
import flair
flair.set_seed(2)
from flair.datasets import CONLL_03
from flair.embeddings import WordEmbeddings, FlairEmbeddings, StackedEmbeddings, CharacterEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
from flair.datasets import ColumnCorpus
from flair.trainers import ModelTrainer
from flair.data import MultiCorpus
from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings
import datetime
import torch
import json
column_format={0: "text",3: "ner"}
corpus_03 = CONLL_03(base_path='data/conll-2003', column_format = column_format, label_name_map={
'PER': 'PER',
'LOC': 'O',
'ORG': 'ORG',
'MISC': 'O'
# by renaming to 'O' this tag gets ignored
})
corpus = MultiCorpus([corpus_03, ...])
label_type = 'ner'
label_dict = corpus.make_label_dictionary(label_type=label_type)
embeddings = TransformerWordEmbeddings(model='microsoft/deberta-v3-small',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=False,
)
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=label_dict,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune('models/0_13_deberta/',
learning_rate=5.0e-5,
mini_batch_size=32,
embeddings_storage_mode='cpu',
monitor_test = True,
train_with_dev = True,
max_epochs=3)
```
### Expected behavior
A model with the same size as the Hugging Face one.
### Logs and Stack traces
_No response_
### Screenshots


### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.13.0
##### Pytorch
2.1.0.dev20230807+cu121
##### Transformers
4.21.0
#### GPU
True | closed | 2023-11-30T14:27:02Z | 2023-12-05T10:59:47Z | https://github.com/flairNLP/flair/issues/3380 | [
"bug"
] | iliaNecrov | 7 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,750 | [Feature Request]: Has webui been abandoned? It feels like it hasn't been updated in a long time! | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
May I ask, has webui been abandoned? It feels like it hasn't been updated in a long time!
### Proposed workflow
May I ask, has webui been abandoned? It feels like it hasn't been updated in a long time!
### Additional information
May I ask, has webui been abandoned? It feels like it hasn't been updated in a long time!
"enhancement"
] | BannyLon | 1 |
plotly/dash | dash | 2,725 | Allow background callback tasks to programmatically retry later. | **Is your feature request related to a problem? Please describe.**
Background callbacks running in a distributed environment (Openshift or Kubernetes) can fail for reasons that are recoverable via application logic. e.g. a data resource isn't available at a point in time, but will be available in the future.
A bad solution is to have the background callback task check for a resource and `sleep` for some amount of time, then check again later, and repeat. This consumes the `Celery Worker` thread for no reason, and in our app, leads to worker pool exhaustion.
**Describe the solution you'd like**
It'd make sense for a background callback task to:
1. check whether it can execute given the current state,
2. proceed if it can,
3. re-enqueue itself if it can't, yielding the worker thread to be used by another task.
Since background callbacks are Celery tasks, the features to enable programatic retries are already available with the `bind` argument: a task receives a `self` parameter that can be instructed to retry.
This might look like the following pseudocode:
```python
@dash.callback(
... # Inputs and Outputs
background=True,
celery_bind=True, # first param to func must be for 'self'
retry_on_exceptions=[DBNotAvailableRetry],
)
def func(self, conn):
val = conn.get_value() # raises DBNotAvailableRetry exception
if not val:
self.retry(after="5s", exponential_backoff=True, jitter=True)
return val
```
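The delay math behind `exponential_backoff=True, jitter=True` is simple; a framework-free sketch (parameter names here are hypothetical, not Celery's actual API):

```python
import random

def backoff_delay(attempt, base=5.0, cap=300.0, jitter=True):
    # 5s, 10s, 20s, ... doubling per attempt, capped at 5 minutes.
    delay = min(cap, base * (2 ** attempt))
    if jitter:
        # "Full jitter": spread retries out to avoid thundering herds.
        delay = random.uniform(0, delay)
    return delay

print(backoff_delay(3, jitter=False))  # → 40.0
```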
**Describe alternatives you've considered**
Since Dash controls the context of the executing tasks when it's enqueued in Celery, the functionality of pushing the `self` parameter into the background callback arguments could be avoided if Dash instead implemented exception handling that would trigger retries when caught.
```python
@celery_app.task(
bind=True
)
def dash_bg_callback_wrapper(self, user_func, args):
try:
results = user_func(*args)
return results
except dash.BG_RETRY_EXCEPTION as e:
self.retry(
after=e.args["after"] # user could set this, knowing their app- would default to 0 time before retry.
)
```
| open | 2024-01-12T17:30:21Z | 2024-08-13T19:44:46Z | https://github.com/plotly/dash/issues/2725 | [
"feature",
"P3"
] | JamesKunstle | 10 |
Avaiga/taipy | automation | 1,733 | [🐛 BUG] Continuous Slider change does not work with Lov | ### What went wrong? 🤔
I want to make a [slider](https://docs.taipy.io/en/release-3.1/manuals/gui/viselements/slider/) that updates the value of a text each time I slide to a different value, even if I do not release the slider
My code:
```python
from taipy.gui import builder as tgb
from taipy.gui import Gui
text = "Dio"
with tgb.Page() as page:
tgb.text("It was me {text} !!!")
tgb.slider("{text}", lov="Dio;Jotaro;Giorno", continuous=True)
Gui(page,).run(debug=True, use_reloader=True, port=5111)
```
I expect `continuous=True` to do the trick. I know that with `lov` it is disabled by default, but setting it to True should still apply it, no? Otherwise it should throw an error for me.
taipy==3.1.1
taipy-gui==3.1.4
I got the same with taipy dev1 4
### Expected Behavior
Setting `continuous=True` should change the value when cursor position change, not only when it is released
### Browsers
Firefox
### OS
Linux
### Version of Taipy
3.1.1 and develop
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-08-31T10:33:00Z | 2024-09-05T16:15:01Z | https://github.com/Avaiga/taipy/issues/1733 | [
"🖰 GUI",
"💥Malfunction",
"🟨 Priority: Medium"
] | ShootingStarD | 1 |
xorbitsai/xorbits | numpy | 30 | [Feature] mars.remote adaption | closed | 2022-11-17T04:29:54Z | 2022-12-09T04:58:18Z | https://github.com/xorbitsai/xorbits/issues/30 | [] | UranusSeven | 0 | |
cvat-ai/cvat | computer-vision | 8,260 | CVAT doesn't show the label returned by nuclio (custom yolov3 model) | Actions before raising this issue
I searched the existing issues and did not find anything similar.
I read/searched [the docs](https://docs.cvat.ai/docs/)
Hi,
I have deployed a custom yolov3 model for automatic annotations. CVAT does show that the annotation task has been successful but no label is displayed on the image. The labels are returned by the serverless function (checked by ensuring logging for the handler function). The CVAT server log is
"
2024-08-05 15:38:29,647 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/projects/4 HTTP/1.0" 200 OK
2024-08-05 15:38:29,656 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/labels?project_id=4&org=&page_size=500&page=1 HTTP/1.0" 200 OK
2024-08-05 15:38:29,674 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/projects/4 HTTP/1.0" 200 OK
2024-08-05 15:38:29,684 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/labels?project_id=4&org=&page_size=500&page=1 HTTP/1.0" 200 OK
2024-08-05 15:38:29,710 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/labels?project_id=4&org=&page_size=500&page=1 HTTP/1.0" 200 OK
2024-08-05 15:38:29,737 DEBG 'uvicorn-1' stdout output:
INFO: 172.28.0.3:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-08-05 15:38:29,871 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/users?search=bsequeira&limit=10&is_active=true HTTP/1.0" 200 OK
2024-08-05 15:38:37,621 DEBG 'uvicorn-0' stdout output:
INFO: 172.28.0.3:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-08-05 15:38:39,779 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "POST /api/events?org= HTTP/1.0" 201 Created
2024-08-05 15:38:39,788 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/jobs/6303 HTTP/1.0" 200 OK
2024-08-05 15:38:39,817 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/labels?job_id=6303&org=&page_size=500&page=1 HTTP/1.0" 200 OK
2024-08-05 15:38:39,842 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/jobs?org=&type=ground_truth&task_id=12&page_size=12 HTTP/1.0" 200 OK
2024-08-05 15:38:39,874 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/jobs/6303/data/meta?org= HTTP/1.0" 200 OK
2024-08-05 15:38:39,901 DEBG 'uvicorn-0' stderr output:
[2024-08-05 15:38:39,901] INFO cvat.apps.engine.cache: Starting to get chunk from cache: key 12_0_Quality.COMPRESSED
2024-08-05 15:38:39,902 DEBG 'uvicorn-0' stderr output:
[2024-08-05 15:38:39,902] INFO cvat.apps.engine.cache: Ending to get chunk from cache: key 12_0_Quality.COMPRESSED, is_cached True
2024-08-05 15:38:39,903 DEBG 'uvicorn-0' stdout output:
INFO: 192.168.1.67:0 - "GET /api/jobs/6303/data?org=&quality=compressed&type=chunk&number=0 HTTP/1.0" 200 OK
2024-08-05 15:38:39,981 DEBG 'uvicorn-1' stdout output:
INFO: 192.168.1.67:0 - "GET /api/jobs/6303/annotations?org= HTTP/1.0" 200 OK
2024-08-05 15:38:40,004 DEBG 'uvicorn-1' stdout output:
INFO: 192.168.1.67:0 - "GET /api/issues?job_id=6303&org=&page_size=500&page=1 HTTP/1.0" 200 OK
2024-08-05 15:38:40,027 DEBG 'uvicorn-1' stdout output:
INFO: 192.168.1.67:0 - "GET /api/comments?job_id=6303&org=&page_size=500&page=1 HTTP/1.0" 200 OK
2024-08-05 15:38:46,145 DEBG 'uvicorn-1' stdout output:
INFO: 172.28.0.3:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-08-05 15:38:53,507 DEBG 'uvicorn-0' stdout output:
INFO: 172.28.0.3:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-08-05 15:39:03,993 DEBG 'uvicorn-0' stdout output:
INFO: 172.28.0.3:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-08-05 15:39:09,055 DEBG 'uvicorn-1' stderr output:
[2024-08-05 15:39:09,055] INFO cvat.apps.engine.cache: Starting to get chunk from cache: key 12_0_Quality.ORIGINAL
2024-08-05 15:39:09,056 DEBG 'uvicorn-1' stderr output:
[2024-08-05 15:39:09,056] INFO cvat.apps.engine.cache: Ending to get chunk from cache: key 12_0_Quality.ORIGINAL, is_cached True
2024-08-05 15:39:09,211 DEBG 'uvicorn-1' stdout output:
INFO: 192.168.1.67:0 - "POST /api/lambda/functions/cvat-yolov3-tiny-detector?org= HTTP/1.0" 200 OK
2024-08-05 15:39:10,282 DEBG 'uvicorn-1' stdout output:
INFO: 172.28.0.3:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
"
and the nuclio container log is
"
24.08.05 15:37:30.310 (I) cessor.healthcheck.server Listening {"listenAddress": ":8082"}
24.08.05 15:37:30.310 (D) processor.http Creating worker pool {"num": 2}
24.08.05 15:37:30.310 (D) sor.http.w1.python.logger Creating listener socket {"path": "/tmp/nuclio-rpc-cqof3elgolns739tfvf0.sock"}
24.08.05 15:37:30.310 (D) sor.http.w0.python.logger Creating listener socket {"path": "/tmp/nuclio-rpc-cqof3elgolns739tfvfg.sock"}
24.08.05 15:37:30.310 (D) sor.http.w1.python.logger Creating listener socket {"path": "/tmp/nuclio-rpc-cqof3elgolns739tfvg0.sock"}
24.08.05 15:37:30.310 (D) sor.http.w0.python.logger Creating listener socket {"path": "/tmp/nuclio-rpc-cqof3elgolns739tfvgg.sock"}
24.08.05 15:37:30.310 (D) sor.http.w1.python.logger Using Python wrapper script path {"path": "/opt/nuclio/_nuclio_wrapper.py"}
24.08.05 15:37:30.310 (D) sor.http.w0.python.logger Using Python wrapper script path {"path": "/opt/nuclio/_nuclio_wrapper.py"}
24.08.05 15:37:30.310 (D) sor.http.w1.python.logger Using Python handler {"handler": "main:handler"}
24.08.05 15:37:30.310 (D) sor.http.w0.python.logger Using Python handler {"handler": "main:handler"}
24.08.05 15:37:30.310 (D) sor.http.w1.python.logger Using Python executable {"path": "/usr/local/bin/python3"}
24.08.05 15:37:30.310 (D) sor.http.w1.python.logger Setting PYTHONPATH {"value": "PYTHONPATH=/opt/nuclio:/opt/nuclio/opencv/python/cv2/python-3.8"}
24.08.05 15:37:30.310 (D) sor.http.w1.python.logger Running wrapper {"command": "/usr/local/bin/python3 -u /opt/nuclio/_nuclio_wrapper.py --handler main:handler --event-socket-path /tmp/nuclio-rpc-cqof3elgolns739tfvf0.sock --control-socket-path /tmp/nuclio-rpc-cqof3elgolns739tfvg0.sock --platform-kind local --namespace nuclio --worker-id 1 --trigger-kind http --trigger-name myHttpTrigger --decode-event-strings"}
24.08.05 15:37:30.310 (D) sor.http.w0.python.logger Using Python executable {"path": "/usr/local/bin/python3"}
24.08.05 15:37:30.310 (D) sor.http.w0.python.logger Setting PYTHONPATH {"value": "PYTHONPATH=/opt/nuclio:/opt/nuclio/opencv/python/cv2/python-3.8"}
24.08.05 15:37:30.310 (D) sor.http.w0.python.logger Running wrapper {"command": "/usr/local/bin/python3 -u /opt/nuclio/_nuclio_wrapper.py --handler main:handler --event-socket-path /tmp/nuclio-rpc-cqof3elgolns739tfvfg.sock --control-socket-path /tmp/nuclio-rpc-cqof3elgolns739tfvgg.sock --platform-kind local --namespace nuclio --worker-id 0 --trigger-kind http --trigger-name myHttpTrigger --decode-event-strings"}
24.08.05 15:37:30.419 (I) sor.http.w0.python.logger Wrapper connected {"wid": 0, "pid": 34}
24.08.05 15:37:30.419 (D) sor.http.w0.python.logger Creating control connection {"wid": 0}
24.08.05 15:37:30.419 (D) sor.http.w0.python.logger Control connection created {"wid": 0}
24.08.05 15:37:30.419 (D) sor.http.w0.python.logger Waiting for start
24.08.05 15:37:30.419 (I) sor.http.w0.python.logger Init context... 0% {"worker_id": "0"}
24.08.05 15:37:30.433 (I) sor.http.w1.python.logger Wrapper connected {"wid": 1, "pid": 33}
24.08.05 15:37:30.434 (D) sor.http.w1.python.logger Creating control connection {"wid": 1}
24.08.05 15:37:30.434 (D) sor.http.w1.python.logger Control connection created {"wid": 1}
24.08.05 15:37:30.434 (D) sor.http.w1.python.logger Waiting for start
24.08.05 15:37:30.434 (I) sor.http.w1.python.logger Init context... 0% {"worker_id": "1"}
24.08.05 15:37:30.434 (I) sor.http.w0.python.logger Using GPU for inference {"worker_id": "0"}
24.08.05 15:37:30.434 (I) sor.http.w0.python.logger Init context...100% {"worker_id": "0"}
24.08.05 15:37:30.434 (D) sor.http.w0.python.logger Started
24.08.05 15:37:30.434 (D) sor.http.w0.python.logger Sending data on control socket {"data_length": 2, "worker_id": "0"}
24.08.05 15:37:30.434 (D) sor.http.w0.python.logger Received control message {"messageKind": "wrapperInitialized"}
24.08.05 15:37:30.449 (I) sor.http.w1.python.logger Using GPU for inference {"worker_id": "1"}
24.08.05 15:37:30.449 (I) sor.http.w1.python.logger Init context...100% {"worker_id": "1"}
24.08.05 15:37:30.449 (D) sor.http.w1.python.logger Started
24.08.05 15:37:30.449 (I) processor Starting event timeout watcher {"timeout": "30s"}
24.08.05 15:37:30.449 (D) .webadmin.server.triggers Registered custom route {"routeName": "triggers", "stream": false, "pattern": "/{id}/stats", "method": "GET"}
24.08.05 15:37:30.449 (D) sor.http.w1.python.logger Sending data on control socket {"data_length": 2, "worker_id": "1"}
24.08.05 15:37:30.449 (D) processor.webadmin.server Registered resource {"name": "triggers"}
24.08.05 15:37:30.449 (W) processor No metric sinks configured, metrics will not be published
24.08.05 15:37:30.449 (D) sor.http.w1.python.logger Received control message {"messageKind": "wrapperInitialized"}
24.08.05 15:37:30.449 (D) processor Starting triggers {"triggersError": "json: unsupported value: encountered a cycle via *http.http"}
24.08.05 15:37:30.450 (I) processor.http Starting {"listenAddress": ":8080", "readBufferSize": 16384, "maxRequestBodySize": 33554432, "reduceMemoryUsage": false, "cors": null}
24.08.05 15:37:30.450 (I) processor.webadmin.server Listening {"listenAddress": ":8081"}
24.08.05 15:37:30.450 (D) processor Processor started
24.08.05 15:38:12.427 (I) sor.http.w0.python.logger Run YOLOv3-tiny model {"worker_id": "0"}
24.08.05 15:38:12.427 (I) sor.http.w0.python.logger Input data type: <class 'dict'> {"worker_id": "0"}
24.08.05 15:38:12.428 (I) sor.http.w0.python.logger Input data preview: {'image': '/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJB... {"worker_id": "0"}
[ WARN:0@42.029] global net_impl.cpp:178 setUpNet DNN module was not built with CUDA backend; switching to CPU
24.08.05 15:38:12.587 (I) sor.http.w0.python.logger Number of detections: 1 {"worker_id": "0"}
24.08.05 15:38:12.588 (I) sor.http.w0.python.logger Results preview: [{'confidence': '0.9934592843055725', 'label': 'M0', 'points': [69, 153, 611, 597], 'type': 'rectangle'}] {"worker_id": "0"}
24.08.05 15:39:09.060 (I) sor.http.w1.python.logger Run YOLOv3-tiny model {"worker_id": "1"}
24.08.05 15:39:09.060 (I) sor.http.w1.python.logger Input data type: <class 'dict'> {"worker_id": "1"}
24.08.05 15:39:09.060 (I) sor.http.w1.python.logger Input data preview: {'image': '/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJB... {"worker_id": "1"}
[ WARN:0@98.641] global net_impl.cpp:178 setUpNet DNN module was not built with CUDA backend; switching to CPU
24.08.05 15:39:09.208 (I) sor.http.w1.python.logger Number of detections: 1 {"worker_id": "1"}
24.08.05 15:39:09.208 (I) sor.http.w1.python.logger Results preview: [{'confidence': '0.9934592843055725', 'label': 'M0', 'points': [69, 153, 611, 597], 'type': 'rectangle'}] {"worker_id": "1"}
"
My python file is
"
import cv2
import numpy as np
import json
import base64
import io
def init_context(context):
    context.logger.info("Init context... 0%")
    net = cv2.dnn.readNet("/path/to/weights", "/path/tp/config")
    # Optimize for GPU if available, otherwise optimize for CPU
    try:
        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
        context.logger.info("Using GPU for inference")
    except:
        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
        context.logger.info("Using CPU for inference")
    with open("obj_names.names", "r") as f:
        classes = [line.strip() for line in f.readlines()]
    context.user_data.model = net
    context.user_data.classes = classes
    context.logger.info("Init context...100%")
def handler(context, event):
    context.logger.info("Run YOLOv3-tiny model")
    data = event.body
    context.logger.info(f"Input data type: {type(data)}")
    context.logger.info(f"Input data preview: {str(data)[:100]}...")  # Log first 100 characters
    # Parse the input data
    if isinstance(event.body, dict):
        data = event.body
    else:
        data = json.loads(event.body.decode('utf-8'))
    # Decode the base64 image
    image_data = base64.b64decode(data["image"])
    buf = np.frombuffer(image_data, dtype=np.uint8)
    # Decode the image
    image = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # Get the threshold from the request, or use a default value
    threshold = float(data.get("threshold", 0.1))  # Using 0.1 as default, adjust as needed
    # Prepare image for the network
    blob = cv2.dnn.blobFromImage(image, 1/255.0, (416, 416), swapRB=True, crop=False)
    context.user_data.model.setInput(blob)
    # Get output layer names
    layer_names = context.user_data.model.getLayerNames()
    output_layers = [layer_names[i - 1] for i in context.user_data.model.getUnconnectedOutLayers()]
    # Run forward pass
    outs = context.user_data.model.forward(output_layers)
    # Process detections
    class_ids = []
    confidences = []
    boxes = []
    width, height = image.shape[1], image.shape[0]
    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > threshold:
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)
    # Apply non-maximum suppression
    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    #context.logger.info(f"Indexes after NMS: {indexes}")
    #context.logger.info(f"box is after NMS: {boxes[indexes[0]]}")
    # Prepare results
    results = []
    for i in range(len(boxes)):
        if i in indexes:
            x, y, w, h = boxes[i]
            results.append({
                "confidence": str(confidences[i]),
                "label": context.user_data.classes[class_ids[i]],
                "points": [x, y, x + w, y + h],
                "type": "rectangle"
            })
    context.logger.info(f"Number of detections: {len(results)}")
    context.logger.info(f"Results preview: {results[:]}")  # Log first 2 results
    return context.Response(body=json.dumps(results), headers={},
                            content_type='application/json', status_code=200)
" | closed | 2024-08-05T15:57:36Z | 2024-08-14T14:40:16Z | https://github.com/cvat-ai/cvat/issues/8260 | [] | brenton95 | 0 |
JoeanAmier/TikTokDownloader | api | 248 | tiktok下载不了 | https://www.tiktok.com/@deliverdeals/video/7336013156684188959 下载不了,可以录个屏怎么用这个下载tiktok视频吗 谢谢 | open | 2024-07-13T15:21:39Z | 2024-07-13T15:21:39Z | https://github.com/JoeanAmier/TikTokDownloader/issues/248 | [] | shan-ge-cpu | 0 |
comfyanonymous/ComfyUI | pytorch | 7,255 | When running | When starting, it disconnects. | closed | 2025-03-15T13:55:53Z | 2025-03-15T15:47:57Z | https://github.com/comfyanonymous/ComfyUI/issues/7255 | [] | salimb23 | 1 |
automagica/automagica | automation | 70 | After upgrade to 1.0.8, Bots remain offline on Dev portal | Issue: Bots will not connect following upgrade to 1.0.8
Expected behavior: Bots connect
Steps to reproduce issue:
1) Logout: "automagica --logout"
2) Upgrade from 1.0.7 to 1.0.8 via "pip install automagica -G"
3) Login: "automagica --login "16286023-9838-4948-987f-7b35356041b0"
4) Sign in to "https://portal.automagica.dev/"
Following these steps, Bots never connect.
Firewall has all rules for "python.exe", "pythonw.exe" and "automagica.exe" set to Allow.
No error messages appear during login process. Cmd.exe window disappears following apparently completed login process.
| closed | 2019-10-01T17:30:36Z | 2019-10-03T15:52:20Z | https://github.com/automagica/automagica/issues/70 | [] | burque505 | 1 |
tensorpack/tensorpack | tensorflow | 1,216 | OOM error when using the latest code | OOM in memory, no GPU. And the memory used grows all the time, never drops. | closed | 2019-05-28T02:46:51Z | 2019-05-28T03:02:45Z | https://github.com/tensorpack/tensorpack/issues/1216 | [
"unrelated"
] | realwill | 2 |
serengil/deepface | deep-learning | 478 | Expose Bounding Boxes of Detected Faces | None of the methods provided by the DeepFace module return the bounding boxes of the detected faces, but each of the detectors do return this data. I'd like to see this data returned wherever multiple faces can be returned by a DeepFace method. | closed | 2022-05-11T17:48:57Z | 2022-05-11T19:53:29Z | https://github.com/serengil/deepface/issues/478 | [
"question"
] | buckeye17 | 1 |
ludwig-ai/ludwig | computer-vision | 3,659 | Export computer-vision model to ONNX | **Is your feature request related to a problem? Please describe.**
I would like to be able to export computer-vision models to ONNX
**Describe the use case**
[ONNX](https://onnx.ai/) is a format that would be an addition to torchscript. It runs in many environments including, iOS, Android, web, and many more.
**Describe the solution you'd like**
I wrote most of the code. I just need to test it and create a PR for it:
```
class LudwigWrapper(torch.nn.Module):
    def __init__(self, model):
        super(LudwigWrapper, self).__init__()
        self.model = model

    def forward(self, x):
        return self.model({"image_path": x})


def _export_classifier_onnx(model_path, export_path):
    ludwig_model = LudwigModel.load(model_path)
    model = LudwigWrapper(ludwig_model.model)  # Wrap the model
    model.eval()  # inference mode, is this needed.. I think onnx export does this for us
    width = ludwig_model.config["input_features"][0]["preprocessing"]["width"]
    height = ludwig_model.config["input_features"][0]["preprocessing"]["height"]
    example_input = torch.randn(1, 3, width, height, requires_grad=True)
    torch.onnx.export(
        model,
        example_input,
        export_path,
        opset_version=18,
        export_params=True,
        do_constant_folding=True,
        input_names=["input"],
        output_names=["combiner_hidden_1", "output", "combiner_hidden_2"],
    )


def _quantize(path_fp32, path_int8):
    from onnxruntime.quantization import quantize_dynamic

    quantize_dynamic(path_fp32, path_int8)  # type: ignore
```
**Describe alternatives you've considered**
The alternative is to use other formats like CoreML
**Additional context**
Join our slack channel: `#computer-vision`
| closed | 2023-09-23T17:48:43Z | 2024-10-18T17:03:12Z | https://github.com/ludwig-ai/ludwig/issues/3659 | [
"feature",
"help wanted"
] | saad-palapa | 3 |
mirumee/ariadne | api | 180 | Update setup.py to include html and py.typed files in published package | Ariadne now includes `graphql_playground.html` django template and `py.typed` file for enabling typing. We should make sure those two get published together with rest of the project. | closed | 2019-05-20T11:38:25Z | 2019-05-23T13:14:38Z | https://github.com/mirumee/ariadne/issues/180 | [
"roadmap",
"meta"
] | rafalp | 0 |
miguelgrinberg/Flask-Migrate | flask | 257 | --sql option for migrate doesn't make sense | ```
$ flask db migrate --sql
```
yields:
```
Error: Using --sql with --autogenerate does not make any sense
``` | closed | 2019-02-27T03:10:33Z | 2019-06-08T08:45:20Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/257 | [
"question"
] | cancan101 | 3 |
jwkvam/bowtie | plotly | 211 | Initial Update | Hi 👊
This is my first visit to this fine repo, but it seems you have been working hard to keep all dependencies updated so far.
Once you have closed this issue, I'll create separate pull requests for every update as soon as I find one.
That's it for now!
Happy merging! 🤖
| closed | 2018-02-24T01:43:56Z | 2018-02-24T01:50:27Z | https://github.com/jwkvam/bowtie/issues/211 | [] | pyup-bot | 0 |
OFA-Sys/Chinese-CLIP | nlp | 284 | image_b64 is empty | Traceback (most recent call last):
File "/root/Chinese-CLIP/cn_clip/training/main.py", line 350, in <module>
main()
File "/root/Chinese-CLIP/cn_clip/training/main.py", line 298, in main
num_steps_this_epoch = train(model, data, epoch, optimizer, scaler, scheduler, args, steps)
File "/root/Chinese-CLIP/cn_clip/training/train.py", line 165, in train
batch = next(data_iter)
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1346, in _next_data
return self._process_data(data)
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1372, in _process_data
data.reraise()
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/_utils.py", line 722, in reraise
raise exception
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/Taidi/Chinese-CLIP/cn_clip/training/data.py", line 109, in __getitem__
image_b64 = self.txn_imgs.get("{}".format(image_id).encode('utf-8')).tobytes()
AttributeError: 'NoneType' object has no attribute 'tobytes'
Exception in thread [2024-04-08 00:26:44,250] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 114557) of binary: /root/miniconda3/envs/ML/bin/python3
Traceback (most recent call last):
File "/root/miniconda3/envs/ML/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/root/miniconda3/envs/ML/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/distributed/launch.py", line 198, in <module>
main()
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/distributed/launch.py", line 194, in main
launch(args)
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/distributed/launch.py", line 179, in launch
run(args)
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/miniconda3/envs/ML/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
cn_clip/training/main.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-04-08_00:26:44
host : localhost
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 114557)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
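The `NoneType` error means the key lookup failed: `txn.get()` returns `None` when the requested id is not present in the LMDB, and `None` has no `.tobytes()`. A dict-based sketch of the same failure mode (hypothetical ids; a plain dict standing in for the LMDB transaction):

```python
# dict.get, like lmdb's txn.get, returns None for a missing key,
# and None has no .tobytes() -- hence the AttributeError above.
store = {b"0": b"...base64 image bytes..."}

present = store.get("0".encode("utf-8"))
missing = store.get("12345".encode("utf-8"))  # hypothetical image_id not in the LMDB

if missing is None:
    # Fail early with a useful message instead of calling .tobytes() on None.
    message = "image_id 12345 not found in imgs.lmdb"
else:
    message = "ok"
print(message)  # image_id 12345 not found in imgs.lmdb
```

So it is worth checking that every `image_id` emitted by the dataset actually exists as a key in the image LMDB built during preprocessing.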
However, my image_id does retrieve the base64 encoding, and the encoding is valid | open | 2024-04-07T16:30:17Z | 2024-04-14T23:46:31Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/284 | [] | erlan-11 | 7
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,471 | OSError: Caught OSError in DataLoader worker process 3 | Hello, I was training my model and it was working until epoch 148, when I got these errors: <<OSError: Caught OSError in DataLoader worker process 3>> <<OSError: [Errno 5] Input/output error>>.
I'm training the model on a Linux VM.
learning rate 0.0001050 -> 0.0001030
(epoch: 148, iters: 50, time: 5.328, data: 0.004) G_GAN: 1.660 G_L1: 21.545 D_real: 0.006 D_fake: 0.244 G: 23.206 D: 0.125
saving the latest model (epoch 148, total_iters 60000)
(epoch: 148, iters: 150, time: 1.322, data: 0.003) G_GAN: 1.076 G_L1: 34.955 D_real: 0.000 D_fake: 0.642 G: 36.031 D: 0.321
(epoch: 148, iters: 250, time: 1.316, data: 0.004) G_GAN: 2.841 G_L1: 17.667 D_real: 0.607 D_fake: 0.061 G: 20.508 D: 0.334
(epoch: 148, iters: 350, time: 1.338, data: 0.004) G_GAN: 1.837 G_L1: 25.288 D_real: 0.050 D_fake: 0.239 G: 27.126 D: 0.144
(epoch: 148, iters: 450, time: 2.624, data: 0.003) G_GAN: 5.915 G_L1: 23.653 D_real: 0.006 D_fake: 0.003 G: 29.568 D: 0.005
(epoch: 148, iters: 550, time: 1.307, data: 0.004) G_GAN: 1.869 G_L1: 35.894 D_real: 0.004 D_fake: 0.292 G: 37.763 D: 0.148
(epoch: 148, iters: 650, time: 1.308, data: 0.003) G_GAN: 1.511 G_L1: 21.548 D_real: 0.095 D_fake: 0.382 G: 23.059 D: 0.238
(epoch: 148, iters: 750, time: 1.338, data: 0.003) G_GAN: 3.447 G_L1: 22.605 D_real: 0.088 D_fake: 0.038 G: 26.052 D: 0.063
(epoch: 148, iters: 850, time: 2.473, data: 0.004) G_GAN: 3.026 G_L1: 22.714 D_real: 0.017 D_fake: 0.063 G: 25.740 D: 0.040
Traceback (most recent call last):
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/train.py", line 44, in <module>
for i, data in enumerate(dataset): # inner loop within one epoch
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/data/__init__.py", line 90, in __iter__
for i, data in enumerate(self.dataloader):
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
return self._process_data(data)
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
data.reraise()
File "/home/exxact/.local/lib/python3.10/site-packages/torch/_utils.py", line 461, in reraise
raise exception
OSError: Caught OSError in DataLoader worker process 3.
Original Traceback (most recent call last):
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/data/aligned_dataset.py", line 45, in __getitem__
A = AB.crop((0, 0, w2, h))
File "/usr/lib/python3/dist-packages/PIL/Image.py", line 1146, in crop
self.load()
File "/usr/lib/python3/dist-packages/PIL/ImageFile.py", line 235, in load
s = read(self.decodermaxblock)
File "/usr/lib/python3/dist-packages/PIL/JpegImagePlugin.py", line 402, in load_read
s = self.fp.read(read_bytes)
OSError: [Errno 5] Input/output error
Traceback (most recent call last):
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/train.py", line 44, in <module>
for i, data in enumerate(dataset): # inner loop within one epoch
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/data/__init__.py", line 90, in __iter__
for i, data in enumerate(self.dataloader):
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
return self._process_data(data)
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
data.reraise()
File "/home/exxact/.local/lib/python3.10/site-packages/torch/_utils.py", line 461, in reraise
raise exception
OSError: Caught OSError in DataLoader worker process 3.
Original Traceback (most recent call last):
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/data/aligned_dataset.py", line 45, in __getitem__
A = AB.crop((0, 0, w2, h))
File "/usr/lib/python3/dist-packages/PIL/Image.py", line 1146, in crop
May I ask for help understanding where this comes from? | open | 2022-08-19T07:30:10Z | 2022-09-20T20:48:18Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1471 | [] | FlorianRegisBamb | 3
zihangdai/xlnet | tensorflow | 262 | Is Next Sentence Prediction implemented in the code ? | Hi,
You mention in the paper that you excluded the next-sentence prediction objective from XLNet since it didn't introduce any improvements. However, in the ablation study you also report the performance when using NSP.
My question is: is NSP implemented here in your GitHub repo or not?
Thanks a lot | open | 2020-04-21T11:54:33Z | 2020-04-21T11:54:33Z | https://github.com/zihangdai/xlnet/issues/262 | [] | GhaliaRehawi | 0 |
HumanSignal/labelImg | deep-learning | 33 | Question / Fast Annotation | Hi,
I already have a folder with a database, where the images only contain the object of interest to classify.
Does there exist any fast method to create the XML file for all the images of that folder without selecting the BB on each one of them? (The object fills the entire image.)
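For anyone landing here, a minimal sketch of the idea (a hypothetical helper, not part of labelImg; it assumes you already know each image's size and would write one Pascal VOC annotation per image, with a single box covering the whole frame):

```python
import xml.etree.ElementTree as ET

def full_image_annotation(filename, width, height, label):
    """Build a Pascal VOC annotation whose single box covers the whole image."""
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = label
    box = ET.SubElement(obj, "bndbox")
    ET.SubElement(box, "xmin").text = "1"
    ET.SubElement(box, "ymin").text = "1"
    ET.SubElement(box, "xmax").text = str(width)
    ET.SubElement(box, "ymax").text = str(height)
    return ET.tostring(ann, encoding="unicode")

xml_text = full_image_annotation("cat_001.jpg", 640, 480, "cat")
```

Looping this over the folder (reading each image's size with PIL or OpenCV) would generate all the XML files without opening labelImg at all.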
Thanks | closed | 2016-12-19T20:17:45Z | 2016-12-21T00:03:43Z | https://github.com/HumanSignal/labelImg/issues/33 | [] | Pessanha24 | 0 |
vimalloc/flask-jwt-extended | flask | 236 | How to redirect to login when getting "Signature verification failed"? | When testing tampering with access token, the page shows the error "msg": "Signature verification failed"
We'd like to redirect the user to the login page instead. We tried using the decorators below, as described in the [Changing Default Behaviors docs](https://flask-jwt-extended.readthedocs.io/en/latest/changing_default_behavior.html):
@jwt.**invalid_token_loader** _...raises: "TypeError: invalid_token_callback() takes 0 positional arguments but 1 was given"_
@jwt.**expired_token_loader**
@jwt.**handle_expired_error**
@jwt.**claims_verification_failed_loader**
But none worked.
| closed | 2019-03-23T03:45:35Z | 2019-03-26T20:54:37Z | https://github.com/vimalloc/flask-jwt-extended/issues/236 | [] | kwagdy | 2 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,089 | Attribute Error: 'str' object has no attribute 'tobytes' | I was trying to use the localhost version of this synthesis AI by following the tutorial to get it working on a local server, but every time I try to synthesize text in the command prompt, this is the error that pops up and it doesn't generate the audio:
line 39, in generate
fwav = io.StringIO(output.tobytes())
AttributeError: 'str' object has no attribute 'tobytes' | closed | 2022-06-27T16:19:35Z | 2022-06-27T16:50:43Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1089 | [] | cringegaming64 | 0 |
lorien/grab | web-scraping | 142 | Proxylist proxy format | Why is the proxy format (with login and password) in the proxy list not described in the docs in the first place? I had to dig into the code to find out that it is host:port:username:password. And secondly, why is it like this? Why not the standard URL form - username:password@proxy.server.url.com:80 ?
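For illustration, converting between the two notations is straightforward either way — a hedged sketch (these helper functions are hypothetical, not part of Grab):

```python
def proxy_line_to_url(line, scheme="http"):
    """Convert Grab's host:port:username:password line to a proxy URL."""
    host, port, username, password = line.split(":")
    return f"{scheme}://{username}:{password}@{host}:{port}"

def proxy_url_to_line(url):
    """Convert username:password@host:port back to the colon-separated form."""
    creds, _, hostport = url.split("://", 1)[-1].rpartition("@")
    username, _, password = creds.partition(":")
    host, _, port = hostport.partition(":")
    return f"{host}:{port}:{username}:{password}"

print(proxy_line_to_url("proxy.server.url.com:80:user:secret"))
# -> http://user:secret@proxy.server.url.com:80
```

Still, documenting the expected format (or accepting the URL form directly) would avoid the need to read the source.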
| closed | 2015-09-03T20:33:52Z | 2015-11-22T19:47:58Z | https://github.com/lorien/grab/issues/142 | [] | aldarund | 3 |
tqdm/tqdm | pandas | 620 | Drop Py 2.6 support? | https://www.python.org/dev/peps/pep-0361/#release-lifespan
Py 2.6 ceased to have support just shy of five years ago. The build is presently broken. Over a year ago in #411 it was suggested to drop support. I'd suggest one of dropping support, fixing the build, or marking the build as allowed to fail so that PR's don't get marked as failed when they haven't introduced any issues.
I get that there's only been one broken build so it's not like this has been a long term issue. But, I figured I'd put a ticket out here for discussion anyways.
Cheers,
-kyle
| closed | 2018-09-28T15:38:43Z | 2018-09-28T22:11:33Z | https://github.com/tqdm/tqdm/issues/620 | [
"duplicate 🗐",
"question/docs ‽"
] | altendky | 1 |
keras-team/keras | pytorch | 20,573 | as_list() is not defined on an unknown TensorShape. | ```
Ubuntu 24.04
Python 3.12.3
keras 3.7.0
keras-core 0.1.7
keras-cv 0.9.0
```
Following this example, the dataset visualizes/displays the bounding boxes properly. However, the augmenter fails:
train_ds = train_ds.map(augmenter, num_parallel_calls=tf.data.AUTOTUNE)
https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/yolov8.ipynb
```
augmenter = keras.Sequential(
    layers=[
        keras_cv.layers.AutoContrast((0, 255)),
    ]
)
train_ds = train_data.map(load_dataset, num_parallel_calls=tf.data.AUTOTUNE)
train_ds = train_ds.shuffle(BATCH_SIZE * 4)
train_ds = train_ds.ragged_batch(BATCH_SIZE, drop_remainder=True)
train_ds = train_ds.map(augmenter, num_parallel_calls=tf.data.AUTOTUNE)
```
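(For context, if I understand the error right: `ragged_batch` produces elements whose static shape is unknown, which is exactly what the `as_list()` error complains about. A minimal illustration of that failure mode, independent of KerasCV:)

```python
import tensorflow as tf

# A TensorShape with unknown rank, like the static shape of a ragged batch element.
shape = tf.TensorShape(None)
print(shape.rank)  # None

try:
    shape.as_list()
except ValueError as err:
    print(err)  # as_list() is not defined on an unknown TensorShape.
```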
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[14], line 4
2 train_ds = train_ds.shuffle(BATCH_SIZE * 4)
3 train_ds = train_ds.ragged_batch(BATCH_SIZE, drop_remainder=True)
----> 4 train_ds = train_ds.map(augmenter, num_parallel_calls=tf.data.AUTOTUNE)
File ~/test/lib/python3.12/site-packages/tensorflow/python/data/ops/dataset_ops.py:2341, in DatasetV2.map(self, map_func, num_parallel_calls, deterministic, synchronous, use_unbounded_threadpool, name)
2336 # Loaded lazily due to a circular dependency (dataset_ops -> map_op ->
2337 # dataset_ops).
2338 # pylint: disable=g-import-not-at-top,protected-access
2339 from tensorflow.python.data.ops import map_op
-> 2341 return map_op._map_v2(
2342 self,
2343 map_func,
2344 num_parallel_calls=num_parallel_calls,
2345 deterministic=deterministic,
2346 synchronous=synchronous,
2347 use_unbounded_threadpool=use_unbounded_threadpool,
2348 name=name,
2349 )
File ~/test/lib/python3.12/site-packages/tensorflow/python/data/ops/map_op.py:57, in _map_v2(input_dataset, map_func, num_parallel_calls, deterministic, synchronous, use_unbounded_threadpool, name)
51 if synchronous:
52 raise ValueError(
53 "`synchronous` is not supported with `num_parallel_calls`, but"
54 " `num_parallel_calls` was set to ",
55 num_parallel_calls,
56 )
---> 57 return _ParallelMapDataset(
58 input_dataset,
59 map_func,
60 num_parallel_calls=num_parallel_calls,
61 deterministic=deterministic,
62 preserve_cardinality=True,
63 use_unbounded_threadpool=use_unbounded_threadpool,
64 name=name)
File ~/test/lib/python3.12/site-packages/tensorflow/python/data/ops/map_op.py:202, in _ParallelMapDataset.__init__(self, input_dataset, map_func, num_parallel_calls, deterministic, use_inter_op_parallelism, preserve_cardinality, use_legacy_function, use_unbounded_threadpool, name)
200 self._input_dataset = input_dataset
201 self._use_inter_op_parallelism = use_inter_op_parallelism
--> 202 self._map_func = structured_function.StructuredFunctionWrapper(
203 map_func,
204 self._transformation_name(),
205 dataset=input_dataset,
206 use_legacy_function=use_legacy_function)
207 if deterministic is None:
208 self._deterministic = "default"
File ~/test/lib/python3.12/site-packages/tensorflow/python/data/ops/structured_function.py:265, in StructuredFunctionWrapper.__init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_kwargs)
258 warnings.warn(
259 "Even though the `tf.config.experimental_run_functions_eagerly` "
260 "option is set, this option does not apply to tf.data functions. "
261 "To force eager execution of tf.data functions, please use "
262 "`tf.data.experimental.enable_debug_mode()`.")
263 fn_factory = trace_tf_function(defun_kwargs)
--> 265 self._function = fn_factory()
266 # There is no graph to add in eager mode.
267 add_to_graph &= not context.executing_eagerly()
File ~/test/lib/python3.12/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py:1251, in Function.get_concrete_function(self, *args, **kwargs)
1249 def get_concrete_function(self, *args, **kwargs):
1250 # Implements PolymorphicFunction.get_concrete_function.
-> 1251 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
1252 concrete._garbage_collector.release() # pylint: disable=protected-access
1253 return concrete
File ~/test/lib/python3.12/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py:1221, in Function._get_concrete_function_garbage_collected(self, *args, **kwargs)
1219 if self._variable_creation_config is None:
1220 initializers = []
-> 1221 self._initialize(args, kwargs, add_initializers_to=initializers)
1222 self._initialize_uninitialized_variables(initializers)
1224 if self._created_variables:
1225 # In this case we have created variables on the first call, so we run the
1226 # version which is guaranteed to never create variables.
File ~/test/lib/python3.12/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py:696, in Function._initialize(self, args, kwds, add_initializers_to)
691 self._variable_creation_config = self._generate_scoped_tracing_options(
692 variable_capturing_scope,
693 tracing_compilation.ScopeType.VARIABLE_CREATION,
694 )
695 # Force the definition of the function for these arguments
--> 696 self._concrete_variable_creation_fn = tracing_compilation.trace_function(
697 args, kwds, self._variable_creation_config
698 )
700 def invalid_creator_scope(*unused_args, **unused_kwds):
701 """Disables variable creation."""
File ~/test/lib/python3.12/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py:178, in trace_function(args, kwargs, tracing_options)
175 args = tracing_options.input_signature
176 kwargs = {}
--> 178 concrete_function = _maybe_define_function(
179 args, kwargs, tracing_options
180 )
182 if not tracing_options.bind_graph_to_function:
183 concrete_function._garbage_collector.release() # pylint: disable=protected-access
File ~/test/lib/python3.12/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py:283, in _maybe_define_function(args, kwargs, tracing_options)
281 else:
282 target_func_type = lookup_func_type
--> 283 concrete_function = _create_concrete_function(
284 target_func_type, lookup_func_context, func_graph, tracing_options
285 )
287 if tracing_options.function_cache is not None:
288 tracing_options.function_cache.add(
289 concrete_function, current_func_context
290 )
File ~/test/lib/python3.12/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py:310, in _create_concrete_function(function_type, type_context, func_graph, tracing_options)
303 placeholder_bound_args = function_type.placeholder_arguments(
304 placeholder_context
305 )
307 disable_acd = tracing_options.attributes and tracing_options.attributes.get(
308 attributes_lib.DISABLE_ACD, False
309 )
--> 310 traced_func_graph = func_graph_module.func_graph_from_py_func(
311 tracing_options.name,
312 tracing_options.python_function,
313 placeholder_bound_args.args,
314 placeholder_bound_args.kwargs,
315 None,
316 func_graph=func_graph,
317 add_control_dependencies=not disable_acd,
318 arg_names=function_type_utils.to_arg_names(function_type),
319 create_placeholders=False,
320 )
322 transform.apply_func_graph_transforms(traced_func_graph)
324 graph_capture_container = traced_func_graph.function_captures
File ~/test/lib/python3.12/site-packages/tensorflow/python/framework/func_graph.py:1059, in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, create_placeholders)
1056 return x
1058 _, original_func = tf_decorator.unwrap(python_func)
-> 1059 func_outputs = python_func(*func_args, **func_kwargs)
1061 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
1062 # TensorArrays and `None`s.
1063 func_outputs = variable_utils.convert_variables_to_tensors(func_outputs)
File ~/test/lib/python3.12/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py:599, in Function._generate_scoped_tracing_options.<locals>.wrapped_fn(*args, **kwds)
595 with default_graph._variable_creator_scope(scope, priority=50): # pylint: disable=protected-access
596 # __wrapped__ allows AutoGraph to swap in a converted function. We give
597 # the function a weak reference to itself to avoid a reference cycle.
598 with OptionalXlaContext(compile_with_xla):
--> 599 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
600 return out
File ~/test/lib/python3.12/site-packages/tensorflow/python/data/ops/structured_function.py:231, in StructuredFunctionWrapper.__init__.<locals>.trace_tf_function.<locals>.wrapped_fn(*args)
230 def wrapped_fn(*args): # pylint: disable=missing-docstring
--> 231 ret = wrapper_helper(*args)
232 ret = structure.to_tensor_list(self._output_structure, ret)
233 return [ops.convert_to_tensor(t) for t in ret]
File ~/test/lib/python3.12/site-packages/tensorflow/python/data/ops/structured_function.py:161, in StructuredFunctionWrapper.__init__.<locals>.wrapper_helper(*args)
159 if not _should_unpack(nested_args):
160 nested_args = (nested_args,)
--> 161 ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
162 ret = variable_utils.convert_variables_to_tensors(ret)
163 if _should_pack(ret):
File ~/test/lib/python3.12/site-packages/tensorflow/python/autograph/impl/api.py:690, in convert.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
688 try:
689 with conversion_ctx:
--> 690 return converted_call(f, args, kwargs, options=options)
691 except Exception as e: # pylint:disable=broad-except
692 if hasattr(e, 'ag_error_metadata'):
File ~/test/lib/python3.12/site-packages/tensorflow/python/autograph/impl/api.py:377, in converted_call(f, args, kwargs, caller_fn_scope, options)
374 return _call_unconverted(f, args, kwargs, options)
376 if not options.user_requested and conversion.is_allowlisted(f):
--> 377 return _call_unconverted(f, args, kwargs, options)
379 # internal_convert_user_code is for example turned off when issuing a dynamic
380 # call conversion from generated code while in nonrecursive mode. In that
381 # case we evidently don't want to recurse, but we still have to convert
382 # things like builtins.
383 if not options.internal_convert_user_code:
File ~/test/lib/python3.12/site-packages/tensorflow/python/autograph/impl/api.py:459, in _call_unconverted(f, args, kwargs, options, update_cache)
456 return f.__self__.call(args, kwargs)
458 if kwargs is not None:
--> 459 return f(*args, **kwargs)
460 return f(*args)
File ~/test/lib/python3.12/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File ~/test/lib/python3.12/site-packages/optree/ops.py:747, in tree_map(func, tree, is_leaf, none_is_leaf, namespace, *rests)
745 leaves, treespec = _C.flatten(tree, is_leaf, none_is_leaf, namespace)
746 flat_args = [leaves] + [treespec.flatten_up_to(r) for r in rests]
--> 747 return treespec.unflatten(map(func, *flat_args))
ValueError: as_list() is not defined on an unknown TensorShape.
```
| open | 2024-12-01T12:50:32Z | 2024-12-04T18:41:21Z | https://github.com/keras-team/keras/issues/20573 | [
"type:Bug"
] | apiszcz | 2 |
tqdm/tqdm | jupyter | 639 | tensorflow/core/kernels/mkl_concat_op.cc:363] Check failed: dnn Concat Create_F32(&mkl_context.prim_concat, __null, N, &mkl_context.lt_inputs[0]) == E_SUCCESS (-1 vs. 0) | I am new to TensorFlow. When I ran a deep neural network program, an error happened and I don't know what to do. Can you help me? | closed | 2018-11-11T13:36:54Z | 2018-11-14T01:35:26Z | https://github.com/tqdm/tqdm/issues/639 | [
"invalid ⛔"
] | yjyGo | 3 |
guohongze/adminset | django | 119 | How to modify the device type | How do I modify the device type, and where do I change it? | open | 2019-07-23T03:37:34Z | 2019-07-23T03:37:34Z | https://github.com/guohongze/adminset/issues/119 | [] | smartqu | 0 |
proplot-dev/proplot | matplotlib | 210 | Figure size/aspect for projections is not working anymore with the last version of Matplotlib | ### Description
I was trying to make a new environment with updated packages. With the new versions I get a colorbar **extend** warning, but the bigger problem is that the size/aspect of figures with projections is no longer correct. I think it comes from the latest version, **Matplotlib 3.3**. Here is the problem with a random example:
### Steps to reproduce
```python
import proplot as plot
import xarray as xr
da = xr.tutorial.open_dataset('air_temperature').air - 273.15
clim = da.groupby(da['time.season']).mean('time')
f, axs = plot.subplots(proj='cyl', ncols=2, nrows=2)
for i, ax in enumerate(axs):
m = ax.contourf(clim.isel(season=i), levels=plot.arange(-30,30,5), extend='both', cmap='CoolWarm')
ax.format(
labels = True, coast = True, borders = True, lonlines=30, latlines=15,
latlim=(clim.lat.min().values, clim.lat.max().values),
lonlim=(clim.lon.min().values, clim.lon.max().values),
title=clim.isel(season=i).season.values
)
f.colorbar(m, label='Near-Surface Air Temperature [°C]')
```
> /data/mlalande/miniconda3/envs/phd_v2/lib/python3.8/site-packages/proplot/figure.py:1158: MatplotlibDeprecationWarning: The 'extend' parameter to Colorbar has no effect because it is overridden by the mappable; it is deprecated since 3.3 and will be removed two minor releases later.
> return super().colorbar(*args, cax=cax, **kwargs)
**Actual behavior**:

**Expected behavior**
And with my previous environment I was having this:

### Proplot version
I passed from: ([work.txt](https://github.com/lukelbd/proplot/files/4948286/work.txt))
- proplot 0.5.0
- matplotlib 3.1.3
to: ([phd_v2.txt](https://github.com/lukelbd/proplot/files/4948288/phd_v2.txt))
- proplot 0.6.4
- matplotlib 3.3.0
I tried downgrading Matplotlib to 3.2, and it does solve the problem, but I still have another issue where the colorbar extend doesn't work in certain cases (I didn't manage to reproduce it with a simple example...).
| closed | 2020-07-20T14:36:19Z | 2021-07-04T02:56:28Z | https://github.com/proplot-dev/proplot/issues/210 | [
"bug"
] | mickaellalande | 2 |
cvat-ai/cvat | computer-vision | 9,217 | 500 error after login | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Login
Immediately after Login the following 500 error appears in a popup:
```
[2025-03-17 07:45:32,385] ERROR django.request: Internal Server Error: /api/requests
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 518, in thread_handler
raise exc_info[1]
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 42, in inner
response = await get_response(request)
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
response = await wrapped_callback(
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 468, in __call__
ret = await asyncio.shield(exec_coro)
File "/opt/venv/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 40, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 522, in thread_handler
return func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/viewsets.py", line 124, in view
return self.dispatch(request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/utils/decorators.py", line 46, in _wrapper
return bound_method(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/views/decorators/cache.py", line 62, in _wrapper_view_func
response = view_func(request, *args, **kwargs)
File "/home/django/cvat/apps/engine/views.py", line 3779, in wrapper
return func(*args, **kwargs)
File "/home/django/cvat/apps/engine/views.py", line 3803, in list
user_jobs = self._get_rq_jobs(user_id)
File "/home/django/cvat/apps/engine/views.py", line 3745, in _get_rq_jobs
jobs = self._get_rq_jobs_from_queue(queue, user_id)
File "/home/django/cvat/apps/engine/views.py", line 3722, in _get_rq_jobs_from_queue
if job and is_rq_job_owner(job, user_id):
File "/home/django/cvat/apps/engine/rq.py", line 315, in is_rq_job_owner
return BaseRQMeta.for_job(rq_job).user.id == user_id
File "/home/django/cvat/apps/engine/rq.py", line 196, in user
return UserMeta(self.meta[RQJobMetaField.USER])
KeyError: 'user'
```
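For illustration only, here is a hedged sketch of a defensive version of the failing check. This is NOT the actual CVAT code: the simplified dict-based signature and the `.get()` fallback are assumptions. The idea is that jobs whose RQ meta lacks the `"user"` key (e.g. ones enqueued by an older server version) would be treated as not owned instead of raising `KeyError`:

```python
# Hypothetical defensive rewrite of is_rq_job_owner (assumption: job meta
# is available as a plain dict). Legacy jobs without a "user" entry are
# simply treated as not owned by the requesting user.
def is_rq_job_owner(job_meta: dict, user_id: int) -> bool:
    user = job_meta.get("user")  # may be absent on legacy jobs
    return bool(user) and user.get("id") == user_id

print(is_rq_job_owner({"user": {"id": 7}}, 7))  # True
print(is_rq_job_owner({}, 7))                   # False, no crash
```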
### Expected Behavior
No error message
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
Server version: 2.31.0
UI version: 2.31.0
``` | closed | 2025-03-17T07:50:50Z | 2025-03-17T15:43:54Z | https://github.com/cvat-ai/cvat/issues/9217 | [
"bug"
] | eporsche | 2 |
tensorly/tensorly | numpy | 470 | CP function | CP via ALS is probably the most used function in TensorLy and comes with lots of options. One issue is that due to these successive additions, bugs (see e.g. this [commit](d66110f7c961ce896a051b446a23d69bd54ecc8e)) and undue complexity are slowly creeping in while the code is becoming increasingly hard to read.
Another thing is efficiency: previously it was possible to have a fast version by setting the tolerance to 0 (i.e. no convergence test) but now the decomposition has become increasingly slow.
It might be good to have a review of where we're at, what features are actually needed and simplify the code a little. | open | 2022-12-29T14:57:34Z | 2023-09-14T13:53:42Z | https://github.com/tensorly/tensorly/issues/470 | [] | JeanKossaifi | 8 |
MaartenGr/BERTopic | nlp | 1,587 | Comparing tf-idf and mmr | I want to compare the difference between tf-idf represenations and mmr represenations but when I add mmr as a representation model it replaces the tf-idf represenations. How can i see them side by side like in 6C. Multiple Representations | closed | 2023-10-21T03:22:16Z | 2024-02-28T21:09:22Z | https://github.com/MaartenGr/BERTopic/issues/1587 | [] | maticar92 | 1 |
numpy/numpy | numpy | 27,909 | ENH: Enhanced sliding window functionality | ### Proposed new feature or change:
Hello NumPy team,
When analyzing grouped time series data with rolling windows, libraries like pandas or polars are pretty efficient for standard aggregations.
However, for custom functions applied in grouped rolling contexts, efficiency can suffer dramatically, by more than 100x.
A simple and reasonably efficient workaround can be using NumPy's sliding_window_view where you evaluate sliding windows for each group and stitch them together. Then you can apply custom UDFs like gufuncs rowwise to the rolling windows.
This works, but the current behavior of sliding_window_view trims incomplete windows (which can still be valuable), so the resulting array has a different length than the original.
Proposed Enhancements:
1) Add a "trimmed" Argument to sliding_window_view to allow users to control whether incomplete windows are included.
With trimmed=False, the resulting array could match the original shape i.e. by padding incomplete windows with NaN or a similar mechanism.
Add a "step" argument to allow analysis of various granularities.
2) A specialized grouped_sliding_window_view could be used to address these issues for grouped time series data.
This function could stitch groups of sliding_window_view results into a single "grouped sliding window".
Does adding a "trimmed" and "step" argument to sliding_window_view align with the library's goals?
Would a dedicated grouped_sliding_window_view function be valuable for this type of use case?
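A minimal sketch of what such a function could look like (assumptions on my part: the name and signature are hypothetical, the output is float, incomplete windows are kept by NaN-padding so the result has one row per input element, and `step` is the stride between the elements that make up a window):

```python
import numpy as np

def grouped_sliding_window_view(values, group_ids, window_size, step=1):
    # Hypothetical API: one NaN-padded window per input element, computed
    # independently inside each group.
    values = np.asarray(values, dtype=float)
    group_ids = np.asarray(group_ids)
    out = np.full((values.size, window_size), np.nan)
    span = (window_size - 1) * step  # distance covered by one window
    for g in np.unique(group_ids):
        idx = np.flatnonzero(group_ids == g)
        # Front-pad with NaN so incomplete windows are kept, not trimmed.
        padded = np.concatenate([np.full(span, np.nan), values[idx]])
        win = np.lib.stride_tricks.sliding_window_view(padded, span + 1)
        out[idx] = win[:, ::step]  # keep every `step`-th element per window
    return out

vals = np.arange(17.0)
groups = np.array([1] * 9 + [2] * 8)
windows = grouped_sliding_window_view(vals, groups, window_size=3, step=2)
# windows[4] -> [0., 2., 4.]; windows[0] -> [nan, nan, 0.]
```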
Here is an output example:
```
┌───────┬───────┬─────────────────────────┬────────────────────────────┬────────────────────┐
│ group ┆ value ┆ grouped_sliding_windows ┆ remark ┆ grouped_rolling │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ array[f64, 3] ┆ str ┆ list[f64] │
╞═══════╪═══════╪═════════════════════════╪════════════════════════════╪════════════════════╡
│ 1 ┆ 0.0 ┆ [NaN, NaN, 0.0] ┆ <- incomplete window added ┆ [0.0] │
│ 1 ┆ 1.0 ┆ [NaN, NaN, 1.0] ┆ <- incomplete window added ┆ [1.0] │
│ 1 ┆ 2.0 ┆ [NaN, 0.0, 2.0] ┆ <- incomplete window added ┆ [0.0, 2.0] │
│ 1 ┆ 3.0 ┆ [NaN, 1.0, 3.0] ┆ <- incomplete window added ┆ [1.0, 3.0] │
│ 1 ┆ 4.0 ┆ [0.0, 2.0, 4.0] ┆ <- sliding_window_view ┆ [0.0, 2.0, 4.0] │
│ 1 ┆ 5.0 ┆ [1.0, 3.0, 5.0] ┆ <- sliding_window_view ┆ [1.0, 3.0, 5.0] │
│ 1 ┆ 6.0 ┆ [2.0, 4.0, 6.0] ┆ <- sliding_window_view ┆ [2.0, 4.0, 6.0] │
│ 1 ┆ 7.0 ┆ [3.0, 5.0, 7.0] ┆ <- sliding_window_view ┆ [3.0, 5.0, 7.0] │
│ 1 ┆ 8.0 ┆ [4.0, 6.0, 8.0] ┆ <- sliding_window_view ┆ [4.0, 6.0, 8.0] │
│ 2 ┆ 9.0 ┆ [NaN, NaN, 9.0] ┆ <- incomplete window added ┆ [9.0] │
│ 2 ┆ 10.0 ┆ [NaN, NaN, 10.0] ┆ <- incomplete window added ┆ [10.0] │
│ 2 ┆ 11.0 ┆ [NaN, 9.0, 11.0] ┆ <- incomplete window added ┆ [9.0, 11.0] │
│ 2 ┆ 12.0 ┆ [NaN, 10.0, 12.0] ┆ <- incomplete window added ┆ [10.0, 12.0] │
│ 2 ┆ 13.0 ┆ [9.0, 11.0, 13.0] ┆ <- sliding_window_view ┆ [9.0, 11.0, 13.0] │
│ 2 ┆ 14.0 ┆ [10.0, 12.0, 14.0] ┆ <- sliding_window_view ┆ [10.0, 12.0, 14.0] │
│ 2 ┆ 15.0 ┆ [11.0, 13.0, 15.0] ┆ <- sliding_window_view ┆ [11.0, 13.0, 15.0] │
│ 2 ┆ 16.0 ┆ [12.0, 14.0, 16.0] ┆ <- sliding_window_view ┆ [12.0, 14.0, 16.0] │
└───────┴───────┴─────────────────────────┴────────────────────────────┴────────────────────┘
window_size=3, step=2
```
The primary performance difference might be that you can apply a gufunc to grouped_sliding_windows (a 2D array) directly, whereas the grouped rolling context must evaluate the function over a list of arrays with varying lengths. | closed | 2024-12-05T09:55:37Z | 2024-12-10T19:57:54Z | https://github.com/numpy/numpy/issues/27909 | [] | Olobir | 5 |
miguelgrinberg/Flask-SocketIO | flask | 1,189 | Where should I store a SQLAlchemy object | Hello,
I want to create a chat application using SQLAlchemy (without the Flask-SQLAlchemy extension).
In my code, I want to store an object for the duration of a WebSocket session and destroy it when the user disconnects, because I don't want to reload the object from the database each time.
If I understand correctly, I have to set manage_session to True and use the Flask session normally, don't I?
like this (in socket.py):
import flask
flask.session["current_thread"] = thread_object
Another thing: I have to create a session and close it manually each time I use SQLAlchemy, don't I?
Sorry for my English; I'm not a native speaker. | closed | 2020-02-14T16:35:27Z | 2020-02-14T18:53:15Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1189 | [
"question"
] | ivan-fr | 2 |
huggingface/transformers | deep-learning | 36,055 | PaliGemma processor should also accept tuples in addition to lists | https://github.com/huggingface/transformers/blob/0de15c988b0d27758ce360adb2627e9ea99e91b3/src/transformers/models/paligemma/processing_paligemma.py#L263
Tuples are also valid here; the validation code is quite aggressive. | closed | 2025-02-05T17:59:23Z | 2025-02-10T08:35:14Z | https://github.com/huggingface/transformers/issues/36055 | [
"VLM"
] | doctorpangloss | 2 |
axnsan12/drf-yasg | django | 674 | Issue with swagger_auto_schema and inherited class methods | Hello.
I'm having an issue that seems to be related to CPython memory allocation. I am wondering if there's a way to work around my issue with drf_yasg.
I have a series of APIViews that all have the same template :
```
class MyClass(rest_framework.views.APIView):
@swagger_auto_schema(
operation_id='graph_my_class',
request_body=GraphFiltersSerializer, # always the same
responses={200: MyClassResponseSerializer(many=True)}, # may differs for each class
)
def post(self, request):
filters = deserialize_filters(request)
graph_data = self.graph_as_json(filters)
serialized_data = MyClassResponseSerializer(graph_data, many=True).data
return Response(serialized_data)
def graph_as_json(self, filters):
return something
```
Since it's a bit painful to always duplicate the post/swagger_auto_schema within each class,
I tried to create a common mixin:
```
class GraphApiView(rest_framework.views.APIView):
serializer = None
@classmethod
def get_name(cls):
name_pattern = re.compile(r'(?<!^)(?=[A-Z])')
return f"graph_{name_pattern.sub('_', cls.__name__).lower()}"
@classmethod
def as_view(cls, **kwargs):
return swagger_auto_schema(
method='post',
operation_id=cls.get_name(),
request_body=GraphFiltersSerializer,
responses={200: cls.serializer(many=True)}, # pylint: disable=not-callable
)(super().as_view(**kwargs))
def post(self, request):
filters = deserialize_filters(request)
graph_data = self.graph_as_json(filters)
serialized_data = self.serializer(graph_data, many=True).data # pylint: disable=not-callable
return Response(serialized_data)
class MyClass(GraphApiView):
serializer = MyClassResponseSerializer
def graph_as_json(self, filters):
return something
```
Sadly, all endpoints have the same name and schema in the swagger.yml file generated with the generate_swagger command.
After digging in the code, I found that for all classes, we have the same object in memory for `callback.cls.post` (with callback being the objects that we can find [here](https://github.com/axnsan12/drf-yasg/blob/master/src/drf_yasg/generators.py#L92) and [here](https://github.com/axnsan12/drf-yasg/blob/master/src/drf_yasg/generators.py#L320)).
It's the same object with the same memory allocation, so whenever the code sets the `_swagger_auto_schema` attribute on callback.cls.post, it's set on every callback's cls.post.
So all endpoints will have the swagger schema of the last class found in the url pattern.
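The sharing described above can be reproduced in isolation. In this sketch the `_swagger_auto_schema` value is a hypothetical placeholder, but the mechanism is the same: subclasses that merely inherit a method all resolve it to the single function object defined on the base class, so an attribute stamped through one subclass is visible through all of them.

```python
# Minimal illustration of inherited methods sharing one function object.
# The _swagger_auto_schema dict below is a hypothetical placeholder, not
# drf-yasg's real schema object.
class Base:
    def post(self):
        pass

class A(Base):
    pass

class B(Base):
    pass

# Accessing the inherited method through any subclass yields the SAME
# function object, so stamping it via A also affects B.
A.post._swagger_auto_schema = {"operation_id": "graph_a"}

print(A.post is B.post is Base.post)  # True
print(B.post._swagger_auto_schema)    # {'operation_id': 'graph_a'}
```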
I found a workaround:
```
class GraphApiView(rest_framework.views.APIView):
serializer = None
@classmethod
def get_name(cls):
[unchanged]
@classmethod
def as_view(cls, **kwargs):
[unchanged]
def post(self, request):
raise NotImplementedError
def _post(self, request):
filters = deserialize_filters(request)
graph_data = self.graph_as_json(filters)
serialized_data = self.serializer(graph_data, many=True).data # pylint: disable=not-callable
return Response(serialized_data)
class MyClass(GraphApiView):
serializer = MyClassResponseSerializer
def post(self, request):
return self._post(request)
def graph_as_json(self, filters):
return something
```
I think it's possible to fix this issue by changing where drf_yasg sets the _swagger_auto_schema attribute.
I'm interested in submitting a PR (or at least trying to).
My questions are :
- Is it really worth it ? I couldn't find a related issue so I would say probably not...
- Is there something I missed that would fix my issue in a more satisfactory manner (without the post override in each sub-class)?
"triage"
] | tonial | 2 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,811 | [Bug]: ControlNet IP-Adapter error(ModuleNotFoundError: No module named 'onnx') | ### Checklist
- [ ] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
After upgrading to the latest version, I started getting an error with the Controlnet IP-Adapter.
ModuleNotFoundError: No module named 'onnx'
It says this and I can't load images. Is there a solution?
After a clean install, I tried running the following code before running [Start Stable-Diffusion]:
!pip install onnx
Then when I run [Start Stable-Diffusion] I get the following error:
ImportError: cannot import name 'mesh_core_cython' from 'insightface.thirdparty.face3d.mesh.cython' (unknown location)
### Steps to reproduce the problem
1. Start the WebUI.
2. txt2img Enable ControlNet
3. Check Pixel Perfect
4. Select IP-Adapter for Control Type
5. Select ip-adapter_face_id_plus for Preprocessor
6. Select ip-adapter-faceid-plus_sd15 for Model
7. Specify image and prompt and run generation
### What should have happened?
It worked fine until January 14, 2025.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2025-01-27-01-56.json](https://github.com/user-attachments/files/18552646/sysinfo-2025-01-27-01-56.json)
### Console logs
```Shell
ControlNet preprocessor location: /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2025-01-27 01:44:52,743 - ControlNet - INFO - ControlNet v1.1.455
Loading weights [1a17bcd93d] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/beautifulRealistic_v7.safetensors
2025-01-27 01:44:53,714 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: /content/gdrive/MyDrive/sd/stable-diffusion-webui/configs/v1-inference.yaml
Running on public URL: https://6deb096a25f20cdf17.gradio.live
✔ Connected
Startup time: 19.9s (import torch: 8.0s, import gradio: 1.1s, setup paths: 1.6s, initialize shared: 0.2s, other imports: 0.8s, list SD models: 1.8s, load scripts: 2.9s, create ui: 0.8s, gradio launch: 2.0s, add APIs: 0.4s).
Loading VAE weights specified in settings: /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: xformers... done.
Model loaded in 5.5s (load weights from disk: 0.6s, create model: 1.3s, apply weights to model: 2.4s, load VAE: 0.4s, load textual inversion embeddings: 0.4s, calculate empty prompt: 0.2s).
2025-01-27 01:55:45,799 - ControlNet - INFO - unit_separate = False, style_align = False
2025-01-27 01:55:46,219 - ControlNet - INFO - Loading model: ip-adapter-faceid-plus_sd15 [d86a490f]
2025-01-27 01:55:46,398 - ControlNet - INFO - Loaded state_dict from [/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/models/ip-adapter-faceid-plus_sd15.bin]
2025-01-27 01:55:47,018 - ControlNet - INFO - ControlNet model ip-adapter-faceid-plus_sd15 [d86a490f](ControlModelType.IPAdapter) loaded.
2025-01-27 01:55:47,027 - ControlNet - INFO - Using preprocessor: ip-adapter_face_id_plus
2025-01-27 01:55:47,027 - ControlNet - INFO - preprocessor resolution = 512
*** Error running process: /content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 832, in process
script.process(p, *script_args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1228, in process
self.controlnet_hack(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1213, in controlnet_hack
self.controlnet_main_entry(p)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 941, in controlnet_main_entry
controls, hr_controls, additional_maps = get_control(
^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 290, in get_control
controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in optional_tqdm(input_images)]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 290, in <listcomp>
controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in optional_tqdm(input_images)]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 242, in preprocess_input_image
result = preprocessor.cached_call(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/supported_preprocessor.py", line 198, in cached_call
result = self._cached_call(input_image, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/utils.py", line 82, in decorated_func
return cached_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/utils.py", line 66, in cached_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/supported_preprocessor.py", line 211, in _cached_call
return self(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/preprocessor/legacy/legacy_preprocessors.py", line 105, in __call__
result, is_image = self.call_function(
^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/preprocessor/legacy/processor.py", line 768, in face_id_plus
face_embed, _ = g_insight_face_model.run_model(img)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/preprocessor/legacy/processor.py", line 696, in run_model
self.load_model()
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/preprocessor/legacy/processor.py", line 686, in load_model
from insightface.app import FaceAnalysis
File "/usr/local/lib/python3.11/dist-packages/insightface/__init__.py", line 16, in <module>
from . import model_zoo
File "/usr/local/lib/python3.11/dist-packages/insightface/model_zoo/__init__.py", line 1, in <module>
from .model_zoo import get_model
File "/usr/local/lib/python3.11/dist-packages/insightface/model_zoo/model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "/usr/local/lib/python3.11/dist-packages/insightface/model_zoo/arcface_onnx.py", line 10, in <module>
import onnx
ModuleNotFoundError: No module named 'onnx'
---
100% 46/46 [00:03<00:00, 11.95it/s]
```
### Additional information
I downloaded ip-adapter-faceid-plusv2_sd15.bin from the link below.
https://huggingface.co/h94/IP-Adapter-FaceID/tree/main | closed | 2025-01-27T02:12:01Z | 2025-02-03T08:50:44Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16811 | [
"bug-report"
] | foobar-san | 1 |
yunjey/pytorch-tutorial | deep-learning | 89 | question about recurrent neural network | Hi,
I'm new to PyTorch. I noticed that you used "batch_first=True" in the LSTM. I just deleted "batch_first=True" and changed the dimensions accordingly. However, the accuracy drops to 11%, which has confused me for days.
The code is `short`; would you mind spending a few minutes to point out my mistake?
Thanks,
```
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable
# Hyper Parameters
sequence_length = 28
input_size = 28
hidden_size = 128
num_layers = 2
num_classes = 10
batch_size = 100
num_epochs = 20
learning_rate = 0.01
# MNIST Dataset
train_dataset = dsets.MNIST(root='./data/',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./data/',
train=False,
transform=transforms.ToTensor())
# Data Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# RNN Model (Many-to-One)
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, num_classes):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
# self.lstm = nn.LSTM(input_size, hidden_size, num_layers,batch_first=True)
self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, x):
# Set initial states
# h0 = Variable(torch.zeros(self.num_layers, x.size(0), self.hidden_size).cuda())
h0 = Variable(torch.zeros(self.num_layers, x.size(1), self.hidden_size).cuda())
# c0 = Variable(torch.zeros(self.num_layers, x.size(0), self.hidden_size).cuda())
c0 = Variable(torch.zeros(self.num_layers, x.size(1), self.hidden_size).cuda())
# Forward propagate RNN
out, _ = self.lstm(x, (h0, c0))
# Decode hidden state of last time step
# out = self.fc(out[:,-1, :])
out = self.fc(out[-1,:, :])
return out
rnn = RNN(input_size, hidden_size, num_layers, num_classes)
rnn.cuda()
# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
# Train the Model
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# images = Variable(images.view(-1,sequence_length, input_size)).cuda()
images = Variable(images.view(sequence_length,-1, input_size)).cuda()
labels = Variable(labels).cuda()
# Forward + Backward + Optimize
optimizer.zero_grad()
outputs = rnn(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print ('Epoch [%d/%d], Step [%d/%d], Loss: %.4f'
%(epoch+1, num_epochs, i+1, len(train_dataset)//batch_size, loss.data[0]))
# Test the Model
correct = 0
total = 0
for images, labels in test_loader:
# images = Variable(images.view(-1,sequence_length, input_size)).cuda()
images = Variable(images.view(sequence_length,-1, input_size)).cuda()
outputs = rnn(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted.cpu() == labels).sum()
print('Test Accuracy of the model on the 10000 test images: %d %%' % (100 * correct / total))
# Save the Model
torch.save(rnn.state_dict(), 'rnn.pkl')
```
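A possible explanation (an assumption on my part, not confirmed in this thread) is that `view` reinterprets raw memory instead of transposing, so `images.view(sequence_length, -1, input_size)` mixes pixels from different samples into each "time step". The NumPy analogue below shows the difference between a reshape and a true axis swap:

```python
import numpy as np

# NumPy analogue of the suspected problem: reshape reinterprets raw memory,
# while transpose actually swaps axes.
images = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # (batch=2, seq=3, feat=4)

wrong = images.reshape(3, 2, 4)     # scrambles which rows belong together
right = images.transpose(1, 0, 2)   # true (seq, batch, feat) layout

print(np.array_equal(wrong, right))               # False
print(np.array_equal(right[0, 1], images[1, 0]))  # True: step 0 of sample 1
```

In torch terms, the equivalent fix would be something like `images.view(-1, sequence_length, input_size).permute(1, 0, 2)` rather than `images.view(sequence_length, -1, input_size)`.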
| closed | 2017-12-22T15:42:02Z | 2017-12-25T01:25:22Z | https://github.com/yunjey/pytorch-tutorial/issues/89 | [] | 978749951 | 2 |
plotly/dash | plotly | 2,445 | plotly figure subplot grid lost after converting back from JSON | **Describe your context**
```
dash==2.8.1
dash-bootstrap-components==1.4.0
dash-core-components==2.0.0
dash-extensions==0.1.13
dash-html-components==2.0.0
dash-leaflet==0.1.23
dash-table==5.0.0
plotly==5.13.1
```
**Describe the bug**
I am trying to update a plotly figure containing subplots within a dash callback but get the error "Exception: Use plotly.tools.make_subplots to create a subplot grid".
The `Figure` object is passed as a `State` to the callback function. Because the figure is passed to the callback function as a dict, I use `fig =pio.from_json(json.dumps(fig))` to convert it back to a proper `Figure` object. However, after conversion, this `Figure` object does not seem to know about its subplots anymore, raising the following exception whenever I try to reference anything subplot-related, e.g. `fig.print_grid()` or `fig.add_trace([...], row=2, col=1)`:
> Exception: Use plotly.tools.make_subplots to create a subplot grid
Minimal code to reproduce:
```python
fig = plotly.subplots.make_subplots(
rows=3,
cols=1,
)
fig.print_grid() # this works fine
fig2 = pio.from_json(fig.to_json()) # convert to json and back
fig2.print_grid() # this raises an Exception
```
Full example app:
```python
import json
from dash import Dash, html, dcc, Input, Output, State, callback_context
import plotly.graph_objects as go
import plotly.io as pio
import plotly.subplots
app = Dash(
name=__name__,
title="Test app",
)
# create the initial empty figure
fig = plotly.subplots.make_subplots(
rows=3,
cols=1,
)
# create the layout
app.layout = html.Div(children=[
html.Button("Test me", id="button"),
dcc.Graph(id="graph", figure=fig)
])
@app.callback(
Output("graph", "figure"),
Input("button", "n_clicks"),
State("graph", "figure"),
prevent_initial_call=True
)
def update_graph(n_clicks, figure):
# update the figure
figure = pio.from_json(json.dumps(figure))
figure.print_grid() # this raises an Exception
figure.add_trace(
go.Scatter(
x=[0, 1, 2, 3],
y=[10, 20, 10, 0],
name=f"line",
),
row=2,
col=1,
) # this also raises an Exception
return figure
if __name__ == "__main__":
app.run_server(debug=True)
```
**Expected behavior**
I expect to be able to access the subplots of the figure in the callback method. | open | 2023-03-06T09:36:41Z | 2024-08-13T19:28:46Z | https://github.com/plotly/dash/issues/2445 | [
"bug",
"P3"
] | jamaa | 5 |
httpie/http-prompt | api | 111 | [BUG] AttributeError: 'unicode' object has no attribute 'items' | As the title says, I'm getting an exception when trying to fetch/save an XML response. The same request works fine with raw httpie. A response example is attached at the very end.
env:
```
(http_prompt) ~/Projects/plex_debug$ python --version
Python 3.6.0
(http_prompt) ~/Projects/plex_debug$ pip freeze | grep http
http-prompt==0.9.2
httpie==0.9.9
```
httpie options:
```
http://127.0.0.1:32400/video/SVR-DEV> httpie
http http://127.0.0.1:32400/video/SVR-DEV Cookie:com.plexapp.plugins.svr=Y2VyZWFsMQozCmRpY3QKbGlzdApkaWN0CjIKcjEKczcKY29va2llc3IyCnM3CnNlc3Npb24wCjAKcjAK X-Plex-Token:%SOMETOKENHERE%
```
http-prompt get trace:
```
http://127.0.0.1:32400/video/SVR-DEV> get
HTTP/1.1 200 OK
Cache-Control: no-cache
Connection: Keep-Alive
Content-Encoding: gzip
Content-Length: 438
Content-Type: application/xml
Date: Wed, 01 Mar 2017 23:02:50 GMT
Etag: "1f5714a7cd820e8a1043cff95a0d10ac63d3760b"
Keep-Alive: timeout=20
Set-Cookie: com.plexapp.plugins.svr=Y2VyZWFsMQozCmRpY3QKbGlzdApkaWN0CjIKcjEKczcKY29va2llc3IyCnM3CnNlc3Npb24wCjAKcjAK
X-Plex-Content-Compressed-Length: 438
X-Plex-Content-Original-Length: 1665
X-Plex-Protocol: 1.0
<?xml version='1.0' encoding='utf-8'?>
<MediaContainer title1="SVR-DEV" art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" size="6" identifier="com.plexapp.plugins.svr" sourceTitle="SVR-DEV" mediaTagPrefix="/system/bundle/media/flags/" prefsKey="/:/plugins/com.plexapp.plugins.svr/prefs" searchesKey="/system/services/searches?identifier=com.plexapp.plugins.svr">
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/latest_update_menu" title="LATEST_UPDATES"/>
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/filter_menu" title="FILTER"/>
<Directory prompt="SEARCH?" key="/video/SVR-DEV/search_input" title="SEARCH" search="1"/>
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/bookmarks_menu" title="BOOKMARKS"/>
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/history_menu" title="HISTORY"/>
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/advanced_menu" title="ADVANCED_MENU"/>
</MediaContainer>
AttributeError: 'unicode' object has no attribute 'items'
Parse tree:
<Node called "action" matching "get"> <-- *** We were here. ***
<RegexNode called "_" matching "">
<Node called "method" matching "get">
<RegexNode matching "get">
<RegexNode called "_" matching "">
<Node matching "">
<Node matching "">
<Node matching "">
<RegexNode called "_" matching "">
```
http-prompt get&save trace:
```
http://127.0.0.1:32400/video/SVR-DEV> get > ./test.xml
AttributeError: 'unicode' object has no attribute 'items'
Parse tree:
<Node called "action" matching "get > ./test.xml"> <-- *** We were here. ***
<RegexNode called "_" matching "">
<Node called "method" matching "get">
<RegexNode matching "get">
<RegexNode called "_" matching " ">
<Node matching "">
<Node matching "">
<Node matching "> ./test.xml">
<Node called "redir_out" matching "> ./test.xml">
<Node called "redir_write" matching "> ./test.xml">
<RegexNode called "_" matching "">
<Node matching ">">
<RegexNode called "_" matching " ">
<Node called "string" matching "./test.xml">
<Node called "unquoted_string" matching "./test.xml">
<Node called "unquoted_stringitem" matching ".">
<RegexNode called "unquoted_stringchar" matching ".">
<Node called "unquoted_stringitem" matching "/">
<RegexNode called "unquoted_stringchar" matching "/">
<Node called "unquoted_stringitem" matching "t">
<RegexNode called "unquoted_stringchar" matching "t">
<Node called "unquoted_stringitem" matching "e">
<RegexNode called "unquoted_stringchar" matching "e">
<Node called "unquoted_stringitem" matching "s">
<RegexNode called "unquoted_stringchar" matching "s">
<Node called "unquoted_stringitem" matching "t">
<RegexNode called "unquoted_stringchar" matching "t">
<Node called "unquoted_stringitem" matching ".">
<RegexNode called "unquoted_stringchar" matching ".">
<Node called "unquoted_stringitem" matching "x">
<RegexNode called "unquoted_stringchar" matching "x">
<Node called "unquoted_stringitem" matching "m">
<RegexNode called "unquoted_stringchar" matching "m">
<Node called "unquoted_stringitem" matching "l">
<RegexNode called "unquoted_stringchar" matching "l">
<RegexNode called "_" matching "">
<RegexNode called "_" matching "">
```
Same requests with raw httpie:
```
(http_prompt) ~/Projects/plex_debug$ http http://127.0.0.1:32400/video/SVR-DEV Cookie:com.plexapp.plugins.svr=Y2VyZWFsMQozCmRpY3QKbGlzdApkaWN0CjIKcjEKczcKY29va2llc3IyCnM3CnNlc3Npb24wCjAKcjAK X-Plex-Token:%SOMETOKENHERE%
HTTP/1.1 200 OK
Cache-Control: no-cache
Connection: Keep-Alive
Content-Encoding: gzip
Content-Length: 438
Content-Type: application/xml
Date: Wed, 01 Mar 2017 23:09:18 GMT
Etag: "1f5714a7cd820e8a1043cff95a0d10ac63d3760b"
Keep-Alive: timeout=20
Set-Cookie: com.plexapp.plugins.svr=Y2VyZWFsMQozCmRpY3QKbGlzdApkaWN0CjIKcjEKczcKY29va2llc3IyCnM3CnNlc3Npb24wCjAKcjAK
X-Plex-Content-Compressed-Length: 438
X-Plex-Content-Original-Length: 1665
X-Plex-Protocol: 1.0
<?xml version='1.0' encoding='utf-8'?>
<MediaContainer title1="SVR-DEV" art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" size="6" identifier="com.plexapp.plugins.svr" sourceTitle="SVR-DEV" mediaTagPrefix="/system/bundle/media/flags/" prefsKey="/:/plugins/com.plexapp.plugins.svr/prefs" searchesKey="/system/services/searches?identifier=com.plexapp.plugins.svr">
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/latest_update_menu" title="LATEST_UPDATES"/>
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/filter_menu" title="FILTER"/>
<Directory prompt="SEARCH?" key="/video/SVR-DEV/search_input" title="SEARCH" search="1"/>
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/bookmarks_menu" title="BOOKMARKS"/>
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/history_menu" title="HISTORY"/>
<Directory art="/:/plugins/com.plexapp.plugins.svr/resources/art-default.jpg?t=1316888148" thumb="/:/plugins/com.plexapp.plugins.svr/resources/icon-default.png?t=1316888148" key="/video/SVR-DEV/advanced_menu" title="ADVANCED_MENU"/>
</MediaContainer>
```
[response_example.zip](https://github.com/eliangcs/http-prompt/files/812280/response_example.zip) | closed | 2017-03-01T23:15:01Z | 2017-03-09T12:37:19Z | https://github.com/httpie/http-prompt/issues/111 | [
"bug"
] | byg0n3 | 5 |
ading2210/poe-api | graphql | 44 | AttributeError: 'NoneType' object has no attribute 'group' | Seems like the static variables in the HTML page are constantly changing


However, a minor change in the regex would solve this issue. https://github.com/ading2210/poe-api/blob/91a43003d25725fe958e58600325da2bb1216dbb/poe-api/src/poe.py#L83
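As a sanity check, the adjusted patterns from the snippet below can be exercised against a small synthetic page — the HTML here is an invented stand-in for illustration, not Poe's real markup:

```python
import re

# Synthetic stand-in for the fetched page, shaped like the obfuscated
# script the patterns target: a hex key assignment plus index swaps.
html = (
    '<script>if(x)throw new Error;'
    'var a="deadbeef",b=[];a[3]=a[7];a[1]=a[5]</script>'
)

script_regex = r'<script>if\(.+\)throw new Error;(.+)</script>'
script_text = re.search(script_regex, html).group(1)

key_regex = r'var .="([0-9a-f]+)",'
key_text = re.search(key_regex, script_text).group(1)

cipher_regex = r'.\[(\d+)\]=.\[(\d+)\]'
cipher_pairs = re.findall(cipher_regex, script_text)

print(key_text)        # deadbeef
print(cipher_pairs)    # [('3', '7'), ('1', '5')]
```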
```
script_regex = r'<script>if\(.+\)throw new Error;(.+)</script>'
script_text = re.search(script_regex, html).group(1)
key_regex = r'var .="([0-9a-f]+)",'
key_text = re.search(key_regex, script_text).group(1)
cipher_regex = r'.\[(\d+)\]=.\[(\d+)\]'
cipher_pairs = re.findall(cipher_regex, script_text)
``` | closed | 2023-04-16T00:17:07Z | 2023-04-16T01:43:09Z | https://github.com/ading2210/poe-api/issues/44 | [
"bug"
] | aqasemi | 4 |
marshmallow-code/apispec | rest-api | 88 | marshmallow/swagger.py - custom field mapping | Hi.
Assuming a model uses custom Marshmallow fields, those do not appear in `FIELD_MAPPING` and therefore are not documented properly: default (type, format) is ('string', None).
If `_get_json_type_for_field` used `isinstance` rather than `type`, fields inheriting from Marshmallow fields would at least be treated like their parent, but this wouldn't be totally satisfying.
OpenAPI spec allows custom formats for fields, like apispec does `Email`, for instance,
```
marshmallow.fields.Email: ('string', 'email'),
```
Is there a way to pass custom field mappings to apispec?
If not, would this be considered a relevant feature request?
A list of custom types (`CUSTOM_FIELD_MAPPING`) could be appended to `FIELD_MAPPING`. Or better, `CUSTOM_FIELD_MAPPING` would be checked first, so as to allow overriding existing mappings in `FIELD_MAPPING`.
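Both ideas — an `isinstance`-style fallback and a user-supplied mapping checked first — can be combined by walking the field class's MRO. A sketch (the toy field classes and `CUSTOM_FIELD_MAPPING`-style names here are hypothetical, not apispec's API):

```python
# Toy stand-ins for marshmallow field classes.
class String: ...
class Email(String): ...
class ObfuscatedEmail(Email): ...   # custom field, absent from the mapping

FIELD_MAPPING = {
    String: ("string", None),
    Email: ("string", "email"),
}

def json_type_for_field(field_cls, custom_mapping=None):
    # Custom entries override built-in ones because they are merged last.
    mapping = {**FIELD_MAPPING, **(custom_mapping or {})}
    for klass in field_cls.__mro__:        # most-derived class first
        if klass in mapping:
            return mapping[klass]
    return ("string", None)               # current default behaviour

print(json_type_for_field(ObfuscatedEmail))  # ('string', 'email')
```

An unmapped subclass inherits its nearest mapped ancestor's `(type, format)`, while an explicit custom entry wins outright.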
| closed | 2016-08-26T22:35:29Z | 2017-03-03T14:32:37Z | https://github.com/marshmallow-code/apispec/issues/88 | [] | lafrech | 3 |
microsoft/qlib | deep-learning | 1,408 | Issue about speeding up data loading when rolling-training models. | ## ❓ Questions and Help
When using RollingTask.task_training(tasks), it tries to load data every time it trains a new model.
I wonder if I can load all data at once and then chop it when I need some segment of the whole dataset.
It always takes a long time in every episode.


| closed | 2023-01-04T03:37:37Z | 2023-07-08T15:02:00Z | https://github.com/microsoft/qlib/issues/1408 | [
"question",
"stale"
] | chenkigba | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,221 | How to use custom pre-trained model? | Hi,
I trained a CycleGAN model and wish to use it as a pre-trained model for a new model with a new training dataset etc. How do I go about using my first trained model as a pretrained model for the new one?
Do I take latest_net_G_A.pth, or do I take latest_net_G_B.pth? When should I take A and when should I take B?
After deciding which one, do I make a new folder in .checkpoints called "(new session name)_pretrained" and put the model I chose from above into this folder, and remove the A/B from the suffix? I'm wondering whether doing this then initiating a new training session with --name (new session name) then will make it automatically look for pretrained model in "(new_session_name)_pretrained" folder in .checkpoints.
Thanks! | open | 2021-01-05T18:59:57Z | 2021-01-05T18:59:57Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1221 | [] | randingo3 | 0 |
slackapi/bolt-python | fastapi | 1,163 | Circular Dependency when trying to run example code | Hi,
I'm trying to run a socket connection app, but I'm getting a circular dependency error.
Would appreciate help with this, adding here necessary info
### Reproducible in:
#### The `slack_bolt` version
* slack_bolt==1.20.1
* slack_sdk==3.33.0
#### Python runtime version
* Python 3.12.5
#### OS info
ProductName: macOS
ProductVersion: 14.6.1
BuildVersion: 23G93
Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1. Copy example code for socket connection:
```python
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
# Install the Slack app and get xoxb- token in advance
app = App(token=os.environ["SLACK_BOT_TOKEN"])
if __name__ == "__main__":
handler = SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"])
handler.start()
```
2. Try running the code
### Expected result:
The code should be running, expecting connections.
### Actual result:
Getting a `circular import` error
```bash
Traceback (most recent call last):
File "/.../src/socket.py", line 2, in <module>
from slack_bolt import App
File "/.../venv/lib/python3.12/site-packages/slack_bolt/__init__.py", line 9, in <module>
from .app import App
File "/.../venv/lib/python3.12/site-packages/slack_bolt/app/__init__.py", line 10, in <module>
from .app import App
File "/.../venv/lib/python3.12/site-packages/slack_bolt/app/app.py", line 9, in <module>
from http.server import SimpleHTTPRequestHandler, HTTPServer
File "/opt/homebrew/Cellar/python@3.12/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/server.py", line 92, in <module>
import email.utils
File "/opt/homebrew/Cellar/python@3.12/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/email/utils.py", line 29, in <module>
import socket
File "/.../src/socket.py", line 2, in <module>
from slack_bolt import App
ImportError: cannot import name 'App' from partially initialized module 'slack_bolt' (most likely due to a circular import)
```
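The second half of the traceback gives the cause away: the script itself is `src/socket.py`, so when `http.server` → `email.utils` executes `import socket`, Python finds the local file instead of the standard-library module and re-enters it — hence the "circular" import. Renaming the file (e.g. to something like `socket_app.py`) should fix it. A quick way to check which file a top-level name resolves to:

```python
import importlib.util

def module_file(name):
    """Path a top-level module name currently resolves to (None if absent)."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

# If this prints something like ".../src/socket.py" instead of a path in
# the standard library, the local file is shadowing the stdlib module.
print(module_file("socket"))
```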
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-09-18T11:05:39Z | 2024-09-18T12:45:49Z | https://github.com/slackapi/bolt-python/issues/1163 | [
"question",
"need info"
] | jacoblElementor | 3 |
httpie/cli | python | 786 | support get cookie from a website login page? | closed | 2019-05-28T08:53:07Z | 2019-06-24T11:02:15Z | https://github.com/httpie/cli/issues/786 | [] | DavidWang666 | 1 | |
napari/napari | numpy | 7,372 | Always pass `List[Path]` to plugin readers, together with `stack` keyword | ## 🧰 Task
This work was originally started in #4107 and got most of the way there. Plugins now receive a single path, unless the user chose one of the `stack` options when opening files (either holding `Shift` via drag'n'drop or using the `File -> Open Files as Stack` menu), in which case the full list of paths is sent to the reader.
The final steps to simplify and standardize the opening of lists of files would be:
- always pass a list to plugins, even if there's just one path in the list
- always pass the `stack` keyword
- optionally, update the `reader` manifest schema to include a `stack` keyword that allows plugins to declare whether they implement stacking in their readers
The last two points may/will require changes in npe2, but I've opened here at least to track, since I closed #1883. | open | 2024-11-13T05:38:44Z | 2024-11-13T05:38:44Z | https://github.com/napari/napari/issues/7372 | [
"topic:plugins",
"task"
] | DragaDoncila | 0 |
pallets-eco/flask-sqlalchemy | flask | 704 | how to create database with flask-sqlalchemy? | How to create a database with flask-sqlalchemy? Use only flask-sqlalchemy, not sqlalchemy! | closed | 2019-03-16T11:54:23Z | 2020-12-05T20:37:26Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/704 | [] | my3188 | 1 |
holoviz/panel | plotly | 7,535 | Markdown pane does not collapse line breaks by default | By default, the Markdown Pane does not collapse line breaks. This is when you write on multiple lines:
```
My name
is Bond,
James Bond
```
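A likely culprit for `can't adapt type 'builtin_function_or_method'` is assigning a function instead of its result — the `track_user_activities` method in the model below does `self.current_sign_in_on = datetime.datetime.utcnow` without parentheses, which stores the function object itself, and psycopg2 cannot adapt a function to a database value. A minimal illustration:

```python
import datetime

stored = datetime.datetime.utcnow     # the function object itself
called = datetime.datetime.utcnow()   # an actual timestamp

assert callable(stored)                       # psycopg2 cannot adapt this
assert isinstance(called, datetime.datetime)  # this it can store
```

(Column defaults like `default=datetime.datetime.utcnow` are fine — SQLAlchemy calls the function for you; the problem is only with direct assignments.)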
And it is displayed on a single line:
```
My name is Bond, James Bond
```
This is the default behavior on VSCode or in JupyterLab.

It doesn't seem to be the default everywhere, at least not on Github.

It looks like this used to be the default behavior in Panel but was changed in https://github.com/holoviz/panel/pull/5376/. I'm opening this issue as I was updating the Exoplanets example which has this multiline string displayed in a Markdown pane. The string is written in a way to make it easy to read in a notebook, this would also be valid for code written in a `.py` file. Unfortunately, the line breaks are all preserved and the end result doesn't look great. Removing these line breaks while keeping those desired isn't trivial, it requires either refactoring the multiline string or processing it (e.g. `txt.replace('\n\n', 'XXXXX').replace('\n', ' ').replace('XXXXX', '\n\n')`.
```python
background_text = """
For the past 25+ years, NASA has used ground- and space-based methods
to [identify exoplanets](https://exoplanets.nasa.gov/exep/about/missions-instruments) (planets outside of our solar system).
In the past ten years in particular, campaigns like Kepler, K2, and TESS have
produced an explosion of results.
To date, approximately 4,400 exoplanets have been identified, and over 3,000 potential exoplanet candidates have
been discovered.
This dashboard uses [Holoviews](https://holoviews.org/) and [Panel](https://panel.holoviz.org)
to visualize the discovery of confirmed and candidate exoplanets over the years.
Also included is a scatterplot that reveals details about the relationship among mass, radius, and temperature of exoplanets,
as well as controls to filter the data based on whether the planets could support life, and if so,
whether chemical rockets could be used to escape the planet.
See [examples.holoviz.org](https://examples.holoviz.org/exoplanets) for details on the data used here and how to interpret it.
"""
```

The current workaround is to instantiate the pane with `pn.pane.Markdown(background_text, renderer_options={'breaks': False})`, as `breaks=True` is the default.
https://github.com/holoviz/panel/blob/4e4c82b0426615dcfcdfe0dbc2f52c45a9278a5e/panel/pane/markup.py#L418-L419
The other two renderers have the opposite behavior.

I think my preference would be to default to `breaks=False` (revert the change made to the markdown-it renderer). If not, we should try to align the behavior across all the renderers and make it very easy to switch that behavior (via doc and/or code), as I believe displaying a multiline string in a Markdown pane is a pretty common thing.
@ahuang11 I understand your use case had to do with Markdown generated from an LLM? In which case, yes, I understand that in this context a line break should be considered a line break. | open | 2024-12-04T12:42:23Z | 2025-01-20T19:18:39Z | https://github.com/holoviz/panel/issues/7535 | [] | maximlt | 2 |
axnsan12/drf-yasg | rest-api | 149 | Support setting summary in swagger_auto_schema | The Swagger `summary` property is a short form of the description of a route. There should be a way to set this property with `swagger_auto_schema`, in a similar way to `description`. | closed | 2018-06-21T12:13:41Z | 2018-08-07T22:09:42Z | https://github.com/axnsan12/drf-yasg/issues/149 | [] | phihag | 2 |
polarsource/polar | fastapi | 5,277 | OAT Scope: Only show scopes for public API endpoints | Currently, we list them all including scopes such as `webhook:read` and `external_organizations:*` etc that are used by our dashboard, but not documented or intended for public consumption really.
So would be nice to have a whitelist of public OAT scopes to narrow down the list to avoid confusion. | open | 2025-03-15T18:49:46Z | 2025-03-15T18:58:34Z | https://github.com/polarsource/polar/issues/5277 | [
"dx"
] | birkjernstrom | 1 |
supabase/supabase-py | fastapi | 850 | User session not always present | # Bug report
## Describe the bug
This is a regression from 2.4.3 where the user's session token is sometimes present whilst not at other times due to the client not triggering an `on_auth_state_change`.
This regression happened here https://github.com/supabase-community/supabase-py/pull/766
## System information
- Version of supabase-py: 2.4.3+
| closed | 2024-07-07T10:26:36Z | 2024-07-16T11:53:54Z | https://github.com/supabase/supabase-py/issues/850 | [
"bug"
] | silentworks | 0 |
netbox-community/netbox | django | 18,289 | Module Types cannot be sorted by "Last updated" | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.1.10
### Python Version
3.12
### Steps to Reproduce
1. On https://demo.netbox.dev/dcim/module-types/ click on "Configure Table".
### Expected Behavior
"Last updated" being available as column.
### Observed Behavior
"Last updated" not being available as column. | closed | 2025-01-02T13:56:56Z | 2025-01-03T17:35:05Z | https://github.com/netbox-community/netbox/issues/18289 | [
"type: bug",
"status: accepted",
"severity: low"
] | ypid | 0 |
miguelgrinberg/flasky | flask | 502 | psycopg2.ProgrammingError: | Hello Miguel, Thanks for your contribution in my programming career. I'm having problem with my app i'm currently building though it's a clone application. So, whenever i tried to add a new user i usually get this error : psycopg2.ProgrammingError: can't adapt type 'builtin_function_or_method' . Below is my model code and config for database url
confi.py
```python
pd_str = 'postgresql://postgres:Olayinka1?@localhost:5432/snakeeyes'
SQLALCHEMY_DATABASE_URI=pd_str
```
Model.py
```python
from snakeeyes.extensions import db
import datetime
from werkzeug.security import check_password_hash, generate_password_hash
from flask_login import UserMixin
from snakeeyes.extensions import login_manager
from flask import current_app
from itsdangerous import TimedJSONWebSignatureSerializer
@login_manager.user_loader
def load_user(user_id):
return User.query.get(user_id)
class User(UserMixin, db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(24), nullable=True, unique=True)
email = db.Column(db.String(128), nullable=False, unique = True)
active = db.Column(db.Boolean, default = True, nullable=False)
hash_password = db.Column(db.String(240), nullable=False)
confirmed = db.Column(db.Boolean(), default = False, nullable=False)
sign_in_count = db.Column(db.Integer, default=0)
current_sign_in_on = db.Column(db.DateTime(), default=datetime.datetime.utcnow)
current_sign_in_ip = db.Column(db.String(24))
last_sign_in_on = db.Column(db.DateTime(), default=datetime.datetime.utcnow)
last_sign_in_ip = db.Column(db.String(24))
@property
def password(self):
raise AttributeError('Password is not a readable attribute')
@password.setter
def password(self, password):
self.hash_password = generate_password_hash(password)
def verify_password(self, password):
return check_password_hash(self.hash_password, password)
def is_active(self):
return self.active
def track_user_activities(self, ip_address):
self.sign_in_count = +1
self.last_sign_in_on = self.current_sign_in_on
self.last_sign_in_ip = self.current_sign_in_ip
self.current_sign_in_ip = ip_address
self.current_sign_in_on = datetime.datetime.utcnow
return True
def generate_token(self, expiration=3600):
s = TimedJSONWebSignatureSerializer('current_app.config["SECRET_KEY"]', expires_in=expiration)
return s.dumps({"user_id": self.id})
def verify_token(self,token):
s = Serializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token)
except:
return False
if data.get('user_id') != self.id:
return False
self.confirmed = True
db.session.add(self)
return True
def generate_reset_token(self):
s = TimedJSONWebSignatureSerializer(current_app.config['SECRET_KEY'])
return s.dumps({'user_id': self.user.id})
@staticmethod
def confirm_reset_token(token):
s = TimedJSONWebSignatureSerializer(current_app.config['SECRET_KEY'])
try:
data = s.loads(token)
except:
return False
user = User.query.get(data.get('user_id'))
return user
``` | closed | 2021-02-21T15:04:09Z | 2021-02-21T16:21:08Z | https://github.com/miguelgrinberg/flasky/issues/502 | [
"question"
] | Makeem49 | 4 |
gee-community/geemap | streamlit | 1,718 | Can't display raster | ### Environment Information
--------------------------------------------------------------------------------
Date: Wed Sep 20 11:17:17 2023 CEST
OS : Linux
CPU(s) : 24
Machine : x86_64
Architecture : 64bit
RAM : 62.5 GiB
Environment : Python
File system : ext4
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
geemap : 0.26.0
ee : 0.1.369
ipyleaflet : 0.17.3
folium : 0.14.0
jupyterlab : 4.0.6
notebook : 7.0.3
ipyevents : 2.0.1
geopandas : 0.13.2
localtileserver : 0.7.1
--------------------------------------------------------------------------------
### Description
I was trying to display a raster image on a map.
Instead, I get the error "Error displaying widget"
I am using JupyterLab 4.0.6
I have `jupyter-server-proxy` installed
I also have `mamba` and `xarray_leaflet` installed.
I am not using conda, but using pip with virtualenv
### What I Did
The image is a GeoTIFF obtained from the IPCC website ([link here](https://interactive-atlas.ipcc.ch/regional-information#eyJ0eXBlIjoiQVRMQVMiLCJjb21tb25zIjp7ImxhdCI6MTgyMTUxOCwibG5nIjotMTgzMjgxOCwiem9vbSI6NSwicHJvaiI6IkVQU0c6NTQwMzAiLCJtb2RlIjoiY29tcGxldGVfYXRsYXMifSwicHJpbWFyeSI6eyJzY2VuYXJpbyI6InJjcDI2IiwicGVyaW9kIjoibmVhciIsInNlYXNvbiI6InllYXIiLCJkYXRhc2V0IjoiQ0FNLTIyIiwidmFyaWFibGUiOiJSeDVkYXkiLCJ2YWx1ZVR5cGUiOiJSRUxBVElWRV9BTk9NQUxZIiwiaGF0Y2hpbmciOiJTSU1QTEUiLCJyZWdpb25TZXQiOiJhcjYiLCJiYXNlbGluZSI6IkFSNiIsInJlZ2lvbnNTZWxlY3RlZCI6W119LCJwbG90Ijp7ImFjdGl2ZVRhYiI6InBsdW1lIiwibWFzayI6Im5vbmUiLCJzY2F0dGVyWU1hZyI6bnVsbCwic2NhdHRlcllWYXIiOm51bGwsInNob3dpbmciOmZhbHNlfX0=)). I put the [image in a repository](https://mega.nz/file/LeJQWCjb#ZUW31wJ8JuzdruJLxZvdzXNdXB8smia-zC2HB3LYZ1o) so you can download and test.
```
import geemap
rasterFile = 'path/to/IPCC_image.tiff'
Map = geemap.Map()
# Add raster as layer in a map
Map.add_raster(rasterFile, colormap = 'terrain', layer_name='IPCC - test')
Map
```
With that code I don't get a map and also no errors, but just the message "Error displaying widget"

### I also tried
Using folium:
```
import geemap.foliumap as geemap
rasterFile = 'path/to/IPCC_image.tiff'
Map = geemap.Map()
# Add raster as layer in a map
Map.add_raster(rasterFile, colormap = 'terrain', layer_name='IPCC - test')
Map
```
With that code I get an error message:
`
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 10
7 rasterFile = 'path/to/IPCC_image.tiff'
9 # Add raster as layer in a map
---> 10 Map.add_raster(rasterFile, colormap = 'terrain', layer_name='IPCC - test')
12 Map
File ~/.local/lib/python3.10/site-packages/geemap/foliumap.py:603, in Map.add_raster(self, source, band, palette, vmin, vmax, nodata, attribution, layer_name, **kwargs)
571 """Add a local raster dataset to the map.
572 If you are using this function in JupyterHub on a remote server (e.g., Binder, Microsoft Planetary Computer) and
573 if the raster does not render properly, try installing jupyter-server-proxy using `pip install jupyter-server-proxy`,
(...)
587 layer_name (str, optional): The layer name to use. Defaults to 'Local COG'.
588 """
590 tile_layer, tile_client = get_local_tile_layer(
591 source,
592 band=band,
(...)
601 **kwargs,
602 )
--> 603 self.add_layer(tile_layer)
605 bounds = tile_client.bounds() # [ymin, ymax, xmin, xmax]
606 bounds = (
607 bounds[2],
608 bounds[0],
609 bounds[3],
610 bounds[1],
611 ) # [minx, miny, maxx, maxy]
File ~/.local/lib/python3.10/site-packages/geemap/foliumap.py:236, in Map.add_layer(self, ee_object, vis_params, name, shown, opacity, **kwargs)
217 def add_layer(
218 self,
219 ee_object,
(...)
224 **kwargs,
225 ):
226 """Adds a given EE object to the map as a layer.
227
228 Args:
(...)
233 opacity (float, optional): The layer's opacity represented as a number between 0 and 1. Defaults to 1.
234 """
--> 236 layer = EEFoliumTileLayer(ee_object, vis_params, name, shown, opacity, **kwargs)
237 layer.add_to(self)
238 arc_add_layer(layer.url_format, name, shown, opacity)
File ~/.local/lib/python3.10/site-packages/geemap/ee_tile_layers.py:97, in EEFoliumTileLayer.__init__(self, ee_object, vis_params, name, shown, opacity, **kwargs)
79 def __init__(
80 self,
81 ee_object,
(...)
86 **kwargs,
87 ):
88 """Initialize the folium tile layer.
89
90 Args:
(...)
95 opacity (float, optional): The layer's opacity represented as a number between 0 and 1. Defaults to 1.
96 """
---> 97 self.url_format = _get_tile_url_format(
98 ee_object, _validate_vis_params(vis_params)
99 )
100 super().__init__(
101 tiles=self.url_format,
102 attr="Google Earth Engine",
(...)
109 **kwargs,
110 )
File ~/.local/lib/python3.10/site-packages/geemap/ee_tile_layers.py:17, in _get_tile_url_format(ee_object, vis_params)
16 def _get_tile_url_format(ee_object, vis_params):
---> 17 image = _ee_object_to_image(ee_object, vis_params)
18 map_id_dict = ee.Image(image).getMapId(vis_params)
19 return map_id_dict["tile_fetcher"].url_format
File ~/.local/lib/python3.10/site-packages/geemap/ee_tile_layers.py:57, in _ee_object_to_image(ee_object, vis_params)
55 elif isinstance(ee_object, ee.ImageCollection):
56 return ee_object.mosaic()
---> 57 raise AttributeError(
58 f"\n\nCannot add an object of type {ee_object.__class__.__name__} to the map."
59 )
AttributeError:
Cannot add an object of type FoliumTileLayer to the map.
` | closed | 2023-09-20T09:33:39Z | 2023-09-20T21:37:47Z | https://github.com/gee-community/geemap/issues/1718 | [
"bug"
] | rodrigo-j-goncalves | 2 |
SciTools/cartopy | matplotlib | 1,524 | Segmentation fault when import cartopy.crs after import matplotlib.pyplot | ### Description
I get a segmentation fault when I import `cartopy.crs` after I import `matplotlib.pyplot`.
#### Code to reproduce
Cartopy and Matplotlib are installed from a simple conda environment
```yml
name: testCartopy
channels:
- conda-forge
- defaults
dependencies:
- python 3.7
- matplotlib
- cartopy
```
The versions are Matplotlib 3.2.1 and Cartopy 0.17.0 (though, I have run into this issue with other matplotlib versions as well).
Importing `matplotlib.pyplot` before `cartopy.crs` results in a **segmentation fault**
```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
```
Importing cartopy first does not.
```python
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
```
Some additional notes:
Usually I can work around this, but I need to import a function that also imports cartopy.crs, and that causes the segmentation fault in my script. Also, this hasn't been a problem when running similar expressions in a Jupyter notebook; it only happens when I run the script from the command line.
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Linux
### Cartopy version
0.17.0 (installed via conda-forge)
### conda list
```
# packages in environment at /p/home/blaylock/anaconda3/envs/testCartopy:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_llvm conda-forge
asn1crypto 1.3.0 py37_0 conda-forge
bzip2 1.0.8 h516909a_2 conda-forge
ca-certificates 2020.4.5.1 hecc5488_0 conda-forge
cartopy 0.17.0 py37h6078e7d_1013 conda-forge
certifi 2020.4.5.1 py37hc8dfbb8_0 conda-forge
cffi 1.14.0 py37hd463f26_0 conda-forge
chardet 3.0.4 py37hc8dfbb8_1006 conda-forge
cryptography 2.5 py37hb7f436b_1 conda-forge
cycler 0.10.0 py_2 conda-forge
dbus 1.13.6 he372182_0 conda-forge
expat 2.2.9 he1b5a44_2 conda-forge
fontconfig 2.13.1 h86ecdb6_1001 conda-forge
freetype 2.10.1 he06d7ca_0 conda-forge
geos 3.8.1 he1b5a44_0 conda-forge
gettext 0.19.8.1 hc5be6a0_1002 conda-forge
glib 2.58.3 py37he00f558_1004 conda-forge
gst-plugins-base 1.14.5 h0935bb2_2 conda-forge
gstreamer 1.14.5 h36ae1b5_2 conda-forge
icu 64.2 he1b5a44_1 conda-forge
idna 2.9 py_1 conda-forge
jpeg 9c h14c3975_1001 conda-forge
kiwisolver 1.2.0 py37h99015e2_0 conda-forge
libblas 3.8.0 16_openblas conda-forge
libcblas 3.8.0 16_openblas conda-forge
libclang 9.0.1 default_hde54327_0 conda-forge
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 he1b5a44_1007 conda-forge
libgcc-ng 9.2.0 h24d8f2e_2 conda-forge
libgfortran-ng 7.3.0 hdf63c60_5 conda-forge
libiconv 1.15 h516909a_1006 conda-forge
liblapack 3.8.0 16_openblas conda-forge
libllvm9 9.0.1 hc9558a2_0 conda-forge
libopenblas 0.3.9 h5ec1e0e_0 conda-forge
libpng 1.6.37 hed695b0_1 conda-forge
libstdcxx-ng 9.2.0 hdf63c60_2 conda-forge
libtiff 4.1.0 hc7e4089_6 conda-forge
libuuid 2.32.1 h14c3975_1000 conda-forge
libwebp-base 1.1.0 h516909a_3 conda-forge
libxcb 1.13 h14c3975_1002 conda-forge
libxkbcommon 0.10.0 he1b5a44_0 conda-forge
libxml2 2.9.10 hee79883_0 conda-forge
llvm-openmp 10.0.0 hc9558a2_0 conda-forge
lz4-c 1.9.2 he1b5a44_0 conda-forge
matplotlib 3.2.1 0 conda-forge
matplotlib-base 3.2.1 py37h30547a4_0 conda-forge
ncurses 6.1 hf484d3e_1002 conda-forge
nspr 4.25 he1b5a44_0 conda-forge
nss 3.47 he751ad9_0 conda-forge
numpy 1.18.1 py37h8960a57_1 conda-forge
olefile 0.46 py_0 conda-forge
openssl 1.0.2u h516909a_0 conda-forge
owslib 0.19.2 py_1 conda-forge
pcre 8.44 he1b5a44_0 conda-forge
pillow 7.1.1 py37h718be6c_0 conda-forge
pip 20.0.2 py_2 conda-forge
proj 6.3.1 hc80f0dc_1 conda-forge
pthread-stubs 0.4 h14c3975_1001 conda-forge
pycparser 2.20 py_0 conda-forge
pyepsg 0.4.0 py_0 conda-forge
pykdtree 1.3.1 py37h03ebfcd_1003 conda-forge
pyopenssl 19.0.0 py37_0 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pyproj 2.6.0 py37heba2c01_0 conda-forge
pyqt 5.12.3 py37hcca6a23_1 conda-forge
pyqt5-sip 4.19.18 pypi_0 pypi
pyqtwebengine 5.12.1 pypi_0 pypi
pyshp 2.1.0 py_0 conda-forge
pysocks 1.7.1 py37hc8dfbb8_1 conda-forge
python 3.7.0 hd21baee_1006 conda-forge
python-dateutil 2.8.1 py_0 conda-forge
python_abi 3.7 1_cp37m conda-forge
pytz 2019.3 py_0 conda-forge
pyyaml 5.3.1 py37h8f50634_0 conda-forge
qt 5.12.5 hd8c4c69_1 conda-forge
readline 7.0 hf8c457e_1001 conda-forge
requests 2.23.0 pyh8c360ce_2 conda-forge
scipy 1.4.1 py37ha3d9a3c_3 conda-forge
setuptools 46.1.3 py37hc8dfbb8_0 conda-forge
shapely 1.7.0 py37hc88ce51_3 conda-forge
six 1.14.0 py_1 conda-forge
sqlite 3.31.1 h7b6447c_0
tk 8.6.10 hed695b0_0 conda-forge
tornado 6.0.4 py37h8f50634_1 conda-forge
urllib3 1.25.8 py37hc8dfbb8_1 conda-forge
wheel 0.34.2 py_1 conda-forge
xorg-libxau 1.0.9 h14c3975_0 conda-forge
xorg-libxdmcp 1.1.3 h516909a_0 conda-forge
xz 5.2.5 h516909a_0 conda-forge
yaml 0.2.3 h516909a_0 conda-forge
zlib 1.2.11 h516909a_1006 conda-forge
zstd 1.4.4 h6597ccf_3 conda-forge
```
</details>
| closed | 2020-04-14T20:46:02Z | 2020-04-27T04:56:07Z | https://github.com/SciTools/cartopy/issues/1524 | [] | blaylockbk | 3 |
pydata/xarray | numpy | 9,186 | cupy_xarray import broken | ### What happened?
```
...Lib\site-packages\cupy_xarray\accessors.py:8
      1 import cupy as cp
      2 from xarray import (
      3     DataArray,
      4     Dataset,
      5     register_dataarray_accessor,
      6     register_dataset_accessor,
      7 )
----> 8 from xarray.core.pycompat import DuckArrayModule
     10 dsk = DuckArrayModule("dask")
     11 dask_array_type = dsk.type
ModuleNotFoundError: No module named 'xarray.core.pycompat'
```
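Until cupy-xarray tracks the module move, a version-tolerant import helper can bridge relocations like this. A minimal sketch — the second xarray path in the comment is a guess at the new location, not verified; the live demo uses stdlib modules so it runs anywhere:

```python
from importlib import import_module


def import_first(*candidates):
    """Return the first importable module from a list of dotted paths.

    Useful when a symbol moves between releases (here: xarray shuffled
    pycompat out of `xarray.core`).
    """
    errors = []
    for path in candidates:
        try:
            return import_module(path)
        except ModuleNotFoundError as exc:
            errors.append(str(exc))
    raise ModuleNotFoundError("; ".join(errors))


# In cupy-xarray this might look like (second path is a guess):
#   pycompat = import_first("xarray.core.pycompat", "xarray.namedarray.pycompat")
# Demonstrated with stdlib names so the sketch is runnable:
mod = import_first("no.such.module", "json")
print(mod.__name__)  # -> json
```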
### What did you expect to happen?
The import of the installed lib to succeed.
### Minimal Complete Verifiable Example
Straight from your docs:

```python
# Import NumPy and CuPy
import cupy as cp
import numpy as np
import xarray as xr
import cupy_xarray  # Adds .cupy to Xarray objects
```

For versions:

```
cuda-version 11.8 h70ddcb2_3 conda-forge
cudatoolkit 11.8.0 h09e9e62_13 conda-forge
cupy 13.2.0 py311h0508009_0 conda-forge
cupy-core 13.2.0 py311ha6d0cfe_0 conda-forge
cupy-xarray 0.1.3 pyhd8ed1ab_0 conda-forge
dask 2024.4.0 pyhd8ed1ab_0 conda-forge
dask-core 2024.4.0 pyhd8ed1ab_0 conda-forge
dask-expr 1.0.9 pyhd8ed1ab_0 conda-forge
xarray 2024.3.0 pyhd8ed1ab_0 conda-forge
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:40:50) [MSC v.1937 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 151 Stepping 2, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('English_United States', '1252')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.3.0
pandas: 2.2.1
numpy: 1.26.4
scipy: 1.12.0
netCDF4: 1.6.5
pydap: None
h5netcdf: None
h5py: 3.10.0
Nio: None
zarr: 2.17.1
cftime: 1.6.3
nc_time_axis: None
iris: None
bottleneck: None
dask: 2024.4.0
distributed: 2024.4.0
matplotlib: 3.8.3
cartopy: 0.22.0
seaborn: None
numbagg: None
fsspec: 2024.3.1
cupy: 13.2.0
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 69.2.0
pip: 24.0
conda: 24.5.0
pytest: None
mypy: None
IPython: 8.22.2
sphinx: None
</details>
| closed | 2024-06-27T22:14:56Z | 2024-06-28T15:18:34Z | https://github.com/pydata/xarray/issues/9186 | [
"bug",
"needs triage"
] | openSourcerer9000 | 4 |
BeanieODM/beanie | pydantic | 933 | [BUG] pydantic computed properties omitted during `insert_many` operation on Document | **Describe the bug**
Pydantic v2 has a bug where `__iter__` does not include computed properties: https://github.com/pydantic/pydantic/issues/8564
This causes the document to omit the computed properties during insert, as `Encoder` uses `__iter__` to get all properties.
**To Reproduce**
```python
import pydantic
from beanie import Document


class TestModel(Document):
    normal: int

    @pydantic.computed_field
    @property
    def computed(self) -> int:
        return 1


instance = TestModel(normal=42)
assert {field: value for field, value in instance} == instance.model_dump()  # fails

# or
TestModel.insert_many([instance])  # this document in Mongo will omit the `computed` property
```
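The mismatch can be reproduced without pydantic or beanie at all. A plain-Python stand-in (class and method names are mine, mimicking pydantic v2's behavior) that also shows the obvious interim workaround of having the encoder consume `model_dump()` instead of `__iter__`:

```python
class FakeDoc:
    """Stand-in for a pydantic v2 model with one stored and one computed field."""

    def __init__(self, normal):
        self.normal = normal

    @property
    def computed(self):
        return 1

    def __iter__(self):
        # pydantic's __iter__ yields only *stored* fields -> computed is lost
        yield "normal", self.normal

    def model_dump(self):
        # model_dump includes computed fields
        return {"normal": self.normal, "computed": self.computed}


doc = FakeDoc(42)
assert dict(doc) == {"normal": 42}                        # what the encoder sees today
assert doc.model_dump() == {"normal": 42, "computed": 1}  # what it should see
```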
**Expected behavior**
Expect all properties to be included during the insert operation.
**Additional context**
As this issue is still open on `pydantic`, I'm not sure whether we need to wait for an upstream fix. | closed | 2024-05-14T14:54:22Z | 2024-06-28T05:52:34Z | https://github.com/BeanieODM/beanie/issues/933 | [
"Stale"
] | aksswami | 3 |
gradio-app/gradio | machine-learning | 10,260 | [Dynamic Components] - Question - Triggering dynamics events after render | ### Describe the bug
Hello everyone. This is a question, not a bug, but I want to know how to do this. I have an event, `change_preview_image`, inside the area that is rendered dynamically. It is triggered when I check a checkbox on a card that was added dynamically, so I store its value in a state variable so that it can be retrieved when a new card is added. When I check this "Preview image" checkbox, it makes the column next to the card visible, for viewing the image; that column is also dynamic. The problem is that once I check it and then try to add a new card, the column that was already visible is not re-rendered. So I would like to trigger the `change_preview_image` event after rendering; the catch is that if there are several cards, it needs to be called for each card to re-display its image. I tried storing the column's state the same way I did for the checkbox component, but the column has no `key` attribute, so the same column instance is not returned on re-render.
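Stripped of Gradio, the pattern I'm effectively after is a registry of per-card state that survives re-renders (all names here are mine, plain Python only):

```python
class CardState:
    """Keeps per-card UI state across re-renders, keyed by card name."""

    def __init__(self):
        self._state = {}

    def set(self, card, **values):
        self._state.setdefault(card, {}).update(values)

    def get(self, card, key, default=None):
        return self._state.get(card, {}).get(key, default)

    def cards_with(self, key, value):
        # e.g. every card whose "preview" checkbox is ticked -> re-show its column
        return [c for c, s in self._state.items() if s.get(key) == value]


state = CardState()
state.set("canny", preview=True, scale=0.5)
state.set("pose", preview=False)
print(state.cards_with("preview", True))  # ['canny']
```

After each render pass, iterating `cards_with("preview", True)` would tell me which preview columns need to be made visible again.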
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import secrets
import string
import time
import gradio as gr
selected_controlnets=[]
controlnet_scales=[]
controlnet_preview_chk = []
controlnet_preview_col = []
CONTROLNETS = ["canny", "pose"]
def clean_status(current_status):
if current_status is not None and current_status != '':
time.sleep(5)
return None
def click_add_controlnet(controlnet):
if controlnet not in selected_controlnets:
selected_controlnets.append(controlnet)
return selected_controlnets, None
else:
return selected_controlnets, "This model has already been added!"
def submit_button(progress=gr.Progress(track_tqdm=True)):
print('Testing')
for i in controlnet_scales:
print(f"key: {i.key[8:]}, value:{i.value}")
return "See the result in the terminal!"
def random_code(size=6):
return ''.join(secrets.choice(string.ascii_letters + string.digits) for _ in range(size))
def find_dynamic_elem(object, name):
if object is not None:
if isinstance(object, list):
for i in range(len(object)):
if name in object[i].elem_id:
return object[i].key.split('-')[1] if "-" in object[i].key else object[i].key
else:
return object.key.split('-')[1] if "-" in object.key else object.key
return random_code()
def get_card_element_value(default_value, element_name, collection_values, param_name, param_type='str'):
value = None
if isinstance(collection_values, list): #gradio element
for l in range(len(collection_values)):
if element_name in collection_values[l].elem_id:
if hasattr(collection_values[l],"value"):
return collection_values[l].value
if hasattr(collection_values[l],"visible"):
return collection_values[l].visible
if element_name not in collection_values:
value = default_value
else:
if param_name not in collection_values[element_name]:
value = default_value
else:
value = collection_values[element_name][param_name]
if value is not None:
if param_type == 'str':
return value
elif param_type == 'float':
return float(value)
elif param_type == 'int':
return int(value)
with gr.Blocks(analytics_enabled=False) as app:
with gr.Row():
with gr.Column(scale=0.4, min_width=100):
controlnet_models = gr.Dropdown(label="Control Type", choices=CONTROLNETS, value=CONTROLNETS[0])
with gr.Column(scale=0, min_width=50):
refresh_controlnet = gr.Button(value="Refresh", elem_id="controlnet_refresh_button")
with gr.Column(scale=0, min_width=50):
add_controlnet = gr.Button(value="+", elem_id="add_controlnet_button")
with gr.Column(scale=0, min_width=50):
submit_test = gr.Button(value="Submit", elem_id="submit_button" )
with gr.Row():
status = gr.Textbox(label="Status", value="", show_label=False)
with gr.Row():
with gr.Column():
selected_controlnet_state= gr.State(value=selected_controlnets)
@gr.render(inputs=selected_controlnet_state)
def render_loras(selected):
global controlnet_scales, controlnet_preview_chk, controlnet_preview_col
with gr.Row(elem_id="control_row"):
for i in range(len(selected)):
control_name = selected[i]
with gr.Column(variant="panel", min_width=300):
with gr.Row():
with gr.Column(min_width=300):
remove_controlnet = gr.Button(value="X", key=f"remove-control-{control_name}", elem_classes="remove-button vertical-center")
controlnet = gr.Textbox(label="File name", value=f"{control_name}", key=f"label-{control_name}", show_label=True)
control_scale = gr.Slider(
elem_id=f"scale-{control_name}",
interactive=True,
minimum=0.1,
maximum=2.0,
step=0.01,
value=0.1,
label="Control scale",
key=find_dynamic_elem(controlnet_scales, f"scale-{control_name}"))
preview_image_chk = gr.Checkbox(label="Preview image",
elem_id=f"preview-image-{control_name}",
key=find_dynamic_elem(controlnet_preview_chk, f"preview-image-{control_name}"),
value=get_card_element_value(False, control_name, controlnet_preview_chk, "preview-image"))
with gr.Column(scale=0, min_width=300,
elem_id=f"preview-image-col-{control_name}",
visible=get_card_element_value(False, control_name, controlnet_preview_col, "preview-image-col")
) as col_preview_image:
with gr.Row():
preview_image = gr.Image(label="Preview control image", visible=True, streaming=False)
def click_remove_controlnet(value, controlnet=control_name):
for l in range(len(controlnet_scales)):
if controlnet in controlnet_scales[l].elem_id:
controlnet_scales.pop(l)
break
for l in range(len(controlnet_preview_chk)):
if controlnet in controlnet_preview_chk[l].elem_id:
controlnet_preview_chk.pop(l)
break
for l in range(len(controlnet_preview_col)):
if controlnet in controlnet_preview_col[l].elem_id:
controlnet_preview_col.pop(l)
break
selected_controlnets.pop(selected_controlnets.index(value))
return selected_controlnets, f"Control {value} removed!"
remove_controlnet.click(fn=click_remove_controlnet, inputs=[controlnet], outputs=[selected_controlnet_state, status]) \
.then(fn=clean_status, inputs=status)
def change_control_scale(value, controlnet=control_name):
for l in range(len(controlnet_scales)):
if controlnet in controlnet_scales[l].elem_id:
controlnet_scales[l].value=value
control_scale.release(fn=change_control_scale, inputs=control_scale)
def change_preview_image(value, controlnet=control_name):
for l in range(len(controlnet_preview_chk)):
if controlnet in controlnet_preview_chk[l].elem_id:
controlnet_preview_chk[l].value=value
return gr.update(visible=value)
preview_image_chk.change(fn=change_preview_image, inputs=preview_image_chk, outputs=[col_preview_image])
hasControlnet = False
for l in range(len(controlnet_scales)):
if control_name in controlnet_scales[l].elem_id:
hasControlnet = True
break
if not hasControlnet:
controlnet_scales.append(control_scale)
controlnet_preview_chk.append(preview_image_chk)
controlnet_preview_col.append(col_preview_image)
add_controlnet.click(fn=click_add_controlnet, inputs=controlnet_models, outputs=[selected_controlnet_state, status])
submit_test.click(
fn=submit_button,
outputs=status)
app.launch(inbrowser=True)
```
### Screenshot
First step - Adding one card and checking Preview image.

Second step - After adding second card image preview disappears

### Logs
```shell
N/A
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.9.1
gradio_client version: 1.5.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.5.2 is not installed.
httpx: 0.27.0
huggingface-hub: 0.25.2
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 1.26.3
orjson: 3.10.6
packaging: 24.1
pandas: 2.2.2
pillow: 11.0.0
pydantic: 2.8.2
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.1
ruff: 0.5.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.12.3
typing-extensions: 4.12.2
urllib3: 2.2.2
uvicorn: 0.30.5
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.2.0
httpx: 0.27.0
huggingface-hub: 0.25.2
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-12-28T01:30:12Z | 2024-12-28T01:38:07Z | https://github.com/gradio-app/gradio/issues/10260 | [
"bug"
] | elismasilva | 1 |
mirumee/ariadne-codegen | graphql | 105 | Support disabling SSL cert verification for given remote GraphQL schemas | [Discussed here](https://github.com/mirumee/ariadne/discussions/1061)
It should be possible to disable SSL verification in pyproject.toml for a given schema in cases where the SSL cert is self-signed (e.g. running HTTPS over an internal network). | closed | 2023-03-27T12:04:08Z | 2023-03-30T08:19:04Z | https://github.com/mirumee/ariadne-codegen/issues/105 | [
"roadmap"
] | rafalp | 0 |
JaidedAI/EasyOCR | machine-learning | 901 | EasyOCR Links are not reachable. | I am having a problem reaching the direct links for `https://jaided.ai/` | open | 2022-12-06T13:24:29Z | 2022-12-19T01:49:20Z | https://github.com/JaidedAI/EasyOCR/issues/901 | [] | engahmed1190 | 2 |
open-mmlab/mmdetection | pytorch | 12,332 | Training MMDetection model on AWS or Google cloud | Can anyone share details about how to train a model using custom data on AWS or Google Cloud?
Any estimate of the cost of training a Mask R-CNN model on MS-COCO on AWS? | open | 2025-03-22T18:33:47Z | 2025-03-22T18:34:03Z | https://github.com/open-mmlab/mmdetection/issues/12332 | [] | njan-creative | 0 |
huggingface/transformers | machine-learning | 36,361 | warning bug in Qwen2DecoderLayer in transformers ==4.49 | ### System Info
transformers ==4.49
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
class Qwen2DecoderLayer(nn.Module):
    def __init__(self, config: Qwen2Config, layer_idx: int):
        super().__init__()
        self.hidden_size = config.hidden_size
        self.self_attn = Qwen2Attention(config=config, layer_idx=layer_idx)
        self.mlp = Qwen2MLP(config)
        self.input_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.post_attention_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        if config.sliding_window and config._attn_implementation != "flash_attention_2":
            logger.warning_once(
                f"Sliding Window Attention is enabled but not implemented for `{config._attn_implementation}`; "
                "unexpected results may be encountered."
            )
```
`config.sliding_window` is a number, so the condition is always truthy and the warning fires 100% of the time.
Should the check be `config.use_sliding_window` instead?
### Expected behavior
```python
class Qwen2DecoderLayer(nn.Module):
    def __init__(self, config: Qwen2Config, layer_idx: int):
        super().__init__()
        self.hidden_size = config.hidden_size
        self.self_attn = Qwen2Attention(config=config, layer_idx=layer_idx)
        self.mlp = Qwen2MLP(config)
        self.input_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
        self.post_attention_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        if config.sliding_window and config._attn_implementation != "flash_attention_2":
            logger.warning_once(
                f"Sliding Window Attention is enabled but not implemented for `{config._attn_implementation}`; "
                "unexpected results may be encountered."
            )
```
`config.sliding_window` is a number, so the warning should only fire when sliding-window attention is actually enabled.
The check should presumably use `config.use_sliding_window` instead.
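To make the truthiness problem concrete, here is a runnable reduction (the config class is a stand-in, not the real `Qwen2Config`):

```python
from dataclasses import dataclass


@dataclass
class FakeConfig:
    sliding_window: int = 4096         # a number -> always truthy once set
    use_sliding_window: bool = False   # the flag that actually gates the feature
    _attn_implementation: str = "sdpa"


def warns(config):
    # current check: fires whenever sliding_window is set, even if unused
    return bool(config.sliding_window) and config._attn_implementation != "flash_attention_2"


def warns_fixed(config):
    # proposed check: only fire when the feature is actually enabled
    return config.use_sliding_window and config._attn_implementation != "flash_attention_2"


cfg = FakeConfig()
print(warns(cfg), warns_fixed(cfg))  # True False
```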
| open | 2025-02-24T02:14:20Z | 2025-02-24T19:02:06Z | https://github.com/huggingface/transformers/issues/36361 | [
"bug"
] | Kyrie666 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,301 | Images are getting predicted with inverted color | I am working on medical imaging data and trying to convert one class of image to another; the images are spatially correlated, and the main differences lie in color.
But the colors are being predicted inverted: the white parts of the images are getting colored like the ROIs (in my case the ROIs are cells), and the ROI parts are coming out white.
Has anyone else had similar issues?
Please help | open | 2021-07-22T16:42:20Z | 2024-07-07T09:00:49Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1301 | [] | souryasengupta | 3 |
tensorflow/tensor2tensor | deep-learning | 997 | GPU usage with the Transformer model | ### Description
I've created a custom translation Problem following the example of
https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/translate_enmk.py
Everything from data generation on my own data to interactive decoding went fine and I'm very happy with the results! However, while reviewing TensorBoard I noticed that I've got quite low global_step/sec ratio (around 0.26 global_step/sec). My setup is a double GTX 1080 Ti with CUDA and cuDNN. I'm using around 3 million training examples.
```nvidia-smi``` reports low GPU usage in both memory usage and GPU utilization. In fact, while continuing a training session my nvidia-smi reports 0% GPU usage.
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.48 Driver Version: 390.48 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:17:00.0 Off | N/A |
| 12% 45C P8 19W / 260W | 2MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:65:00.0 On | N/A |
| 19% 47C P8 14W / 260W | 254MiB / 11175MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 1 1267 G /usr/lib/xorg/Xorg 16MiB |
| 1 1322 G /usr/bin/gnome-shell 49MiB |
| 1 2063 G /usr/lib/xorg/Xorg 86MiB |
| 1 2208 G /usr/bin/gnome-shell 99MiB |
+-----------------------------------------------------------------------------+
```
The only hook I've got is that while supplying --worker_gpu=2 to the trainer I do see
```
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=2
INFO:tensorflow:sync=False
INFO:tensorflow:datashard_devices: ['gpu:0', 'gpu:1']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0', 'gpu:1']
```
What is going on here?
P.S. In #390 someone suggested looking at the PCIe mode and the GT/s value. I've run
```
sudo lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:"
```
and saw the following GPU settings, suggesting they do not use the x16 mode? Sorry If this is wrong, I'm not an expert in GPU configs:
```
65:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1) (prog-if 00 [VGA controller])
LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
65:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
```
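For quick checking, the `LnkSta` lines can be parsed mechanically: 2.5 GT/s is PCIe gen1 signaling, so a gen3 x16 card reporting it under load is a red flag (though many GPUs legitimately downclock the link when idle). A small stdlib sketch:

```python
import re

LNKSTA = re.compile(r"Speed (?P<speed>[\d.]+)GT/s, Width x(?P<width>\d+)")


def parse_lnksta(line):
    """Return (speed_gts, width) from an lspci LnkSta line, or None if absent."""
    m = LNKSTA.search(line)
    return (float(m.group("speed")), int(m.group("width"))) if m else None


sample = "LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+"
speed, width = parse_lnksta(sample)
print(speed, width)  # 2.5 16
# PCIe 3.0 signals at 8 GT/s; 2.5 GT/s while training would suggest a gen1 link
```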
Excuse me for a lot of info and thank you very much for the amazing tensor2tensor lib!
### Training command
```
t2t-trainer --data_dir=$DATA_DIR --t2t_usr_dir=$CUSTOM_DIR --problem=$CUSTOM_PROBLEM --model=transformer --hparams_set=transformer_base_single_gpu --output_dir=$TRAIN_DIR --train_steps=20000 --worker_gpu=2
```
### Environment information
```
OS: Ubuntu 17.10
$ pip freeze | grep tensor
tensor2tensor==1.7.0
tensorboard==1.10.0
tensorflow==1.10.0
tensorflow-gpu==1.5.0
tensorflow-tensorboard==1.5.1
$ python -V
Python 3.6.4 :: Anaconda, Inc.
```
| closed | 2018-08-15T09:51:39Z | 2018-08-17T14:33:20Z | https://github.com/tensorflow/tensor2tensor/issues/997 | [] | mabergerx | 3 |
samuelcolvin/watchfiles | asyncio | 40 | Running in docker-compose results in no /dev/tty | If I try and run the `watchgod` CLI with my app in a docker-compose environment, it will fail due to the lack of `/dev/tty` being present:
```
web_1 | [02:48:45] watching "/app/" and reloading "app.main" on changes...
web_1 | Process Process-1:
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
web_1 | self.run()
web_1 | File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run
web_1 | self._target(*self._args, **self._kwargs)
web_1 | File "/usr/local/lib/python3.7/site-packages/watchgod/cli.py", line 45, in run_function
web_1 | with set_tty(tty_path):
web_1 | File "/usr/local/lib/python3.7/contextlib.py", line 112, in __enter__
web_1 | return next(self.gen)
web_1 | File "/usr/local/lib/python3.7/site-packages/watchgod/cli.py", line 36, in set_tty
web_1 | with open(tty_path) as tty:
web_1 | OSError: [Errno 6] No such device or address: '/dev/tty'
```
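For anyone who does want the TTY under `docker-compose up`, compose can allocate one per service (a minimal sketch; the service name and command are made up):

```yaml
services:
  web:
    build: .
    command: watchgod app.main
    tty: true        # allocate a pseudo-TTY so /dev/tty exists
    stdin_open: true # optional: also keep stdin open (like `docker run -i`)
```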
By default `docker-compose up` doesn't configure a TTY (but `docker-compose run` does) and while this can be [configured](https://docs.docker.com/compose/compose-file/#domainname-hostname-ipc-mac_address-privileged-read_only-shm_size-stdin_open-tty-user-working_dir) does watchgod really need to require a TTY? | closed | 2019-08-29T03:07:14Z | 2019-08-29T11:51:40Z | https://github.com/samuelcolvin/watchfiles/issues/40 | [] | elatt | 1 |
plotly/dash-table | dash | 307 | Tooltip Support [Sponsored, Feb 1 Target] | Some requirements:
- Ability to display tooltips when hovering over cells.
- Tooltip data will be provided as an additional property in the table
- Each tooltip string will be matched with a cell via a row ID and a column ID so that the tooltips remain associated with the cells when filtering and sorting.
- Tooltip strings will be rendered as plain text or as Markdown. For security reasons, raw HTML will not be supported. For architectural reasons, users will not be able to pass arbitrary Dash components as tooltips. Markdown strings will enable bolded text, italics, line breaks, headers, and tables.
- Tooltips will be styleable via external CSS.
- The position of the tooltip will be automatically determined so that it:
- Doesn’t block the cell
- Isn’t hidden
- Users will not necessarily be able to mouse over the tooltip itself to select text or click on links. Doing so would prevent the tooltip from disappearing, a potentially confusing experience. | closed | 2018-12-18T23:54:00Z | 2019-02-01T18:58:46Z | https://github.com/plotly/dash-table/issues/307 | [
"dash-type-enhancement",
"dash-meta-sponsored"
] | chriddyp | 7 |
pytorch/pytorch | machine-learning | 149,042 | Github Actions API is unstable - High queue times for GHA | ## Current Status
Mitigated on github side - recovering queue of jobs
## Error looks like
Queued jobs, failing to pick up runners
## Incident timeline (all times pacific)
* 04:00 Started
* 06:56 Identified
* 07:12 GH API seems to start recovering
## User impact
* queued jobs
* increased TTS on CI
## Root cause
* https://www.githubstatus.com/incidents/nhcpszxtqxtm - Actions API is unstable
## Mitigation
Once this CI:SEV is resolved, please cancel and re-run your CI job if there are queued jobs > 30mins
## Prevention/followups
*How do we prevent issues like this in the future?*
| closed | 2025-03-12T14:00:30Z | 2025-03-12T15:08:56Z | https://github.com/pytorch/pytorch/issues/149042 | [
"ci: sev",
"ci: sev-infra.thirdparty"
] | jeanschmidt | 1 |