## pydantic/logfire — issue #125: logfire auth

### Question
I was trying Logfire on a FastAPI app hosted on render.com, and I got "You're not authenticated, run logfire auth" in the Render logs.

So, is there a way I can pass the auth credentials as an environment variable?

*State: closed · Created: 2024-05-05T23:02:06Z · Updated: 2024-05-06T10:37:32Z · URL: https://github.com/pydantic/logfire/issues/125 · Labels: Question · Author: kenmoh · Comments: 2*
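For non-interactive deployments, the usual pattern is to supply the write token via an environment variable instead of running `logfire auth`. A minimal sketch, assuming the variable name `LOGFIRE_TOKEN` (which recent Logfire releases read automatically — verify against your version's docs):

```python
import os

def get_logfire_token(env=None):
    """Read the Logfire write token from the environment.

    Set LOGFIRE_TOKEN in the hosting dashboard (e.g. Render's
    "Environment" settings) rather than baking it into the image.
    Returns None when the variable is unset.
    """
    env = os.environ if env is None else env
    return env.get("LOGFIRE_TOKEN")

# Hypothetical usage: logfire.configure() picks the token up from
# LOGFIRE_TOKEN on its own; passing it explicitly also works:
#   import logfire
#   logfire.configure(token=get_logfire_token())
```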
---

## deepinsight/insightface — issue #1996: How to correctly train the ArcFace model on WebFace?

Hello!
Thanks for your awesome work and detailed instructions.
I followed the configuration [wf42m_pfc02_r100.py](https://github.com/deepinsight/insightface/blob/master/recognition/arcface_torch/configs/wf42m_pfc02_r100.py) to train the model and made some changes.
```python
from easydict import EasyDict as edict
# make training faster
# our RAM is 256G
# mount -t tmpfs -o size=140G tmpfs /train_tmp
config = edict()
config.margin_list = (1.0, 0.0, 0.4)
config.network = "r100"
config.pretrain_model_path = None
config.resume = False
config.output = None
config.embedding_size = 512
config.sample_rate = 0.2
config.fp16 = True
config.momentum = 0.9
config.weight_decay = 5e-4
config.batch_size = 128
config.lr = 0.1
config.verbose = 10000
config.dali = False
config.rec = "/mnt/arcface_torch"
config.num_classes = 2059906
config.num_image = 42474557
config.num_epoch = 20
config.warmup_epoch = 0
config.val_targets = ['lfw', 'cfp_fp', "agedb_30"]
config.optimizer = "sgd"
config.save_step = 30000
config.start_step = 0
config.fc_path = None
config.opt_path = None
```
- `save_step`, `start_step`, `fc_path`, `opt_path`, `pretrain_model_path`: these parameters are added so that training can resume after the program is interrupted.
- All other parameters are unchanged.

The training script is modified only to support the new parameters added above.
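The resume logic in the script below relies on a per-rank, per-step naming convention: the partial-FC weights and optimizer state are sharded per GPU, so each rank must load its own files. A small sketch of that path construction (the helper function is hypothetical; the filename patterns are taken from the script):

```python
import os

def checkpoint_paths(output_dir, rank, step):
    """Build the per-rank checkpoint paths used when saving/resuming.

    Mirrors the naming convention in the training script: the partial-FC
    shard and optimizer state are written once per GPU rank, while the
    backbone is saved only by rank 0 under a single "model_step_*.pt".
    """
    return {
        "pfc": os.path.join(output_dir, f"softmax_fc_gpu_{rank}_step_{step}.pt"),
        "opt": os.path.join(output_dir, f"opt_gpu_{rank}_step_{step}.pt"),
    }

paths = checkpoint_paths("work_dirs/wf42m_pfc02_r100", rank=0, step=30000)
# on POSIX: paths["pfc"] is "work_dirs/wf42m_pfc02_r100/softmax_fc_gpu_0_step_30000.pt"
```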
```python
import argparse
import logging
import os
import numpy as np
import torch
from typing import List
from torch import distributed
from torch.utils.tensorboard import SummaryWriter
from backbones import get_model
from dataset import get_dataloader
from torch.utils.data import DataLoader
from lr_scheduler import PolyScheduler
from losses import CombinedMarginLoss
from partial_fc import PartialFC, PartialFCAdamW
from utils.utils_callbacks import CallBackLogging, CallBackVerification
from utils.utils_config import get_config
from utils.utils_logging import AverageMeter, init_logging
import time
from tqdm import tqdm
assert torch.__version__ >= "1.9.0", "In order to enjoy the features of the new torch, \
we have upgraded the torch to 1.9.0. torch before than 1.9.0 may not work in the future."
try:
world_size = int(os.environ["WORLD_SIZE"])
rank = int(os.environ["RANK"])
distributed.init_process_group("nccl")
except KeyError:
world_size = 1
rank = 0
distributed.init_process_group(
backend="nccl",
init_method="tcp://127.0.0.1:12584",
rank=rank,
world_size=world_size,
)
def main(args):
seed = 2333
seed = seed + rank
torch.manual_seed(seed)
np.random.seed(seed)
torch.cuda.set_device(args.local_rank)
cfg = get_config(args.config)
cfg.output = os.path.join(cfg.output, time.strftime("%Y-%m-%d-%H-%M"))
print(cfg.output)
os.makedirs(cfg.output, exist_ok=True)
init_logging(rank, cfg.output)
summary_writer = (
SummaryWriter(log_dir=os.path.join(cfg.output, "tensorboard"))
if rank == 0
else None
)
train_loader = get_dataloader(
cfg.rec, local_rank=args.local_rank, batch_size=cfg.batch_size, dali=cfg.dali)
backbone = get_model(
cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size
).cuda()
if cfg.pretrain_model_path is not None:
backbone.load_state_dict(torch.load(cfg.pretrain_model_path))
backbone = torch.nn.parallel.DistributedDataParallel(
module=backbone, broadcast_buffers=False, device_ids=[args.local_rank], bucket_cap_mb=16,
find_unused_parameters=True)
backbone.train()
# FIXME using gradient checkpoint if there are some unused parameters will cause error
backbone._set_static_graph()
margin_loss = CombinedMarginLoss(
64,
cfg.margin_list[0],
cfg.margin_list[1],
cfg.margin_list[2],
cfg.interclass_filtering_threshold
)
if cfg.optimizer == "sgd":
module_partial_fc = PartialFC(
margin_loss, cfg.embedding_size, cfg.num_classes,
cfg.sample_rate, cfg.fp16)
if cfg.fc_path is not None:
module_partial_fc.load_state_dict(torch.load(os.path.join(cfg.fc_path, 'softmax_fc_gpu_{}_step_{}.pt'.format(rank, cfg.start_step))))
module_partial_fc.train().cuda()
opt = torch.optim.SGD(
params=[{"params": backbone.parameters()}, {"params": module_partial_fc.parameters()}],
lr=cfg.lr, momentum=0.9, weight_decay=cfg.weight_decay)
if cfg.opt_path is not None:
opt.load_state_dict(torch.load(os.path.join(cfg.fc_path, 'opt_gpu_{}_step_{}.pt'.format(rank, cfg.start_step))))
elif cfg.optimizer == "adamw":
module_partial_fc = PartialFCAdamW(
margin_loss, cfg.embedding_size, cfg.num_classes,
cfg.sample_rate, cfg.fp16)
        if cfg.fc_path is not None:  # was cfg.opt_path: loading the FC shard must be gated on fc_path
            module_partial_fc.load_state_dict(torch.load(os.path.join(cfg.fc_path, 'softmax_fc_gpu_{}_step_{}.pt'.format(rank, cfg.start_step))))
module_partial_fc.train().cuda()
opt = torch.optim.AdamW(
params=[{"params": backbone.parameters()}, {"params": module_partial_fc.parameters()}],
lr=cfg.lr, weight_decay=cfg.weight_decay)
if cfg.opt_path is not None:
opt.load_state_dict(torch.load(os.path.join(cfg.fc_path, 'opt_gpu_{}_step_{}.pt'.format(rank, cfg.start_step))))
    else:
        raise ValueError(f"Unsupported optimizer: {cfg.optimizer}")
cfg.total_batch_size = cfg.batch_size * world_size
cfg.warmup_step = cfg.num_image // cfg.total_batch_size * cfg.warmup_epoch
cfg.total_step = cfg.num_image // cfg.total_batch_size * cfg.num_epoch
lr_scheduler = PolyScheduler(
optimizer=opt,
base_lr=cfg.lr,
max_steps=cfg.total_step,
warmup_steps=cfg.warmup_step
)
for key, value in cfg.items():
num_space = 25 - len(key)
logging.info(": " + key + " " * num_space + str(value))
callback_verification = CallBackVerification(
val_targets=cfg.val_targets, rec_prefix=cfg.rec, summary_writer=summary_writer
)
callback_logging = CallBackLogging(
frequent=cfg.frequent,
total_step=cfg.total_step,
batch_size=cfg.batch_size,
writer=summary_writer
)
loss_am = AverageMeter()
start_epoch = 0
global_step = 0
amp = torch.cuda.amp.grad_scaler.GradScaler(growth_interval=100)
for epoch in range(start_epoch, cfg.num_epoch):
if isinstance(train_loader, DataLoader):
train_loader.sampler.set_epoch(epoch)
if global_step + cfg.total_step // cfg.num_epoch < cfg.start_step:
global_step += cfg.total_step // cfg.num_epoch
logging.info("global step: {}".format(global_step))
continue
for _, (img, local_labels) in enumerate(train_loader):
global_step += 1
if global_step <= cfg.start_step:
logging.info("global step: {}".format(global_step))
continue
local_embeddings = backbone(img)
loss: torch.Tensor = module_partial_fc(local_embeddings, local_labels, opt)
if cfg.fp16:
amp.scale(loss).backward()
amp.unscale_(opt)
torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5)
amp.step(opt)
amp.update()
else:
loss.backward()
torch.nn.utils.clip_grad_norm_(backbone.parameters(), 5)
opt.step()
opt.zero_grad()
lr_scheduler.step()
with torch.no_grad():
loss_am.update(loss.item(), 1)
callback_logging(global_step, loss_am, epoch, cfg.fp16, lr_scheduler.get_last_lr()[0], amp)
if global_step % cfg.verbose == 0:
callback_verification(global_step, backbone)
if global_step % cfg.save_step == 0 and global_step > 200:
path_pfc = os.path.join(cfg.output, "softmax_fc_gpu_{}_step_{}.pt".format(rank, global_step))
torch.save(module_partial_fc.state_dict(), path_pfc)
path_opt = os.path.join(cfg.output, "opt_gpu_{}_step_{}.pt".format(rank, global_step))
torch.save(opt.state_dict(), path_opt)
if rank == 0:
path_module = os.path.join(cfg.output, "model_step_{}.pt".format(global_step))
torch.save(backbone.module.state_dict(), path_module)
path_pfc = os.path.join(cfg.output, "softmax_fc_gpu_{}.pt".format(rank))
torch.save(module_partial_fc.state_dict(), path_pfc)
path_opt = os.path.join(cfg.output, "opt_gpu_{}.pt".format(rank))
torch.save(opt.state_dict(), path_opt)
if rank == 0:
path_module = os.path.join(cfg.output, "model.pt")
torch.save(backbone.module.state_dict(), path_module)
if cfg.dali:
train_loader.reset()
if rank == 0:
path_module = os.path.join(cfg.output, "model.pt")
torch.save(backbone.module.state_dict(), path_module)
from torch2onnx import convert_onnx
convert_onnx(backbone.module.cpu().eval(), path_module, os.path.join(cfg.output, "model.onnx"))
distributed.destroy_process_group()
if __name__ == "__main__":
torch.backends.cudnn.benchmark = True
parser = argparse.ArgumentParser(description="Distributed Arcface Training in Pytorch")
parser.add_argument("config", type=str, help="py config file")
parser.add_argument("--local_rank", type=int, default=0, help="local_rank")
main(parser.parse_args())
```
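As a sanity check on the schedule, the step counts printed in the training log follow directly from the config values via the script's integer arithmetic (world size 2 is inferred from `total_batch_size 256` in the log):

```python
# Reproduce the step bookkeeping from the config values above.
num_image = 42474557        # WebFace42M image count (config.num_image)
batch_size = 128            # per-GPU batch size (config.batch_size)
world_size = 2              # inferred from total_batch_size 256 in the log
num_epoch = 20

total_batch_size = batch_size * world_size        # 256
steps_per_epoch = num_image // total_batch_size   # integer division, as in the script
total_step = steps_per_epoch * num_epoch          # matches "total_step 3318320" in the log

print(total_batch_size, steps_per_epoch, total_step)
```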
When I train from scratch, these extra parameters and code changes are not needed.
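For reference, the resume path in the script fast-forwards to `start_step` in two stages: whole epochs are skipped outright before entering the data loader, and leftover batches are skipped inside the loop. The bookkeeping can be sketched as a pure function (a hypothetical helper, not part of the script):

```python
def steps_to_skip(start_step, steps_per_epoch):
    """Split a resume offset into whole epochs that can be skipped
    outright and leftover batches to fast-forward inside the next
    epoch, mirroring the two `continue` branches in the training loop."""
    full_epochs = start_step // steps_per_epoch
    leftover_batches = start_step % steps_per_epoch
    return full_epochs, leftover_batches

# e.g. resuming at step 150000 with 165916 steps per epoch skips
# 0 full epochs and fast-forwards 150000 batches.
```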
I trained for a few epochs; part of the training log (from the first epoch) is shown below.
```bash
Training: 2022-05-05 03:01:29,603-rank_id: 0
Training: 2022-05-05 03:02:23,623-: margin_list [1.0, 0.0, 0.4]
Training: 2022-05-05 03:02:23,624-: network r100
Training: 2022-05-05 03:02:23,624-: resume False
Training: 2022-05-05 03:02:23,624-: output work_dirs/wf42m_pfc02_r100/2022-05-05-03-01
Training: 2022-05-05 03:02:23,624-: embedding_size 512
Training: 2022-05-05 03:02:23,624-: sample_rate 0.2
Training: 2022-05-05 03:02:23,624-: interclass_filtering_threshold0
Training: 2022-05-05 03:02:23,624-: fp16 True
Training: 2022-05-05 03:02:23,624-: batch_size 128
Training: 2022-05-05 03:02:23,624-: optimizer sgd
Training: 2022-05-05 03:02:23,624-: lr 0.1
Training: 2022-05-05 03:02:23,624-: momentum 0.9
Training: 2022-05-05 03:02:23,624-: weight_decay 0.0005
Training: 2022-05-05 03:02:23,624-: verbose 10000
Training: 2022-05-05 03:02:23,624-: frequent 10
Training: 2022-05-05 03:02:23,624-: dali False
Training: 2022-05-05 03:02:23,624-: pretrain_model_path None
Training: 2022-05-05 03:02:23,624-: rec /mnt/arcface_torch
Training: 2022-05-05 03:02:23,624-: num_classes 2059906
Training: 2022-05-05 03:02:23,624-: num_image 42474557
Training: 2022-05-05 03:02:23,624-: num_epoch 20
Training: 2022-05-05 03:02:23,624-: warmup_epoch 0
Training: 2022-05-05 03:02:23,624-: val_targets ['lfw', 'cfp_fp', 'agedb_30']
Training: 2022-05-05 03:02:23,624-: save_step 30000
Training: 2022-05-05 03:02:23,624-: start_step 0
Training: 2022-05-05 03:02:23,624-: fc_path None
Training: 2022-05-05 03:02:23,624-: opt_path None
Training: 2022-05-05 03:02:23,624-: total_batch_size 256
Training: 2022-05-05 03:02:23,624-: warmup_step 0
Training: 2022-05-05 03:02:23,624-: total_step 3318320
Training: 2022-05-05 03:04:00,229-Reducer buckets have been rebuilt in this iteration.
Training: 2022-05-05 03:04:06,521-Speed 731.07 samples/sec Loss 43.3935 LearningRate 0.1000 Epoch: 0 Global Step: 20 Fp16 Grad Scale: 16384 Required: 2244 hours
Training: 2022-05-05 03:04:10,028-Speed 730.09 samples/sec Loss 44.7358 LearningRate 0.1000 Epoch: 0 Global Step: 30 Fp16 Grad Scale: 16384 Required: 1625 hours
Training: 2022-05-05 03:04:13,543-Speed 728.30 samples/sec Loss 44.9578 LearningRate 0.1000 Epoch: 0 Global Step: 40 Fp16 Grad Scale: 16384 Required: 1307 hours
Training: 2022-05-05 03:04:17,060-Speed 728.08 samples/sec Loss 44.8275 LearningRate 0.1000 Epoch: 0 Global Step: 50 Fp16 Grad Scale: 16384 Required: 1115 hours
Training: 2022-05-05 03:04:20,582-Speed 726.96 samples/sec Loss 44.4942 LearningRate 0.1000 Epoch: 0 Global Step: 60 Fp16 Grad Scale: 16384 Required: 985 hours
Training: 2022-05-05 03:04:24,108-Speed 726.03 samples/sec Loss 44.6929 LearningRate 0.1000 Epoch: 0 Global Step: 70 Fp16 Grad Scale: 16384 Required: 892 hours
Training: 2022-05-05 03:04:27,798-Speed 693.88 samples/sec Loss 44.4609 LearningRate 0.1000 Epoch: 0 Global Step: 80 Fp16 Grad Scale: 16384 Required: 824 hours
Training: 2022-05-05 03:04:31,318-Speed 727.35 samples/sec Loss 44.4769 LearningRate 0.1000 Epoch: 0 Global Step: 90 Fp16 Grad Scale: 16384 Required: 769 hours
Training: 2022-05-05 03:04:35,442-Speed 620.98 samples/sec Loss 44.6908 LearningRate 0.1000 Epoch: 0 Global Step: 100 Fp16 Grad Scale: 16384 Required: 731 hours
Training: 2022-05-05 03:04:38,963-Speed 727.09 samples/sec Loss 44.4759 LearningRate 0.1000 Epoch: 0 Global Step: 110 Fp16 Grad Scale: 32768 Required: 694 hours
Training: 2022-05-05 03:04:42,480-Speed 727.90 samples/sec Loss 44.7240 LearningRate 0.1000 Epoch: 0 Global Step: 120 Fp16 Grad Scale: 32768 Required: 663 hours
Training: 2022-05-05 03:04:45,999-Speed 727.67 samples/sec Loss 44.6801 LearningRate 0.1000 Epoch: 0 Global Step: 130 Fp16 Grad Scale: 32768 Required: 638 hours
Training: 2022-05-05 03:04:49,516-Speed 727.92 samples/sec Loss 44.6303 LearningRate 0.1000 Epoch: 0 Global Step: 140 Fp16 Grad Scale: 32768 Required: 615 hours
Training: 2022-05-05 03:04:53,041-Speed 726.51 samples/sec Loss 44.6319 LearningRate 0.1000 Epoch: 0 Global Step: 150 Fp16 Grad Scale: 32768 Required: 596 hours
Training: 2022-05-05 03:04:56,561-Speed 727.37 samples/sec Loss 44.2837 LearningRate 0.1000 Epoch: 0 Global Step: 160 Fp16 Grad Scale: 32768 Required: 579 hours
Training: 2022-05-05 03:05:00,083-Speed 726.84 samples/sec Loss 44.3776 LearningRate 0.1000 Epoch: 0 Global Step: 170 Fp16 Grad Scale: 32768 Required: 564 hours
Training: 2022-05-05 03:05:03,788-Speed 691.03 samples/sec Loss 44.5802 LearningRate 0.1000 Epoch: 0 Global Step: 180 Fp16 Grad Scale: 32768 Required: 552 hours
Training: 2022-05-05 03:05:07,310-Speed 727.09 samples/sec Loss 44.3005 LearningRate 0.1000 Epoch: 0 Global Step: 190 Fp16 Grad Scale: 32768 Required: 540 hours
Training: 2022-05-05 03:05:10,833-Speed 726.65 samples/sec Loss 44.2806 LearningRate 0.1000 Epoch: 0 Global Step: 200 Fp16 Grad Scale: 32768 Required: 529 hours
Training: 2022-05-05 03:05:14,359-Speed 726.32 samples/sec Loss 44.2002 LearningRate 0.1000 Epoch: 0 Global Step: 210 Fp16 Grad Scale: 65536 Required: 520 hours
Training: 2022-05-05 03:05:17,883-Speed 726.46 samples/sec Loss 44.0486 LearningRate 0.1000 Epoch: 0 Global Step: 220 Fp16 Grad Scale: 65536 Required: 511 hours
Training: 2022-05-05 03:05:21,405-Speed 726.92 samples/sec Loss 44.2318 LearningRate 0.1000 Epoch: 0 Global Step: 230 Fp16 Grad Scale: 65536 Required: 503 hours
Training: 2022-05-05 03:05:24,935-Speed 725.47 samples/sec Loss 44.2789 LearningRate 0.1000 Epoch: 0 Global Step: 240 Fp16 Grad Scale: 65536 Required: 495 hours
Training: 2022-05-05 03:05:28,464-Speed 725.53 samples/sec Loss 43.9702 LearningRate 0.1000 Epoch: 0 Global Step: 250 Fp16 Grad Scale: 65536 Required: 489 hours
Training: 2022-05-05 03:05:31,986-Speed 726.93 samples/sec Loss 43.8529 LearningRate 0.1000 Epoch: 0 Global Step: 260 Fp16 Grad Scale: 65536 Required: 482 hours
Training: 2022-05-05 03:05:35,517-Speed 725.20 samples/sec Loss 43.7125 LearningRate 0.1000 Epoch: 0 Global Step: 270 Fp16 Grad Scale: 65536 Required: 477 hours
Training: 2022-05-05 03:05:39,044-Speed 725.77 samples/sec Loss 43.8522 LearningRate 0.1000 Epoch: 0 Global Step: 280 Fp16 Grad Scale: 65536 Required: 471 hours
Training: 2022-05-05 03:05:42,576-Speed 724.93 samples/sec Loss 43.8913 LearningRate 0.1000 Epoch: 0 Global Step: 290 Fp16 Grad Scale: 65536 Required: 466 hours
Training: 2022-05-05 03:05:46,107-Speed 725.26 samples/sec Loss 43.7753 LearningRate 0.1000 Epoch: 0 Global Step: 300 Fp16 Grad Scale: 65536 Required: 462 hours
Training: 2022-05-05 03:05:49,641-Speed 724.52 samples/sec Loss 43.9401 LearningRate 0.1000 Epoch: 0 Global Step: 310 Fp16 Grad Scale: 131072 Required: 457 hours
Training: 2022-05-05 03:05:53,162-Speed 727.03 samples/sec Loss 43.9404 LearningRate 0.1000 Epoch: 0 Global Step: 320 Fp16 Grad Scale: 131072 Required: 453 hours
Training: 2022-05-05 03:05:56,684-Speed 727.08 samples/sec Loss 43.9225 LearningRate 0.1000 Epoch: 0 Global Step: 330 Fp16 Grad Scale: 131072 Required: 449 hours
Training: 2022-05-05 03:06:00,216-Speed 724.79 samples/sec Loss 43.9300 LearningRate 0.1000 Epoch: 0 Global Step: 340 Fp16 Grad Scale: 131072 Required: 445 hours
Training: 2022-05-05 03:06:03,749-Speed 724.74 samples/sec Loss 43.8761 LearningRate 0.1000 Epoch: 0 Global Step: 350 Fp16 Grad Scale: 131072 Required: 442 hours
Training: 2022-05-05 03:06:07,280-Speed 725.12 samples/sec Loss 44.2065 LearningRate 0.1000 Epoch: 0 Global Step: 360 Fp16 Grad Scale: 131072 Required: 439 hours
Training: 2022-05-05 03:06:10,819-Speed 723.52 samples/sec Loss 43.7650 LearningRate 0.1000 Epoch: 0 Global Step: 370 Fp16 Grad Scale: 131072 Required: 436 hours
Training: 2022-05-05 03:06:14,353-Speed 724.53 samples/sec Loss 43.7390 LearningRate 0.1000 Epoch: 0 Global Step: 380 Fp16 Grad Scale: 131072 Required: 433 hours
Training: 2022-05-05 03:06:17,885-Speed 725.07 samples/sec Loss 43.7029 LearningRate 0.1000 Epoch: 0 Global Step: 390 Fp16 Grad Scale: 131072 Required: 430 hours
Training: 2022-05-05 03:06:21,423-Speed 723.52 samples/sec Loss 43.7675 LearningRate 0.1000 Epoch: 0 Global Step: 400 Fp16 Grad Scale: 131072 Required: 428 hours
Training: 2022-05-05 03:06:24,939-Speed 728.37 samples/sec Loss 43.9076 LearningRate 0.1000 Epoch: 0 Global Step: 410 Fp16 Grad Scale: 131072 Required: 425 hours
Training: 2022-05-05 03:06:28,475-Speed 724.16 samples/sec Loss 43.8942 LearningRate 0.1000 Epoch: 0 Global Step: 420 Fp16 Grad Scale: 131072 Required: 423 hours
Training: 2022-05-05 03:06:31,999-Speed 726.52 samples/sec Loss 43.9265 LearningRate 0.1000 Epoch: 0 Global Step: 430 Fp16 Grad Scale: 131072 Required: 420 hours
Training: 2022-05-05 03:06:35,532-Speed 724.71 samples/sec Loss 43.8219 LearningRate 0.1000 Epoch: 0 Global Step: 440 Fp16 Grad Scale: 131072 Required: 418 hours
Training: 2022-05-05 03:06:39,065-Speed 724.59 samples/sec Loss 43.8077 LearningRate 0.1000 Epoch: 0 Global Step: 450 Fp16 Grad Scale: 131072 Required: 416 hours
Training: 2022-05-05 03:06:42,594-Speed 725.68 samples/sec Loss 43.9029 LearningRate 0.1000 Epoch: 0 Global Step: 460 Fp16 Grad Scale: 131072 Required: 414 hours
Training: 2022-05-05 03:06:46,133-Speed 723.49 samples/sec Loss 43.9024 LearningRate 0.1000 Epoch: 0 Global Step: 470 Fp16 Grad Scale: 131072 Required: 412 hours
Training: 2022-05-05 03:06:49,670-Speed 723.81 samples/sec Loss 43.6135 LearningRate 0.1000 Epoch: 0 Global Step: 480 Fp16 Grad Scale: 131072 Required: 411 hours
Training: 2022-05-05 03:06:53,211-Speed 723.15 samples/sec Loss 43.6867 LearningRate 0.1000 Epoch: 0 Global Step: 490 Fp16 Grad Scale: 131072 Required: 409 hours
Training: 2022-05-05 03:06:56,737-Speed 726.14 samples/sec Loss 43.5186 LearningRate 0.1000 Epoch: 0 Global Step: 500 Fp16 Grad Scale: 131072 Required: 407 hours
Training: 2022-05-05 03:07:00,251-Speed 728.57 samples/sec Loss 43.5245 LearningRate 0.1000 Epoch: 0 Global Step: 510 Fp16 Grad Scale: 131072 Required: 406 hours
Training: 2022-05-05 03:07:03,781-Speed 725.46 samples/sec Loss 43.6427 LearningRate 0.1000 Epoch: 0 Global Step: 520 Fp16 Grad Scale: 131072 Required: 404 hours
Training: 2022-05-05 03:07:07,310-Speed 725.46 samples/sec Loss 43.6788 LearningRate 0.1000 Epoch: 0 Global Step: 530 Fp16 Grad Scale: 131072 Required: 403 hours
Training: 2022-05-05 03:07:10,843-Speed 724.73 samples/sec Loss 43.4738 LearningRate 0.1000 Epoch: 0 Global Step: 540 Fp16 Grad Scale: 131072 Required: 401 hours
Training: 2022-05-05 03:07:14,371-Speed 725.72 samples/sec Loss 43.5328 LearningRate 0.1000 Epoch: 0 Global Step: 550 Fp16 Grad Scale: 131072 Required: 400 hours
Training: 2022-05-05 03:07:17,914-Speed 722.58 samples/sec Loss 43.5722 LearningRate 0.1000 Epoch: 0 Global Step: 560 Fp16 Grad Scale: 131072 Required: 398 hours
Training: 2022-05-05 03:07:21,448-Speed 724.53 samples/sec Loss 43.6039 LearningRate 0.1000 Epoch: 0 Global Step: 570 Fp16 Grad Scale: 131072 Required: 397 hours
Training: 2022-05-05 03:07:24,969-Speed 727.31 samples/sec Loss 43.4314 LearningRate 0.1000 Epoch: 0 Global Step: 580 Fp16 Grad Scale: 131072 Required: 396 hours
Training: 2022-05-05 03:07:28,499-Speed 725.27 samples/sec Loss 43.3965 LearningRate 0.1000 Epoch: 0 Global Step: 590 Fp16 Grad Scale: 131072 Required: 395 hours
Training: 2022-05-05 03:07:32,027-Speed 725.86 samples/sec Loss 43.4256 LearningRate 0.1000 Epoch: 0 Global Step: 600 Fp16 Grad Scale: 131072 Required: 394 hours
Training: 2022-05-05 03:07:35,548-Speed 727.10 samples/sec Loss 43.3327 LearningRate 0.1000 Epoch: 0 Global Step: 610 Fp16 Grad Scale: 131072 Required: 392 hours
Training: 2022-05-05 03:07:39,078-Speed 725.43 samples/sec Loss 43.3029 LearningRate 0.1000 Epoch: 0 Global Step: 620 Fp16 Grad Scale: 131072 Required: 391 hours
Training: 2022-05-05 03:07:42,608-Speed 725.28 samples/sec Loss 43.3096 LearningRate 0.1000 Epoch: 0 Global Step: 630 Fp16 Grad Scale: 131072 Required: 390 hours
Training: 2022-05-05 03:07:46,143-Speed 724.20 samples/sec Loss 43.2749 LearningRate 0.1000 Epoch: 0 Global Step: 640 Fp16 Grad Scale: 131072 Required: 389 hours
Training: 2022-05-05 03:07:49,673-Speed 725.36 samples/sec Loss 43.4988 LearningRate 0.1000 Epoch: 0 Global Step: 650 Fp16 Grad Scale: 131072 Required: 388 hours
Training: 2022-05-05 03:07:53,208-Speed 724.32 samples/sec Loss 43.3166 LearningRate 0.1000 Epoch: 0 Global Step: 660 Fp16 Grad Scale: 131072 Required: 387 hours
Training: 2022-05-05 03:07:56,738-Speed 725.42 samples/sec Loss 43.3468 LearningRate 0.1000 Epoch: 0 Global Step: 670 Fp16 Grad Scale: 131072 Required: 386 hours
Training: 2022-05-05 03:08:00,278-Speed 723.26 samples/sec Loss 43.4304 LearningRate 0.1000 Epoch: 0 Global Step: 680 Fp16 Grad Scale: 131072 Required: 386 hours
Training: 2022-05-05 03:08:03,813-Speed 724.34 samples/sec Loss 43.3743 LearningRate 0.1000 Epoch: 0 Global Step: 690 Fp16 Grad Scale: 131072 Required: 385 hours
Training: 2022-05-05 03:08:07,346-Speed 724.55 samples/sec Loss 43.3464 LearningRate 0.1000 Epoch: 0 Global Step: 700 Fp16 Grad Scale: 131072 Required: 384 hours
Training: 2022-05-05 03:08:10,855-Speed 729.71 samples/sec Loss 43.2136 LearningRate 0.1000 Epoch: 0 Global Step: 710 Fp16 Grad Scale: 131072 Required: 383 hours
Training: 2022-05-05 03:08:14,384-Speed 725.51 samples/sec Loss 43.2740 LearningRate 0.1000 Epoch: 0 Global Step: 720 Fp16 Grad Scale: 131072 Required: 382 hours
Training: 2022-05-05 03:08:17,920-Speed 724.19 samples/sec Loss 43.2941 LearningRate 0.1000 Epoch: 0 Global Step: 730 Fp16 Grad Scale: 131072 Required: 381 hours
Training: 2022-05-05 03:08:21,451-Speed 725.21 samples/sec Loss 43.3661 LearningRate 0.1000 Epoch: 0 Global Step: 740 Fp16 Grad Scale: 131072 Required: 381 hours
Training: 2022-05-05 03:08:24,984-Speed 724.69 samples/sec Loss 43.5143 LearningRate 0.1000 Epoch: 0 Global Step: 750 Fp16 Grad Scale: 131072 Required: 380 hours
Training: 2022-05-05 03:08:28,520-Speed 723.93 samples/sec Loss 43.3332 LearningRate 0.1000 Epoch: 0 Global Step: 760 Fp16 Grad Scale: 131072 Required: 379 hours
Training: 2022-05-05 03:08:32,060-Speed 723.23 samples/sec Loss 43.3508 LearningRate 0.1000 Epoch: 0 Global Step: 770 Fp16 Grad Scale: 131072 Required: 379 hours
Training: 2022-05-05 03:08:35,593-Speed 724.79 samples/sec Loss 43.4320 LearningRate 0.1000 Epoch: 0 Global Step: 780 Fp16 Grad Scale: 131072 Required: 378 hours
Training: 2022-05-05 03:08:39,136-Speed 722.79 samples/sec Loss 43.4134 LearningRate 0.1000 Epoch: 0 Global Step: 790 Fp16 Grad Scale: 131072 Required: 377 hours
Training: 2022-05-05 03:08:42,678-Speed 722.72 samples/sec Loss 43.2290 LearningRate 0.1000 Epoch: 0 Global Step: 800 Fp16 Grad Scale: 131072 Required: 377 hours
Training: 2022-05-05 03:08:46,189-Speed 729.32 samples/sec Loss 43.3066 LearningRate 0.1000 Epoch: 0 Global Step: 810 Fp16 Grad Scale: 131072 Required: 376 hours
Training: 2022-05-05 03:08:49,718-Speed 725.63 samples/sec Loss 43.4427 LearningRate 0.1000 Epoch: 0 Global Step: 820 Fp16 Grad Scale: 131072 Required: 375 hours
Training: 2022-05-05 03:08:53,262-Speed 722.47 samples/sec Loss 43.3619 LearningRate 0.0999 Epoch: 0 Global Step: 830 Fp16 Grad Scale: 131072 Required: 375 hours
Training: 2022-05-05 03:08:56,795-Speed 724.61 samples/sec Loss 43.4427 LearningRate 0.0999 Epoch: 0 Global Step: 840 Fp16 Grad Scale: 131072 Required: 374 hours
Training: 2022-05-05 03:09:00,343-Speed 721.69 samples/sec Loss 43.3795 LearningRate 0.0999 Epoch: 0 Global Step: 850 Fp16 Grad Scale: 131072 Required: 374 hours
Training: 2022-05-05 03:09:03,874-Speed 725.11 samples/sec Loss 43.4918 LearningRate 0.0999 Epoch: 0 Global Step: 860 Fp16 Grad Scale: 131072 Required: 373 hours
Training: 2022-05-05 03:09:07,398-Speed 726.55 samples/sec Loss 43.2607 LearningRate 0.0999 Epoch: 0 Global Step: 870 Fp16 Grad Scale: 131072 Required: 372 hours
Training: 2022-05-05 03:09:10,926-Speed 725.79 samples/sec Loss 43.5018 LearningRate 0.0999 Epoch: 0 Global Step: 880 Fp16 Grad Scale: 131072 Required: 372 hours
Training: 2022-05-05 03:09:14,456-Speed 725.19 samples/sec Loss 43.4363 LearningRate 0.0999 Epoch: 0 Global Step: 890 Fp16 Grad Scale: 131072 Required: 371 hours
Training: 2022-05-05 03:09:17,993-Speed 723.89 samples/sec Loss 43.6587 LearningRate 0.0999 Epoch: 0 Global Step: 900 Fp16 Grad Scale: 131072 Required: 371 hours
Training: 2022-05-05 03:09:21,505-Speed 729.16 samples/sec Loss 43.5048 LearningRate 0.0999 Epoch: 0 Global Step: 910 Fp16 Grad Scale: 131072 Required: 370 hours
Training: 2022-05-05 03:09:25,037-Speed 724.88 samples/sec Loss 43.2417 LearningRate 0.0999 Epoch: 0 Global Step: 920 Fp16 Grad Scale: 131072 Required: 370 hours
Training: 2022-05-05 03:09:28,563-Speed 726.07 samples/sec Loss 43.5033 LearningRate 0.0999 Epoch: 0 Global Step: 930 Fp16 Grad Scale: 131072 Required: 369 hours
Training: 2022-05-05 03:09:32,096-Speed 724.81 samples/sec Loss 43.4549 LearningRate 0.0999 Epoch: 0 Global Step: 940 Fp16 Grad Scale: 131072 Required: 369 hours
Training: 2022-05-05 03:09:35,618-Speed 727.10 samples/sec Loss 43.1973 LearningRate 0.0999 Epoch: 0 Global Step: 950 Fp16 Grad Scale: 131072 Required: 368 hours
Training: 2022-05-05 03:09:39,149-Speed 725.15 samples/sec Loss 43.4111 LearningRate 0.0999 Epoch: 0 Global Step: 960 Fp16 Grad Scale: 131072 Required: 368 hours
Training: 2022-05-05 03:09:42,686-Speed 723.94 samples/sec Loss 43.2899 LearningRate 0.0999 Epoch: 0 Global Step: 970 Fp16 Grad Scale: 131072 Required: 368 hours
Training: 2022-05-05 03:09:46,211-Speed 726.28 samples/sec Loss 43.2542 LearningRate 0.0999 Epoch: 0 Global Step: 980 Fp16 Grad Scale: 131072 Required: 367 hours
Training: 2022-05-05 03:09:49,738-Speed 726.04 samples/sec Loss 43.2698 LearningRate 0.0999 Epoch: 0 Global Step: 990 Fp16 Grad Scale: 131072 Required: 367 hours
Training: 2022-05-05 03:09:53,261-Speed 726.82 samples/sec Loss 43.3357 LearningRate 0.0999 Epoch: 0 Global Step: 1000 Fp16 Grad Scale: 131072 Required: 366 hours
...
Training: 2022-05-05 04:02:38,531-Speed 654.20 samples/sec Loss 42.4198 LearningRate 0.0994 Epoch: 0 Global Step: 9810 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:02:42,430-Speed 656.83 samples/sec Loss 42.4915 LearningRate 0.0994 Epoch: 0 Global Step: 9820 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:02:46,392-Speed 646.49 samples/sec Loss 42.4388 LearningRate 0.0994 Epoch: 0 Global Step: 9830 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:02:50,393-Speed 640.30 samples/sec Loss 42.4244 LearningRate 0.0994 Epoch: 0 Global Step: 9840 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:02:54,402-Speed 638.66 samples/sec Loss 42.3092 LearningRate 0.0994 Epoch: 0 Global Step: 9850 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:02:58,344-Speed 649.63 samples/sec Loss 42.3126 LearningRate 0.0994 Epoch: 0 Global Step: 9860 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:02,275-Speed 651.37 samples/sec Loss 42.4544 LearningRate 0.0994 Epoch: 0 Global Step: 9870 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:06,232-Speed 647.06 samples/sec Loss 42.4995 LearningRate 0.0994 Epoch: 0 Global Step: 9880 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:10,153-Speed 653.10 samples/sec Loss 42.4693 LearningRate 0.0994 Epoch: 0 Global Step: 9890 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:14,078-Speed 653.30 samples/sec Loss 42.2807 LearningRate 0.0994 Epoch: 0 Global Step: 9900 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:18,041-Speed 646.23 samples/sec Loss 42.4305 LearningRate 0.0994 Epoch: 0 Global Step: 9910 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:21,969-Speed 651.95 samples/sec Loss 42.2798 LearningRate 0.0994 Epoch: 0 Global Step: 9920 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:25,908-Speed 649.98 samples/sec Loss 42.4574 LearningRate 0.0994 Epoch: 0 Global Step: 9930 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:29,965-Speed 631.44 samples/sec Loss 42.3743 LearningRate 0.0994 Epoch: 0 Global Step: 9940 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:33,950-Speed 642.54 samples/sec Loss 42.4366 LearningRate 0.0994 Epoch: 0 Global Step: 9950 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:37,915-Speed 645.97 samples/sec Loss 42.3575 LearningRate 0.0994 Epoch: 0 Global Step: 9960 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:41,852-Speed 650.47 samples/sec Loss 42.3721 LearningRate 0.0994 Epoch: 0 Global Step: 9970 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:45,809-Speed 647.12 samples/sec Loss 42.4875 LearningRate 0.0994 Epoch: 0 Global Step: 9980 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:49,754-Speed 649.48 samples/sec Loss 42.6298 LearningRate 0.0994 Epoch: 0 Global Step: 9990 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:03:53,689-Speed 650.63 samples/sec Loss 42.4385 LearningRate 0.0994 Epoch: 0 Global Step: 10000 Fp16 Grad Scale: 32768 Required: 334 hours
Training: 2022-05-05 04:04:58,211-[lfw][10000]XNorm: 33.232371
Training: 2022-05-05 04:04:58,211-[lfw][10000]Accuracy-Flip: 0.57800+-0.01941
Training: 2022-05-05 04:04:58,211-[lfw][10000]Accuracy-Highest: 0.57800
Training: 2022-05-05 04:05:59,945-[cfp_fp][10000]XNorm: 38.853677
Training: 2022-05-05 04:05:59,945-[cfp_fp][10000]Accuracy-Flip: 0.57586+-0.00978
Training: 2022-05-05 04:05:59,946-[cfp_fp][10000]Accuracy-Highest: 0.57586
Training: 2022-05-05 04:07:10,846-[agedb_30][10000]XNorm: 33.714078
Training: 2022-05-05 04:07:10,846-[agedb_30][10000]Accuracy-Flip: 0.50183+-0.00565
Training: 2022-05-05 04:07:10,847-[agedb_30][10000]Accuracy-Highest: 0.50183
...
0.0912 Epoch: 0 Global Step: 149970 Fp16 Grad Scale: 65536 Required: 364 hours
Training: 2022-05-05 20:16:36,587-Speed 744.49 samples/sec Loss 31.3188 LearningRate 0.0912 Epoch: 0 Global Step: 149980 Fp16 Grad Scale: 65536 Required: 364 hours
Training: 2022-05-05 20:16:40,024-Speed 745.08 samples/sec Loss 31.7255 LearningRate 0.0912 Epoch: 0 Global Step: 149990 Fp16 Grad Scale: 65536 Required: 364 hours
Training: 2022-05-05 20:16:43,455-Speed 746.23 samples/sec Loss 31.6172 LearningRate 0.0912 Epoch: 0 Global Step: 150000 Fp16 Grad Scale: 65536 Required: 364 hours
Training: 2022-05-05 20:17:17,979-[lfw][150000]XNorm: 23.721737
Training: 2022-05-05 20:17:17,981-[lfw][150000]Accuracy-Flip: 0.95317+-0.01102
Training: 2022-05-05 20:17:17,981-[lfw][150000]Accuracy-Highest: 0.95317
Training: 2022-05-05 20:17:57,559-[cfp_fp][150000]XNorm: 22.461578
Training: 2022-05-05 20:17:57,559-[cfp_fp][150000]Accuracy-Flip: 0.75986+-0.01316
Training: 2022-05-05 20:17:57,559-[cfp_fp][150000]Accuracy-Highest: 0.75986
Training: 2022-05-05 20:18:31,206-[agedb_30][150000]XNorm: 23.346873
Training: 2022-05-05 20:18:31,207-[agedb_30][150000]Accuracy-Flip: 0.77467+-0.01928
Training: 2022-05-05 20:18:31,207-[agedb_30][150000]Accuracy-Highest: 0.77467
Training: 2022-05-05 20:18:39,020-Speed 22.15 samples/sec Loss 31.3819 LearningRate 0.0912 Epoch: 0 Global Step: 150010 Fp16 Grad Scale: 65536 Required: 364 hours
Training: 2022-05-05 20:18:42,425-Speed 751.99 samples/sec Loss 31.5926 LearningRate 0.0912 Epoch: 0 Global Step: 150020 Fp16 Grad Scale: 65536 Required: 364 hours
Training: 2022-05-05 20:18:45,836-Speed 750.62 samples/sec Loss 31.8748 LearningRate 0.0912 Epoch: 0 Global Step: 150030 Fp16 Grad Scale: 65536 Required: 364 hours
Training: 2022-05-05 20:18:49,255-Speed 748.88 samples/sec Loss 31.8023 LearningRate 0.0912 Epoch: 0 Global Step: 150040 Fp16 Grad Scale: 65536 Required: 364 hours
Training: 2022-05-05 20:18:52,673-Speed 749.26 samples/sec Loss 31.5251 LearningRate 0.0912 Epoch: 0 Global Step: 150050 Fp16 Grad Scale: 65536 Required: 364 hours
...
...
...
...
...
...
Training: 2022-05-05 21:51:17,584-Speed 743.88 samples/sec Loss 31.3707 LearningRate 0.0903 Epoch: 0 Global Step: 165850 Fp16 Grad Scale: 131072 Required: 357 hours
Training: 2022-05-05 21:51:21,009-Speed 747.59 samples/sec Loss 31.5126 LearningRate 0.0903 Epoch: 0 Global Step: 165860 Fp16 Grad Scale: 65536 Required: 357 hours
Training: 2022-05-05 21:51:24,447-Speed 744.78 samples/sec Loss 31.5628 LearningRate 0.0903 Epoch: 0 Global Step: 165870 Fp16 Grad Scale: 65536 Required: 357 hours
Training: 2022-05-05 21:51:27,885-Speed 744.87 samples/sec Loss 31.5398 LearningRate 0.0903 Epoch: 0 Global Step: 165880 Fp16 Grad Scale: 65536 Required: 357 hours
Training: 2022-05-05 21:51:31,317-Speed 746.07 samples/sec Loss 31.5160 LearningRate 0.0903 Epoch: 0 Global Step: 165890 Fp16 Grad Scale: 65536 Required: 357 hours
Training: 2022-05-05 21:51:34,758-Speed 744.11 samples/sec Loss 31.5816 LearningRate 0.0903 Epoch: 0 Global Step: 165900 Fp16 Grad Scale: 65536 Required: 357 hours
Training: 2022-05-05 21:51:38,484-Speed 687.12 samples/sec Loss 31.4126 LearningRate 0.0903 Epoch: 0 Global Step: 165910 Fp16 Grad Scale: 65536 Required: 357 hours
```
I compared this result with the [official logs (wf42m_pfc02_r100/training.log)](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/wf42m_pfc02_r100/training.log), and found that my results were much worse.
- My loss only dropped to 30 in the first epoch, but the official one can drop to 14. Meanwhile, my results are much worse on the three test sets ("lfw", "cfp_fp", "agedb_30").
My hardware environment is 2 GPUs(RTX 3090, 24G), I think maybe there are some setup issues due to the difference in the number of GPUs.
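One commonly cited setup difference is the effective (total) batch size: with fewer GPUs and the same per-GPU batch size, the linear learning-rate scaling rule suggests lowering `config.lr` proportionally. A quick sketch, where the 8-GPU reference setup is an assumption for illustration and not taken from the official config:

```python
# Hypothetical linear LR scaling when moving from an assumed 8-GPU
# reference setup to 2 GPUs (per-GPU batch size unchanged at 128).
ref_gpus, ref_batch_per_gpu, ref_lr = 8, 128, 0.1   # assumed reference setup
my_gpus, my_batch_per_gpu = 2, 128                  # my 2x RTX 3090 setup

ref_total_batch = ref_gpus * ref_batch_per_gpu      # 1024
my_total_batch = my_gpus * my_batch_per_gpu         # 256
scaled_lr = ref_lr * my_total_batch / ref_total_batch
print(scaled_lr)  # 0.025
```

If memory allows, raising the per-GPU batch size instead would keep the total batch closer to the reference run.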
How to correctly train the model arcface in Webface?
Thanks! | closed | 2022-05-08T05:31:05Z | 2023-02-20T02:58:19Z | https://github.com/deepinsight/insightface/issues/1996 | [] | Facico | 14 |
viewflow/viewflow | django | 471 | django-money package compatibility | The widget doesn't have a type. | closed | 2024-09-05T08:57:15Z | 2024-10-07T17:16:50Z | https://github.com/viewflow/viewflow/issues/471 | [
"request/enhancement",
"dev/forms"
] | kmmbvnr | 1 |
clovaai/donut | nlp | 38 | Need code for SROIE custom dataset | Hi Neha,
Kindly send me the code for DONUT using a custom dataset. | open | 2022-08-29T08:10:05Z | 2022-08-29T08:10:05Z | https://github.com/clovaai/donut/issues/38 | [] | SankarSennan | 0
recommenders-team/recommenders | data-science | 1,750 | [FEATURE] Add time performance benchmark for functions in unit tests | ### Description
<!--- Describe your expected feature in detail -->
The idea comes from [the pull request for improving the performance of `get_top_k_items()`](https://github.com/microsoft/recommenders/pull/1748#issuecomment-1156972351). By adding a time performance benchmark, we can know whether changes from a PR affect performance, in addition to verifying the expected correct results.
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
I find that [pytest-benchmark](https://pypi.org/project/pytest-benchmark/) can be used, but I am not sure whether there are other ways.
### Other Comments
| closed | 2022-06-16T07:12:34Z | 2022-07-11T08:30:56Z | https://github.com/recommenders-team/recommenders/issues/1750 | [
"enhancement"
] | simonzhaoms | 1 |
recommenders-team/recommenders | deep-learning | 1,807 | [ASK] Running evaluation on test set for DKN | ### Description
I have been using [DKN Deep Dive](https://github.com/microsoft/recommenders/blob/aeb6b0b12e177b3eaf55bb7ab2b747549a541394/examples/02_model_content_based_filtering/dkn_deep_dive.ipynb), but I wanted to add evaluation on the test data. When I run `model.run_eval(test_file)` I am getting an error. I believe this is because the test data has users that are not present in the training data.
I am using the small MIND dataset. But I split my training, validation and test data a little differently.
Training is all of the small training data but the last day.
Validation is the last day of the small training data
Test is all of the small validation data.
<img width="1197" alt="Screen Shot 2022-08-04 at 1 09 19 PM" src="https://user-images.githubusercontent.com/110063544/182911207-d9a93dd1-1c73-4727-aabf-f24a7381c749.png">
Is this the expected behavior?
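If the failure really is caused by test users that never appear in training, one possible workaround (a sketch with made-up data shapes, not the MIND file format) is to filter the test behaviors down to known users before calling `run_eval`:

```python
# Keep only test impressions whose user id appeared during training.
train_rows = [("U1", "N1 N2"), ("U2", "N5")]
test_rows = [("U1", "N9"), ("U3", "N2")]   # U3 never seen in training

train_users = {user for user, _ in train_rows}
filtered_test = [row for row in test_rows if row[0] in train_users]
print(filtered_test)  # [('U1', 'N9')]
```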
### Other Comments
| open | 2022-08-04T17:16:03Z | 2022-08-04T17:16:03Z | https://github.com/recommenders-team/recommenders/issues/1807 | [
"help wanted"
] | Bhammin | 0 |
lepture/authlib | flask | 140 | Version 0.12 | In this release, Authlib will focus on API redesign.
- [ ] OAuth2: load configuration from RFC8414
- [ ] OpenID Connect: load configuration from "Discovery"
## RFC
- [x] OpenID Connect Discovery
- [ ] RFC8414 integration
---
- [x] https://github.com/lepture/authlib/issues/118
- [x] https://github.com/authlib/example-oauth2-server/issues/52
- [x] Creating an example OpenID Connect Server
| closed | 2019-07-09T06:10:02Z | 2019-09-03T12:19:22Z | https://github.com/lepture/authlib/issues/140 | [
"spec",
"break change"
] | lepture | 0 |
aiogram/aiogram | asyncio | 1,414 | router.message(F.web_app_data) doesn't work | ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
Macos
### Python version
3.10.3
### aiogram version
3.3.0
### Expected behavior
When I send data from the web app, the telegram bot must answer "Test"
```
@router.message(F.web_app_data)
async def enter_date(message: Message) -> None:
    await message.answer("Test")
```
It worked with the previous version (2.x)
```
@dp.message_handler(content_types="web_app_data")
async def enter_date(message: Message) -> None:
    await message.answer("Test")
```
### Current behavior
It doesn't answer "Test"
### Steps to reproduce
1. Send this keyboard to user
```
def get_calendar_keyboard() -> InlineKeyboardMarkup:
builder = InlineKeyboardBuilder()
builder.row(
InlineKeyboardButton(
            text="Calendar",
web_app=WebAppInfo(
url="https://shitposting.su/",
),
)
)
return builder.as_markup(
is_persistent=True,
resize_keyboard=True,
)
```
2. Register the router
```
router = Router()

@router.message(F.web_app_data)
async def enter_date(message: Message) -> None:
    await message.answer("Test")
```
### Code example
```python3
import asyncio
from aiogram import Bot
from aiogram import Dispatcher
from aiogram import F
from aiogram import Router
from aiogram.filters import Command
from aiogram.types import InlineKeyboardButton
from aiogram.types import InlineKeyboardMarkup
from aiogram.types import Message
from aiogram.types import WebAppInfo
from aiogram.utils.keyboard import InlineKeyboardBuilder
router = Router()
def get_calendar_keyboard() -> InlineKeyboardMarkup:
builder = InlineKeyboardBuilder()
builder.row(
InlineKeyboardButton(
            text="Calendar",
web_app=WebAppInfo(
url="https://shitposting.su/",
),
)
)
return builder.as_markup(
is_persistent=True,
resize_keyboard=True,
)
@router.message(Command("start"))
async def enter_date(message: Message) -> None:
await message.answer(
text="Calendar",
reply_markup=get_calendar_keyboard(),
)
@router.message(F.web_app_data)
async def enter_date(message: Message) -> None:
await message.answer("Test")
async def main():
dispatcher = Dispatcher()
dispatcher.include_router(router)
await dispatcher.start_polling(
Bot(token="enter token here"),
allowed_updates=dispatcher.resolve_used_update_types(),
)
asyncio.run(main())
```
### Logs
_No response_
### Additional information
_No response_ | open | 2024-02-13T10:40:20Z | 2025-03-10T21:56:22Z | https://github.com/aiogram/aiogram/issues/1414 | [
"bug"
] | iQiexie | 1 |
pyg-team/pytorch_geometric | pytorch | 8,773 | Simply add the types of captum support | ### 🛠 Proposed Refactor
The algorithms supported by captum can be easily extended in the explain module
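Concretely, the change being proposed seems to be a one-entry extension of the attribution-method whitelist. A sketch is below; the existing list contents are assumed for illustration and may differ from the actual `captum_explainer.py`:

```python
# Assumed shape of the whitelist checked by CaptumExplainer; only the
# last entry is the proposed addition.
SUPPORTED_METHODS = [
    "IntegratedGradients",
    "Saliency",
    "InputXGradient",
    "Deconvolution",
    "ShapleyValueSampling",
    "GuidedBackprop",
    "FeatureAblation",  # proposed: built into captum, no extra glue needed
]

def is_supported(method_name: str) -> bool:
    return method_name in SUPPORTED_METHODS

print(is_supported("FeatureAblation"))  # True
```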
### Suggest a potential alternative/fix
In captum_explainer.py, 'FeatureAblation', which is built into captum, can be added to supported_methods=[] without further modification to fit the current explain module | open | 2024-01-16T06:19:09Z | 2024-01-18T10:18:53Z | https://github.com/pyg-team/pytorch_geometric/issues/8773 | [
"refactor"
] | lck-handsome | 2 |
mwaskom/seaborn | matplotlib | 2,744 | Passing vmin, vmax to LogNorm raises ValueError: Passing parameters norm and vmin/vmax simultaneously is not supported | I opened [this issue on matplotlib's repository](https://github.com/matplotlib/matplotlib/issues/22518) but then realized the problem might be with Seaborn's heatmap. The problem is: suppose I try to create a heatmap like so:
```
sns.heatmap(data,
ax=ax,
mask=np.isnan(data),
cmap='Spectral',
norm=LogNorm(vmin=cutoff, vmax=1.),
)
```
I'll get the error:
```
Traceback (most recent call last):
File "00_prior/run_one.py", line 317, in <module>
run_one(args)
File "00_prior/run_one.py", line 66, in run_one
dynamics_latex_str=dynamics_latex_str)
File "/net/vast-storage.ib.cluster/scratch/vast/fiete/rylansch/FieteLab-Recursive-Nonstationary-CRP/00_prior/plot.py", line 47, in plot_customer_assignments_analytical_vs_monte_carlo
norm=LogNorm(cutoff, 1.),
File "/rdma/vast-rdma/vast/fiete/rylansch/FieteLab-Recursive-Nonstationary-CRP/rncrp_venv/lib/python3.7/site-packages/seaborn/matrix.py", line 525, in heatmap
plotter.plot(ax, cbar_ax, kwargs)
File "/rdma/vast-rdma/vast/fiete/rylansch/FieteLab-Recursive-Nonstationary-CRP/rncrp_venv/lib/python3.7/site-packages/seaborn/matrix.py", line 279, in plot
cmap=self.cmap, **kws)
File "/rdma/vast-rdma/vast/fiete/rylansch/FieteLab-Recursive-Nonstationary-CRP/rncrp_venv/lib/python3.7/site-packages/matplotlib/__init__.py", line 1412, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "/rdma/vast-rdma/vast/fiete/rylansch/FieteLab-Recursive-Nonstationary-CRP/rncrp_venv/lib/python3.7/site-packages/matplotlib/axes/_axes.py", line 6073, in pcolormesh
collection._scale_norm(norm, vmin, vmax)
File "/rdma/vast-rdma/vast/fiete/rylansch/FieteLab-Recursive-Nonstationary-CRP/rncrp_venv/lib/python3.7/site-packages/matplotlib/cm.py", line 381, in _scale_norm
"Passing parameters norm and vmin/vmax simultaneously is "
ValueError: Passing parameters norm and vmin/vmax simultaneously is not supported. Please pass vmin/vmax directly to the norm when creating it.
```
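The `ValueError` comes from a guard that matplotlib 3.3+ applies when a collection receives both a `norm` object and separate `vmin`/`vmax` values; the traceback shows `pcolormesh` receiving all three because seaborn computes `vmin`/`vmax` itself and forwards them alongside the user-supplied `norm`. A plain-Python stand-in for that guard (illustrative, not matplotlib's actual code):

```python
# Stand-in for the check performed in matplotlib's Collection._scale_norm.
def scale_norm(norm, vmin, vmax):
    if norm is not None and (vmin is not None or vmax is not None):
        raise ValueError(
            "Passing parameters norm and vmin/vmax simultaneously is "
            "not supported. Please pass vmin/vmax directly to the norm "
            "when creating it."
        )

scale_norm(norm=None, vmin=0.01, vmax=1.0)          # ok: no norm given
try:
    scale_norm(norm=object(), vmin=0.01, vmax=1.0)  # norm plus vmin/vmax
except ValueError as exc:
    print(type(exc).__name__)  # ValueError
```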
The problem persists even if I create the norm separately first. | closed | 2022-02-21T05:40:24Z | 2022-02-21T16:22:58Z | https://github.com/mwaskom/seaborn/issues/2744 | [] | RylanSchaeffer | 3 |
ray-project/ray | pytorch | 50,843 | [Core] Ray creates an issue with CatBoost | ### What happened + What you expected to happen
AutoGluon uses Ray for parallelism, but with a big dataset that has many columns (e.g. 10,000), Ray reproduces an error from CatBoost. Only CatBoost has this problem; all other algorithms work well.
The problem should be investigated by the Ray team, because if I uninstall Ray, AutoGluon does not give the error:
Error:
```
Fitting model: CatBoost_BAG_L1 ... Training model for up to 9610.46s of the 16674.71s of remaining time.
Memory not enough to fit 8 folds in parallel. Will train 2 folds in parallel instead (Estimated 28.90% memory usage per fold, 57.80%/80.00% total).
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (2 workers, per: cpus=8, gpus=0, memory=28.90%)
Warning: Exception caused CatBoost_BAG_L1 to fail during training... Skipping this model.
ray::_ray_fit() (pid=1932, ip=127.0.0.1)
  File "python\ray\_raylet.pyx", line 1883, in ray._raylet.execute_task
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 413, in _ray_fit
    fold_model.fit(X=X_fold, y=y_fold, X_val=X_val_fold, y_val=y_val_fold, time_limit=time_limit_fold, **resources, **kwargs_fold)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\abstract\abstract_model.py", line 925, in fit
    out = self._fit(**kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\tabular\models\catboost\catboost_model.py", line 243, in _fit
    self.model.fit(X, **fit_final_kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\catboost\core.py", line 5245, in fit
    self._fit(X, y, cat_features, text_features, embedding_features, None, graph, sample_weight, None, None, None, None, baseline, use_best_model,
  File "C:\Users\celes\anaconda3\Lib\site-packages\catboost\core.py", line 2410, in _fit
    self._train(
  File "C:\Users\celes\anaconda3\Lib\site-packages\catboost\core.py", line 1790, in _train
    self._object._train(train_pool, test_pool, params, allow_clear_pool, init_model._object if init_model else None)
  File "_catboost.pyx", line 5017, in _catboost._CatBoost._train
  File "_catboost.pyx", line 5066, in _catboost._CatBoost._train
_catboost.CatBoostError: bad allocation

Detailed Traceback:
Traceback (most recent call last):
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\tabular\trainer\abstract_trainer.py", line 2160, in _train_and_save
    model = self._train_single(**model_fit_kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\tabular\trainer\abstract_trainer.py", line 2047, in _train_single
    model = model.fit(X=X, y=y, X_val=X_val, y_val=y_val, X_test=X_test, y_test=y_test, total_resources=total_resources, **model_fit_kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\abstract\abstract_model.py", line 925, in fit
    out = self._fit(**kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\stacker_ensemble_model.py", line 270, in _fit
    return super()._fit(X=X, y=y, time_limit=time_limit, **kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\bagged_ensemble_model.py", line 390, in _fit
    self._fit_folds(
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\bagged_ensemble_model.py", line 847, in _fit_folds
    fold_fitting_strategy.after_all_folds_scheduled()
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 690, in after_all_folds_scheduled
    self._run_parallel(X, y, X_pseudo, y_pseudo, model_base_ref, time_limit_fold, head_node_id)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 631, in _run_parallel
    self._process_fold_results(finished, unfinished, fold_ctx)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 587, in _process_fold_results
    raise processed_exception
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 550, in _process_fold_results
    fold_model, pred_proba, time_start_fit, time_end_fit, predict_time, predict_1_time, predict_n_size, fit_num_cpus, fit_num_gpus = self.ray.get(finished)
  File "C:\Users\celes\anaconda3\Lib\site-packages\ray\_private\auto_init_hook.py", line 21, in auto_init_wrapper
    return fn(*args, **kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\ray\_private\client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\ray\_private\worker.py", line 2771, in get
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
  File "C:\Users\celes\anaconda3\Lib\site-packages\ray\_private\worker.py", line 919, in get_objects
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(CatBoostError): ray::_ray_fit() (pid=1932, ip=127.0.0.1)
  File "python\ray\_raylet.pyx", line 1883, in ray._raylet.execute_task
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 413, in _ray_fit
    fold_model.fit(X=X_fold, y=y_fold, X_val=X_val_fold, y_val=y_val_fold, time_limit=time_limit_fold, **resources, **kwargs_fold)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\abstract\abstract_model.py", line 925, in fit
    out = self._fit(**kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\tabular\models\catboost\catboost_model.py", line 243, in _fit
    self.model.fit(X, **fit_final_kwargs)
  File "C:\Users\celes\anaconda3\Lib\site-packages\catboost\core.py", line 5245, in fit
    self._fit(X, y, cat_features, text_features, embedding_features, None, graph, sample_weight, None, None, None, None, baseline, use_best_model,
  File "C:\Users\celes\anaconda3\Lib\site-packages\catboost\core.py", line 2410, in _fit
    self._train(
  File "C:\Users\celes\anaconda3\Lib\site-packages\catboost\core.py", line 1790, in _train
    self._object._train(train_pool, test_pool, params, allow_clear_pool, init_model._object if init_model else None)
  File "_catboost.pyx", line 5017, in _catboost._CatBoost._train
  File "_catboost.pyx", line 5066, in _catboost._CatBoost._train
_catboost.CatBoostError: bad allocation
```
### Versions / Dependencies
Autogluon 1.2
Ray 3.0.0-dev (latest to have latest fixes)
Python 3.12.9
Windows
Catboost 1.2.7 dev (latest to have latest fixes)
### Reproduction script
My code is an AutoGluon call:

```python
predictor = TabularPredictor(
    label="n2_maior_igual_17",
    eval_metric="log_loss",
    path="modelos/n2_maior_igual_17/"
).fit(
    dados_treino_n2_maior_igual_17,
    presets="best_quality",
    excluded_model_types=["KNN", "XT", "RF"],
    ds_args={"enable_ray_logging": False},
    ag_args_fit={
        "early_stop": None,
        "colsample_bylevel": 1.0,
    },
    time_limit=8 * 3600,
    refit_full=True,
    calibrate=True,
)
```
### Issue Severity
High: It blocks me from completing my task. | closed | 2025-02-23T09:25:20Z | 2025-02-25T08:36:33Z | https://github.com/ray-project/ray/issues/50843 | [
"bug",
"triage",
"core"
] | celestinoxp | 3 |
Colin-b/pytest_httpx | pytest | 79 | Update to httpx 0.23.0 | httpx released [version 0.23.0](https://github.com/encode/httpx/releases/tag/0.23.0) this morning, which corrects a significant security issue. It also removes support for Python 3.6.
I'd like to update my projects to use the new version alongside pytest_httpx. Could the project update its dependency, or would you welcome a PR to make that change? | closed | 2022-05-23T17:39:40Z | 2024-09-20T06:33:09Z | https://github.com/Colin-b/pytest_httpx/issues/79 | [
"enhancement"
] | davidmreed | 2 |
huggingface/datasets | machine-learning | 6,942 | Import sorting is disabled by flake8 noqa directive after switching to ruff linter | When we switched to `ruff` linter in PR:
- #5519
import sorting was disabled in all files containing the `# flake8: noqa` directive
- https://github.com/astral-sh/ruff/issues/11679
We should re-enable import sorting on those files. | closed | 2024-06-02T09:43:34Z | 2024-06-04T09:54:24Z | https://github.com/huggingface/datasets/issues/6942 | [
"maintenance"
] | albertvillanova | 0 |
google-deepmind/graph_nets | tensorflow | 37 | Error when using multiple features for target nodes | I want to add multiple features to target nodes, as below, in the shortest path example:
```
input_node_fields = ("solution", "ComId")
input_edge_fields = ("solution", "ComId")
target_node_fields = ("solution", "ComId")
target_edge_fields = ("solution",)
```
When I run my code, the error below is raised:
```
Traceback (most recent call last):
File "C:/Users/Farshid/PycharmProjects/CommunityDetection/Main.py", line 104, in <module>
num_nodes_min_max_tr, theta)
File "C:\Users\Farshid\PycharmProjects\CommunityDetection\Helper.py", line 234, in create_placeholders
rand, batch_size, num_nodes_min_max, theta)
File "C:\Users\Farshid\PycharmProjects\CommunityDetection\Helper.py", line 209, in generate_networkx_graphs
input_graph, target_graph = graph_to_input_target(graph)
File "C:\Users\Farshid\PycharmProjects\CommunityDetection\Helper.py", line 168, in graph_to_input_target
target_node = to_one_hot(create_feature(node_feature, target_node_fields).astype(int), 2)[0]
File "C:\Users\Farshid\PycharmProjects\CommunityDetection\Helper.py", line 28, in to_one_hot
one_hot = np.eye(max_value)[indices]
IndexError: index 2 is out of bounds for axis 0 with size 2
```
Can someone help me?
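The traceback points at `np.eye(max_value)[indices]` with `max_value=2` failing on an index of 2, which suggests one of the extra node features (for example a "ComId" of 2 or more) exceeds the one-hot size. A plain-Python stand-in showing the mechanism, with made-up values:

```python
# One-hot lookup fails whenever an index reaches max_value.
def to_one_hot(indices, max_value):
    eye = [[1 if col == row else 0 for col in range(max_value)]
           for row in range(max_value)]
    return [eye[i] for i in indices]  # IndexError if any i >= max_value

print(to_one_hot([0, 1], 2))  # [[1, 0], [0, 1]]
try:
    to_one_hot([2], 2)        # e.g. a feature value of 2 with one-hot size 2
except IndexError as exc:
    print("fails:", exc)
```

So with multiple feature fields, the one-hot size would have to cover the maximum value across all fields, not just the `solution` field.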
@mateuszmalinowski
@pbattaglia | closed | 2018-12-05T21:13:08Z | 2019-12-12T18:35:35Z | https://github.com/google-deepmind/graph_nets/issues/37 | [] | FarshidShekari | 1 |
Yorko/mlcourse.ai | numpy | 596 | links are broken on mlcourse.ai | example: https://festline.github.io/notebooks/blob/master/jupyter_english/topic02_visual_data_analysis/topic2_visual_data_analysis.ipynb?flush_cache=true | closed | 2019-05-30T02:15:44Z | 2019-06-27T14:54:06Z | https://github.com/Yorko/mlcourse.ai/issues/596 | [] | nickcorona | 4 |
PaddlePaddle/PaddleHub | nlp | 1,657 | After deploying ace2p image segmentation as a service, how can the prediction result be converted into a pseudo-color image? | The prediction result obtained on the server side is a pseudo-color image, and the PNG is saved to a path on the server; this part works fine. But for the client to obtain the PNG, it has to issue another HTTP GET request, which seems rather inefficient.
The server-side PNG image is as follows:

I want to obtain the PNG image data directly from the JSON returned by the HTTP request. However, the converted result is much worse in quality than the PNG saved on the server side; I'd like to ask what the reason is.
The JSON returned by the HTTP request is as follows:
{'msg': '', 'results': [{'data': '/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/wAALCAEsASwBAREA/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/9oACAEBAAA/AP5/6KKKKKKKKKKKKKKKKKKKckTNyeBTvI/2/wBKDAMcMaR4SoyDn1plFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFKg3OB71PRRRUUsYX5l6elMooooooooooooooooooooooooooooooop8KHO8/hWnYaFc3iebI3lIRlSRkt+FWpfDERbMN2yjHRlz/AIUq+GbYR4a5cvj7wAx+X/16qX+g3FnGZo5BIij5sDBHvj0qgQGGDUBGDg9qKKKKKKKKKKKKKKKKKKKKKKKOvSnCFz2x9aDC4GeD9Kb060UUUVZtozO6QxkAuQoz6muqVVRQiKAAMAAdKWiiuWuIvIneDdnY5XOOuDVViCxI9aSiiiiiiiiiiiiiiiiiiiiilVSxwKlSNU9z606ikdFcc/nULKVODSUUVp6Ppt3I1vdhB5ZlyDnsDnn8iK6Giiisy40lbvWWeViY9iuwH5AfpWHfRpDezQxrhVlYKM9ADUVFFFFFFFFFFFFFFFFFFFFFSQAYLe9a1r4cmmhEs8/lE9E2ZIHvzUN5od7a/Mi+apOB5YJP4iqdFRzjo2KjoorpPDNxHLpiwqfmiYhh9SSD+v6GtCiiimyyJDG00jYVVJY46AVyFzN9ouZLjbje5bGemTmmUUUUUUUUUUUUUUUUUUUUUVZsIlmlihcnDuFOPc11dFUtU0aO/PnRsEkA5OOG+v8AjWHPBNbSGKeIqw7EVBOeAPeo6KKvaDqJsLwI7ARSkLJnt6HPbr+VdNRRRWD4n1GV7g6dGxCIAXH949fy6fj+FZNFFFFFFFFFFFFFFFFFFFFFFavhaAy3ZmOcRD17ngf1rfooqprNl9sszsXLp8yYHJ9R/nvisnVNKFjpSTTKPOaYAkHoCDx6dqzKKKK6bQdRF/ZhHYmWIBZM9/Q579Pzq9RUGo6hFptsbiUE84VR3Pp7VyksjzSNNI2WZiWOOpNNoooooooooooooooooooooorT8K3Hl37W7PgSpwMdWHP8s10NFFNmmit4mnncKijLMa53XdZ/tFxBAMQo2QSOWPr7f5/DPoooqS1u7izl8+2lKNjGR6VsWfiuJgEvoCpz9+Pkfl1H60t94qhj+Swj8w/33BC/l1Pf0rHur26vX8y6nZyOmeg+g7U+3sGnh83zMZztBHWq5BBwRgjqDRRRRRRRRRRRRRRRRRRRRRRT7a4ktZ0uIjhkbI9/auttLqK8t1uYCdrjjI5qSisfxTqKrENOicFmOZQOw6gf1/D3rDooooAJOAMk9AKnj0+4dGZkwduVBPU1C6NGxR1wR1FLFDLMdsSE1bttNVcPOcnrt7f/AF6tgADAGAOgFQXFhDMS6/Kx6kd6pT2k8HLrkf3h0qOiiiiiiiiiiiiiiiiiiiiir+h6wdOl8mdj5LH5gB908c+vat6LVNOnj82O9jwBk5bBAzjkHpzVDVPEsUIMGnkO+CDJ2U+3r/Lp1rCZmdi7sSSckk8k0lFABJwBknoBVqDTHb5p22j+6OtWobWGADYmSP4j1qSmTQRzrt
kXPoe4pYYUgQRxjAH606iiioZ7GCfLY2t/eFU5rG4hBYqCAMkqahoooooooooooooooooooAJOAMk9AKty2CxWZcgmQcn/AAqpRUtpb/aJdpB2j7xHanXlkbf50JKep6ioK0bG2EEQdl+dhz7e1T0UUUUUUUUUVmXkAt5yi9CMjmoqKKKKKKKKKKKKKKKACTgDJPQCpo7G5l/5Z7R6txVu0slt/nY7m9fSp6ztQtxBKGTAV+gHaktLRrlsnhR1P9K0URY1CIuAOgpSARgjIPUGqM+mOHzBgqT0J6VeoooooooooooqvqEHmwbx1Tn8O9Z9FFFFFFFFFFFFFFPt4GuJPLU47k+grRt7aO3XCjJ7tjk1JRRTLi3S4QI5IAOeKWKNYYxGnQetOooooooooooooooorLuofInZAOOq/So6KKKKKKKKKKKKKv6XHthMh/iPH0H+TVmiiiiiiiiiiimvNFGQskignoCaVmVBudgAO5NCujjKMCD3BoLAdTRvWlBB6UUUUVT1VOEkC+oJ/wA/jVOiiiiiiiiiiiiitW2Ty7dF24+UZB9afRRRSNuA+QAn0JxQgYL855746UtFFFFKEJGagaJdzlhy/Dc/hRJH5uEY/Jj5l9T2pVUKoVRwBgUtFAJHSl3tTwcjNFKiPI2xFLE9ABmpJdHvbiEoYDhhxlhx+tYbqyMUdSGBwQRyDSUUUUUUUUUUUUVsUUUVHdTNBF5qpuwRnntTYLv7Q+I4m2gcs3r6VNRRRRQBk4qSop/v/hTKkCp5WNwz161GRg4ooopTIsaruYAE4JLYxV/SbK2vYftLybxuI2q3THr/ADrSSOOIbY41UZ6KMU6ua8U2H2a++1KV2z5IUDGCMZ/Pr+NZlFFFFFFFFFFFSWsfm3CJgfeyQfStSoJtQghcxncSOu0U3+1Lf+4/5D/Gp45ElQSRnIPQ06iiiiiigcHNPdwgyagJJOSaAASAfWiiiiiq+o/6gf7/APQ1L4b1JbC9MczhY5RhiegPY/0/Gunoqj4is/tmlvhsGL94OfQHP6ZrlaKKKKKKKKKKKlsUeS9iijk2F5AobGcZOK6O80CRYSbGYswXhZMZJx69K5iRXV2WUEMCQwbqD70laWnkG0UA9M5/OpqKKKKKVI5JTtjjZjjoozSzwy2zlJk2nGcVEzAqBzxSUUUUUUVV1L+D8f6VVrp/D2rDULXyZnHnRDBGeWH97n9f/r1o0Vx+q2ostRltlACq/wAoBzgHkdfY1XooooooooooqxpCO+q2wRST56nAHYHJrtK5vxVq/wBpk/s1IGURSZdnGCTyBj2wc++ax6vaU4MTJ3DZ/wA/lVqiiiirWn6cLxTLJIQobGB1NacNvDbrthjCjvjvWbr96i3EdkISXKli+eADx/SqFFFFFFFFVdS/g/H+lVas6TeS2OoRzRKzZbayKMlge2O59PfFdfRXOeL/APkJJ/1wH/oTVlUUUUUUUUUUVq+EbbztUM5DYijJBHTJ4wfwJ/Kunqve6XYahg3lqrkdG6H6ZHOOelc7qvhq6sH8yE+ZCz4Vu6/73+P8qkhhSBBHGMAfrUc1xcRk7bRmGcAhuv4CnxySNgSQMp7/ADAgU+iitfSgBYoQByTn35qzWJryk6urY4FsB/48arUUUUUUUVV1L+D8f6VVq94ds/tmqJlsCL94efQjH64rqqK5LXL032pSSAgqh2R4ORgd8+/J/GqlFFFFFFFFFFb3gn/l6/4B/wCzVvUVna/NEY0tzy+7f06DBFZlFFFFFa+krtsVO4nJJ5PTmrNZniCNBJFKF+Yggn2HT+ZrOopVIBBIzSHGeBRRRRVXUv4Px/pVWug8IWipayXjIdzvtUkfwj0/H+VbFUPEV8tnprpwXmBRQfQ9T+X6kVy1FFFFFFFFFFFWNL1KbSrsXUKhuMOp/iHp7V1mn6nZ6nEZbSXOMblIwVPvTr6+t9Pt2ubl8KOgHVj6D3rAF6+oFrt+rseM/dGeB+WKWi
mfaLf/AJ7p/wB9Cj7Rb/8APdP++hSPd2yDJmX8Dn+VNkvrdIvMVw3oo61r+G7iS40sNJGRh2AJ/iGc5H54/Cr9Z+vx5hjlz91iMY9f/wBVZdFPij3fM3TsKJUCnIHBplFFFVNSYF1TuATVdVZ2CIpLE4AA5JrsrG0SxtI7SM5CLjPqe5/Opa5XxBqP9oag3ltmOP5Y8Hg+p/E/piqNFFFFFFFFFFFFFWbfTnmjEjSbQegxUg0tgCouTg9Rt6/rSDSRnmfjvhaf/Zdv/ff8x/hR/Zdv/ff8x/hSrptsAQdxz0JPSlXTrVRgoT7ljTLjTYyu63GGH8OeDWv4TleTSyrtkJKVXjoMA/zJrTqprUavYMxJ+RgR+eP61jUAZOB3qx06U2YAoahoooqlqP8Arx/uf1NW/C9kbnUftDAFIBk5GeT0/wAfwrpqo+INR/s/T28tsSSfLHg8j1P4D9cVytFFFFFFFFFFFFFOijMsixjucdK1gABgDAHQCiiiiiiiregAxXEqK52uu4pz97PJ68dfTtWrVTWnK2JULncwH07/ANKxqVfvD61PSSY2HPpUFFFFULjdNdMqISxbaFHJJ6V1OkacNMsltiwLE7pGHQk/5A/CrLsqKXdgFAySTwBXJaxqb6pdmbLCNeI0J6D/ABP+elVaKKKKKKKKKKKKKsaaga5yf4VJH8q0KKKKKKKKmsJvIu0cnAJw3OODW1VTWObVc/3x/I1kPGydenrTenSrFMmJ2cetRUVJbWk92+yBM46k9BWjFoNuvM0zOc9uBj0qSz0WwsZ2uYYiZGYkMxztz2H+c1brF8V6mFT+y4jy2GlORwOw+vQ/l61g0UUUUUUUUUUUUUVc0kHEhxxxz+dXKKKKKKKKK3LaYXFuk3HzLzj171DqylrZQo/jH8jWf5Un939aja1kL8R8d+afsf8Aun8qRl/hYfgajMA7Makt7IzTJGWwpbk9OK10NpaKIk2oM9B/X/GpaKr6nqMOmWpuZVJ5wij+I+ntXJTzzXMzTzyFnY5ZjTKKKKKKKKKKKKKKK0dOQLagj+Ikn+VT0UUUUUUUVsaV/wAeEf4/zNGoqDGr9w2Kp0UUUUUVpRuJIw47j1ptzcwWcLXFw+1Fxk4J747Vymp6ncapcedMcKOEQHhR/j71WooooooooooooooorUtVC2yBR/CDUlFFFFFFFFamikm0IJPEhx7cCpNR/wBQP9/+hqnRRRRRRVm0u44oSkjHg8DFc9rGrXmoy+VcAIsbHESngH39TVOiiiiiiiiiiiiiiiitiiiiiiiiiitXR0KWe4kfM5I/l/Spr1N1u3GSORVCiiiiiiisLUo/KvpVznLZ6evP9agoooooooooooooooorYooooooooorY0r/jwj/H+Zqd1DqUPQjBrNZSrFWHIODSUUUUUUVka6qi8BAAzGCffk1Sooooooooooooooop0SCSVUPQsAcVrUUUUUUUUUVsab8lhHv469fc8VYrNlYNKzKeCxIptFFFFFFZniBV3RNgZIYE/lWdRRRRRRRRRRRRRRRUlmm+5Rc4+bP5c1qUUUUUUUUUVYmvpLi3js0jIwADg5LYrRdha2axu3zBAox64qlRRRRRRRWb4h/5Y/8AAv6Vm0UUUUUUUUUUUUUUVJaOUuUI/vY/PitSiiiiiiiiipLW5NpL5qorHaQN3b3qwtzLcoGlfcRx0oooooooorJ19m+1IuTgR5A/E1Rooooooooooooooop9v/x8R/74/nWrRRRRRRRRRRU9r/qz/vVLRRRRRRRWLrDMdQcEk4AA9uBVWiiiiiiiiiiiiiiigEg5BwR0IrYooooooooooqe1J2EY79alooooooorD1R1fUJGU5GQPxAxVeiiiiiiiiiiiiiiiipIrq4hG2OUgenWui0TRxqenrfTTbd4O1VHQg4yfyPH61Y/4Rf/AKfv/IX/ANenReGIg2ZrtmGOirj/ABqDWtIttM02S+hd2MZHysRzkg
envWXo9wmpajHZSRFVfOSrc8An09q3P+Edsv8AnrL/AN9D/ClXw9YhgS8hwehYc/pUn9i6Z/z7f+Pt/jXKT6o5kcW6gJuOwsPmx2z2zVjQHd/N3sTjGMn1zWjRRRRRRRRXP3v/AB+S/wDXVv51HRRRRRRRRRRRRRRRRRXW+CZpJNGKO2RHMyoMdBgH+ZNa9FZ3iz/kX7j/AIB/6GK5zwt/yHoP+Bf+gmuwoorz+r2gM32p1ycGPJH4itaiiioNMlkmskllYsxJyT9TU9FFFc/e/wDH5L/11b+dR0UUUUUUUUV//9k='}], 'status': '000'}
# base64 decoding
r_img = base64_to_cv2(r.json()["results"][0]['data'])
# The PNG obtained after this step is a single-channel grayscale image? The image is as follows:

Pil_image=Image.fromarray(r_img,mode='P')
# colorize
c=get_color_map_list(256)
Pil_image.putpalette(c)
After this step, the image has color, but compared with the one generated by the server, the quality is much worse, with a lot of noise. Looking carefully back at the grayscale PNG before colorizing, it also has a lot of noise, so the colorization should not be the cause. I don't know what the reason is. How can I get the same PNG as the one generated in the server's output path? Thanks in advance!!

| open | 2021-10-19T04:48:57Z | 2021-10-28T01:56:34Z | https://github.com/PaddlePaddle/PaddleHub/issues/1657 | [
"cv"
] | sdl415 | 1 |
scikit-learn/scikit-learn | machine-learning | 30,160 | Change forcing sequence in newton-cg solver of LogisticRegression | ### Describe the workflow you want to enable
I'd like to have faster convergence of the `"newton-cg"` solver of `LogisticRegression` based on scientific publications with empirical studies as done in [A Study on Truncated Newton Methods for Linear Classification (2022)](https://doi.org/10.1109/TNNLS.2020.3045836) (free [pdf](https://www.csie.ntu.edu.tw/~cjlin/papers/tncg/tncg.pdf) version).
### Describe your proposed solution
It is about the inner stopping criterion in a truncated Newton solver, i.e. when should the inner solver for "hessian @ coefficients = -gradient" stop.
$\eta$ is the forcing sequence.
#### Current stopping criterion
$\text{residual ratio} = \frac{\lVert res \rVert_1}{\lVert grad \rVert_1} \leq \eta$ with $res = residual = grad - hess @ coef$ and $\eta = \min(0.5, \sqrt{\lVert grad \rVert_1})$ (this $\eta$ is called the adaptive forcing sequence).
#### Proposed stopping criterion
As recommended in Chapter VII of the paper:
- Replace residual ratio with the quadratic approximation ratio $j\frac{Q_j - Q_{j-1}}{Q_j}$ and $Q_j = grad @ coef_j + \frac{1}{2} coef_j^T @ hessian @ coef_j$ and $j$ is the inner iteration number.
- Optionally replace L1-norm by L2-norm. For the quadratic ratio, this does not matter much.
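To make the proposed criterion concrete, below is a minimal NumPy sketch of a truncated-CG inner loop that stops on the quadratic-decrease ratio. It is illustrative only: it is not scikit-learn's implementation, and the `eta` value, the safeguards, and the toy problem are assumptions.

```python
import numpy as np

def truncated_cg(hess, grad, eta=1e-2, max_iter=50):
    """Approximately solve hess @ s = -grad by conjugate gradient, stopping
    once j * (Q_j - Q_{j-1}) / Q_j <= eta (the proposed forcing test)."""
    s = np.zeros_like(grad)
    r = -grad.copy()              # residual of hess @ s = -grad at s = 0
    p = r.copy()
    Q_prev = 0.0
    for j in range(1, max_iter + 1):
        if r @ r < 1e-12:         # inner problem solved exactly
            break
        Hp = hess @ p
        alpha = (r @ r) / (p @ Hp)
        s = s + alpha * p
        Q = grad @ s + 0.5 * s @ (hess @ s)   # quadratic model value Q_j
        if Q < 0 and j * (Q - Q_prev) / Q <= eta:
            break                 # relative model decrease is negligible
        r_new = r - alpha * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        Q_prev = Q
    return s

hess = np.array([[4.0, 1.0], [1.0, 3.0]])    # toy SPD Hessian
grad = np.array([1.0, 2.0])
step = truncated_cg(hess, grad)
```

On a well-conditioned 2x2 problem CG reaches the exact Newton step; on large ill-conditioned problems the ratio test truncates much earlier, which is the point of the proposal.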
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | open | 2024-10-27T10:42:19Z | 2024-11-06T20:54:50Z | https://github.com/scikit-learn/scikit-learn/issues/30160 | [
"New Feature",
"Performance"
] | lorentzenchr | 3 |
fohrloop/dash-uploader | dash | 21 | Consider migrating from resumable.js to something else | From https://github.com/23/resumable.js/issues/533 it seems that resumable.js is not maintained anymore. [Flow.js](https://github.com/flowjs/flow.js) is a fork of resumable.js, which has added functionality and seems to be actively maintained. The latest version of resumable.js on npm (1.1.0) is from Nov 2017, and the latest version of flow.js on npm (2.14.1) is from June 2020.
There could be also some other interesting options.
I am open to suggestions, if anyone knows better than me what is out there. | closed | 2021-01-15T21:29:02Z | 2022-02-19T11:41:22Z | https://github.com/fohrloop/dash-uploader/issues/21 | [
"help wanted",
"discussion",
"good first issue"
] | fohrloop | 9 |
MagicStack/asyncpg | asyncio | 533 | Equivalent of cursor.description in DBAPI2 for deriving the columns that would be returned by a 'limit 0' query? | I may have missed something, but I don't think `asyncpg` has an equivalent to the `cursor.description` capability in the DBAPI2 semi-standard, which is causing me a small problem.
I've been using a trick to determine the column names that would be returned by a query without fully executing the query. I'll demonstrate with SQLite:
```
In [1]: import sqlite3
In [2]: db = sqlite3.connect(":memory:")
In [3]: cursor = db.execute("select * from sqlite_master limit 0")
In [4]: [c[0] for c in cursor.description]
Out[4]: ['type', 'name', 'tbl_name', 'rootpage', 'sql']
```
So you run a query with `limit 0` - avoiding returning any rows - but you can still use the `cursor.description` property to figure out the names of the columns that WOULD have been returned.
This is particularly useful for dealing with complex SELECT statements - if you're doing a `select *` you could just look at what columns the table has, but for a more complex query being able to figure out what columns it will return without actually executing the full query can be really useful.
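For engines that do expose `cursor.description`, the trick can be wrapped in a tiny helper. This sketch uses stdlib sqlite3 and assumes the SQL is a plain SELECT that can be wrapped in a subquery:

```python
import sqlite3

def query_columns(db, sql):
    """Return the column names a SELECT would produce, without fetching rows."""
    cur = db.execute(f"SELECT * FROM ({sql}) LIMIT 0")
    return [col[0] for col in cur.description]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (a INTEGER, b TEXT)")
cols = query_columns(db, "SELECT a AS x, b FROM t")
```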
As far as I can tell, `asyncpg` only lets you run `fetch()` and get back a Python list of Record objects. Provided you have at least one result this is fine - you can look at the first result and use `record.keys()` to figure out what the columns are. But... if you request 0 results you get back an empty Python list, which you can't use to access an equivalent of `cursor.description`!
Please let me know if there's another way to do this (to analyze a query and figure out the columns it would return without fully executing it) - if there isn't then please consider this a feature request! | closed | 2020-02-14T00:27:35Z | 2020-02-14T00:42:55Z | https://github.com/MagicStack/asyncpg/issues/533 | [] | simonw | 1 |
browser-use/browser-use | python | 681 | Not able to run through pipeline as there is issue in encoding and decoding emoji's in service.py in agents | ### Bug Description
The pipeline run fails because there is an issue encoding and decoding emojis in `service.py` in agents.
### Reproduction Steps
1. Set up a azure dev ops pipeline
2. Install playwright and requrements
3. Run the browser-use code
4. The execution does not happen with below error
### Code Sample
```yaml
pool:
vmImage: 'windows-latest'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.x'
addToPath: true
- task: NodeTaskRunnerInstaller@0
inputs:
nodeVersion: '16'
- task: CmdLine@2
inputs:
script: |
npm install -D @playwright/test
npx playwright install
npx playwright --version
playwright install --project=chromium
set PYTHONIOENCODING=utf-8
- script: |
python -m pip install --upgrade pip
pip install -r requirements.txt
displayName: 'Install dependencies'
- script: |
python browser-use.py
displayName: 'Run Python script'
```
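Besides setting `PYTHONIOENCODING=utf-8` as the pipeline above attempts, a defensive option is to degrade log text before it reaches a cp1252 console. This is a generic sketch of the idea, not a browser-use API; the function name and sample message are made up.

```python
def safe_console_text(msg: str, encoding: str) -> str:
    """Re-encode with errors='replace' so characters the console codec
    cannot represent (e.g. emoji on cp1252) never raise UnicodeEncodeError."""
    return msg.encode(encoding, errors="replace").decode(encoding)

plain = safe_console_text("AMP: checks passed \U0001f680", "cp1252")  # emoji degraded to '?'
full = safe_console_text("AMP: checks passed \U0001f680", "utf-8")    # emoji preserved
```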
### Version
0.1.29
### LLM Model
GPT-4o
### Operating System
Wondows 11
### Relevant Log Output
```shell
File "C:\hostedtoolcache\windows\Python\3.13.1\x64\Lib\logging\__init__.py", line 1153, in emit
stream.write(msg + self.terminator)
File "C:\hostedtoolcache\windows\Python\3.13.1\x64\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f680' in position 17: character maps to <undefined>
``` | open | 2025-02-12T07:37:44Z | 2025-02-14T01:53:46Z | https://github.com/browser-use/browser-use/issues/681 | [
"bug"
] | krishnapriya21 | 1 |
tiangolo/uwsgi-nginx-flask-docker | flask | 55 | Would I be able to use any flask app, just making sure I have the main.py and everything in the app directory? | Or is there something different if I have static content (js, css, img) and templates (html) | closed | 2018-05-08T22:30:53Z | 2018-05-09T11:40:21Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/55 | [] | guppy57 | 2 |
clovaai/donut | nlp | 83 | failed to predict by the model generated by Classification FineTune | I finetuned the base model on the rvlcdip dataset as below.
```
!python train.py \
  --config config/train_rvlcdip.yaml \
  --pretrained_model_name_or_path "naver-clova-ix/donut-base" \
  --dataset_name_or_paths '["dataset/rvlcdip"]' \
  --exp_version "test_rvlcdip"
```
And using the trained model, running inference with `task_prompt = "<s_rvlcdip>"` gives the following output.
```python
pretrained_model.inference(image=input_img, prompt=task_prompt)["predictions"][0]
# outputs -> {'text_sequence': ''}
```
I was expecting output like the below.
```
{'class': 'invoice'}
```
Do you know what is wrong? | open | 2022-11-04T08:32:09Z | 2022-11-04T08:32:09Z | https://github.com/clovaai/donut/issues/83 | [] | kaz12tech | 0 |
keras-team/autokeras | tensorflow | 1,150 | Upgrade to TF 2.3 | Use stringlookup layer instead of indexlookup layer. | closed | 2020-05-26T19:40:05Z | 2020-08-15T22:54:32Z | https://github.com/keras-team/autokeras/issues/1150 | [
"pinned"
] | haifeng-jin | 0 |
autogluon/autogluon | computer-vision | 4,839 | [timeseries] Expose all MLForecast configuration options in DirectTabular and RecursiveTabular models | [timeseries module]
In nixtla mlforecast one can specify not just the lags (1, 2, 3, etc.) but also lag transforms such as min, max, mean, rolling, etc.
https://nixtlaverse.nixtla.io/mlforecast/docs/how-to-guides/lag_transforms_guide.html
The idea would be when specifying hyperparameters one could pass in lag_transforms as well.
Mock Example:
```
lag_transforms={
1: [ExpandingStd()],
7: [RollingMean(window_size=7, min_samples=1), RollingMean(window_size=14)]}
hyperparameters = {"DirectTabular": {"lags":[1, 2, 3],
"lag_transforms":lag_transforms
}}
predictor = TimeSeriesPredictor(
prediction_length=6,
path="test",
target="y",
eval_metric="MAE"
)
predictor.fit(train_data,
hyperparameters=hyperparameters,
time_limit=300)
```
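For reference, the kinds of features `lag_transforms` would add can be hand-rolled in pandas. This is only an illustration of the requested behavior, not AutoGluon or MLForecast API; the function name and defaults are made up.

```python
import pandas as pd

def add_lag_features(y: pd.Series, lags=(1, 2), roll_windows=(7,)) -> pd.DataFrame:
    """Build lag and rolling-mean-of-lag-1 features for a target series."""
    feats = {f"lag_{k}": y.shift(k) for k in lags}
    for w in roll_windows:
        # shift(1) so each rolling window only sees past values (no leakage)
        feats[f"rolling_mean_{w}"] = y.shift(1).rolling(w, min_periods=1).mean()
    return pd.DataFrame(feats)

feats = add_lag_features(pd.Series([1.0, 2.0, 3.0, 4.0]))
```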
| open | 2025-01-24T21:11:04Z | 2025-01-28T12:21:07Z | https://github.com/autogluon/autogluon/issues/4839 | [
"enhancement",
"module: timeseries"
] | breadwall | 0 |
kennethreitz/responder | flask | 106 | Error in view not getting reported |
```python
import tempfile
import responder
api = responder.API()
@api.route("/bugtest")
async def test(req, res):
    with tempfile.NamedTemporaryFile() as f:
        # This write call will fail
        f.write('this should be bytes, not str')
    res.content = 'Success!'
```
When I hit this view, I get a 200 response (no response body). I expect a 500 with a response of "Internal server error" (which I've gotten on other occasions). | closed | 2018-10-19T20:13:28Z | 2018-10-20T18:52:31Z | https://github.com/kennethreitz/responder/issues/106 | [] | benekastah | 2 |
ranaroussi/yfinance | pandas | 1,405 | Several tickers return Exception when getting historical data with `period="max"` and `interval="1mo"` | # Several tickers return Exception when getting historical data with `period="max"` and `interval="1mo"`
I had code that used yfinance to get monthly max period close price data for a small number of tickers. The code worked fine for yfinance version 0.1, but I ran it again using yfinance version 0.2.9 and for several of the tickers it was returning the exception below. It worked just fine for several of the tickers, but then returned the exception below for others. I checked the yahoo! finance website and the data is all there for those tickers that are returning the exception. Oddly enough, it seemed to work fine when I changed either the period or interval - it just didn't seem to like having both `period="max"` and `interval="1mo"` for some reason.
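The failure looks like a date-alignment problem: monthly bars are stamped at month start while dividend events fall mid-month, so a naive index join drops them. A plain-pandas illustration (not yfinance's actual merge code; dates and values are made up):

```python
import pandas as pd

prices = pd.DataFrame(
    {"Close": [10.0, 11.0]},
    index=pd.to_datetime(["2023-01-01", "2023-02-01"]),  # month-start bars
)
divs = pd.Series([0.25], index=pd.to_datetime(["2023-01-15"]), name="Dividends")

naive = prices.join(divs)                 # mid-month event never matches a bar
snapped = divs.copy()
snapped.index = snapped.index.to_period("M").to_timestamp()  # snap to bar start
aligned = prices.join(snapped)            # event now lands on the January bar
```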
# Examples
## Works fine
`yf.Ticker("ABBV").history(period="10y", interval="1mo")["Close"]`
`yf.Ticker("ABBV").history(period="max", interval="1wk")["Close"]`
## Returns Exception
`yf.Ticker("ABBV").history(period="max", interval="1mo")["Close"]`
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_13864/234900509.py in <module>
----> 1 yf.Ticker('ABBV').history(period="max", interval="1mo")["Close"]
~\Anaconda3\lib\site-packages\yfinance\base.py in history(self, period, interval, start, end, prepost, actions, auto_adjust, back_adjust, repair, keepna, proxy, rounding, timeout, debug, raise_errors)
709 df = quotes.sort_index()
710 if dividends.shape[0] > 0:
--> 711 df = utils.safe_merge_dfs(df, dividends, interval)
712 if "Dividends" in df.columns:
713 df.loc[df["Dividends"].isna(), "Dividends"] = 0
~\Anaconda3\lib\site-packages\yfinance\utils.py in safe_merge_dfs(df_main, df_sub, interval)
631 df = _pd.concat([df, df_sub_missing], sort=True)[col_ordering]
632 else:
--> 633 raise Exception("Lost data during merge despite all attempts to align data (see above)")
634
635 return df
Exception: Lost data during merge despite all attempts to align data (see above) | closed | 2023-02-07T21:16:51Z | 2023-03-21T18:16:31Z | https://github.com/ranaroussi/yfinance/issues/1405 | [] | pfischer1687 | 1 |
polarsource/polar | fastapi | 5,215 | License Keys: Integrate Customer Event Ingestion | open | 2025-03-10T08:04:15Z | 2025-03-10T12:20:29Z | https://github.com/polarsource/polar/issues/5215 | [
"must-have",
"v1.5",
"feat/entitlements"
] | birkjernstrom | 0 | |
ydataai/ydata-profiling | pandas | 1,384 | Dataset with categorical features causes memory error even on tiny dataset. | ### Current Behaviour
Dataset with categorical features causes memory error even on tiny dataset.
File "/usr/local/lib/python3.9/dist-packages/ydata_profiling/profile_report.py", line 439, in _render_json
description = self.description_set
File "/usr/local/lib/python3.9/dist-packages/typeguard/__init__.py", line 1033, in wrapper
retval = func(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/ydata_profiling/profile_report.py", line 245, in description_set
self._description_set = describe_df(
File "/usr/local/lib/python3.9/dist-packages/ydata_profiling/model/describe.py", line 151, in describe
metrics, duplicates = progress(get_duplicates, pbar, "Detecting duplicates")(
File "/usr/local/lib/python3.9/dist-packages/ydata_profiling/utils/progress_bar.py", line 11, in inner
ret = fn(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/multimethod/__init__.py", line 315, in __call__
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/ydata_profiling/model/pandas/duplicates_pandas.py", line 37, in pandas_get_duplicates
df[duplicated_rows]
File "/usr/local/lib/python3.9/dist-packages/pandas/core/groupby/groupby.py", line 2411, in size
return self._reindex_output(result, fill_value=0)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/groupby/groupby.py", line 4143, in _reindex_output
index, _ = MultiIndex.from_product(levels_list, names=names).sortlevel()
File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/multi.py", line 643, in from_product
codes = cartesian_product(codes)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/reshape/util.py", line 60, in cartesian_product
return [
File "/usr/local/lib/python3.9/dist-packages/pandas/core/reshape/util.py", line 62, in <listcomp>
np.repeat(x, b[i]),
File "<__array_function__ internals>", line 180, in repeat
File "/usr/local/lib/python3.9/dist-packages/numpy/core/fromnumeric.py", line 479, in repeat
return _wrapfunc(a, 'repeat', repeats, axis=axis)
File "/usr/local/lib/python3.9/dist-packages/numpy/core/fromnumeric.py", line 57, in _wrapfunc
return bound(*args, **kwds)
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 389. GiB for an array with shape (418190868480,) and data type int8
### Expected Behaviour
Current ydata-profiling code:
```python
duplicated_rows = (
    df[duplicated_rows]
    .groupby(supported_columns, dropna=False)
    .size()
    .reset_index(name=duplicates_key)
)
```
Should be:
```python
duplicated_rows = (
    df[duplicated_rows]
    .groupby(supported_columns, dropna=False, observed=True)  # add observed=True
    .size()
    .reset_index(name=duplicates_key)
)
```
Please pay attention to this issue that explains the solution:
[https://github.com/pandas-dev/pandas/issues/30552](https://github.com/pandas-dev/pandas/issues/30552)
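The blow-up is easy to reproduce in plain pandas: grouping on categorical columns without `observed=True` materializes the full cartesian product of category levels, not just the combinations that actually occur. A tiny illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "a": pd.Categorical(["x", "y"]),
    "b": pd.Categorical(["u", "v"]),
})
# Only 2 row combinations exist, but 2 x 2 = 4 level combinations are possible.
n_all_levels = len(df.groupby(["a", "b"], observed=False).size())
n_observed = len(df.groupby(["a", "b"], observed=True).size())
```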
### Data Description
10x10 random strings pandas dataset, with type specified as Category
### Code that reproduces the bug
```Python
import pandas as pd
from ydata_profiling import ProfileReport
df = pd.DataFrame(data=pd.util.testing.rands_array(10, size=(10, 10)), dtype="category")
report = ProfileReport(df, title="Profiling Report")
display(report)
```
### pandas-profiling version
v4.0.0
### Dependencies
```Text
pandas==1.3.4
```
### OS
ubuntu
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-07-16T11:52:15Z | 2023-08-24T15:51:13Z | https://github.com/ydataai/ydata-profiling/issues/1384 | [
"bug 🐛"
] | boris-kogan | 2 |
mage-ai/mage-ai | data-science | 4,886 | Incorporate PandasAI SmartDataframe as a dataframe return | When trying to return a PandasAI SmartDataframe, I get the following error.
Would really appreciate it if it can be resolved this month. Thanks!
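For context on the traceback below: the loop comes from probing `hasattr(obj, 'isoformat')` on an object whose `__getattr__` delegates back to itself, and an `isinstance` check would never trigger `__getattr__`. A minimal illustration of the failure pattern and a safer encoder check (not mage-ai's actual code; names are made up):

```python
import datetime

class Delegating:
    """Mimics a wrapper whose __getattr__ recurses on missing attributes."""
    def __getattr__(self, name):
        return getattr(self, name)   # naive delegation -> infinite recursion

def encode_complex_safe(obj):
    # isinstance() inspects only the type, so Delegating.__getattr__ never runs.
    if isinstance(obj, (datetime.date, datetime.datetime)):
        return obj.isoformat()
    return str(obj)

stamp = encode_complex_safe(datetime.datetime(2024, 1, 2, 3, 4))
wrapped = encode_complex_safe(Delegating())   # no RecursionError
```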
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
File /usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/models/block/__init__.py:3170, in Block.store_variables(self, variable_mapping, execution_partition, override, override_outputs, spark, dynamic_block_index, dynamic_block_uuid)
3167 if spark is not None and self.pipeline.type == PipelineType.PYSPARK \
3168 and type(data) is pd.DataFrame:
3169 data = spark.createDataFrame(data)
-> 3170 self.pipeline.variable_manager.add_variable(
3171 self.pipeline.uuid,
3172 block_uuid,
3173 uuid,
3174 data,
3175 partition=execution_partition,
3176 clean_block_uuid=not changed,
3177 )
3179 if not is_dynamic_child:
3180 for uuid in variables_data['removed_variables']:
File /usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/variable_manager.py:79, in VariableManager.add_variable(self, pipeline_uuid, block_uuid, variable_uuid, data, partition, variable_type, clean_block_uuid)
77 variable.delete()
78 variable.variable_type = variable_type
---> 79 variable.write_data(data)
80 if is_debug():
81 print(
82 f'Variable {variable_uuid} ({variable_type or "no type"}) for block {block_uuid} '
83 f'in pipeline {pipeline_uuid} '
84 f'stored in {variable.variable_path}'
85 )
File /usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/models/variable.py:284, in Variable.write_data(self, data)
282 self.__write_geo_dataframe(data)
283 else:
--> 284 self.__write_json(data)
286 if self.variable_type != VariableType.SPARK_DATAFRAME:
287 # Not write json file in spark data directory to avoid read error
288 self.write_metadata()
File /usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/models/variable.py:427, in Variable.__write_json(self, data)
425 file_path = os.path.join(self.variable_path, JSON_FILE)
426 sample_file_path = os.path.join(self.variable_path, JSON_SAMPLE_FILE)
--> 427 self.storage.write_json_file(file_path, data)
428 self.storage.write_json_file(sample_file_path, sample_output(data)[0])
File /usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/storage/local_storage.py:95, in LocalStorage.write_json_file(self, file_path, data)
92 os.makedirs(dirname, exist_ok=True)
94 with open(file_path, 'w') as file:
---> 95 simplejson.dump(
96 data,
97 file,
98 default=encode_complex,
99 ignore_nan=True,
100 )
File /usr/local/lib/python3.10/site-packages/simplejson/__init__.py:269, in dump(obj, fp, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, encoding, default, use_decimal, namedtuple_as_object, tuple_as_array, bigint_as_string, sort_keys, item_sort_key, for_json, ignore_nan, int_as_string_bitcount, iterable_as_array, **kw)
254 if cls is None:
255 cls = JSONEncoder
256 iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
257 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
258 separators=separators, encoding=encoding,
259 default=default, use_decimal=use_decimal,
260 namedtuple_as_object=namedtuple_as_object,
261 tuple_as_array=tuple_as_array,
262 iterable_as_array=iterable_as_array,
263 bigint_as_string=bigint_as_string,
264 sort_keys=sort_keys,
265 item_sort_key=item_sort_key,
266 for_json=for_json,
267 ignore_nan=ignore_nan,
268 int_as_string_bitcount=int_as_string_bitcount,
--> 269 **kw).iterencode(obj)
270 # could accelerate with writelines in some versions of Python, at
271 # a debuggability cost
272 for chunk in iterable:
File /usr/local/lib/python3.10/site-packages/simplejson/encoder.py:379, in JSONEncoder.iterencode(self, o)
370 _iterencode = _make_iterencode(
371 markers, self.default, _encoder, self.indent, floatstr,
372 self.key_separator, self.item_separator, self.sort_keys,
(...)
376 self.item_sort_key, self.encoding, self.for_json,
377 self.iterable_as_array, Decimal=decimal.Decimal)
378 try:
--> 379 return _iterencode(o, 0)
380 finally:
381 key_memo.clear()
File /usr/local/lib/python3.10/site-packages/mage_ai/shared/parsers.py:36, in encode_complex(obj)
34 elif isinstance(obj, Enum):
35 return obj.value
---> 36 elif hasattr(obj, 'isoformat') and 'method' in type(obj.isoformat).__name__:
37 return obj.isoformat()
38 elif isinstance(obj, np.integer):
File /usr/local/lib/python3.10/site-packages/pandasai/smart_dataframe/__init__.py:266, in SmartDataframe.__getattr__(self, name)
265 def __getattr__(self, name):
--> 266 if name in self._core.__dir__():
267 return getattr(self._core, name)
268 elif name in self.dataframe.__dir__():
File /usr/local/lib/python3.10/site-packages/pandasai/smart_dataframe/__init__.py:266, in SmartDataframe.__getattr__(self, name)
265 def __getattr__(self, name):
--> 266 if name in self._core.__dir__():
267 return getattr(self._core, name)
268 elif name in self.dataframe.__dir__():
[... skipping similar frames: SmartDataframe.__getattr__ at line 266 (2956 times)]
File /usr/local/lib/python3.10/site-packages/pandasai/smart_dataframe/__init__.py:266, in SmartDataframe.__getattr__(self, name)
265 def __getattr__(self, name):
--> 266 if name in self._core.__dir__():
267 return getattr(self._core, name)
268 elif name in self.dataframe.__dir__():
RecursionError: maximum recursion depth exceeded | open | 2024-04-04T19:09:14Z | 2024-04-04T19:13:59Z | https://github.com/mage-ai/mage-ai/issues/4886 | [] | jerwinrosal | 0 |
Skyvern-AI/skyvern | api | 1246 | [Feature Request] Send a signal to Skyvern's running process to stop the agent which sends webhook | I am currently running some evaluations on the Skyvern web agent. I set a 3-minute timeout, after which I kill the process Skyvern is running on port 8000. Once this happens, Skyvern doesn't send a webhook. I need the task ID sent in the webhook response so that I can copy Skyvern's logs over to my project.
What I would like is to send a signal to Skyvern (such as through an API call) letting Skyvern know that it should halt the currently running process and send a webhook response with the details of that task as was sent in the success cases. This will avoid me having to kill the port and have no webhook response. | closed | 2024-11-22T22:49:32Z | 2024-12-09T06:43:09Z | https://github.com/Skyvern-AI/skyvern/issues/1246 | [] | devinat1 | 8 |
ckan/ckan | api | 7,831 | Dataset resource views not created | ## CKAN version
2.10.1
## Describe the bug
I have created a dataset and added a `.txt` resource to it, but no views are created for the resource.
The plugin setting in my environment is `ckan.plugins = stats text_view datatables_view` and the resource views setting is `ckan.views.default_views = image_view datatables_view`.
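Given those quoted values, a plain-text `.txt` resource has no matching default view, since neither `image_view` nor `datatables_view` handles text. If text previews are wanted, `text_view` would need to be listed in both settings. A hypothetical config sketch (adjust to your setup):

```ini
ckan.plugins = stats text_view datatables_view image_view
ckan.views.default_views = image_view text_view datatables_view
```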
### Steps to reproduce
1. Create a dataset and add a resource to it.
2. Go to the created resource and select Preview from the Explore button.
### Expected behaviour
Views must be created for the resource.
### Additional details
Please find the screenshot below:

| closed | 2023-09-25T06:12:52Z | 2023-09-26T03:53:59Z | https://github.com/ckan/ckan/issues/7831 | [] | Gauravp-NEC | 1 |
Kav-K/GPTDiscord | asyncio | 167 | "Empty Response" on /index query | 
| closed | 2023-02-22T05:05:31Z | 2023-02-25T01:12:07Z | https://github.com/Kav-K/GPTDiscord/issues/167 | [
"bug",
"help wanted",
"good first issue",
"high-prio",
"help-wanted-important"
] | Kav-K | 1 |
vaexio/vaex | data-science | 1,246 | pip install vaex fails on M1 with Big Sur 11.2.3 | **Description**
%pip3 install vaex
results in an error under Big Sur 11.2.3 on an M1 processor [don't know if that matters; I don't have another one to test]
**Software information**
% pip3 --version
pip 21.0.1 from /usr/local/lib/python3.9/site-packages/pip (python 3.9)
% python3.9 --version
Python 3.9.2
**Additional information**
error output:
...
running build_ext
building 'vaex.vaexfast' extension
creating build/temp.macosx-11-x86_64-3.9
creating build/temp.macosx-11-x86_64-3.9/src
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -I/usr/local/lib/python3.9/site-packages/numpy/core/include -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/usr/local/opt/tcl-tk/include -I/usr/local/Cellar/python@3.9/3.9.2_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c src/vaexfast.cpp -o build/temp.macosx-11-x86_64-3.9/src/vaexfast.o -std=c++11 -mfpmath=sse -O3 -funroll-loops -g -mmacosx-version-min=10.9
error: $MACOSX_DEPLOYMENT_TARGET mismatch: now "10.9" but "11" during configure
----------------------------------------
ERROR: Command errored out with exit status 1
... | closed | 2021-03-10T00:22:03Z | 2021-03-18T09:57:15Z | https://github.com/vaexio/vaex/issues/1246 | [] | InterruptSpeed | 9 |
modelscope/modelscope | nlp | 412 | load model got an unexpected keyword argument 'device' | OS: centos
CPU: `lscpu` / gpu
modelscope : version is 1.7.2rc0
I run the example code :
```python
from modelscope import snapshot_download, Model
from modelscope.models.nlp.llama2 import Llama2Tokenizer
model_dir = snapshot_download("modelscope/Llama-2-7b-ms", revision='v1.0.1',
ignore_file_pattern = [r'\w+\.safetensors'])
# model = Model.from_pretrained(model_dir, device_map='auto', torch_dtype=torch.float16)
model = Model.from_pretrained(model_dir, device_map='auto', torch_dtype=torch.float16)
tokenizer = Llama2Tokenizer.from_pretrained(model_dir)
```
I get a RuntimeError:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
If I add `device='gpu'`, I get:
```
TypeError: Llama2ForTextGeneration: Llama2ForTextGeneration.__init__() got an unexpected keyword argument 'device'
```
The code:
```python
model = Model.from_pretrained(model_dir, device_map='auto', device='gpu', torch_dtype=torch.float16)
```
| closed | 2023-07-25T06:50:40Z | 2023-08-11T07:57:42Z | https://github.com/modelscope/modelscope/issues/412 | [] | lingfengchencn | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 752 | stuck at encoder | when i try to record my voice every thing goes well untill it says "loading the encoder encoder\saved_models\pretrained.pt" the path is C:\Users\ytty\OneDrive\Bureaublad\voice clone\Real-Time_Voice_Cloning\Real-Time-Voice-Cloning-master\encoder\saved_models, how can this be fixed? (says SV2TTS is not responding) | closed | 2021-05-11T07:03:09Z | 2021-05-30T07:44:16Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/752 | [] | ytty-tyyt | 1 |
amidaware/tacticalrmm | django | 2,041 | [Feature request] Run individual checks | **Is your feature request related to a problem? Please describe.**
Currently we can only manually run ALL checks at once, not individual checks.
**Describe the solution you'd like**
A button, shown when right-clicking a check, that allows running just the selected check.
**Describe alternatives you've considered**
change the timer to a low interval and set it back to default value when refreshed...
**Additional context**

| closed | 2024-10-22T13:52:24Z | 2024-10-22T13:56:00Z | https://github.com/amidaware/tacticalrmm/issues/2041 | [] | P6g9YHK6 | 1 |
Kinto/kinto | api | 2,680 | AttributeError: 'ReentrantFileLock' object has no attribute '_lock' | when running `make tests` I get the error: `AttributeError: 'ReentrantFileLock' object has no attribute '_lock'` on the master branch
going through the steps in https://docs.kinto-storage.org/en/latest/community.html#get-started | open | 2020-12-21T00:42:21Z | 2024-07-23T20:01:24Z | https://github.com/Kinto/kinto/issues/2680 | [
"stale"
] | jkieberking | 0 |
MagicStack/asyncpg | asyncio | 884 | Exception when attempting to fetch SSL info | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
Thanks for creating asyncpg! It's dramatically improved the performance of my [open-source web application](https://github.com/RunestoneInteractive) ([Runestone Academy](https://runestone.academy/), a free interactive e-book).
To reproduce this bug, simply start asyncpg as a non-root user (one without permission to access `/root`).
* **asyncpg version**: 0.25
* **PostgreSQL version**: 12.7
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: I use AWS RDS; haven't tested locally
* **Python version**: 3.9.1
* **Platform**: Debian GNU/Linux 11 (bullseye)
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: N/A
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: I only use asyncio
<!-- Enter your issue details below this comment. -->
I run asyncpg as a non-root user for improved security; this user lacks root access. During startup in asyncpg v. 0.25, I see the error like this:
```
File "/srv/web2py/applications/runestone/.venv/lib/python3.9/site-packages/asyncpg/connection.py", line 2085, in connect
return await connect_utils._connect(
File "/srv/web2py/applications/runestone/.venv/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 874, in _connect
addrs, params, config = _parse_connect_arguments(timeout=timeout, **kwargs)
File "/srv/web2py/applications/runestone/.venv/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 640, in _parse_connect_arguments
addrs, params = _parse_connect_dsn_and_args(
File "/srv/web2py/applications/runestone/.venv/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 543, in _parse_connect_dsn_and_args
if not sslkey.exists():
File "/usr/local/lib/python3.9/pathlib.py", line 1424, in exists
self.stat()
File "/usr/local/lib/python3.9/pathlib.py", line 1232, in stat
return self._accessor.stat(self)
PermissionError: [Errno 13] Permission denied
```
In [connect_utils.py line 543](https://github.com/MagicStack/asyncpg/commit/383c711eb68bc6a042c121e1fddfde0cdefb8068#diff-fe0ec192392fe4131382e009b40723ba9469e62ab4760fed681a438d1d352bc3R529), asyncpg checks if a root-owned file exists. Unfortunately, a non-root user gets a permission denied exception instead of a False return value from `exists()`. It looks like wrapping this in a try/except would fix this bug. (It looks like [a later exception](https://github.com/MagicStack/asyncpg/commit/383c711eb68bc6a042c121e1fddfde0cdefb8068#diff-fe0ec192392fe4131382e009b40723ba9469e62ab4760fed681a438d1d352bc3R529) needs PermissionError added to it.)
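The suggested guard is a one-liner. A sketch of the wrapped check (not asyncpg's actual code; the helper name is made up):

```python
from pathlib import Path

def exists_quietly(path: Path) -> bool:
    """Treat an unreadable path as unusable instead of raising,
    as proposed for the sslkey check above."""
    try:
        return path.exists()
    except PermissionError:
        return False

root_ok = exists_quietly(Path("/"))
missing = exists_quietly(Path("/definitely-not-here-xyz"))
```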
For me, reverting to asyncpg v. 0.24 causes my code to run without problems. | open | 2022-02-09T00:07:44Z | 2022-12-26T09:20:49Z | https://github.com/MagicStack/asyncpg/issues/884 | [] | bjones1 | 1 |
aio-libs/aiomysql | asyncio | 986 | How to initialize a database connection only once when deploying services using FastAPI | ### Describe the bug
The simplest way would be to create the connection and cursor in the class initializer, but that fails because the `__init__` method cannot be asynchronous.
### To Reproduce
N/A
### Expected behavior
I only want to connect to the database once for a FastAPI service, rather than having to connect to the database every time a request is made
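One common pattern is to create the pool lazily but exactly once behind an `asyncio.Lock`, then hook creation and teardown into FastAPI's lifespan/startup events. The sketch below fakes the pool factory so it runs standalone; in a real app the factory would be `aiomysql.create_pool(...)` and the holder would live on `app.state`:

```python
import asyncio

class OnceHolder:
    """Create an expensive async resource exactly once, even if the
    first requests arrive concurrently."""
    def __init__(self, factory):
        self._factory = factory
        self._value = None
        self._lock = asyncio.Lock()

    async def get(self):
        if self._value is None:
            async with self._lock:
                if self._value is None:          # re-check under the lock
                    self._value = await self._factory()
        return self._value

calls = 0
async def fake_create_pool():
    global calls
    calls += 1
    await asyncio.sleep(0)      # simulate the async connect
    return object()             # stand-in for an aiomysql pool

async def main():
    holder = OnceHolder(fake_create_pool)
    return await asyncio.gather(*(holder.get() for _ in range(5)))

pools = asyncio.run(main())
```

With FastAPI you would call the holder from request handlers (or build the pool in a lifespan/startup hook) and close it on shutdown with `pool.close()` followed by `await pool.wait_closed()`.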
### Logs/tracebacks
```python-traceback
N/A
```
### Python Version
```console
$ python --version
python = 3.9
```
### aiomysql Version
```console
$ python -m pip show aiomysql
0.2.0
```
### PyMySQL Version
```console
$ python -m pip show PyMySQL
1.1.0
```
### SQLAlchemy Version
```console
$ python -m pip show sqlalchemy
```
### OS
linux
### Database type and version
```console
SELECT VERSION();
N/A
```
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct | open | 2024-05-22T06:52:26Z | 2024-07-30T17:33:42Z | https://github.com/aio-libs/aiomysql/issues/986 | [
"bug"
] | TheHonestBob | 2 |
thtrieu/darkflow | tensorflow | 926 | Getting only upto 12 FPS for 360p video when running darkflow on Google Colab K80 GPU | Hi,
This might be off topic for the darkflow library, but I am hoping someone has tried this implementation in Google Colab.
I think I should be getting a higher FPS on the K80 for such a low-resolution video (I am getting only up to 12 FPS for a 360p video). Google Colab does not have CUDA installed. Will installing CUDA help me boost the FPS?
I tried to install it, but it did not change anything. Maybe I didn't install it correctly. | closed | 2018-10-24T08:11:48Z | 2022-03-31T12:48:57Z | https://github.com/thtrieu/darkflow/issues/926 | [] | jjiteshh | 0 |
ultralytics/ultralytics | computer-vision | 19,807 | How to completely disable Albumentations-based augmentations in YOLOv11 (e.g., Blur, MedianBlur etc..)? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I tried setting augment=False, but it seems this specific augmentation is still being applied. I couldn’t find clear instructions on how to fully disable it.
```
DDP: debug command /user77/miniforge3/envs/user/bin/python -m torch.distributed.run --nproc_per_node 2 --master_port 40121 /user77/.config/Ultralytics/DDP/_temp_073wsljo23452078238096.py
Ultralytics 8.3.86 🚀 Python-3.13.2 torch-2.6.0+cu124 CUDA:0 (NVIDIA A100 80GB PCIe, 81229MiB)
                                                     CUDA:1 (NVIDIA A100 80GB PCIe, 81229MiB)
Overriding model.yaml nc=80 with nc=2
Freezing layer 'model.23.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks...
AMP: checks passed ✅
train: Scanning /path/2.F
train: Caching images (25.5GB Disk): 100%|██████████| 997/997 [00:00<00:00, 704
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01, num_output_channels=3, method='weighted_average'), CLAHE(p=0.01, clip_limit=(1.0, 4.0), tile_grid_size=(8, 8))
```
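For what it's worth: to my understanding these transforms are only built when the `albumentations` package is importable, so `pip uninstall albumentations` is the bluntest way to drop them, and `augment=False` affects prediction-time (TTA) augmentation rather than training transforms; both points are worth verifying against your installed version. Another workaround discussed in the community is to neutralize the transform's constructor before the trainer is built. The general technique, demonstrated on a hypothetical stand-in class rather than the real `ultralytics.data.augment.Albumentations` (the attribute names here are assumptions):

```python
import random

class Albumentations:
    """Stand-in for ultralytics.data.augment.Albumentations (hypothetical)."""

    def __init__(self, p=1.0):
        self.p = p
        self.transform = "blur / median_blur / to_gray / clahe pipeline"

    def __call__(self, labels):
        # Mirrors the usual pattern: skip when no pipeline was built.
        if self.transform is None or random.random() > self.p:
            return labels
        return labels + " (augmented)"

def _disabled_init(self, p=1.0):
    # Patched constructor: never build a pipeline, so __call__ is a no-op.
    self.p = 0.0
    self.transform = None

# Patch BEFORE the trainer/dataset is constructed.
Albumentations.__init__ = _disabled_init

out = Albumentations()("image")  # passes through unchanged
```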
### Additional
_No response_ | open | 2025-03-21T02:04:05Z | 2025-03-21T10:24:02Z | https://github.com/ultralytics/ultralytics/issues/19807 | [
"question",
"detect"
] | hillsonghimire | 5 |
yihong0618/running_page | data-visualization | 732 | A question | Hi, following your docs I was able to export the GPX file, but when I manually import this file into other apps, the heart rate is not shown. The recording watch is Keep's own watch. Is this because some part of the data is encrypted? | closed | 2024-11-08T23:09:23Z | 2024-11-11T06:09:10Z | https://github.com/yihong0618/running_page/issues/732 | [] | coutureone | 18 |
horovod/horovod | deep-learning | 3,881 | Follow Tensorflow evolution in "examples/keras/keras_mnist_tf2.py" | **Environment:**
Tensorflow version: 2.12
Horovod version: 0.27.0
Python version: 3.10
**Bug report:**
`tf.Session` is not compatible with recent TensorFlow versions. I propose the new code below, under the block tagged "# NEW TF2".
**Solution**
```
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import math
import tensorflow as tf
import horovod.keras as hvd
# Horovod: initialize Horovod.
hvd.init()
# OLD TF2
# Horovod: pin GPU to be used to process local rank (one GPU per process)
#config = tf.compat.v1.ConfigProto()
#config.gpu_options.allow_growth = True
#config.gpu_options.visible_device_list = str(hvd.local_rank())
#K.set_session(tf.Session(config=config))
# NEW TF2
# Pin GPU to be used to process local rank (one GPU per process)
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')
tf.config.experimental.set_memory_growth(gpus[hvd.local_rank()], True)
batch_size = 128
num_classes = 10
# Horovod: adjust number of epochs based on number of GPUs.
epochs = int(math.ceil(12.0 / hvd.size()))
# Input image dimensions
img_rows, img_cols = 28, 28
# The data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# Horovod: adjust learning rate based on number of GPUs.
opt = keras.optimizers.Adadelta(1.0 * hvd.size())
# Horovod: add Horovod Distributed Optimizer.
opt = hvd.DistributedOptimizer(opt)
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=opt,
metrics=['accuracy'])
callbacks = [
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]
# Horovod: save checkpoints only on worker 0 to prevent other workers from corrupting them.
if hvd.rank() == 0:
callbacks.append(keras.callbacks.ModelCheckpoint('./checkpoint-{epoch}.h5'))
model.fit(x_train, y_train,
batch_size=batch_size,
callbacks=callbacks,
epochs=epochs,
verbose=1 if hvd.rank() == 0 else 0,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
| closed | 2023-04-06T15:06:58Z | 2023-04-21T09:37:18Z | https://github.com/horovod/horovod/issues/3881 | [
"bug"
] | PierrickPochelu | 1 |
mitmproxy/pdoc | api | 428 | Support `functools.singledispatchmethod` rendering | #### Problem Description
pdoc always renders the docstring of the `functools.singledispatchmethod` dispatcher for every registered implementation, instead of each implementation's own docstring.
#### Steps to reproduce the behavior:
1. minimal example
```python
import functools
from typing import Union
class SingleDispatchMethodExample:
"""Fancy class to show the behaviour of pdoc."""
def __init__(self):
"""The `__init__` method is empty."""
@functools.singledispatchmethod
def fancymethod(self, str_or_int: Union[str, int]):
"""A fancy method which is capable of handling either `str` or `int`.
        :param str_or_int: string or integer to handle
"""
raise NotImplementedError(f"{type(str_or_int)=} not implemented!")
@fancymethod.register
def fancymethod_handle_str(self, str_to_handle: str):
"""Fancy method handles a string.
:param str_to_handle: string which will be handled
"""
print(f"{type(str_to_handle)} = '{str_to_handle}")
@fancymethod.register
def fancymethod_handle_int(self, int_to_handle: int):
"""Fancy method handles int.
:param int_to_handle: int which will be handled
"""
print(f"{type(int_to_handle)} = '{int_to_handle:x}'")
```
2. The pdoc output always shows a copy of the `singledispatchmethod` docstring and not the docstring of the registered method. (Run `pdoc` on the example.)
#### System Information
```bash
pdoc: 12.0.2
Python: 3.8.10
Platform: Windows-10-10.0.19042-SP0
``` | closed | 2022-08-09T12:19:29Z | 2022-08-23T07:27:46Z | https://github.com/mitmproxy/pdoc/issues/428 | [
"bug"
] | 9y2070m | 2 |
neuml/txtai | nlp | 103 | Update notebooks and example applications | Update notebooks and example applications with latest version of txtai. | closed | 2021-08-17T20:24:31Z | 2021-08-17T21:11:34Z | https://github.com/neuml/txtai/issues/103 | [] | davidmezzetti | 0 |
allenai/allennlp | pytorch | 5,720 | AllenNLP-Light! 🎉 🙂 | I felt I would miss the AllenNLP `modules` and `nn` packages I had used for years. So recently, I factored them into a separate `pip`-installable lightweight repo: [allennlp-light](https://github.com/delmaksym/allennlp-light). I also updated them to be registered with [AI2 Tango](https://github.com/allenai/tango).
As the code in `modules` and `nn` is mature, well-tested, and easy to extend, I believe the repo can sustain itself and be helpful in a read-only, zero-maintenance mode!
To address copyright, I tried to figure out the right [way](https://github.com/allenai/allennlp/issues/4491). I added copyright notices to all files, but because the project is not under AI2's GitHub, I would prefer you to tell me if the name `allennlp` in the [package name](https://twitter.com/simonw/status/1540717175714873345?s=20&t=GyFZHoyisUaE_5ZnC-6JeQ) is all good🙂.
@dirkgr @epwalsh, happy to hear your thoughts/suggestions on that and my little project overall! | closed | 2022-10-12T12:49:28Z | 2022-10-19T23:14:01Z | https://github.com/allenai/allennlp/issues/5720 | [
"Feature request"
] | MaksymDel | 2 |
huggingface/datasets | pytorch | 7,122 | [interleave_dataset] sample batches from a single source at a time | ### Feature request
interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar manner (each batch only contains data from a single source)?
### Motivation
Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source-homogeneous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?
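A rough plain-Python sketch of the requested sampling behaviour, choosing one source per step and drawing the whole batch from it (the names are illustrative, this is not the `datasets` API):

```python
import random
from itertools import islice

def homogeneous_batch_stream(sources, batch_size, num_batches, probabilities=None, seed=0):
    """Yield batches in which every example comes from a single source."""
    rng = random.Random(seed)
    indices = list(range(len(sources)))
    iterators = [iter(src) for src in sources]
    for _ in range(num_batches):
        i = rng.choices(indices, weights=probabilities)[0]  # one source per batch
        batch = list(islice(iterators[i], batch_size))
        if len(batch) < batch_size:            # source exhausted: restart it
            iterators[i] = iter(sources[i])
            batch += list(islice(iterators[i], batch_size - len(batch)))
        yield batch

sources = [[("wiki", n) for n in range(8)], [("web", n) for n in range(8)]]
batches = list(homogeneous_batch_stream(sources, batch_size=4, num_batches=5))
```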
### Your contribution
I can contribute a PR. But I wonder what the best way is to test its correctness and robustness. | open | 2024-08-23T07:21:15Z | 2024-08-23T07:21:15Z | https://github.com/huggingface/datasets/issues/7122 | [
"enhancement"
] | memray | 0 |
pydantic/pydantic | pydantic | 11,179 | AttributeError: 'ModelName:JobSpec' object has no attribute 'rid' - Even when the attribute is present | ### `main.py`
```python
from _api._models.response import JobspecUpstream
data = [
{
"jobSpec": {
"inputSpecs": [
{
"inputType": "foundry",
"branch": None,
"datasetLocator": {
"datasetRid": "ri.foundry.main.dataset.",
"datasetProperties": {},
},
"assumedMarkings": {},
"inputFailureStrategy": None,
"identifier": "e61049d1-3db5-49c7-8c75-729162f1c24f",
},
]
},
"branch": "master",
},
]
```
### `response.py`
```python
from pydantic import conset

from ._sub_models._upstream_downstream_traversal import JobSpecAndBranch
from ._sub_models._configs._default_models import _RootModel
class JobspecUpstream(_RootModel):
'''
Validation model for upstream traversal
'''
root: conset(JobSpecAndBranch, min_length=0)
```
### `_default_models.py`
```python
from pydantic import BaseModel, RootModel
from ._model_configs import __pyspark_schema__
class _BaseModel(BaseModel):
"""
The Base class for all base models
TODO: Need to change this to an abstract class
"""
__pyspark_schema__ = __pyspark_schema__
def __cyclic__():
return {}
def __derived__():
return {}
class Config:
arbitrary_types_allowed = True
class _RootModel(RootModel):
"""
The Base class for all root models
TODO: Need to change this to an abstract class
"""
__pyspark_schema__ = __pyspark_schema__
def __cyclic__():
return {}
def __derived__():
return {}
def __iter__(self):
return iter(self.root)
class Config:
arbitrary_types_allowed = True
```
### `_upstream_downstream_traversal.py`
```python
from datetime import datetime
from typing import Any, Dict, List, Optional, Set
from pydantic import Field, ValidationInfo, BaseModel
from pydantic import conlist, conset, field_validator
from pyspark.sql import types as T
from ._configs._model_statics import RID_PAT
from ._configs._default_models import _BaseModel
from ._configs._model_configs import format_dict
from ._enums._job_spec_client import (
InputFailureStrategy,
RunMode,
TokenMode,
)
class DatasetLocator(_BaseModel):
datasetRid: str = resource_id
datasetProperties: Dict[str, Any]
def __hash__(self):
return hash(self.datasetRid)
# Add the extra keys to the `__derived__` attribute to make it available to the __pyspark_schema__ method
def __dict__(self):
return {
"datasetRid": self.datasetRid,
"datasetProperties": format_dict(self.datasetProperties),
"profiles_used": self.datasetProperties.get("profileNames", []),
}
def __derived__() -> Dict[str, T.DataType]:
return {"profiles_used": T.ArrayType(T.StringType())}
class AssumedMarkings(_BaseModel):
assumedMarkingIds: Set[str]
class Config:
frozen = True
# Dataset Specification Models
class InputSpec(_BaseModel):
inputType: str
branch: Optional[str]
datasetLocator: DatasetLocator
assumedMarkings: Dict[str, AssumedMarkings]
inputFailureStrategy: Optional[InputFailureStrategy]
identifier: Optional[str]
def __dict__(self):
return {
"inputType": self.inputType,
"branch": (self.branch or None),
"datasetLocator": dict(self.datasetLocator),
"assumedMarkings": self.assumedMarkings,
"inputFailureStrategy": (self.inputFailureStrategy or None),
"identifier": (self.identifier or None),
}
class Config:
extra = "allow"
frozen = True
class JobSpec(_BaseModel):
"""
Final Model for Jobspec
"""
rid: str = resource_id
# attribution: Optional[Attribution]
# workerType: str
inputSpecs: Set[InputSpec]
# outputSpecs: Set[OutputSpec]
# computationParameters: Dict[str, Any]
# runtimeParameters: Dict[str, Any]
# sourceProvenance: Optional[SourceProvenance]
# tokenMode: Optional[TokenMode]
# useScopedTokens: Optional[bool]
# useInputScopedTokens: Optional[bool]
# maxAllowedDuration: Optional[str]
# executionConstraint: Optional[ExecutionConstraint]
# jobParameters: Optional[JobParameters]
# resourceManagementMetadata: Optional[ResourceManagementMetadata]
# allowRunOnTrashedResources: Optional[str]
# jobVariant: Optional[JobVariant]
# incrementalSpec: Optional[IncrementalSpec]
class Config:
arbitrary_types_allowed = True
def __hash__(self):
return hash(self.rid)
def __dict__(self):
return {
"rid": self.rid,
"attribution": dict(self.attribution),
"workerType": self.workerType.strip(),
"inputSpecs": [dict(spec) for spec in self.inputSpecs],
"outputSpecs": [dict(spec) for spec in self.outputSpecs],
# TODO: Need to drill down further
"computationParameters": format_dict(self.computationParameters),
# "runtimeParameters": ,
# "sourceProvenance": ,
"tokenMode": self.tokenMode,
# "useScopedTokens": ,
# "useInputScopedTokens": ,
# "maxAllowedDuration": ,
# "executionConstraint": ,
# "jobParameters": ,
# "resourceManagementMetadata": ,
# "allowRunOnTrashedResources": ,
# "jobVariant": ,
"incrementalSpec": (self.incrementalSpec or None),
}
def __cyclic__() -> Set:
return {"executionConstraint"}
class JobSpecAndBranch(_BaseModel):
jobSpec: JobSpec
branch: str
class Config:
frozen = True
```
```python
from enum import Enum, verify, UNIQUE
@verify(UNIQUE)
class InputFailureStrategy(str, Enum):
'''
What to do when the run for input datasets fail.
'''
CONTINUE = "CONTINUE"
FAIL = "FAIL"
```
The above is my model and the sample data used for validation. The below is the error i am getting
### stacktrace
```
AttributeError Traceback (most recent call last)
Cell In[3], [line 1](vscode-notebook-cell:?execution_count=3&line=1)
----> [1](vscode-notebook-cell:?execution_count=3&line=1) JobspecUpstream(data)
[... skipping hidden 1 frame]
File c:\Users\M332204\AppData\Local\Programs\Python\Python313\Lib\site-packages\pydantic\_internal\_model_construction.py:567, in make_hash_func.<locals>.hash_func(self)
[565](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:565) def hash_func(self: Any) -> int:
[566](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:566) try:
--> [567](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:567) return hash(getter(self.__dict__))
[568](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:568) except KeyError:
[569](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:569) # In rare cases (such as when using the deprecated copy method), the __dict__ may not contain
[570](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:570) # all model fields, which is how we can get here.
[571](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:571) # getter(self.__dict__) is much faster than any 'safe' method that accounts for missing keys,
[572](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:572) # and wrapping it in a `try` doesn't slow things down much in the common case.
[573](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/_internal/_model_construction.py:573) return hash(getter(SafeGetItemProxy(self.__dict__)))
File c:\Users\M332204\GitFiles\ls-use-case-create-data-set-with-the-list-of-data-sources-Repo\transforms-python\src\myproject\datasets\_api\_models\_sub_models\_upstream_downstream_traversal.py:124, in DatasetLocator.__hash__(self)
[123](file:///C:/Users/M332204/GitFiles/ls-use-case-create-data-set-with-the-list-of-data-sources-Repo/transforms-python/src/myproject/datasets/_api/_models/_sub_models/_upstream_downstream_traversal.py:123) def __hash__(self):
--> [124](file:///C:/Users/M332204/GitFiles/ls-use-case-create-data-set-with-the-list-of-data-sources-Repo/transforms-python/src/myproject/datasets/_api/_models/_sub_models/_upstream_downstream_traversal.py:124) return hash(self.datasetRid)
File c:\Users\M332204\AppData\Local\Programs\Python\Python313\Lib\site-packages\pydantic\main.py:892, in BaseModel.__getattr__(self, item)
[889](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/main.py:889) return super().__getattribute__(item) # Raises AttributeError if appropriate
[890](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/main.py:890) else:
[891](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/main.py:891) # this is the current error
--> [892](file:///C:/Users/M332204/AppData/Local/Programs/Python/Python313/Lib/site-packages/pydantic/main.py:892) raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'DatasetLocator' object has no attribute 'datasetRid'
```
**This happens inside the `__hash__` function of the particular model.** I could not make the class `frozen`, as there is an `Any`-typed field which can be unhashable. I tried `Model.model_validate(data)` but got the same issue. I tried printing `self` and its `dir()` output.
> [!NOTE]
> Though I can see the values in the output, the error is still thrown. It seems the class fields are not available on `self`.
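For context on the note above: Pydantic v2 stores a model's validated fields in the instance `__dict__`, so defining a method named `__dict__` shadows that storage and plausibly explains why the values print fine yet attribute access fails. A sketch of the safer pattern, exposing the custom mapping under an ordinary method name on a hypothetical minimal model (not the full models above):

```python
from typing import Any, Dict
from pydantic import BaseModel

class DatasetLocator(BaseModel):
    datasetRid: str
    datasetProperties: Dict[str, Any]

    def __hash__(self) -> int:
        return hash(self.datasetRid)

    def to_dict(self) -> Dict[str, Any]:
        # Ordinary method name: does not shadow the __dict__ that stores fields.
        return {
            "datasetRid": self.datasetRid,
            "datasetProperties": self.datasetProperties,
            "profiles_used": self.datasetProperties.get("profileNames", []),
        }

loc = DatasetLocator(datasetRid="ri.foundry.main.dataset.x", datasetProperties={})
```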
### Output
```
self --> datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087' datasetProperties={}
['Config', '__abstractmethods__', '__annotations__', '__class__', '__class_getitem__', '__class_vars__', '__copy__', '__cyclic__', '__deepcopy__', '__delattr__', '__derived__', '__dict__', '__dir__', '__doc__', '__eq__', '__fields__', '__fields_set__', '__firstlineno__', '__format__', '__ge__', '__get_pydantic_core_schema__', '__get_pydantic_json_schema__', '__getattr__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__pretty__', '__private_attributes__', '__pydantic_complete__', '__pydantic_computed_fields__', '__pydantic_core_schema__', '__pydantic_custom_init__', '__pydantic_decorators__', '__pydantic_extra__', '__pydantic_fields__', '__pydantic_fields_set__', '__pydantic_generic_metadata__', '__pydantic_init_subclass__', '__pydantic_parent_namespace__', '__pydantic_post_init__', '__pydantic_private__', '__pydantic_root_model__', '__pydantic_serializer__', '__pydantic_validator__', '__pyspark_schema__', '__reduce__', '__reduce_ex__', '__replace__', '__repr__', '__repr_args__', '__repr_name__', '__repr_recursion__', '__repr_str__', '__rich_repr__', '__setattr__', '__setstate__', '__signature__', '__sizeof__', '__slots__', '__static_attributes__', '__str__', '__subclasshook__', '__weakref__', '_abc_impl', '_calculate_keys', '_check_frozen', '_copy_and_set_values', '_get_value', '_iter', 'construct', 'copy', 'datasetProperties', 'datasetRid', 'dict', 'from_orm', 'json', 'model_computed_fields', 'model_config', 'model_construct', 'model_copy', 'model_dump', 'model_dump_json', 'model_extra', 'model_fields', 'model_fields_set', 'model_json_schema', 'model_parametrized_name', 'model_post_init', 'model_rebuild', 'model_validate', 'model_validate_json', 'model_validate_strings', 'parse_file', 'parse_obj', 'parse_raw', 'schema', 'schema_json', 'update_forward_refs', 'validate']
Config <class '_api._models._sub_models._configs._default_models._BaseModel.Config'>
__abstractmethods__ frozenset()
__annotations__ {'datasetRid': <class 'str'>, 'datasetProperties': typing.Dict[str, typing.Any]}
__class__ <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>
__class_getitem__ <bound method BaseModel.__class_getitem__ of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
__class_vars__ set()
__copy__ <bound method BaseModel.__copy__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__cyclic__ <bound method _BaseModel.__cyclic__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__deepcopy__ <bound method BaseModel.__deepcopy__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__delattr__ <bound method BaseModel.__delattr__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__derived__ <bound method DatasetLocator.__derived__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__dict__ {'datasetRid': 'ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', 'datasetProperties': {}}
__dir__ <built-in method __dir__ of DatasetLocator object at 0x000001A725DA32F0>
__doc__ None
__eq__ <bound method BaseModel.__eq__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__fields__ {'datasetRid': FieldInfo(annotation=str, required=False, default_factory=str, metadata=[_PydanticGeneralMetadata(pattern='ri\\..*\\.\\w{0,64}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,12}')]), 'datasetProperties': FieldInfo(annotation=Dict[str, Any], required=True)}
__fields_set__ {'datasetRid', 'datasetProperties'}
__firstlineno__ 119
__format__ <built-in method __format__ of DatasetLocator object at 0x000001A725DA32F0>
__ge__ <method-wrapper '__ge__' of DatasetLocator object at 0x000001A725DA32F0>
__get_pydantic_core_schema__ <bound method BaseModel.__get_pydantic_core_schema__ of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
__get_pydantic_json_schema__ <bound method BaseModel.__get_pydantic_json_schema__ of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
__getattr__ <bound method BaseModel.__getattr__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__getattribute__ <method-wrapper '__getattribute__' of DatasetLocator object at 0x000001A725DA32F0>
__getstate__ <bound method BaseModel.__getstate__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__gt__ <method-wrapper '__gt__' of DatasetLocator object at 0x000001A725DA32F0>
__hash__ <bound method DatasetLocator.__hash__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__init__ <bound method BaseModel.__init__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__init_subclass__ <built-in method __init_subclass__ of ModelMetaclass object at 0x000001A7251A1B80>
__iter__ <bound method BaseModel.__iter__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__le__ <method-wrapper '__le__' of DatasetLocator object at 0x000001A725DA32F0>
__lt__ <method-wrapper '__lt__' of DatasetLocator object at 0x000001A725DA32F0>
__module__ _api._models._sub_models._upstream_downstream_traversal
__ne__ <method-wrapper '__ne__' of DatasetLocator object at 0x000001A725DA32F0>
__new__ <built-in method __new__ of type object at 0x00007FF88B1D0700>
__pretty__ <bound method Representation.__pretty__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__private_attributes__ {}
__pydantic_complete__ True
__pydantic_computed_fields__ {}
__pydantic_core_schema__ {'type': 'model', 'cls': <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>, 'schema': {'type': 'model-fields', 'fields': {'datasetRid': {'type': 'model-field', 'schema': {'type': 'default', 'schema': {'type': 'str', 'pattern': 'ri\\..*\\.\\w{0,64}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,12}'}, 'default_factory': <class 'str'>, 'default_factory_takes_data': False}, 'metadata': {}}, 'datasetProperties': {'type': 'model-field', 'schema': {'type': 'dict', 'keys_schema': {'type': 'str'}, 'values_schema': {'type': 'any'}}, 'metadata': {}}}, 'model_name': 'DatasetLocator', 'computed_fields': []}, 'custom_init': False, 'root_model': False, 'config': {'title': 'DatasetLocator'}, 'ref': '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator:1817393634176', 'metadata': {'pydantic_js_functions': [<bound method BaseModel.__get_pydantic_json_schema__ of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>]}}
__pydantic_custom_init__ False
__pydantic_decorators__ DecoratorInfos(validators={}, field_validators={}, root_validators={}, field_serializers={}, model_serializers={}, model_validators={}, computed_fields={})
__pydantic_extra__ None
__pydantic_fields__ {'datasetRid': FieldInfo(annotation=str, required=False, default_factory=str, metadata=[_PydanticGeneralMetadata(pattern='ri\\..*\\.\\w{0,64}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,12}')]), 'datasetProperties': FieldInfo(annotation=Dict[str, Any], required=True)}
__pydantic_fields_set__ {'datasetRid', 'datasetProperties'}
__pydantic_generic_metadata__ {'origin': None, 'args': (), 'parameters': ()}
__pydantic_init_subclass__ <bound method BaseModel.__pydantic_init_subclass__ of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
__pydantic_parent_namespace__ None
__pydantic_post_init__ None
__pydantic_private__ None
__pydantic_root_model__ False
__pydantic_serializer__ SchemaSerializer(serializer=Model(
ModelSerializer {
class: Py(
0x000001a7251a1b80,
),
serializer: Fields(
GeneralFieldsSerializer {
fields: {
"datasetRid": SerField {
key_py: Py(
0x000001a7263527b0,
),
alias: None,
alias_py: None,
serializer: Some(
WithDefault(
WithDefaultSerializer {
default: DefaultFactory(
Py(
0x00007ff88b1d10c0,
),
false,
),
serializer: Str(
StrSerializer,
),
},
),
),
required: true,
},
"datasetProperties": SerField {
key_py: Py(
0x000001a7263524f0,
),
alias: None,
alias_py: None,
serializer: Some(
Dict(
DictSerializer {
key_serializer: Str(
StrSerializer,
),
value_serializer: Any(
AnySerializer,
),
filter: SchemaFilter {
include: None,
exclude: None,
},
name: "dict[str, any]",
},
),
),
required: true,
},
},
computed_fields: Some(
ComputedFields(
[],
),
),
mode: SimpleDict,
extra_serializer: None,
filter: SchemaFilter {
include: None,
exclude: None,
},
required_fields: 2,
},
),
has_extra: false,
root_model: false,
name: "DatasetLocator",
},
), definitions=[])
__pydantic_validator__ SchemaValidator(title="DatasetLocator", validator=Model(
ModelValidator {
revalidate: Never,
validator: ModelFields(
ModelFieldsValidator {
fields: [
Field {
name: "datasetRid",
lookup_key: Simple {
key: "datasetRid",
py_key: Py(
0x000001a727468530,
),
path: LookupPath(
[
S(
"datasetRid",
Py(
0x000001a7274685b0,
),
),
],
),
},
name_py: Py(
0x000001a7263527b0,
),
validator: WithDefault(
WithDefaultValidator {
default: DefaultFactory(
Py(
0x00007ff88b1d10c0,
),
false,
),
on_error: Raise,
validator: StrConstrained(
StrConstrainedValidator {
strict: false,
pattern: Some(
Pattern {
pattern: "ri\\..*\\.\\w{0,64}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,12}",
engine: RustRegex(
Regex(
"ri\\..*\\.\\w{0,64}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,12}",
),
),
},
),
max_length: None,
min_length: None,
strip_whitespace: false,
to_lower: false,
to_upper: false,
coerce_numbers_to_str: false,
},
),
validate_default: false,
copy_default: false,
name: "default[constrained-str]",
undefined: Py(
0x000001a725c91490,
),
},
),
frozen: false,
},
Field {
name: "datasetProperties",
lookup_key: Simple {
key: "datasetProperties",
py_key: Py(
0x000001a727468570,
),
path: LookupPath(
[
S(
"datasetProperties",
Py(
0x000001a7274685f0,
),
),
],
),
},
name_py: Py(
0x000001a7263524f0,
),
validator: Dict(
DictValidator {
strict: false,
key_validator: Str(
StrValidator {
strict: false,
coerce_numbers_to_str: false,
},
),
value_validator: Any(
AnyValidator,
),
min_length: None,
max_length: None,
name: "dict[str,any]",
},
),
frozen: false,
},
],
model_name: "DatasetLocator",
extra_behavior: Ignore,
extras_validator: None,
strict: false,
from_attributes: false,
loc_by_alias: true,
},
),
class: Py(
0x000001a7251a1b80,
),
generic_origin: None,
post_init: None,
frozen: false,
custom_init: false,
root_model: false,
undefined: Py(
0x000001a725c91490,
),
name: "DatasetLocator",
},
), definitions=[], cache_strings=True)
__pyspark_schema__ <bound method __pyspark_schema__ of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
__reduce__ <built-in method __reduce__ of DatasetLocator object at 0x000001A725DA32F0>
__reduce_ex__ <built-in method __reduce_ex__ of DatasetLocator object at 0x000001A725DA32F0>
__replace__ <bound method BaseModel.__replace__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__repr__ <bound method BaseModel.__repr__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__repr_args__ <bound method BaseModel.__repr_args__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__repr_name__ <bound method Representation.__repr_name__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__repr_recursion__ <bound method Representation.__repr_recursion__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__repr_str__ <bound method Representation.__repr_str__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__rich_repr__ <bound method Representation.__rich_repr__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__setattr__ <bound method BaseModel.__setattr__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__setstate__ <bound method BaseModel.__setstate__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__sizeof__ <built-in method __sizeof__ of DatasetLocator object at 0x000001A725DA32F0>
__slots__ ('__dict__', '__pydantic_fields_set__', '__pydantic_extra__', '__pydantic_private__')
__static_attributes__ ()
__str__ <bound method BaseModel.__str__ of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
__subclasshook__ <built-in method __subclasshook__ of ModelMetaclass object at 0x000001A7251A1B80>
__weakref__ None
_abc_impl <_abc._abc_data object at 0x000001A72745FB00>
_calculate_keys <bound method BaseModel._calculate_keys of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
_check_frozen <bound method BaseModel._check_frozen of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
_copy_and_set_values <bound method BaseModel._copy_and_set_values of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
_get_value <bound method BaseModel._get_value of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
_iter <bound method BaseModel._iter of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
construct <bound method BaseModel.construct of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
copy <bound method BaseModel.copy of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
dict <bound method BaseModel.dict of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
from_orm <bound method BaseModel.from_orm of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
json <bound method BaseModel.json of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
model_computed_fields {}
model_config {'arbitrary_types_allowed': True}
model_construct <bound method BaseModel.model_construct of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
model_copy <bound method BaseModel.model_copy of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
model_dump <bound method BaseModel.model_dump of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
model_dump_json <bound method BaseModel.model_dump_json of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
model_extra None
model_fields {'datasetRid': FieldInfo(annotation=str, required=False, default_factory=str, metadata=[_PydanticGeneralMetadata(pattern='ri\\..*\\.\\w{0,64}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,4}-{0,1}\\w{0,12}')]), 'datasetProperties': FieldInfo(annotation=Dict[str, Any], required=True)}
model_fields_set {'datasetRid', 'datasetProperties'}
model_json_schema <bound method BaseModel.model_json_schema of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
model_parametrized_name <bound method BaseModel.model_parametrized_name of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
model_post_init <bound method BaseModel.model_post_init of DatasetLocator(datasetRid='ri.foundry.main.dataset.9b657f70-94f2-4a86-adbb-cfde034ac087', datasetProperties={})>
model_rebuild <bound method BaseModel.model_rebuild of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
model_validate <bound method BaseModel.model_validate of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
model_validate_json <bound method BaseModel.model_validate_json of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
model_validate_strings <bound method BaseModel.model_validate_strings of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
parse_file <bound method BaseModel.parse_file of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
parse_obj <bound method BaseModel.parse_obj of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
parse_raw <bound method BaseModel.parse_raw of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
schema <bound method BaseModel.schema of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
schema_json <bound method BaseModel.schema_json of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
update_forward_refs <bound method BaseModel.update_forward_refs of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
validate <bound method BaseModel.validate of <class '_api._models._sub_models._upstream_downstream_traversal.DatasetLocator'>>
```
> [!TIP]
> When the models are in the same file I am not getting the issue. Also, I am not using my custom model.
Can someone help point out the issue, or redirect me to an existing issue if one exists?
If you also spot any possible optimizations, kindly let me know.
Let me know if any more information is needed.
Forgive me if the issue is dumb 😂 | closed | 2024-12-25T14:27:13Z | 2024-12-25T15:41:00Z | https://github.com/pydantic/pydantic/issues/11179 | [] | nikhilesh1234 | 2 |
numpy/numpy | numpy | 27,737 | BUG: MemoryError when indexing 2D StringDType array with a list index | ### Describe the issue:
Trying to index a `StringDType` array of shape `(1, 1)`, where the single string has length more than 15, using a list results in a `MemoryError`. This also happens when indexing with an array.
Specifically, this error appears when this array is printed, or (more directly), when it is accessed at `(-1,-1)`.
Possibly related to #27710.
The issue does not appear when:
- The single string has length 15
- The array has shape `(1, )`
Additionally, I get `SystemError: error return without exception set`.
### Reproduce the code example:
```python
import numpy as np
from numpy.dtypes import StringDType
ok = np.array([["abcdefghijklmno"]], dtype=StringDType())
bad = np.array([["abcdefghijklmnop"]], dtype=StringDType())
ok[[0]][-1, -1]
bad[[0]][-1, -1]
# These also raise errors:
bad[np.array([0])][-1, -1]
repr(bad[[0]])
# However this does not:
ok_2 = np.array(["abcdefghijklmnop"], dtype=StringDType())
repr(ok_2[[0]])
```
### Error message:
```shell
Traceback (most recent call last):
File "bug.py", line 8, in <module>
bad[[0]][-1, -1]
~~~~~~~~^^^^^^^^
MemoryError: Failed to load string in StringDType getitem
Traceback (most recent call last):
File "bug.py", line 8, in <module>
bad[[0]][-1, -1]
~~~~~~~~^^^^^^^^
SystemError: error return without exception set
```
### Python and NumPy Versions:
2.2.0.dev0+git20241111.fd4f467
3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0]
### Runtime Environment:
[{'numpy_version': '2.2.0.dev0+git20241111.fd4f467',
'python': '3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0]',
'uname': uname_result(system='Linux', node='laozi', release='6.8.0-47-generic', version='#47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_KNM',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL',
'AVX512_SPR']}}]
### Context for the issue:
I found the bug because I wanted to randomly permute an array of strings. When I tried to print the result, I got a memory error.
Being able to index a `StringDType` array using an array seems like important functionality, because I can't think of a workaround other than iterating over the index array using a Python loop. | closed | 2024-11-11T21:46:12Z | 2024-11-13T16:26:46Z | https://github.com/numpy/numpy/issues/27737 | [
"00 - Bug",
"component: numpy.strings"
] | SamAdamDay | 1 |
Avaiga/taipy | data-visualization | 1,798 | Chat control default appearance | ### Description
As of now, the chat control displays the sender's avatar (or a generated image if there is none), and this image appears on the left side of the message area, which has an arrow pointing to the right.

I suggest we change this behavior:
- No sender avatar by default - force it by setting 'show_sender' to True
- Display the sender's avatar to the right side of the message box.
I'm not certain at this point that this is going to look better... but I think it will.
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-09-18T06:12:24Z | 2024-09-18T08:41:25Z | https://github.com/Avaiga/taipy/issues/1798 | [
"📈 Improvement",
"📄 Documentation",
"🖰 GUI",
"🟨 Priority: Medium"
] | FabienLelaquais | 1 |
yihong0618/running_page | data-visualization | 185 | Joyrun (悦跑圈) login fails, returning “Your version of App is out of date. Please update to our latest version.” | I tried to log in to Joyrun today and kept getting an error. Printing login_data shows the following information.
Changing APPVERSION from 4.2.0 to 5.22.2 had no effect; it still reports the same error message. 5.22.2 is the latest version on the official website.
{
"msg": "Your version of App is out of date. Please update to our latest version.",
"ret": "102"
} | closed | 2021-12-30T01:23:47Z | 2022-01-04T04:49:35Z | https://github.com/yihong0618/running_page/issues/185 | [
"help wanted"
] | gavinlichn | 12 |
ultralytics/ultralytics | computer-vision | 19,248 | Error when exporting my custom model.pt to TFLite | I get the error below when trying this:
```python
from ultralytics import YOLO

model = YOLO("/home/gpu-server/runs/detect/city_combined_dataset_training4_5122/weights/best_latest_improved_3.pt")
model.export(format="tflite")
```
I also tried it from the command line:
Ultralytics 8.3.75 🚀 Python-3.10.11 torch-2.4.1+cu118 CPU (12th Gen Intel Core(TM) i5-12400)
Model summary (fused): 168 layers, 3,009,548 parameters, 0 gradients, 8.1 GFLOPs
PyTorch: starting from '/home/gpu-server/runs/detect/city_combined_dataset_training4_5122/weights/best_latest_improved_3.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 24, 8400) (5.9 MB)
requirements: Ultralytics requirement ['protobuf>=5'] not found, attempting AutoUpdate...
requirements: ❌ AutoUpdate skipped (offline)
TensorFlow SavedModel: starting export with tensorflow 2.16.2...
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.34...
ONNX: export success ✅ 0.6s, saved as '/home/gpu-server/runs/detect/city_combined_dataset_training4_5122/weights/best_latest_improved_3.onnx' (11.8 MB)
TensorFlow SavedModel: starting TFLite export with onnx2tf 1.17.8...
ERROR: The trace log is below.
Traceback (most recent call last):
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/onnx2tf/utils/common_functions.py", line 288, in print_wrapper_func
result = func(*args, **kwargs)
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/onnx2tf/utils/common_functions.py", line 361, in inverted_operation_enable_disable_wrapper_func
result = func(*args, **kwargs)
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/onnx2tf/ops/Conv.py", line 246, in make_node
input_tensor = get_padding_as_op(
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/onnx2tf/utils/common_functions.py", line 2009, in get_padding_as_op
return tf.pad(x, padding)
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/keras/src/backend/common/keras_tensor.py", line 138, in __tf_tensor__
raise ValueError(
ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional models or Keras Functions. You can only use it as input to a Keras layer or a Keras operation (from the namespaces `keras.layers` and `keras.operations`). You are likely doing something like:
```
x = Input(...)
...
tf_fn(x) # Invalid.
```
What you should do instead is wrap `tf_fn` in a layer:
```
class MyLayer(Layer):
def call(self, x):
return tf_fn(x)
x = MyLayer()(x)
```
ERROR: input_onnx_file_path: /home/gpu-server/runs/detect/city_combined_dataset_training4_5122/weights/best_latest_improved_3.onnx
ERROR: onnx_op_name: /model.0/conv/Conv
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
An exception has occurred, use %tb to see the full traceback.
SystemExit: 1 | open | 2025-02-14T12:17:02Z | 2025-02-15T08:14:11Z | https://github.com/ultralytics/ultralytics/issues/19248 | [
"bug",
"detect",
"exports"
] | Vinaygoudasp7 | 2 |
yzhao062/pyod | data-science | 56 | Request to add an article to resources - Outlier Detection using PyOD | Hi,
I have written an article on Outlier Detection using PyOD on Analytics Vidhya Blog -
[https://www.analyticsvidhya.com/blog/2019/02/outlier-detection-python-pyod/](https://www.analyticsvidhya.com/blog/2019/02/outlier-detection-python-pyod/)
In the article, I have tried to explain the need for outlier detection and how PyOD can be used for it, and I have also applied PyOD to a real-world data set.
Please consider including it in the resources section on GitHub. I believe it would be really helpful for people who want to get started with PyOD.
Thanks
| closed | 2019-02-15T07:56:42Z | 2019-02-15T15:21:31Z | https://github.com/yzhao062/pyod/issues/56 | [] | lakshay-arora | 2 |
tartiflette/tartiflette | graphql | 624 | "poetry add tartiflette" not working | ## Report a bug
* [ ] **Tartiflette version:** 1.4.1
* [ ] **Python version:** 3.11
* [ ] **Executed in docker:** No
* [ ] **Is it a regression from a previous version?** No
* [ ] **Explain with a simple sentence the expected behavior**
I get this error when I run `poetry add tartiflette`:
• Installing tartiflette (1.4.1): Failed
ChefBuildError
Backend subprocess exited when trying to invoke build_wheel
running bdist_wheel
running build
running build_py
error: [WinError 2] The system cannot find the file specified
at ~\AppData\Roaming\pypoetry\venv\Lib\site-packages\poetry\installation\chef.py:152 in _prepare
148│
149│ error = ChefBuildError("\n\n".join(message_parts))
150│
151│ if error is not None:
→ 152│ raise error from None
153│
154│ return path
155│
156│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with tartiflette (1.4.1) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "tartiflette (==1.4.1)"'.
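For what it's worth, installing `tartiflette` from source builds the `libgraphqlparser` C library, so a bare `WinError 2` ("The system cannot find the file specified") during the build usually means a required build tool is missing rather than a Poetry problem. A quick pre-flight check (the cmake requirement is an assumption based on tartiflette's build process) might be:

```shell
# Check that cmake (needed to build libgraphqlparser) is on PATH.
if command -v cmake >/dev/null 2>&1; then
  echo "cmake found: $(cmake --version | head -n 1)"
else
  echo "cmake not found - the sdist build is likely to fail"
fi
```

Note also that tartiflette's installation docs target Linux and macOS, so a native Windows install may not be supported at all.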
## Request a feature
* How to install tartiflette with poetry (not pip)
| open | 2023-04-29T00:21:12Z | 2023-04-29T00:21:37Z | https://github.com/tartiflette/tartiflette/issues/624 | [] | nl4opt | 0 |
indico/indico | flask | 6,444 | Secret URLs for late submissions | Indico 3.3.2
I couldn’t find a way to generate a secret URL that I can send to people who want to submit an abstract after the call for abstracts has already closed. | open | 2024-07-18T13:47:34Z | 2024-10-29T08:03:58Z | https://github.com/indico/indico/issues/6444 | [
"enhancement"
] | paulmenzel | 5 |
modelscope/modelscope | nlp | 290 | Feeding any two face images into the FRFM-large model yields output vectors whose cosine similarity is always above 0.99 | The problem is as described in the title; modelscope version 1.5.2.
Model revision not specified, use the latest revision: v1.0
I am using the example code. | closed | 2023-05-09T09:51:53Z | 2023-06-16T01:59:43Z | https://github.com/modelscope/modelscope/issues/290 | [
"Stale"
] | xyzkk3 | 4 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,318 | Problem about visdom.server | Hi there, I have a problem with opening visdom.server. My OS is Win11, and I use Docker, running the code inside a container. Here is a screenshot of my issue.

It said "Address already in use", so I then used -port 12345:

which successfully shows the address and gives me the URL link. But I can't open the URL, and I don't know what to do; I'm kind of new to coding. Does anyone know how to solve this?

| closed | 2021-09-23T02:13:05Z | 2021-12-04T17:01:43Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1318 | [] | AugustLee93 | 2 |
xlwings/xlwings | automation | 2,408 | Probable useless stdout/stderr capture | I don't understand why you are capturing stderr and stdout at:
https://github.com/xlwings/xlwings/blob/03a965e6ccbb90437871d3e3646aa52ad7de3b25/xlwings/_xlwindows.py#L534
since you are not using them. | open | 2024-02-29T12:00:33Z | 2024-02-29T15:11:25Z | https://github.com/xlwings/xlwings/issues/2408 | [] | gdementen | 1 |
vaexio/vaex | data-science | 1,521 | [Q] Assigning expression.TimeDelta.total_seconds() to a variable results in AttributeError | **Problem** I tried to assign `expression.TimeDelta.total_seconds()` to a variable, and the error message says the following:
`AttributeError: 'pyarrow.lib.DurationArray' object has no attribute 'flags'`
However just printing the object is fine.
The full error message shows `During handling of the above exception, another exception occurred:` several times.
**Code**
```python
import vaex
import numpy as np
df_dict = {"datetime_1": np.array(["2021-01-11T04:05:43",
"2009-03-22T00:54:37",
"2017-12-22T14:46:27"]),
"datetime_2": np.array(["2011-01-11T04:05:43",
"2019-03-22T00:54:37",
"2037-12-22T14:46:27"])}
df = vaex.from_dict(df_dict)
df["datetime_1"] = df.datetime_1.astype("datetime64")
df["datetime_2"] = df.datetime_2.astype("datetime64")
df["delta"] = df.datetime_2 - df.datetime_1
print(df.delta.td.total_seconds())
print(df.delta.td)
print("\n-------\n")
df["seconds"] = df.delta.td.total_seconds()
print(df)
```
**Output** (the full error message is very long, so I put it at the end)
```
td_total_seconds(delta)
<vaex.expression.TimeDelta object at 0x12247c250>
-------
ERROR:MainThread:vaex:error evaluating: seconds at rows 0-3
...
AttributeError: 'pyarrow.lib.DurationArray' object has no attribute 'flags'
# datetime_1 datetime_2 delta seconds
0 2021-01-11 04:05:43 2011-01-11 04:05:43 -3653 days +00:00:00 error
1 2009-03-22 00:54:37 2019-03-22 00:54:37 3652 days 00:00:00 error
2 2017-12-22 14:46:27 2037-12-22 14:46:27 7305 days 00:00:00 error
```
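For reference, while the vaex path is broken, the intended `seconds` column can be reproduced with plain NumPy (a workaround sketch outside of vaex, not a fix for the vaex API):

```python
import numpy as np

dt1 = np.array(["2021-01-11T04:05:43", "2009-03-22T00:54:37"], dtype="datetime64[s]")
dt2 = np.array(["2011-01-11T04:05:43", "2019-03-22T00:54:37"], dtype="datetime64[s]")

# Dividing a timedelta64 by a one-second timedelta yields float seconds,
# matching what td.total_seconds() should have produced (-3653 and +3652 days).
seconds = (dt2 - dt1) / np.timedelta64(1, "s")
```

The resulting values can then be attached as a regular materialized column instead of a lazy expression.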
**Vaex version** (installed via anaconda)
```
{'vaex-core': '4.4.0',
'vaex-viz': '0.5.0',
'vaex-hdf5': '0.8.0',
'vaex-server': '0.5.0',
'vaex-astro': '0.8.3',
'vaex-jupyter': '0.6.0',
'vaex-ml': '0.12.0'}
```
OS system: macOS High Sierra
**The full error message**
```
ERROR:MainThread:vaex:error evaluating: seconds at rows 0-3
Traceback (most recent call last):
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 106, in evaluate
result = self[expression]
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 166, in __getitem__
raise KeyError("Unknown variables or column: %r" % (variable,))
KeyError: "Unknown variables or column: 'td_total_seconds(delta)'"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2002, in data_type
data = self.evaluate(expression, 0, 1, filtered=False, array_type=array_type, parallel=False)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2850, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 6110, in _evaluate_implementation
value = scope.evaluate(expression)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 106, in evaluate
result = self[expression]
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 156, in __getitem__
values = self.evaluate(expression)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 112, in evaluate
result = eval(expression, expression_namespace, self)
File "<string>", line 1, in <module>
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/arrow/numpy_dispatch.py", line 136, in wrapper
result = f(*args, **kwargs)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 941, in td_total_seconds
return _to_pandas_series(x).dt.total_seconds().values
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 284, in _to_pandas_series
return pd.Series(_pandas_dt_fix(x), dtype=x.dtype)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 277, in _pandas_dt_fix
if not x.flags['WRITEABLE']:
AttributeError: 'pyarrow.lib.DurationArray' object has no attribute 'flags'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 106, in evaluate
result = self[expression]
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 166, in __getitem__
raise KeyError("Unknown variables or column: %r" % (variable,))
KeyError: "Unknown variables or column: 'td_total_seconds(delta)'"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 3815, in table_part
values = dict(zip(column_names, df.evaluate(column_names)))
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2850, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 6022, in _evaluate_implementation
dtypes[expression] = dtype = df.data_type(expression).internal
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2004, in data_type
data = self.evaluate(expression, 0, 1, filtered=True, array_type=array_type, parallel=False)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2850, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 6110, in _evaluate_implementation
value = scope.evaluate(expression)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 106, in evaluate
result = self[expression]
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 156, in __getitem__
values = self.evaluate(expression)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 112, in evaluate
result = eval(expression, expression_namespace, self)
File "<string>", line 1, in <module>
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/arrow/numpy_dispatch.py", line 136, in wrapper
result = f(*args, **kwargs)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 941, in td_total_seconds
return _to_pandas_series(x).dt.total_seconds().values
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 284, in _to_pandas_series
return pd.Series(_pandas_dt_fix(x), dtype=x.dtype)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 277, in _pandas_dt_fix
if not x.flags['WRITEABLE']:
AttributeError: 'pyarrow.lib.DurationArray' object has no attribute 'flags'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 106, in evaluate
result = self[expression]
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 166, in __getitem__
raise KeyError("Unknown variables or column: %r" % (variable,))
KeyError: "Unknown variables or column: 'td_total_seconds(delta)'"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2002, in data_type
data = self.evaluate(expression, 0, 1, filtered=False, array_type=array_type, parallel=False)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2850, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 6110, in _evaluate_implementation
value = scope.evaluate(expression)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 106, in evaluate
result = self[expression]
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 156, in __getitem__
values = self.evaluate(expression)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 112, in evaluate
result = eval(expression, expression_namespace, self)
File "<string>", line 1, in <module>
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/arrow/numpy_dispatch.py", line 136, in wrapper
result = f(*args, **kwargs)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 941, in td_total_seconds
return _to_pandas_series(x).dt.total_seconds().values
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 284, in _to_pandas_series
return pd.Series(_pandas_dt_fix(x), dtype=x.dtype)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 277, in _pandas_dt_fix
if not x.flags['WRITEABLE']:
AttributeError: 'pyarrow.lib.DurationArray' object has no attribute 'flags'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 106, in evaluate
result = self[expression]
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 166, in __getitem__
raise KeyError("Unknown variables or column: %r" % (variable,))
KeyError: "Unknown variables or column: 'td_total_seconds(delta)'"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 3820, in table_part
values[name] = df.evaluate(name)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2850, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 6022, in _evaluate_implementation
dtypes[expression] = dtype = df.data_type(expression).internal
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2004, in data_type
data = self.evaluate(expression, 0, 1, filtered=True, array_type=array_type, parallel=False)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 2850, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/dataframe.py", line 6110, in _evaluate_implementation
value = scope.evaluate(expression)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 106, in evaluate
result = self[expression]
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 156, in __getitem__
values = self.evaluate(expression)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/scopes.py", line 112, in evaluate
result = eval(expression, expression_namespace, self)
File "<string>", line 1, in <module>
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/arrow/numpy_dispatch.py", line 136, in wrapper
result = f(*args, **kwargs)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 941, in td_total_seconds
return _to_pandas_series(x).dt.total_seconds().values
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 284, in _to_pandas_series
return pd.Series(_pandas_dt_fix(x), dtype=x.dtype)
File "/opt/anaconda3/envs/ADS/lib/python3.8/site-packages/vaex/functions.py", line 277, in _pandas_dt_fix
if not x.flags['WRITEABLE']:
``` | open | 2021-08-14T19:01:50Z | 2021-08-19T11:32:50Z | https://github.com/vaexio/vaex/issues/1521 | [
"bug"
] | SciCode4437 | 2 |
aminalaee/sqladmin | sqlalchemy | 388 | Handling of `lazy="dynamic"` relationships | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Model relations are automatically loaded regardless of the `ModelView` configuration. This causes issues with relationships defined as `lazy="dynamic"`, because it ends up in `InvalidRequestError( '%s' does not support object population - eager loading cannot be applied.)` from SQLAlchemy. The error is raised on any `list` or `detail` request.
The current workaround I'm applying is:
```python
from sqladmin import ModelView


class ModelView(ModelView):
def __init__(self) -> None:
super().__init__()
self._relations = [rel for rel in self._relations if rel.lazy != "dynamic"]
```
### Steps to reproduce the bug
Setting up two SQLAlchemy models and configuring a relationship between them with the `lazy="dynamic"` parameter should suffice.
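A minimal pair of models matching that description might look like this (illustrative names; any `lazy="dynamic"` relationship should do):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    # lazy="dynamic" makes .children an AppenderQuery instead of a list,
    # which is what trips sqladmin's eager loading.
    children = relationship("Child", lazy="dynamic")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))
```

Registering a `ModelView` for `Parent` and opening its list or detail page should then reproduce the error.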
### Expected behavior
1. Relationships not being listed as part of `column_list` or `column_details_list` are not eagerly loaded
2. Relationships with `lazy="dynamic"` are either ignored (short term fix?) or correctly handled (longer term feature?)
### Actual behavior
`InvalidRequestError` causes the `ModelView` to be unusable for models with `lazy="dynamic"` relationships
### Debugging material
[Starlette Debugger.pdf](https://github.com/aminalaee/sqladmin/files/10101348/Starlette.Debugger.pdf)
### Environment
- OS: Debian 11 (Bullseye)
- Python: 3.11
- SQLAlchemy: 1.4.44
### Additional context
_No response_ | open | 2022-11-28T07:18:26Z | 2025-03-13T19:43:00Z | https://github.com/aminalaee/sqladmin/issues/388 | [] | jarojasm95 | 7 |
jupyter/docker-stacks | jupyter | 2,073 | [BUG] Healthcheck fails when using a custom runtime dir | ### What docker image(s) are you using?
scipy-notebook (but applies to all images based on the `base-notebook` image)
### Host OS system
RHEL 8.0
### Host architecture
x86_64
### What Docker command are you running?
The following command DOES work as expected (default runtime dir):
```
docker run --rm -p 8888:8888 --name jupyter quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh
```
The following command does NOT work as expected (customized runtime dir):
```
docker run --rm -p 8888:8888 --name jupyter -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh
```
### How to Reproduce the problem?
1. Start the Jupyter container using the commands above.
2. In another terminal, run the healthcheck script: `docker exec jupyter /etc/jupyter/docker_healthcheck.py`
3. Observe the healthcheck script failing due to server state JSON file(s) not being found.
### Command output
```bash session
$ docker run --rm -p 8888:8888 --name jupyter quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh
$ docker exec jupyter /etc/jupyter/docker_healthcheck.py
b'{"version": "2.12.1"}'
$ docker run --rm -p 8888:8888 --name jupyter -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 start-notebook.sh
$ docker exec jupyter /etc/jupyter/docker_healthcheck.py
Traceback (most recent call last):
File "/etc/jupyter/docker_healthcheck.py", line 14, in <module>
json_file = next(runtime_dir.glob("*server-*.json"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
```
### Expected behavior
Healthcheck script to not fail, e.g. to display `b'{"version": "2.12.1"}'`, even with a customized runtime dir.
### Actual behavior
The healthcheck script fails because it cannot find server state JSON files in the hard-coded default runtime dir.
### Anything else?
The problem is that the `/etc/jupyter/docker_healthcheck.py` healthcheck script hard-codes the default runtime directory to search for server JSON state files as below:
https://github.com/jupyter/docker-stacks/blob/fcb20a914ed20e44a96053caf43eef6e12fb4c04/images/base-notebook/docker_healthcheck.py#L13
When this directory is customized for example via `JUPYTER_RUNTIME_DIR`, then the healthcheck script does not work.
The actual problem is when deploying Jupyter containers as services.
The Jupyter images have a default healthcheck configured as below:
https://github.com/jupyter/docker-stacks/blob/fcb20a914ed20e44a96053caf43eef6e12fb4c04/images/base-notebook/Dockerfile#L66-L70
When the healthcheck fails due to a custom runtime dir, the service is restarted continuously.
I think the healthcheck script should use the output of `jupyter --runtime-dir` which respects customizations:
```
$ docker run --rm -e JUPYTER_RUNTIME_DIR=/home/jovyan/custom-runtime quay.io/jupyter/scipy-notebook:2023-12-25 jupyter --runtime-dir
/home/jovyan/custom-runtime
```
If you agree with the above, I can send a PR with this fix.
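A minimal sketch of what that patch could look like (untested against the real image; the glob pattern is the one the current script already uses):

```python
# Resolve the *effective* runtime dir instead of hard-coding
# ~/.local/share/jupyter/runtime, so JUPYTER_RUNTIME_DIR is honoured.
import subprocess
from pathlib import Path

def effective_runtime_dir() -> Path:
    out = subprocess.check_output(["jupyter", "--runtime-dir"], text=True)
    return Path(out.strip())

def pick_state_file(runtime_dir: Path) -> Path:
    # Same lookup the current healthcheck performs, against the resolved dir.
    return next(runtime_dir.glob("*server-*.json"))
```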
### Latest Docker version
- [X] I've updated my Docker version to the latest available, and the issue persists | closed | 2024-01-04T18:55:09Z | 2024-01-08T11:36:40Z | https://github.com/jupyter/docker-stacks/issues/2073 | [
"type:Bug"
] | hhromic | 5 |
FlareSolverr/FlareSolverr | api | 439 | [0magnet] (testing) Exception (0magnet): The cookies provided by FlareSolverr are not valid: The cookies provided by FlareSolverr are not valid | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-07-24T04:11:57Z | 2022-07-24T23:50:02Z | https://github.com/FlareSolverr/FlareSolverr/issues/439 | [
"invalid"
] | baopc001 | 1 |
pennersr/django-allauth | django | 3,573 | Issue with Microsoft sign in | I am facing a login issue when using Microsoft sign-in. I have configured everything according to the documentation: the app for Microsoft sign-in has been configured and has admin consent granted in the Microsoft Entra admin center. I used the Django admin panel to register the social application with the following settings:
[![social app settings][1]][1]
When I click the Sign in with Microsoft button, it redirects me to the consent page:
[![consent page][2]][2]
after I click Continue, I am redirected to the sign in page to select the account to sign in, I sign in with an account from my organization and continue, the login does not succeed at this point and I am directed to the following page:
[![error page][3]][3]
The redirect URL is set to : "http://localhost:8000/accounts/microsoft/login/callback/"
and the app is set for **Single tenant.**
> API permissions:
[![enter image description here][4]][4]
> settings.py

    SOCIALACCOUNT_PROVIDERS = {
        'microsoft': {
            'TENANT': 'organizations',
        }
    }
[1]: https://i.stack.imgur.com/EheEy.png
[2]: https://i.stack.imgur.com/VWqOS.png
[3]: https://i.stack.imgur.com/puFeS.png
[4]: https://i.stack.imgur.com/JRA7u.png | closed | 2023-12-19T17:17:16Z | 2023-12-20T08:55:59Z | https://github.com/pennersr/django-allauth/issues/3573 | [] | muazshahid | 1 |
oegedijk/explainerdashboard | plotly | 38 | Question regarding deployment on Heroku | I just tried to deploy my app on Heroku by directly importing the github project.
However, I did not manage to "add the buildpack" correctly - I'm still generating a slug larger than 500MB. I did
- add the folder bin from https://github.com/niteoweb/heroku-buildpack-shell.git to my project folder,
- add the folder .heroku including the file run.sh containing "pip install -y xgboost".
What am I doing wrong, do I have to add the buildpack somewhere in Heroku itself? | closed | 2020-12-04T15:38:50Z | 2021-01-20T11:37:22Z | https://github.com/oegedijk/explainerdashboard/issues/38 | [
"help wanted"
] | hkoppen | 67 |
xorbitsai/xorbits | numpy | 564 | BUG: pip install from source build failed on macos | ### Describe the bug
I try to build the xorbits by `pip install -e .`, but build fails. My macos runs on Apple M2.
```python
➜ python git:(main) ✗ pip install -e . -v
Using pip 23.0.1 from /Users/codingl2k1/.pyenv/versions/3.9.17/lib/python3.9/site-packages/pip (python 3.9)
Obtaining file:///Users/codingl2k1/Work/xorbits/python
Running command pip subprocess to install build dependencies
Ignoring pandas: markers 'python_version < "3.9" and platform_machine != "aarch64"' don't match your environment
Ignoring pandas: markers 'python_version < "3.9" and platform_machine == "aarch64"' don't match your environment
Ignoring pandas: markers 'python_version >= "3.10" and python_version < "3.11"' don't match your environment
Ignoring pandas: markers 'python_version >= "3.11"' don't match your environment
Ignoring scipy: markers 'python_version < "3.9" and platform_machine != "aarch64"' don't match your environment
Ignoring scipy: markers 'python_version < "3.9" and platform_machine == "aarch64"' don't match your environment
Ignoring scipy: markers 'python_version >= "3.10" and python_version < "3.11"' don't match your environment
Ignoring scipy: markers 'python_version >= "3.11"' don't match your environment
Ignoring cloudpickle: markers 'python_version >= "3.11"' don't match your environment
Collecting setuptools<64
Using cached setuptools-63.4.3-py3-none-any.whl (1.2 MB)
Collecting wheel
Using cached wheel-0.40.0-py3-none-any.whl (64 kB)
Collecting numpy
Using cached numpy-1.25.0-cp39-cp39-macosx_11_0_arm64.whl (14.0 MB)
Collecting pandas==1.2.2
Using cached pandas-1.2.2.tar.gz (5.5 MB) <-- Download the pandas source code.
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
^C Installing build dependencies ... canceled
ERROR: Operation cancelled by user
```
The problem is that `pyproject.toml` pins an old version of pandas. In my case, pandas==1.2.2 is so old that it does not support Apple M2.
### To Reproduce
To help us to reproduce this bug, please provide information below:
1. Your Python version: 3.9.17
2. The version of Xorbits you use: latest master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
### Expected behavior
A clear and concise description of what you expected to happen.
### Additional context
Add any other context about the problem here.
| closed | 2023-07-04T09:13:14Z | 2023-07-05T03:15:47Z | https://github.com/xorbitsai/xorbits/issues/564 | [
"bug"
] | codingl2k1 | 0 |
blockchain-etl/bitcoin-etl | dash | 23 | Zcash dataset ignores Sapling pool when calculating fees | For transactions containing Sapling shielded inputs or outputs, a single fake input or output is added to the transaction representing the net migration of funds into or out of the Sapling pool.
https://github.com/blockchain-etl/bitcoin-etl/blob/23877a54f90536bc2fe817490661efbd07d97be9/bitcoinetl/service/btc_service.py#L177-L187
However, this fake input or output is not included in the fee calculation, causing fees to appear massively inflated in some cases. Looking at the transaction implementation, the fee _should_ include the fake input or output:
https://github.com/blockchain-etl/bitcoin-etl/blob/23877a54f90536bc2fe817490661efbd07d97be9/bitcoinetl/domain/transaction.py#L45-L71
My guess is that this is an ordering problem: the fee is being cached somewhere before the fake input or output is added. I can't see where that is happening though.
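Whatever the root cause, the arithmetic is easy to sanity-check. Taking the zatoshi totals that BigQuery returns for the example transaction below (both already include the fake Sapling entry), the real fee is recovered by keeping the fake entry's sign in the sum:

```python
# Totals as returned by BigQuery for tx 13d78b60...d03a6b (zatoshis).
total_input = 147_103_480_000    # includes the fake Sapling input
total_output = -147_103_470_000  # includes the fake Sapling output

inflated_fee = total_input - total_output  # what the dataset appears to report
real_fee = total_input + total_output      # 0.0001 ZEC

print(inflated_fee)  # 294206950000
print(real_fee)      # 10000
```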
As an example, BigQuery says that transaction `13d78b6036aa5be6646c4fa96e3a779eb03f82fdc226e7cf82a9197d28d03a6b` has a fee of 294206950000 zatoshis, when it was really 0.0001 ZEC (10,000 zatoshis) according to a blockchain explorer. However, it returns `inputs.value = 147103480000` and `outputs.value = -147103470000`, and adding these together gives the real fee. | closed | 2019-07-06T23:32:50Z | 2019-07-22T22:34:03Z | https://github.com/blockchain-etl/bitcoin-etl/issues/23 | [] | str4d | 4 |
fastapi-users/fastapi-users | fastapi | 1,312 | fastapi depreciation in "full example" | ## Documentation "full example" contains deprecated on_event call
Setting up the "full example" demo from the documentation, I'm running into a deprecation issue on line 50 of `examples/sqlalchemy-oauth/app/app.py`:
> The method "on_event" in class "FastAPI" is deprecated
>
> on_event is deprecated, use lifespan event handlers instead.
>
> Read more about it in the
> [FastAPI docs for Lifespan Events](https://fastapi.tiangolo.com/advanced/events/).
> Pylance
> (method) def on_event(event_type: str) -> ((DecoratedCallable@on_event) -> DecoratedCallable@on_event)
> Add an event handler for the application.
Deprecation documentation for fastapi is here: https://fastapi.tiangolo.com/advanced/events/#alternative-events-deprecated
## Configuration
- Python version : 3.10
- FastAPI version : 0.104.0
- FastAPI Users version : 12.1.2
| closed | 2023-10-24T08:22:43Z | 2024-03-05T08:09:51Z | https://github.com/fastapi-users/fastapi-users/issues/1312 | [
"documentation"
] | cargocultprogramming | 4 |
pydata/pandas-datareader | pandas | 167 | yahoo datasource does not work for currencies | Yahoo uses '=' in currency symbols, example: "EURUSD=X"
The datareader does not work with this, even though the `=` is not a special character in URL requests.
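For what it's worth, even where `=` is treated as reserved, the symbol can be percent-encoded before building the request URL; a stdlib-only sketch of what the datareader could do:

```python
from urllib.parse import quote

symbol = "EURUSD=X"
print(quote(symbol))            # 'EURUSD%3DX' -- '=' percent-encoded by default
print(quote(symbol, safe="="))  # 'EURUSD=X'   -- or kept as-is if the API allows it
```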
| closed | 2016-02-06T11:38:54Z | 2018-01-18T16:32:31Z | https://github.com/pydata/pandas-datareader/issues/167 | [
"yahoo-finance"
] | awb99 | 4 |
deepspeedai/DeepSpeed | pytorch | 5,819 | [BUG] Deepspeed ZeRO3 not partitioning model parameters | **Describe the bug**
Even after applying ZeRO3, model parameters are copied, not partitioned, across all the available GPUs.
**To Reproduce**
When I run the below code with this command: `deepspeed pretrain.py --deepspeed ds_config_zero3.json`, I get this result, meaning model parameters are copied, not partitioned. And I get CUDA OOM if I try to train this model.
```python
defaultdict(<class 'list'>, {'model.embed_tokens.weight': [device(type='cuda', index=3)], ...)
defaultdict(<class 'list'>, {'model.embed_tokens.weight': [device(type='cuda', index=2)], ...)
defaultdict(<class 'list'>, {'model.embed_tokens.weight': [device(type='cuda', index=1)], ...)
```
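For context: "partitioned" means each rank should hold only an equal shard of every flattened parameter, not a full replica. A toy NumPy illustration of the expected behaviour (not DeepSpeed code):

```python
import numpy as np

def partition(flat: np.ndarray, world_size: int) -> list:
    """Toy ZeRO-3-style partitioning: pad the flattened parameter and
    give each rank one equal shard."""
    padded = -(-flat.size // world_size) * world_size  # ceil to a multiple
    buf = np.zeros(padded, dtype=flat.dtype)
    buf[: flat.size] = flat
    return np.split(buf, world_size)

weights = np.arange(10, dtype=np.float32)
shards = partition(weights, 4)
# Each of the 4 ranks stores 3 padded elements instead of all 10.
print([s.size for s in shards])  # [3, 3, 3, 3]
```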
**pretrain.py**
```python
...
# https://github.com/microsoft/DeepSpeed/issues/4208
import deepspeed
from transformers.deepspeed import HfDeepSpeedConfig
ds_config = {
    "fp16": {"enabled": False},
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": True
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": True
        },
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}
# Share the DeepSpeed config with HuggingFace so we can properly load the
# large model with zero stage 3
hfdsc = HfDeepSpeedConfig(ds_config)
# For pretrained models, the DeepSpeed config needs to have is_deepspeed_zero3_enabled: true set up in TrainingArguments and it needs a ZeRO configuration enabled. The TrainingArguments object must be created before calling the model from_pretrained().
training_args = TrainingArguments(
    output_dir=output_dir,
    max_steps=max_steps,
    num_train_epochs=num_train_epochs,
    logging_steps=logging_steps,
    eval_steps=eval_steps,
    save_steps=save_steps,
    evaluation_strategy='steps',
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_eval_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    eval_accumulation_steps=gradient_accumulation_steps,
    gradient_checkpointing=gradient_checkpointing,
    learning_rate=learning_rate,
    lr_scheduler_type=lr_scheduler_type,
    warmup_ratio=warmup_ratio,
    weight_decay=weight_decay,
    # optim=optim,  # You are using ZeRO with an untested optimizer
    bf16=bf16,
    remove_unused_columns=remove_unused_columns,
    run_name=run_name,
    report_to=report_to,
    ddp_find_unused_parameters=False,  # RuntimeError: Expected to mark a variable ready only once.
    ddp_timeout=72000,  # RuntimeError: [2] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '0', but store->get('0') got error: Socket Timeout
    deepspeed=ds_config,
)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_storage=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    # device_map="cpu",
)
...
# check GPU usage
from collections import defaultdict
device_map = defaultdict(list)
for n, p in model.named_parameters():
    device_map[n].append(p.device)
print(device_map)
```
**ds_config_zero3.json**
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 0,
"stage3_max_reuse_distance": 0,
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
**Expected behavior**
Model parameters should be partitioned across the GPUs and model should be trained without CUDA OOM.
**ds_report output**
```
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.2
[WARNING] using untested triton version (2.2.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/lib/python3.10/site-packages/torch']
torch version .................... 2.2.0+cu121
deepspeed install path ........... ['/opt/conda/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.14.0, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.2, cuda 12.1
shared memory (/dev/shm) size .... 64.00 MB
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System info (please complete the following information):**
- OS: `Linux-5.15.133+-x86_64-with-glibc2.35`
- GPU count and types: 4 or 8 NVIDIA H100 80GB HBM3
- Interconnects (if applicable) [e.g., two machines connected with 100 Gbps IB]
- Python version: `3.10.14`
- Any other relevant info about your setup: `deepspeed.__version__ == 0.14.0`
**Launcher context**
`deepspeed pretrain.py --deepspeed ds_config_zero3.json`
**Docker context**
Are you using a specific docker image that you can share?
**Additional context**
I get the log `The device_map was not initialized. Setting device_map to {'':torch.cuda.current_device()}` while loading the model. I think it shouldn't set `device_map` this way when ZeRO3 is enabled.
| closed | 2024-08-01T08:05:19Z | 2024-09-01T23:35:46Z | https://github.com/deepspeedai/DeepSpeed/issues/5819 | [
"bug",
"training"
] | echo-yi | 9 |
FlareSolverr/FlareSolverr | api | 663 | Chromium or Portables not supported when run from source | ### Have you checked our README?
- [X] I have checked the README
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.0.1
- Last working FlareSolverr version: 2.2.10
- Operating system: Windows 10
- Are you using Docker: [no]
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a proxy or VPN: [no]
- Are you using Captcha Solver: [no]
- If using captcha solver, which one:
- URL to test this issue: doesn't get that far
```
### Description
Follow steps listed here: https://github.com/FlareSolverr/FlareSolverr#from-source-code
Installing Chromium (NOT Chrome) results in a portable zip file download and therefore a clean registry and system. BUT how do you set a path to the `chrome.exe`?
As it is, this simply doesn't run. Please verify the run-from-source instructions; the errors are clear.
### Logged Error Messages
```text
415, in __init__
browser = subprocess.Popen(
^^^^^^^^^^^^^^^^^
File "c:\Python311\Lib\subprocess.py", line 1024, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "c:\Python311\Lib\subprocess.py", line 1433, in _execute_child
args = list2cmdline(args)
^^^^^^^^^^^^^^^^^^
File "c:\Python311\Lib\subprocess.py", line 608, in list2cmdline
for arg in map(os.fsdecode, seq):
File "<frozen os>", line 824, in fsdecode
TypeError: expected str, bytes or os.PathLike object, not NoneType
During handling of the above exception, another exception occurred:
```
```
Exception: Error getting browser User-Agent. expected str, bytes or os.PathLike object, not NoneType
```
### Screenshots
_No response_ | closed | 2023-01-07T01:31:15Z | 2023-01-10T00:07:24Z | https://github.com/FlareSolverr/FlareSolverr/issues/663 | [] | laendle | 3 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,176 | [Feature Request]: API endpoint to get user metadata for Extra Networks and Checkpoints | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
In the WebUI users can set things like Activation Text, Preferred Weight, and Notes for Loras, Hypernetworks, etc. The `.json` files also contain details for which version of Stable Diffusion the Lora can be used for. It would be extremely valuable for projects with different frontends if that information could be accessed via the API. Plugins for Krita or addons for Blender could access presets the user had already saved on the server, without the user needing to remember all the activation text.
The current `/sdapi/v1/loras` endpoint returns a `metadata` section, but it doesn't include any of the `user_metadata` fields.
### Proposed workflow
1) `/sdapi/v1/sd-models` would include an "sd version" field for the models.
2) `/sdapi/v1/loras`, `/sdapi/v1/hypernetworks`, `/sdapi/v1/embeddings`, and `/sd_extra_networks/metadata` could all include the metadata that's saved in the `.json` files for these extra networks
3) A new endpoint to update the `.json` metadata - likely `/sd_extra_networks/user_metadata/'. Parameters would be a file path or the name of the Lora, as well as a post body.
For reference, here's what the `.json` documents currently look like
```
{
"description": "My Example Lora",
"sd version": "SD1",
"activation text": "really_cool, best_devs, plz_add_feature",
"preferred weight": 0.8,
"notes": ""
}
```
With those pieces of information, tools that interact with the API can give users a much better experience.
### Additional information
_No response_ | open | 2024-07-09T05:07:28Z | 2024-07-09T05:07:28Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16176 | [
"enhancement"
] | DrCyanide | 0 |
widgetti/solara | jupyter | 478 | adding CORS middleware | What's the best way to add FastAPI CORS middleware to solara? I tried two approaches
### Method 1
Adding in the following code at the module level results in a circular import error
```
import solara
import solara.server.fastapi
from fastapi.middleware.cors import CORSMiddleware
solara.server.fastapi.app.add_middleware(CORSMiddleware, allow_origins=['http://localhost:3000'], allow_credentials=True, allow_methods=['*'], allow_headers=['*'])
@solara.component
def Page():
    with solara.Column():
        solara.Info("Hello")
        solara.Button("Button")
Page()
```
Error:
```
File "/Users/Brian/anaconda3/envs/jdaviz/lib/python3.10/site-packages/solara/server/starlette.py", line 47, in <module>
from . import app as appmod
File "/Users/Brian/anaconda3/envs/jdaviz/lib/python3.10/site-packages/solara/server/app.py", line 407, in <module>
apps["__default__"] = AppScript(os.environ.get("SOLARA_APP", "solara.website.pages:Page"))
File "/Users/Brian/anaconda3/envs/jdaviz/lib/python3.10/site-packages/solara/server/app.py", line 69, in __init__
app = self._execute()
File "/Users/Brian/anaconda3/envs/jdaviz/lib/python3.10/site-packages/solara/server/app.py", line 116, in _execute
routes = [solara.autorouting._generate_route_path(self.path, first=True, initial_namespace=initial_namespace)]
File "/Users/Brian/anaconda3/envs/jdaviz/lib/python3.10/site-packages/solara/autorouting.py", line 507, in _generate_route_path
module = source_to_module(subpath, initial_namespace=initial_namespace)
File "/Users/Brian/anaconda3/envs/jdaviz/lib/python3.10/site-packages/solara/autorouting.py", line 37, in source_to_module
exec(ast, mod.__dict__)
File "/Users/Brian/Work/solara/test2.py", line 3, in <module>
import solara.server.fastapi
File "/Users/Brian/anaconda3/envs/jdaviz/lib/python3.10/site-packages/solara/server/fastapi.py", line 5, in <module>
app = FastAPI(routes=starlette.routes)
AttributeError: partially initialized module 'solara.server.starlette' has no attribute 'routes' (most likely due to a circular import). Did you mean: 'Route'?
```
### Method 2
Adding in the code within the Page component doesn't throw any errors but does not add the CORS to the app.
```
import solara
from fastapi.middleware.cors import CORSMiddleware
@solara.component
def Page():
    import solara.server.fastapi
    solara.server.fastapi.app.add_middleware(CORSMiddleware, allow_origins=['http://localhost:3000'], allow_credentials=True, allow_methods=['*'], allow_headers=['*'])
    with solara.Column():
        solara.Info("Hello")
        solara.Button("Button")
Page()
``` | open | 2024-01-23T21:54:31Z | 2024-01-23T21:54:31Z | https://github.com/widgetti/solara/issues/478 | [] | havok2063 | 0 |
jupyterhub/repo2docker | jupyter | 1,000 | Updates to environment.yml file are ignored when (re)building the docker image | <!-- Thank you for contributing. These HTML commments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Bug description
<!-- Use this section to clearly and concisely describe the bug. -->
It looks like the cache is used for the conda (mamba) command used to update the environment from the `environment.yml` file, even when the latter has been modified (in my case, removing a conda package and moving this dependency into the pip subsection instead).
#### Expected behaviour
<!-- Tell us what you thought would happen. -->
The environment in the newly updated docker image reflecting the changes made in `environment.yml`
#### Actual behaviour
<!-- Tell us what you actually happens. -->
No change in the environment installed in the newly updated image.
### How to reproduce
<!-- Use this section to describe the steps that a user would take to experience this bug. -->
1. build the image with repo2docker
2. update `environment.yml` as described above
3. rebuild the image with repo2docker
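For concreteness, the step-2 edit looked roughly like this (`somepkg` is a placeholder name):

```yaml
# environment.yml after the edit: the package was removed from the conda
# dependency list and re-added under the pip subsection.
dependencies:
  - pip
  - pip:
      - somepkg
```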
### Your personal set up
Tested both on Binder and using repo2docker-action.
For more context: https://github.com/fastscape-lem/s2s-future-dragonstone/issues/4
| closed | 2021-01-07T22:30:06Z | 2021-01-08T10:54:58Z | https://github.com/jupyterhub/repo2docker/issues/1000 | [] | benbovy | 2 |
mars-project/mars | pandas | 2,370 | [BUG] Arithmetic cannot process period type | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Arithmetic cannot process period type.
**To Reproduce**
To help us reproduce this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
```
In [1]: import pandas as pd
In [2]: import mars.dataframe as md
In [3]: s = md.Series(pd.period_range("2000-01-01", periods=10, freq="D"))
In [4]: left, right = s[2], s[7]
In [6]: ((s >= left) & (s <= right)).execute()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-a4d787f38c66> in <module>
----> 1 ((s >= left) & (s <= right)).execute()
~/Workspace/mars/mars/dataframe/arithmetic/__init__.py in call(df, other, **kw)
89 raise ValueError('Can only compare '
90 'identically-labeled DataFrame object')
---> 91 return wrap_notimplemented_exception(func)(df, other, **kw)
92 return call
93
~/Workspace/mars/mars/dataframe/utils.py in wrapper(*args, **kwargs)
844 def wrapper(*args, **kwargs):
845 try:
--> 846 return func(*args, **kwargs)
847 except NotImplementedError:
848 return NotImplemented
~/Workspace/mars/mars/dataframe/arithmetic/greater_equal.py in ge(df, other, axis, level)
50 def ge(df, other, axis='columns', level=None):
51 op = DataFrameGreaterEqual(axis=axis, level=level, lhs=df, rhs=other)
---> 52 return op(df, other)
~/Workspace/mars/mars/dataframe/arithmetic/core.py in __call__(self, x1, x2)
486 def __call__(self, x1, x2):
487 x1 = self._process_input(x1)
--> 488 x2 = self._process_input(x2)
489 if isinstance(x1, SERIES_TYPE) and isinstance(x2, DATAFRAME_TYPE):
490 # reject invoking series's op on dataframe
~/Workspace/mars/mars/dataframe/arithmetic/core.py in _process_input(x)
437 return DataFrame(x)
438 elif isinstance(x, (list, tuple, np.ndarray, TENSOR_TYPE)):
--> 439 return astensor(x)
440 raise NotImplementedError
441
~/Workspace/mars/mars/tensor/datasource/array.py in tensor(data, dtype, order, chunk_size, gpu, sparse)
132 if isinstance(data, TensorData):
133 data = Tensor(data)
--> 134 return data.astype(dtype or data.dtype, order=order, copy=False)
135 elif isinstance(data, (tuple, list)) and len(data) > 0 and \
136 all(isinstance(d, TENSOR_TYPE) for d in data):
~/Workspace/mars/mars/tensor/base/astype.py in _astype(tensor, dtype, order, casting, copy)
151 array([1, 2, 2])
152 """
--> 153 dtype = np.dtype(dtype)
154 tensor_order = get_order(order, tensor.order)
155
TypeError: Cannot interpret 'period[D]' as a data type
``` | closed | 2021-08-23T02:56:07Z | 2021-08-23T09:50:20Z | https://github.com/mars-project/mars/issues/2370 | [
"type: bug",
"mod: dataframe"
] | qinxuye | 0 |
python-restx/flask-restx | api | 266 | Hide fields from response based on some conditions | Suppose, I have the following Model.
```python
model = api.model('Model', {
'name': fields.String,
'address': fields.String,
'date_updated': fields.DateTime(dt_format='rfc822'),
})
```
For some users, I want to send,
```python
{
'name': 'Mr. abc',
'address': 'x'
}
```
Also, for some other users, I would like to hide some fields
```python
{
'name': 'Mr. abc',
'last_updated': '2050-12-30'
}
```
@noirbizarre, How is it possible now?
Thanks in advance.
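One possible workaround in the meantime, shown as a framework-agnostic sketch rather than a built-in flask-restx feature: marshal with the full model as usual, then filter the resulting dict against a per-user allow-list before returning it. The allow-lists below reuse the field names from the example above and are assumptions:

```python
def filter_fields(payload, allowed):
    # Drop keys the current caller is not allowed to see; `payload` is
    # assumed to be the dict produced by marshalling with the full model.
    return {k: v for k, v in payload.items() if k in allowed}

record = {"name": "Mr. abc", "address": "x", "last_updated": "2050-12-30"}
public_view = filter_fields(record, {"name", "address"})
restricted_view = filter_fields(record, {"name", "last_updated"})
```

In a resource method this would run between `marshal(data, model)` and the `return`.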
| closed | 2020-12-18T16:28:22Z | 2021-02-03T01:15:29Z | https://github.com/python-restx/flask-restx/issues/266 | [
"question"
] | mhihasan | 4 |
tqdm/tqdm | jupyter | 1,429 | tqdm shows an update on every iteration, even with miniter>1 | Hello, thanks for the great library!
I am using `tqdm==4.64.1` with `python==3.10.8` on linux.
I see an issue with `miniters` that is consistent with other reports at #1396 and #1381 (and I also tried to change `dynamic_miniters`, but that one only seems to exist in the documentation, cf. #1327).
The issue is that, while I have `miniters=240`, the first update is displayed at `i=200`, and then I get an update for every iteration:

This is a script that reproduces the issue:
```python
import sys
from tqdm import tqdm, __version__
from time import sleep
print(__version__, sys.version, sys.platform)
N = 100000
progress_bar = tqdm(range(N), total=N, miniters=240)
# actual: an update is printed at i~180 and then on every iteration from then on
# expected: updates at i=239, 479, etc.
for i in progress_bar:
sleep(0.1)
progress_bar.set_postfix({'i':i}, refresh=False)
``` | open | 2023-02-16T16:52:48Z | 2024-08-23T09:48:03Z | https://github.com/tqdm/tqdm/issues/1429 | [] | mwouts | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,149 | can the cyclegan model be applied to paired images | I have some paired images, so I want to know if the CycleGAN model has a paired-image mode and whether it can be applied to paired images. Thank you very much! | closed | 2020-09-14T11:54:32Z | 2020-09-18T07:22:27Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1149 | [] | yianzhongguo | 2 |
psf/requests | python | 6,231 | `❯` character becomes `â¯` in data after post | Code: https://github.com/pcroland/deew/blob/main/dev_scripts/post/doom9.py
On other sites it worked fine but on this site it changed the input character to gibberish. I guess it's related to urlencoding.
If I urlencode
```
Help:
❯ deew -h
```
urlencoder.org creates
```
Help%3A%0A%E2%9D%AF%20deew%20-h
```
while chrome creates
```
Help%3A%0A%u276F%20deew%20-h
```
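As a side note on the `â¯` gibberish: it is the classic mojibake of UTF-8 bytes being re-decoded as Latin-1 on the receiving site, so it is likely a server-side charset mismatch rather than something requests does. And Chrome's `%u276F` is the legacy JavaScript `escape()` notation, which standards-compliant percent-encoders (including urllib and requests) never emit. A sketch of both observations, assuming that diagnosis:

```python
from urllib.parse import quote

# U+276F encodes to three UTF-8 bytes; a server that re-decodes those
# bytes as Latin-1 renders them as "â¯" (the middle byte is invisible).
utf8_bytes = "❯".encode("utf-8")          # b'\xe2\x9d\xaf'
mojibake = utf8_bytes.decode("latin-1")   # 'â\x9d¯', displayed as "â¯"

# Standard percent-encoding of the same character, as urlencoder.org shows:
encoded = quote("❯")                      # '%E2%9D%AF'
```

So pre-encoding with `quote()` cannot reproduce Chrome's `%uXXXX` form; if the site expects a particular charset, that is what has to match.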
Is there a way to mimic how Chrome encodes it? | closed | 2022-09-05T22:16:55Z | 2023-09-07T00:03:15Z | https://github.com/psf/requests/issues/6231 | [] | pcroland | 5 |
rthalley/dnspython | asyncio | 350 | Error: dns.resolver.NXDOMAIN: None of DNS query names exist | I installed the latest release and tried to replicate the code here https://github.com/rthalley/dnspython/blob/master/examples/mx.py and I get `dns.resolver.NXDOMAIN: None of DNS query names exist: nominum.com., nominum.com`.
I am running on Windows.
Do you have any leads? Thanks | closed | 2019-02-09T06:30:53Z | 2020-07-15T18:38:39Z | https://github.com/rthalley/dnspython/issues/350 | [
"Cannot Reproduce"
] | Murukaen | 5 |
graphql-python/graphene-sqlalchemy | graphql | 285 | Dicussion: Support mutations in graphene-sqlalchemy | This is going to be a discussion thread to debate whether it is good to implement mutations in graphene-sqlalchemy. There is definitely a scope for this feature and I think it's useful, so the real question is more towards how to implement it.
We can probably start with creating objects and then later expand to updating the various attributes.
There are discussions around this topic in https://github.com/graphql-python/graphene-sqlalchemy/pull/213 as well as https://github.com/graphql-python/graphene-sqlalchemy/issues/29. I'll copy the relevant items here so that we won't have to repeat them again.
Points to note:
- Have a class named `SQLAlchemyInputObjectType` and have model and exclude_fields as meta properties.
```python
class CreateUser(SQLAlchemyInputObjectType):
class Meta:
exclude_fields = ('id', )
model = UserModel
```
- No need to worry about hybrid or composite columns, as they are basically derived from other columns; we just need a mechanism to accept the plain fields, hence a function named `construct_fields_for_input`.
- How to handle relationships?
Since we are creating row entries for the table, we can probably accept `id` of the related table entry and convert it to the database id.
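On that last point, a sketch of the id conversion, assuming graphene's default Relay global-ID scheme (a base64-encoded `"TypeName:pk"` string); the `UserType` name is just an example:

```python
import base64

def to_database_id(global_id, expected_type):
    # Decode a Relay-style global ID into the database primary key,
    # mirroring what graphene.relay.Node.from_global_id does.
    decoded = base64.b64decode(global_id).decode("utf-8")
    type_name, _, pk = decoded.partition(":")
    if type_name != expected_type:
        raise ValueError(f"expected a {expected_type} id, got {type_name}")
    return int(pk)

# A client would send something like this for a related User row:
user_gid = base64.b64encode(b"UserType:42").decode("ascii")
```

A mutation resolver could run incoming relationship ids through such a helper before constructing the SQLAlchemy model instance.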
@jnak, thoughts? | closed | 2020-09-08T14:15:38Z | 2022-05-13T11:54:10Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/285 | [
"enhancement"
] | sreerajkksd | 7 |
kubeflow/katib | scikit-learn | 2,291 | Tuning API in Katib for LLMs | Recently, we implemented [a new `train` Python SDK API](https://github.com/kubeflow/training-operator/blob/master/docs/proposals/train_api_proposal.md) in Kubeflow Training Operator to easily fine-tune LLMs on multiple GPUs with predefined datasets provider, model provider, and HuggingFace trainer.
To continue our roadmap around LLMOps in Kubeflow, we want to give users the ability to tune hyperparameters of LLMs using a simple Python SDK API: `tune`.
This requires making appropriate changes to the Katib Python SDK so that users can set the model, the dataset, and the hyperparameters they want to optimize for an LLM.
We need to reuse the existing Training Operator components that we used for the `train` API: `storage-initializer` and `trainer`.
| closed | 2024-03-19T10:55:55Z | 2024-09-03T14:32:17Z | https://github.com/kubeflow/katib/issues/2291 | [
"kind/feature",
"area/gsoc"
] | andreyvelich | 7 |
jacobgil/pytorch-grad-cam | computer-vision | 245 | How is the keypoint detection model used | Hello, can you provide a tutorial for keypoint detection? Thank you. | closed | 2022-05-04T04:07:31Z | 2022-09-20T10:15:42Z | https://github.com/jacobgil/pytorch-grad-cam/issues/245 | [] | XiaopinWang | 3 |
encode/apistar | api | 288 | Generated documentation mangles formatting in docstrings | Given
```python
def some_service():
"""Perform a service.
This docstring has two logical paragraphs."""
return "Hello!"
```
the generated documentation using `apistar.handlers.docs_urls` presents as
> Perform a service. This docstring has two logical paragraphs.
with a single visual paragraph.
While I'm not sure whether API Star should assume reST, Markdown, or something else, the inability to use something as fundamental as paragraphs in the published API documentation is pretty painful. | closed | 2017-09-16T00:10:29Z | 2018-04-11T12:02:50Z | https://github.com/encode/apistar/issues/288 | [] | ojacobson | 2 |
robotframework/robotframework | automation | 5,353 | Listener's start_user_keyword modifications only affect subsequent executions, not current keyword run | ### Problem Description
When modifying a keyword's body via the `start_user_keyword` listener method, changes do not take effect for the **current execution** of the keyword. The modified behavior only applies when the keyword is called **again** in subsequent executions. This prevents real-time keyword suppression/dynamic modification during test runs.
### Steps to Reproduce
1. Create a listener that modifies keyword definitions in `start_user_keyword`:
**KeywordSuppressor.py**
```python
class KeywordSuppressor:
ROBOT_LISTENER_API_VERSION = 3
def __init__(self, kw_name):
self.keyword_to_skip = kw_name
def start_user_keyword(self, data, implementation, result):
if data.name == self.keyword_to_skip:
resource = implementation.owner
for kw in resource.keywords:
if data.name == kw.name:
kw.body.clear()
kw.body.create_keyword(
"Log",
[
f"ROBOT LISTENER: KeywordSuppressor - the keyword '{kw.name}' is requested to be skipped."
],
)
```

2. Create a test case and a resource file
**Keyword.resource**
```
*** Keywords ***
Resource Keyword1
Log Hello Resource Keyword1
Resource Keyword2
Log Hello Resource Keyword2
```
**Test.robot**
```
*** Settings ***
Resource Keyword.resource
*** Test Cases ***
Log Hello World1
Robot Keyword1
Resource Keyword1
Robot Keyword2
Resource Keyword2
Log Hello World2
Robot Keyword1
Resource Keyword1
Robot Keyword2
Resource Keyword2
Log Hello World3
Robot Keyword1
Resource Keyword1
Robot Keyword2
Resource Keyword2
*** Keywords ***
Robot Keyword1
Log Hello Robot Keyword1
Robot Keyword2
Log Hello Robot Keyword2
```
3. Call
`robot --pythonpath . -b rlog --listener ./xxx/KeywordSuppressor.py:'Resource Keyword1' ./xxx/Test.robot`
### Expected Behavior
Modifications made in `start_user_keyword` should immediately affect the **current execution** of the keyword being processed.
### Actual Behavior
Changes only take effect on **subsequent calls** to the keyword.

I need confirmation of whether this is intentional design or a limitation of the listener API.
### Environment
Robotframework 7.1
Python Version 3.9.9
| open | 2025-03-05T07:41:20Z | 2025-03-06T10:01:28Z | https://github.com/robotframework/robotframework/issues/5353 | [] | Lavinu | 2 |
marshmallow-code/apispec | rest-api | 732 | ApiDoc Generation failing due to missing name parameter, Resolve Parameters bug? | I have a schema which contains all my query parameters for my endpoint (pagination, filtering, etc)
Based on resolve_parameters I expect it to work as documented. The Marshmallow Plugin will auto expand all the fields in the schema and result with a list of parameters.
https://apispec.readthedocs.io/en/latest/api_ext.html#apispec.ext.marshmallow.schema_resolver.SchemaResolver.resolve_parameters
So when I try to generate my documentation I get an error stating that the `name` property must exist. But according to `resolve_parameters` it should not be needed (and if I do add it, a single query parameter is created as a nested JSON object).
Am I understanding/configuring something incorrectly?
# Dependency Versions
```
marshmallow==3.14.1
apispec==5.1.1
apispec-webframeworks==0.5.2
Flask==2.0.2
```
# Example layout of doc generation
```
from apispec import APISpec
from apispec.ext.marshmallow import MarshmallowPlugin
from apispec_webframeworks.flask import FlaskPlugin
def docs():
spec = APISpec(
title=SERVICE_API_SPEC_TITLE,
info=dict(description=SERVICE_API_SPEC_DESCRIPTION),
version=SERVICE_API_SPEC_VERSION,
openapi_version=OPENAPI_VERSION,
plugins=[FlaskPlugin(), MarshmallowPlugin()],
security=[
{"oAuth": []},
{"bearerAuth": []},
]
)
spec.components.schema(
"QueryParamsSchema", schema=QueryParamsSchema)
spec.components.parameter(
'customerId',
'path',
component={
'name': 'customer_id',
'description': 'customer name',
'schema': {
'type': 'string',
}
}
)
for rule in sorted(current_app.url_map.iter_rules(), key=lambda x: str(x)):
if str(rule).startswith('/my_app'):
spec.path(view=current_app.view_functions[rule.endpoint])
return spec.to_yaml()
```
# Example doc string in flask endpoint
```
parameters:
- customerId
      - in: query
        schema:
          $ref: '#/components/schemas/QueryParamsSchema'
``` | closed | 2021-11-22T11:58:00Z | 2021-11-22T16:03:14Z | https://github.com/marshmallow-code/apispec/issues/732 | [] | arthurvanduynhoven | 6 |
suitenumerique/docs | django | 103 | ✨Error reporting / monitoring | ## Feature Request
We want to integrate Sentry with Docs; here is an example PR: https://github.com/numerique-gouv/people/pull/378 | closed | 2024-06-28T12:57:33Z | 2024-11-25T08:46:15Z | https://github.com/suitenumerique/docs/issues/103 | [
"feature",
"docker",
"helm"
] | AntoLC | 0 |
chezou/tabula-py | pandas | 102 | Can read_pdf function return positions of table too? | It would be great if `tabula.read_pdf(f_pdf, multiple_tables=True, lattice=True)` also returned the positions of each extracted table.
Thanks! | closed | 2018-08-25T10:23:32Z | 2020-04-18T13:02:38Z | https://github.com/chezou/tabula-py/issues/102 | [] | meokbodizi | 7 |
writer/writer-framework | data-visualization | 385 | exclude generated files as storybook and app_templates from repository | Generated files like stories or application templates should be ignored by git. They will be rebuilt in commands that take advantage of them like `alfred build`.
*.gitignore*
```
src/streamsync/app_templates/*
src/streamsync/ui.py
src/ui/components.codegen.json
src/ui/src/stories/**
```
### build consistent pypi package : ``alfred build``
`alfred build` must provision the default and hello applications, as well as regenerate the `ui.py` file. The goal is to have a complete package before sending it to pypi.
### install dev environment : ``alfred install.dev``
The `alfred install.dev` command installs the poetry build dependencies and generates the various files needed locally during development, such as the `ui.py` file and the stories.
### run storybook : ``alfred npm.storybook``
`alfred npm.storybook` updates story files before launching storybook.
| closed | 2024-04-15T04:42:20Z | 2024-08-26T09:02:00Z | https://github.com/writer/writer-framework/issues/385 | [
"housekeeping"
] | FabienArcellier | 0 |
babysor/MockingBird | deep-learning | 992 | The Baidu Netdisk link for the community pre-trained 75k synthesizer cannot be downloaded | The Baidu Netdisk link in the docs for the community pre-trained 75k synthesizer cannot be downloaded. Could someone please provide a working download link? Many thanks!!

| open | 2024-04-09T08:49:11Z | 2024-06-02T16:28:42Z | https://github.com/babysor/MockingBird/issues/992 | [] | womeiyoumz | 2 |
lux-org/lux | pandas | 30 | Better default opacity setting to prevent scatterplot datapoint occlusion | Some scatterplots have large number of datapoints and suffer from the problem of occlusion (hard to distinguish patterns since datapoints are too dense). We should change the default chart setting to make opacity lower based on number of datapoints. | open | 2020-07-17T03:05:05Z | 2020-09-19T00:45:20Z | https://github.com/lux-org/lux/issues/30 | [
"enhancement",
"easy"
] | dorisjlee | 0 |
koxudaxi/datamodel-code-generator | fastapi | 2,161 | The --collapse-root-models switch can cause "Cannot parse for target version Python 3.xx" errors | **Describe the bug**
I can reliably cause `datamodel-code-generator` to error when processing a jsonschema file which is otherwise valid, and which can be made to succeed by tweaking the schema very slightly in a way that doesn't fundamentally alter it.
**To Reproduce**
Example schema:
```json
{
"$ref": "#/definitions/LogicalExpression",
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"ValueExpression": {
"title": "ValueExpression",
"anyOf": [
{
"$ref": "#/definitions/ConditionalValueExpression"
}
]
},
"ConditionalValueExpression": {
"additionalProperties": false,
"title": "ConditionalValueExpression",
"properties": {
"default": {
"$ref": "#/definitions/ValueExpression"
}
},
"type": "object"
},
"LogicalExpression": {
"title": "LogicalExpression",
"anyOf": [
{
"$ref": "#/definitions/ValueExpression"
},
{
"type": "string"
}
]
}
}
}
```
Used commandline:
```
$ datamodel-codegen --input schema2.json --output model.py --collapse-root-models --target-python-version=3.10
The input file type was determined to be: jsonschema
This can be specified explicitly with the `--input-file-type` option.
Traceback (most recent call last):
File "C:\Users\sean.mclemon\src\scratch\scm-schema-test\venv\lib\site-packages\datamodel_code_generator\__main__.py", line 476, in main
generate(
File "C:\Users\sean.mclemon\src\scratch\scm-schema-test\venv\lib\site-packages\datamodel_code_generator\__init__.py", line 485, in generate
results = parser.parse()
File "C:\Users\sean.mclemon\src\scratch\scm-schema-test\venv\lib\site-packages\datamodel_code_generator\parser\base.py", line 1474, in parse
body = code_formatter.format_code(body)
File "C:\Users\sean.mclemon\src\scratch\scm-schema-test\venv\lib\site-packages\datamodel_code_generator\format.py", line 238, in format_code
code = self.apply_black(code)
File "C:\Users\sean.mclemon\src\scratch\scm-schema-test\venv\lib\site-packages\datamodel_code_generator\format.py", line 246, in apply_black
return black.format_str(
File "src\black\__init__.py", line 1204, in format_str
File "src\black\__init__.py", line 1218, in _format_str_once
File "src\black\parsing.py", line 98, in lib2to3_parse
black.parsing.InvalidInput: Cannot parse for target version Python 3.10: 9:20: __root__: Union[, str] = Field(..., title='Expression')
```
**Expected behavior**
The command should complete successfully and generate classes in `model.py`. Interestingly the schema can be adjusted _very_ slightly in a way that is functionally identical and the code generation succeeds without any issue. If you just swap the order of the two types inside the `anyOf` in `LogicalExpression` (like the below) code generation will succeed.
```
"LogicalExpression": {
"title": "LogicalExpression",
"anyOf": [
{
"type": "string"
},
{
"$ref": "#/definitions/ValueExpression"
}
]
}
```
I know the schema may not make sense and may look stupid but I trimmed down a fairly large json schema to get a minimal example that reproduces the problem. And, as I said, it can be adjusted very trivially so that code generation succeeds.
**Version:**
- OS: Windows 10
- Python version: 3.10.11
- datamodel-code-generator version: 0.26.3
**Additional context**
Note that while I say the schema can be adjusted so that it works, this isn't a feasible workaround for us. We are consuming a pretty large schema file that is generated automatically - so identifying and changing things around that need to be changed in the schema to workaround the issue would be _really_ difficult.
And when I say it errors/succeeds - it looks like the code itself generates - it's just that when the `datatmodel-codegen` tool invokes `black` to reformat the code it fails because the type `Union[, str]` isn't valid Python. So during the collapse process I guess types get flattened and removed, but we still end up with _something_ there in their place. And presumably when the order of the `anyOf` is reversed we end up with `Union[str, ]` which is syntactically fine. | open | 2024-11-11T08:59:30Z | 2024-11-14T09:17:13Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2161 | [] | smcl | 0 |
alteryx/featuretools | data-science | 1,845 | Running DFS with Dask input can result in a NotImplementedError | Running the following example results in a `NotImplementedError`, due to handling of categorical columns inside `query_by_values`. The code seems to have been included as a work-around for a pandas issue.
The attached CSV files are needed to run the repro code included below. The code is adapted from the `predict-next-purchase` open-source demo, in which this bug was identified.
[data.zip](https://github.com/alteryx/featuretools/files/7873277/data.zip)
See related Issue #1734
#### Code Sample
```python
import featuretools as ft
import dask.dataframe as dd
import pandas as pd
order_products = pd.read_csv("order_products_sample.csv")
orders = pd.read_csv("orders_sample.csv")
order_products = dd.from_pandas(order_products, npartitions=4)
orders = dd.from_pandas(orders, npartitions=4)
order_products_types = {
"order_id": "Categorical",
"reordered": "BooleanNullable",
"product_name": "Categorical",
"aisle_id": "Categorical",
"department": "Categorical",
"order_time": "Datetime",
"order_product_id": "Integer",
}
order_types = {
"order_id": "Integer",
"user_id": "Categorical",
"order_time": "Datetime",
}
es = ft.EntitySet("instacart_sample")
es.add_dataframe(dataframe_name="order_products",
dataframe=order_products,
index="order_product_id",
logical_types=order_products_types,
time_index="order_time")
es.add_dataframe(dataframe_name="orders",
dataframe=orders,
index="order_id",
logical_types=order_types,
time_index="order_time")
es.add_relationship("orders", "order_id", "order_products", "order_id")
es.normalize_dataframe(base_dataframe_name="orders", new_dataframe_name="users", index="user_id")
feature_matrix, features = ft.dfs(target_dataframe_name="users",
entityset=es,
verbose=True)
```
#### Full Stack Trace:
<details>
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-15-79fb37be6589> in <module>
----> 1 feature_matrix, features = ft.dfs(target_dataframe_name="users",
2 entityset=es,
3 verbose=True)
~/dev/featuretools/featuretools/utils/entry_point.py in function_wrapper(*args, **kwargs)
38 ep.on_error(error=e,
39 runtime=runtime)
---> 40 raise e
41
42 # send return value
~/dev/featuretools/featuretools/utils/entry_point.py in function_wrapper(*args, **kwargs)
30 # call function
31 start = time.time()
---> 32 return_value = func(*args, **kwargs)
33 runtime = time.time() - start
34 except Exception as e:
~/dev/featuretools/featuretools/synthesis/dfs.py in dfs(dataframes, relationships, entityset, target_dataframe_name, cutoff_time, instance_ids, agg_primitives, trans_primitives, groupby_trans_primitives, allowed_paths, max_depth, ignore_dataframes, ignore_columns, primitive_options, seed_features, drop_contains, drop_exact, where_primitives, max_features, cutoff_time_in_index, save_progress, features_only, training_window, approximate, chunk_size, n_jobs, dask_kwargs, verbose, return_types, progress_callback, include_cutoff_time)
276 return features
277
--> 278 feature_matrix = calculate_feature_matrix(features,
279 entityset=entityset,
280 cutoff_time=cutoff_time,
~/dev/featuretools/featuretools/computational_backends/calculate_feature_matrix.py in calculate_feature_matrix(features, entityset, cutoff_time, instance_ids, dataframes, relationships, cutoff_time_in_index, training_window, approximate, save_progress, verbose, chunk_size, n_jobs, dask_kwargs, progress_callback, include_cutoff_time)
299 include_cutoff_time=include_cutoff_time)
300 else:
--> 301 feature_matrix = calculate_chunk(cutoff_time=cutoff_time_to_pass,
302 chunk_size=chunk_size,
303 feature_set=feature_set,
~/dev/featuretools/featuretools/computational_backends/calculate_feature_matrix.py in calculate_chunk(cutoff_time, chunk_size, feature_set, entityset, approximate, training_window, save_progress, no_unapproximated_aggs, cutoff_df_time_col, target_time, pass_columns, progress_bar, progress_callback, include_cutoff_time, schema)
374 time_last,
375 training_window=training_window)
--> 376 _feature_matrix = calculator.run(ids,
377 progress_callback=update_progress_callback,
378 include_cutoff_time=include_cutoff_time)
~/dev/featuretools/featuretools/computational_backends/feature_set_calculator.py in run(self, instance_ids, progress_callback, include_cutoff_time)
109 target_dataframe = self.entityset[self.feature_set.target_df_name]
110
--> 111 self._calculate_features_for_dataframe(dataframe_name=self.feature_set.target_df_name,
112 feature_trie=feature_trie,
113 df_trie=df_trie,
~/dev/featuretools/featuretools/computational_backends/feature_set_calculator.py in _calculate_features_for_dataframe(self, dataframe_name, feature_trie, df_trie, full_dataframe_trie, precalculated_trie, filter_column, filter_values, parent_data, progress_callback, include_cutoff_time)
220 query_values = filter_values
221
--> 222 df = self.entityset.query_by_values(dataframe_name=dataframe_name,
223 instance_vals=query_values,
224 column_name=query_column,
~/dev/featuretools/featuretools/entityset/entityset.py in query_by_values(self, dataframe_name, instance_vals, column_name, columns, time_last, training_window, include_cutoff_time)
1400 # Note: Woodwork stores categorical columns with a `string` dtype for Koalas
1401 if dataframe.ww.columns[column_name].is_categorical and not is_instance(df, ks, 'DataFrame'):
-> 1402 categories = pd.api.types.CategoricalDtype(categories=dataframe[column_name].cat.categories)
1403 df[column_name] = df[column_name].astype(categories)
1404
~/dev/featuretools/env/lib/python3.8/site-packages/dask/dataframe/categorical.py in categories(self)
223 "`df.categorize()` beforehand to ensure known categories"
224 )
--> 225 raise NotImplementedError(msg)
226 return self._delegate_property(self._series._meta, "cat", "categories")
227
NotImplementedError: `df.column.cat.categories` with unknown categories is not supported. Please use `column.cat.as_known()` or `df.categorize()` beforehand to ensure known categories
```
</details>
#### Output of ``featuretools.show_info()``
<details>
Featuretools version: 1.3.0
Featuretools installation directory: /Users/nate.parsons/dev/featuretools/featuretools
SYSTEM INFO
-----------
python: 3.8.5.final.0
python-bits: 64
OS: Darwin
OS-release: 20.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
INSTALLED VERSIONS
------------------
numpy: 1.21.1
pandas: 1.3.2
tqdm: 4.62.0
PyYAML: 5.4.1
cloudpickle: 1.6.0
dask: 2021.7.2
distributed: 2021.7.2
psutil: 5.8.0
pip: 21.3.1
setuptools: 47.1.0
</details>
| closed | 2022-01-14T21:19:36Z | 2024-05-10T15:54:22Z | https://github.com/alteryx/featuretools/issues/1845 | [
"bug"
] | thehomebrewnerd | 1 |
SYSTRAN/faster-whisper | deep-learning | 1,169 | Version 1.1.0 has onnxruntime thread affinity crash | Updated from 1.0.3 to 1.1.0. Now an onnxruntime thread affinity crash occurs each time. Both versions run on a Nvidia A40 with 4 CPU cores, 48GB VRAM and 16GB RAM (on a private Replicate server). Shouldn't be a hardware issue. Our model config:
```
self.whisper_model = WhisperModel(
"large-v2",
device="cuda",
compute_type="float16",
cpu_threads=4,
num_workers=1
)
...
options = dict(
vad_filter=True,
vad_parameters=dict(min_silence_duration_ms=1000),
initial_prompt=prompt,
word_timestamps=True,
language=language,
log_progress=True,
hotwords=prompt
)
segments, transcript_info = self.whisper_model.transcribe(audio=audio_file, **options)
```
Also tried this:
```
import os
os.environ["ORT_DISABLE_CPU_AFFINITY"] = "1"
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["OPENBLAS_NUM_THREADS"] = "4"
os.environ["MKL_NUM_THREADS"] = "4"
os.environ["VECLIB_MAXIMUM_THREADS"] = "4"
os.environ["NUMEXPR_NUM_THREADS"] = "4"
```
But to no avail. Any suggestions? Below the crash log.
> Loading large-v2 model...
> Done loading large-v2 model, took: 75.503 seconds
> Starting transcribing
> INFO:faster_whisper:Processing audio with duration 03:25.706
> 2024-11-22 19:33:53.322733977 [E:onnxruntime:Default, env.cc:234 ThreadMain] pthread_setaffinity_np failed for thread: 785, index: 1, mask: {2, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
> INFO:faster_whisper:VAD filter removed 00:19.722 of audio
> DEBUG:faster_whisper:VAD filter kept the following audio segments: [00:00.048 -> 01:07.440], [01:07.984 -> 03:06.576]
> 0%| | 0/185.98 [00:00<?, ?seconds/s]DEBUG:faster_whisper:Processing segment at 00:00.000
> Traceback (most recent call last):
> File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/runner.py", line 417, in _handle_done
> f.result()
> File "/root/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/_base.py", line 451, in result
> return self.__get_result()
> File "/root/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
> raise self._exception
> cog.server.exceptions.FatalWorkerException: Prediction failed for an unknown reason. It might have run out of memory? (exitcode -6)
The cog.yaml with dependencies looks like this:
```
build:
gpu: true
system_packages:
- "ffmpeg"
- "libmagic1"
python_version: "3.10"
python_packages:
# Core ML packages
- "torch==2.3.0"
- "torchaudio==2.3.0"
- "faster-whisper==1.1.0"
- "pyannote-audio==3.3.1"
- "onnxruntime"
# API and utility packages
- "requests==2.31.0"
- "firebase-admin==6.4.0"
- "google-generativeai==0.3.2"
- "babel==2.14.0"
- "openai==1.12.0"
- "supabase==2.10.0"
- "kalyke-apns==1.0.3"
- "numpy<2.0.0"
run:
- "pip install --upgrade pip"
- "echo env is ready!"
predict: "predict.py:Predictor"
```
Also tried removing the onnxruntime dependency or setting it to a specific gpu version. But nothing fixes the issue. Anyone with ideas (@MahmoudAshraf97) ?
If the `cpu` is used as `device` on `WhisperModel` the onnxruntime error still shows in the logs but there is no crash and transcribing finishes successfully.
| closed | 2024-11-23T06:06:30Z | 2024-12-12T12:24:13Z | https://github.com/SYSTRAN/faster-whisper/issues/1169 | [] | Appfinity-development | 9 |
mherrmann/helium | web-scraping | 76 | helium.drag() | Hi, can someone help me with some examples of this?
I have tried everything, but nothing works for me.
helium.drag("Text Locator", to=helium.S(".dragzone"))
also
helium.drag(helium.Point(x, y), helium.Point(x, y)), where I got the coordinates from helium.Text(element).x | closed | 2022-01-10T20:16:00Z | 2022-01-17T10:51:54Z | https://github.com/mherrmann/helium/issues/76 | [] | petrisorionel | 0 |
Netflix/metaflow | data-science | 1,594 | Implement step function terminate | I was looking for a way to stop a running step function and ran across this TODO in metaflow/plugins/aws/step_functions/step_functions_client.py:
```
def terminate_execution(self, state_machine_arn, execution_arn):
# TODO
pass
```
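For anyone needing this before it lands upstream, the missing body looks like a thin wrapper over boto3's Step Functions `stop_execution` call. A sketch, with the client passed in explicitly for testability and the `cause` text being an assumption:

```python
def terminate_execution(sfn_client, state_machine_arn, execution_arn,
                        cause="Terminated via Metaflow"):
    # Step Functions' StopExecution API only needs the execution ARN;
    # state_machine_arn is kept to mirror the existing method signature.
    return sfn_client.stop_execution(executionArn=execution_arn, cause=cause)
```

Plugging the real boto3 `stepfunctions` client in for `sfn_client` would make this match the surrounding class methods.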
Any chance of this getting implemented in the near future? | closed | 2023-10-17T22:06:48Z | 2024-02-27T15:23:33Z | https://github.com/Netflix/metaflow/issues/1594 | [] | martinbattentive | 2 |
cvat-ai/cvat | tensorflow | 8,613 | Add multiple selectable attribute values | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
Currently, CVAT's label attributes do not support multiple selections. For example, a checkbox can only be true or false, and a radio can select only one value out of several. However, when labeling you often need to select multiple values. For example, I have a label `chart` with an attribute `chart_type` that contains multiple values (line, bar, pie, etc.), and some charts may match several of these values at the same time; in that case multiple selections are required. The current workaround is to make each value its own checkbox attribute, but this increases the complexity of the attribute list.
### Describe the solution you'd like
Add an attribute that can select multiple values
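For concreteness, CVAT's raw label editor takes a JSON spec; a hypothetical version of this request could look like the following, where `"input_type": "multiselect"` is invented here, since it is exactly the feature being requested and does not exist in CVAT today:

```json
[
  {
    "name": "chart",
    "attributes": [
      {
        "name": "chart_type",
        "input_type": "multiselect",
        "values": ["line", "bar", "pie"],
        "default_value": "",
        "mutable": true
      }
    ]
  }
]
```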
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-10-30T10:16:03Z | 2024-10-30T11:06:20Z | https://github.com/cvat-ai/cvat/issues/8613 | [
"duplicate",
"enhancement"
] | huayecaibcc | 1 |
vitalik/django-ninja | pydantic | 327 | Get operation Id for any given request. | Is there an easy way to get `operation_id` for a given API request?
This is what I have so far:
```python
from django.urls import resolve
def get_operation_id(request: HttpRequest):
view_func, _, _ = resolve(request.path)
klass = view_func.__self__
operation, _ = klass._find_operation(request)
return operation.operation_id or klass.api.get_openapi_operation_id(operation)
```
I would like to use the `operation_id` for permissions checks.
```python
class NinjaAuth(SessionAuth):
"""
Reusing Django session authentication
"""
def authenticate(self, request: HttpRequest, key: Optional[str]) -> Optional[Any]:
user = super().authenticate(request=request, key=key)
if user is not None:
operation_id = get_operation_id(request=request)
perms = (operation_id,)
            if not user.has_perms(perms):
raise PermissionDenied("You do not have permission to perform this action.")
return user
``` | closed | 2022-01-14T20:08:53Z | 2022-07-02T15:26:57Z | https://github.com/vitalik/django-ninja/issues/327 | [] | mishbahr | 2 |
babysor/MockingBird | deep-learning | 407 | Can I re-run preprocessing in the middle of training? | For example, if I want to add or modify the source audio to get a better training result without retraining from scratch, is there a way to achieve that? | closed | 2022-02-27T05:26:30Z | 2022-03-02T02:20:19Z | https://github.com/babysor/MockingBird/issues/407 | [] | yrsn509 | 2 |
HumanSignal/labelImg | deep-learning | 1,028 | How to add polygon shape instead of rectangle | <!--
Please provide as much as detail and example as you can.
You can add screenshots if appropriate.
-->
- **OS:**
- **PyQt version:**
| open | 2024-02-18T13:12:47Z | 2024-02-18T13:12:47Z | https://github.com/HumanSignal/labelImg/issues/1028 | [] | chiru123-b | 0 |
getsentry/sentry | python | 86,828 | Widget builder should display more prominent on-demand warning | ### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
Open the widget builder and build an on-demand widget. e.g. selecting a custom tag
### Expected Result
There should be a more prominent warning that my widget is going to fetch on-demand data, and that the data may not be the same once it is saved.
### Actual Result
There will be a small warning icon next to the filter or group-by where the custom tag is selected, but it's not very noticeable.
### Product Area
Unknown
### Link
_No response_
### DSN
_No response_
### Version
_No response_ | open | 2025-03-11T18:52:35Z | 2025-03-11T18:52:48Z | https://github.com/getsentry/sentry/issues/86828 | [
"Product Area: Dashboards"
] | narsaynorath | 1 |