Dataset columns:
- repo_name: string, length 9–75
- topic: string, 30 classes
- issue_number: int64, 1–203k
- title: string, length 1–976
- body: string, length 0–254k
- state: string, 2 classes
- created_at: string, length 20
- updated_at: string, length 20
- url: string, length 38–105
- labels: list, length 0–9
- user_login: string, length 1–39
- comments_count: int64, 0–452
nonebot/nonebot2
fastapi
2,943
Plugin: nonebot-plugin-githubmodels
### PyPI project name

nonebot-plugin-githubmodels

### Plugin import package name

githubmodels

### Labels

[]

### Plugin configuration

```dotenv
GITHUB_TOKEN="hxjxnfkdmzjs"
```
closed
2024-09-12T15:43:52Z
2024-09-12T15:51:08Z
https://github.com/nonebot/nonebot2/issues/2943
[ "Plugin" ]
lyqgzbl
2
CorentinJ/Real-Time-Voice-Cloning
deep-learning
1,115
I can't make sense of this error... can somebody please help me
I made it to step 5 of the installation without problems, and even received "all test passed" when running `python demo_cli.py`. However, when I get to launching the toolbox with `python demo_toolbox.py -d <datasets_root>` (where datasets_root points to train-clean-100, downloaded per step 4), I receive this error:

```
Traceback (most recent call last):
  File "/media/user/drive2/Documents/Real-Time-Voice-Cloning/demo_toolbox.py", line 5, in <module>
    from toolbox import Toolbox
  File "/media/user/drive2/Documents/Real-Time-Voice-Cloning/toolbox/__init__.py", line 11, in <module>
    from toolbox.ui import UI
  File "/media/user/drive2/Documents/Real-Time-Voice-Cloning/toolbox/ui.py", line 11, in <module>
    import umap
  File "/home/user/anaconda3/lib/python3.9/site-packages/umap/__init__.py", line 2, in <module>
    from .umap_ import UMAP
  File "/home/user/anaconda3/lib/python3.9/site-packages/umap/umap_.py", line 47, in <module>
    from pynndescent import NNDescent
  File "/home/user/anaconda3/lib/python3.9/site-packages/pynndescent/__init__.py", line 3, in <module>
    from .pynndescent_ import NNDescent, PyNNDescentTransformer
  File "/home/user/anaconda3/lib/python3.9/site-packages/pynndescent/pynndescent_.py", line 16, in <module>
    import pynndescent.sparse as sparse
  File "/home/user/anaconda3/lib/python3.9/site-packages/pynndescent/sparse.py", line 229, in <module>
    def sparse_mul(ind1, data1, ind2, data2):
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/decorators.py", line 219, in wrapper
    disp.compile(sig)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/dispatcher.py", line 965, in compile
    cres = self._compiler.compile(args, return_type)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/dispatcher.py", line 129, in compile
    raise retval
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/dispatcher.py", line 139, in _compile_cached
    retval = self._compile_core(args, return_type)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/dispatcher.py", line 152, in _compile_core
    cres = compiler.compile_extra(self.targetdescr.typing_context,
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler.py", line 693, in compile_extra
    return pipeline.compile_extra(func)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler.py", line 429, in compile_extra
    return self._compile_bytecode()
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler.py", line 497, in _compile_bytecode
    return self._compile_core()
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler.py", line 476, in _compile_core
    raise e
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler.py", line 463, in _compile_core
    pm.run(self.state)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler_machinery.py", line 353, in run
    raise patched_exception
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler_machinery.py", line 341, in run
    self._runPass(idx, pass_inst, state)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler_lock.py", line 35, in _acquire_compile_lock
    return func(*args, **kwargs)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler_machinery.py", line 296, in _runPass
    mutated |= check(pss.run_pass, internal_state)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/compiler_machinery.py", line 269, in check
    mangled = func(compiler_state)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/typed_passes.py", line 105, in run_pass
    typemap, return_type, calltypes, errs = type_inference_stage(
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/typed_passes.py", line 83, in type_inference_stage
    errs = infer.propagate(raise_errors=raise_errors)
  File "/home/user/anaconda3/lib/python3.9/site-packages/numba/core/typeinfer.py", line 1086, in propagate
    raise errors[0]
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
- Resolution failure for literal arguments:
No implementation of function Function(<function impl_append at 0x7f87bea19670>) found for signature:
 >>> impl_append(ListType[int32], int32)
There are 2 candidate implementations:
  - Of which 2 did not match due to:
  Overload in function 'impl_append': File: numba/typed/listobject.py: Line 592.
    With argument(s): '(ListType[int32], int32)':
   Rejected as the implementation raised a specific error:
     TypingError: Failed in nopython mode pipeline (step: nopython frontend)
     Untyped global name 'ListStatus': Cannot determine Numba type of <class 'shibokensupport.enum_310.EnumMeta'>
     File "../../../../../home/user/anaconda3/lib/python3.9/site-packages/numba/typed/listobject.py", line 602:
     def impl(l, item):
         <source elided>
         status = _list_append(l, casteditem)
         if status == ListStatus.LIST_OK:
         ^
  raised from /home/user/anaconda3/lib/python3.9/site-packages/numba/core/typeinfer.py:1480
- Resolution failure for non-literal arguments:
None
During: resolving callee type: BoundFunction((<class 'numba.core.types.containers.ListType'>, 'append') for ListType[int32])
During: typing of call at /home/user/anaconda3/lib/python3.9/site-packages/pynndescent/sparse.py (244)

File "../../../../../home/user/anaconda3/lib/python3.9/site-packages/pynndescent/sparse.py", line 244:
def sparse_mul(ind1, data1, ind2, data2):
    <source elided>
        if val != 0:
            result_ind.append(j1)
            ^
```

Can somebody please help me make sense of this issue? Did I go wrong somewhere in the installation?
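The key line in the typing error is `Cannot determine Numba type of <class 'shibokensupport.enum_310.EnumMeta'>`: a Qt/PySide compatibility shim (shiboken) has patched Python's `enum` module, and numba's typed-list internals (`ListStatus`) can no longer be typed. That points to a library-version clash rather than a mistake in the installation steps. A commonly suggested remedy (an assumption, not verified here) is upgrading the JIT stack so it is not tripped up by the patched enums:

```
pip install --upgrade numba llvmlite pynndescent umap-learn
```

If that does not help, trying the toolbox in a fresh environment without PySide packages installed would isolate whether the shiboken enum patching is the trigger.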
closed
2022-09-22T07:52:13Z
2023-01-08T08:55:12Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1115
[]
ikesaber
0
ultralytics/yolov5
pytorch
12,897
Running Hyperparameter Evolution raises ValueError
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report. ### YOLOv5 Component Training, Evolution ### Bug I was trying to train my custom model locally on `Nvidia RTX 3050` but it raises a ValueError. I checked and it raises the same error on coco128 dataset. This is the dump: ``` (.venv-cuda121) PS C:\workspace\adis\yolov5> python train.py --img 640 --batch 4 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache --evolve train: weights=yolov5s.pt, cfg=, data=coco128.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=3, batch_size=4, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=300, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=ram, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False github: up to date with https://github.com/ultralytics/yolov5 YOLOv5 v7.0-296-gae4ef3b2 Python-3.11.1 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3050 Laptop GPU, 4096MiB) hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.01041, hsv_s=0.54703, hsv_v=0.27739, degrees=0.0, translate=0.04591, scale=0.75544, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=0.85834, mixup=0.04266, copy_paste=0.0, anchors=3 Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 runs in Comet Overriding model.yaml anchors with anchors=3 from n params module arguments 0 -1 1 3520 models.common.Conv [3, 
32, 6, 2, 2] 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] 2 -1 1 18816 models.common.C3 [64, 64, 1] 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] 4 -1 2 115712 models.common.C3 [128, 128, 2] 5 -1 1 295424 models.common.Conv [128, 256, 3, 2] 6 -1 3 625152 models.common.C3 [256, 256, 3] 7 -1 1 1180672 models.common.Conv [256, 512, 3, 2] 8 -1 1 1182720 models.common.C3 [512, 512, 1] 9 -1 1 656896 models.common.SPPF [512, 512, 5] 10 -1 1 131584 models.common.Conv [512, 256, 1, 1] 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] 12 [-1, 6] 1 0 models.common.Concat [1] 13 -1 1 361984 models.common.C3 [512, 256, 1, False] 14 -1 1 33024 models.common.Conv [256, 128, 1, 1] 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] 16 [-1, 4] 1 0 models.common.Concat [1] 17 -1 1 90880 models.common.C3 [256, 128, 1, False] 18 -1 1 147712 models.common.Conv [128, 128, 3, 2] 19 [-1, 14] 1 0 models.common.Concat [1] 20 -1 1 296448 models.common.C3 [256, 256, 1, False] 21 -1 1 590336 models.common.Conv [256, 256, 3, 2] 22 [-1, 10] 1 0 models.common.Concat [1] 23 -1 1 1182720 models.common.C3 [512, 512, 1, False] 24 [17, 20, 23] 1 229245 models.yolo.Detect [80, [[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]], [128, 256, 512]] Model summary: 214 layers, 7235389 parameters, 7235389 gradients, 16.6 GFLOPs Transferred 348/349 items from yolov5s.pt AMP: checks passed optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0005), 60 bias train: Scanning C:\workspace\adis\datasets\coco128\labels\train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 128/128 [00:00<?, ?it/s train: Caching images (0.1GB ram): 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 128/128 [00:00<00:00, 1829.21it/s] val: Scanning C:\workspace\adis\datasets\coco128\labels\train2017.cache... 
126 images, 2 backgrounds, 0 corrupt: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 128/128 [00:00<?, ?it/s] AutoAnchor: 0.36 anchors/target, 0.097 Best Possible Recall (BPR). Anchors are a poor fit to dataset , attempting to improve... AutoAnchor: WARNING Extremely small objects found: 3 of 929 labels are <3 pixels in size AutoAnchor: Running kmeans for 9 anchors on 928 points... AutoAnchor: Evolving anchors with Genetic Algorithm: fitness = 0.6715: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1000/1000 [00:00<00:00, 2909.40it/s] AutoAnchor: thr=0.25: 0.9925 best possible recall, 3.71 anchors past thr AutoAnchor: n=9, img_size=640, metric_all=0.261/0.672-mean/best, past_thr=0.478-mean: 11,11, 20,27, 51,57, 125,86, 92,175, 140,287, 280,226, 378,368, 549,444 AutoAnchor: Done (optional: update model *.yaml to use these anchors in the future) Plotting labels to runs\evolve\exp4\labels.jpg... Image sizes 640 train, 640 val Using 4 dataloader workers Logging results to runs\evolve\exp4 Starting training for 3 epochs... 
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size 0%| | 0/32 [00:00<?, ?it/s] Traceback (most recent call last): File "C:\workspace\adis\yolov5\train.py", line 848, in <module> main(opt) File "C:\workspace\adis\yolov5\train.py", line 754, in main results = train(hyp.copy(), opt, device, callbacks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\workspace\adis\yolov5\train.py", line 356, in train for i, (imgs, targets, paths, _) in pbar: # batch ------------------------------------------------------------- File "C:\workspace\adis\yolov5\.venv-cuda121\Lib\site-packages\tqdm\std.py", line 1181, in __iter__ for obj in iterable: File "C:\workspace\adis\yolov5\utils\dataloaders.py", line 239, in __iter__ yield next(self.iterator) ^^^^^^^^^^^^^^^^^^^ File "C:\workspace\adis\yolov5\.venv-cuda121\Lib\site-packages\torch\utils\data\dataloader.py", line 631, in __next__ data = self._next_data() ^^^^^^^^^^^^^^^^^ File "C:\workspace\adis\yolov5\.venv-cuda121\Lib\site-packages\torch\utils\data\dataloader.py", line 1346, in _next_data return self._process_data(data) ^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\workspace\adis\yolov5\.venv-cuda121\Lib\site-packages\torch\utils\data\dataloader.py", line 1372, in _process_data data.reraise() File "C:\workspace\adis\yolov5\.venv-cuda121\Lib\site-packages\torch\_utils.py", line 722, in reraise raise exception ValueError: Caught ValueError in DataLoader worker process 0. 
Original Traceback (most recent call last): File "C:\workspace\adis\yolov5\.venv-cuda121\Lib\site-packages\torch\utils\data\_utils\worker.py", line 308, in _worker_loop data = fetcher.fetch(index) ^^^^^^^^^^^^^^^^^^^^ File "C:\workspace\adis\yolov5\.venv-cuda121\Lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\workspace\adis\yolov5\.venv-cuda121\Lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] ~~~~~~~~~~~~^^^^^ File "C:\workspace\adis\yolov5\utils\dataloaders.py", line 777, in __getitem__ img, labels = mixup(img, labels, *self.load_mosaic(random.choice(self.indices))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\Python311\Lib\random.py", line 369, in choice if not seq: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` If this is an Nvidia related bug, then this is my info from nvidia-smi ``` Mon Apr 8 20:41:10 2024 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 546.21 Driver Version: 546.21 CUDA Version: 12.3 | |-----------------------------------------+----------------------+----------------------+ | GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA GeForce RTX 3050 ... 
WDDM | 00000000:01:00.0 Off | N/A | | N/A 36C P0 7W / 40W | 0MiB / 4096MiB | 0% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | No running processes found | +---------------------------------------------------------------------------------------+ ### Environment YOLOv5 v7.0-296-gae4ef3b2 Python-3.11.1 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3050 Laptop GPU, 4096MiB) ### Minimal Reproducible Example _No response_ ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
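For what it's worth, the final frame is telling: `random.choice(self.indices)` dies inside `if not seq`, the truthiness check that CPython 3.11's `random.choice` performs (the traceback shows `random.py`, line 369), and truth-testing a multi-element NumPy array raises exactly this ValueError. A minimal stdlib sketch of the mechanism and an index-based workaround (the `ArrayLike` class below is a stand-in mimicking ndarray truthiness, not YOLOv5 code):

```python
import random

class ArrayLike:
    """Stand-in for numpy.ndarray: len() and indexing work, truth-testing raises."""
    def __init__(self, items):
        self._items = list(items)
    def __len__(self):
        return len(self._items)
    def __getitem__(self, i):
        return self._items[i]
    def __bool__(self):
        # Mirrors numpy's behavior for arrays with more than one element.
        raise ValueError(
            "The truth value of an array with more than one element is "
            "ambiguous. Use a.any() or a.all()"
        )

indices = ArrayLike([0, 1, 2, 3])

# What random.choice does first on Python >= 3.11 -- and why it blows up:
try:
    if not indices:
        pass
except ValueError as e:
    print("truth test failed:", e)

# Selecting by index never truth-tests the sequence, so it works:
chosen = indices[random.randrange(len(indices))]
print("chosen:", chosen)
```

So a local workaround (hypothetical; the upstream fix may differ) would be replacing `random.choice(self.indices)` in `dataloaders.py` with `self.indices[random.randrange(len(self.indices))]`, or making `self.indices` a plain list.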
closed
2024-04-08T15:12:26Z
2024-10-20T19:43:15Z
https://github.com/ultralytics/yolov5/issues/12897
[ "bug", "Stale" ]
RAHUL01-09
3
microsoft/nni
deep-learning
5,031
How to set useActiveGpu=true in remote mode?
**Describe the issue**: I can use the NNI normally with Remote mode on the CPU, although it is very slow. When I try to use NNI to run the program on GPU under the remote mode, the program is always waiting but didn't run at all. One possible reason is that all of the GPUs in the remote machine are partly occupied and the NNI won't use the GPU until the GPU is totally free. When using the local environment, NNI allows me to set useActivateGpu=true to use the working GPU. However, in remote mode, it will throw the error that **"AttributeError: RemoteConfig does not have field(s) useactivegpu"** So I want to know how to set the config to use active GPU under remote Configuration. **Environment**: - NNI version: 2.7 - Training service (local|remote|pai|aml|etc): remote - Client OS: Ubuntu 20.04 - Server OS (for remote mode only): Ubuntu 20.04 - Python version: 3.8.5 - PyTorch/TensorFlow version: PyTorch v1.8.1 - Is conda/virtualenv/venv used?: use conda environment - Is running in Docker?: No **Configuration**: - Experiment config (remember to remove secrets!): ``` maxTrialNumber: 20 trialCommand: python main.py trialCodeDirectory: . 
trialGpuNumber: 2 trialConcurrency: 4 tuner: name: TPE classArgs: optimize_mode: maximize trainingService: useActiveGpu: true platform: remote reuseMode: true # gpuIndices: '0' machineList: - host: xxx.xxx.xx.xx user: xxx ssh_key_file: ~/.ssh/id_rsa ``` - Search space: ``` { "lr": {"_type": "choice", "_value": [0.002, 0.001, 0.0005]}, "l2": {"_type": "choice", "_value": [1e-5, 2e-5, 5e-5]} } ``` **Log message**: - nnimanager.log: - dispatcher.log: - nnictl stdout and stderr: ``` Traceback (most recent call last): File "/home/xxx/.miniconda3/envs/torch/bin/nnictl", line 8, in <module> sys.exit(parse_args()) File "/home/xxx/.miniconda3/envs/torch/lib/python3.8/site-packages/nni/tools/nnictl/nnictl.py", line 497, in parse_args args.func(args) File "/home/xxx/.miniconda3/envs/torch/lib/python3.8/site-packages/nni/tools/nnictl/launcher.py", line 77, in create_experiment config = ExperimentConfig.load(config_file) File "/home/xxx/.miniconda3/envs/torch/lib/python3.8/site-packages/nni/experiment/config/base.py", line 140, in load config = cls(**data) File "/home/xxx/.miniconda3/envs/torch/lib/python3.8/site-packages/nni/experiment/config/experiment_config.py", line 104, in __init__ self.training_service = utils.load_training_service_config(self.training_service) File "/home/xxx/.miniconda3/envs/torch/lib/python3.8/site-packages/nni/experiment/config/utils/internal.py", line 157, in load_training_service_config return cls(**config) File "/home/xxx/.miniconda3/envs/torch/lib/python3.8/site-packages/nni/experiment/config/base.py", line 93, in __init__ raise AttributeError(f'{class_name} does not have field(s) {fields}') AttributeError: RemoteConfig does not have field(s) useactivegpu ``` <!-- Where can you find the log files: LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout --> **How to reproduce it?**:
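The error itself comes from config validation: `useActiveGpu` is not a field of `RemoteConfig`. In NNI 2.x the option belongs, if I read the reference configuration correctly (worth verifying against the docs for your exact version), to each entry of `machineList`, so moving it under the machine should pass validation:

```yaml
trainingService:
  platform: remote
  reuseMode: true
  machineList:
    - host: xxx.xxx.xx.xx
      user: xxx
      ssh_key_file: ~/.ssh/id_rsa
      useActiveGpu: true
```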
closed
2022-07-30T06:43:34Z
2022-09-07T10:42:16Z
https://github.com/microsoft/nni/issues/5031
[ "user raised", "support", "remote" ]
unikcc
1
absent1706/sqlalchemy-mixins
sqlalchemy
32
Does BaseModel.set_session(session) only run once?
Or does BaseModel.set_session(session) need to be called for every request?
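As far as I can tell from the mixin's design (worth confirming against the sqlalchemy-mixins source), `set_session` simply stores the session object class-wide, so one call at startup is enough, provided you hand it a `scoped_session` (or similar proxy) that resolves to the correct per-request session each time it is used; only plain, short-lived `Session` objects would force re-registering per request. A stdlib sketch of that register-once pattern (all names here are stand-ins, not the real API):

```python
# Register-once pattern: the base class stores the session (or a proxy to it)
# class-wide, so a single set_session() call at startup serves all later use.
# ScopedSessionStub and BaseModelStub are hypothetical stand-ins.

class ScopedSessionStub:
    """Like sqlalchemy.orm.scoped_session: resolves to the session for the
    current scope (thread/request) each time it is invoked."""
    def __init__(self):
        self._n = 0
    def __call__(self):
        self._n += 1
        return f"session-{self._n}"

class BaseModelStub:
    _session = None

    @classmethod
    def set_session(cls, session):
        # Stored once on the base class; inherited by every model subclass.
        cls._session = session

    @classmethod
    def current_session(cls):
        # With a scoped proxy, each access yields the right live session.
        return cls._session()

# Startup: one call.
BaseModelStub.set_session(ScopedSessionStub())

# Later requests reuse the registered proxy without re-registering.
s1 = BaseModelStub.current_session()
s2 = BaseModelStub.current_session()
print(s1, s2)
```

With SQLAlchemy this would correspond to something like calling `BaseModel.set_session(scoped_session(sessionmaker(bind=engine)))` once at application startup.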
closed
2020-01-11T07:11:46Z
2020-03-31T16:21:00Z
https://github.com/absent1706/sqlalchemy-mixins/issues/32
[]
scil
1
pytorch/vision
machine-learning
8,389
Compiling resize_image: function interpolate not_implemented
### ๐Ÿ› Describe the bug I am compiling a method (mode=default, fullgrph=True), which calls torchvision.transforms.v2.functional.resize_image. However, I receive an error, which indicates that the interpolate method is not implemented. I am using pytorch lightning and weirdly this only happens during validation. It works fine during training. ``` Failed running call_function <function interpolate at 0x7f60c39593a0>(*(FakeTensor(..., device='cuda:0', size=(75, 3, 256, 256)),), **{'size': [224, 224], 'mode': 'bicubic', 'align_corners': False, 'antialias': True}): Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemented: - tensor subclass <class 'torch._subclasses.fake_tensor.FakeTensor'> For more information, try re-running with TORCH_LOGS=not_implemented from user code: File ... File "/some/python/file.py", line 54, in encode x = tv_func.resize_image( File ".../miniconda3/envs/deepmotion3/lib/python3.11/site-packages/torchvision/transforms/v2/functional/_geometry.py", line 260, in resize_image image = interpolate( Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True TypeError: Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemented: - tensor subclass <class 'torch._subclasses.fake_tensor.FakeTensor'> For more information, try re-running with TORCH_LOGS=not_implemented The above exception was the direct cause of the following exception: RuntimeError: Failed running call_function <function interpolate at 0x7f60c39593a0>(*(FakeTensor(..., device='cuda:0', size=(75, 3, 256, 256)),), **{'size': [224, 224], 'mode': 'bicubic', 'align_corners': False, 'antialias': True}): Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemented: - tensor subclass <class 
'torch._subclasses.fake_tensor.FakeTensor'> For more information, try re-running with TORCH_LOGS=not_implemented During handling of the above exception, another exception occurred: File "/some/python/file.py", line 299, in validation_step loss = self.shared_step(step_batch, train=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/some/python/file.py", line 586, in run trainer.fit(model, datamodule, ckpt_path=opt.resume_from_checkpoint) File "/some/python/file.py", line 702, in <module> run(opt, trainer, datamodule, model) torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function interpolate at 0x7f60c39593a0>(*(FakeTensor(..., device='cuda:0', size=(75, 3, 256, 256)),), **{'size': [224, 224], 'mode': 'bicubic', 'align_corners': False, 'antialias': True}): Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemented: - tensor subclass <class 'torch._subclasses.fake_tensor.FakeTensor'> For more information, try re-running with TORCH_LOGS=not_implemented from user code: File ... File "/some/python/file.py", line 54, in encode x = tv_func.resize_image( File ".../miniconda3/envs/deepmotion3/lib/python3.11/site-packages/torchvision/transforms/v2/functional/_geometry.py", line 260, in resize_image image = interpolate( Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True ``` ### Versions Collecting environment information... 
PyTorch version: 2.2.1 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.4 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0] (64-bit runtime) Python platform: Linux-5.15.0-102-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB GPU 1: NVIDIA A100-PCIE-40GB GPU 2: NVIDIA A100-PCIE-40GB GPU 3: NVIDIA A100-PCIE-40GB GPU 4: NVIDIA A100-PCIE-40GB GPU 5: NVIDIA A100-PCIE-40GB GPU 6: NVIDIA A100-PCIE-40GB GPU 7: NVIDIA A100-PCIE-40GB Nvidia driver version: 535.171.04 cuDNN version: Probably one of the following: /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.1.0 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.0 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.0 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.0 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.0 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.0 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Vendor ID: AuthenticAMD Model name: AMD EPYC 7452 32-Core Processor CPU family: 23 Model: 49 Thread(s) per core: 1 Core(s) per socket: 32 Socket(s): 2 Stepping: 0 Frequency boost: enabled CPU max MHz: 2350.0000 CPU min MHz: 1500.0000 BogoMIPS: 4700.09 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr 
sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es Virtualization: AMD-V L1d cache: 2 MiB (64 instances) L1i cache: 2 MiB (64 instances) L2 cache: 32 MiB (64 instances) L3 cache: 256 MiB (16 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-31 NUMA node1 CPU(s): 32-63 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled Vulnerability Spec rstack overflow: Mitigation; SMT disabled Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] pytorch-lightning==2.1.3 [pip3] torch==2.2.1 [pip3] torch-fidelity==0.3.0 [pip3] torchaudio==2.2.1 [pip3] 
torchdiffeq==0.2.3 [pip3] torchmetrics==1.3.2 [pip3] torchvision==0.17.1 [pip3] triton==2.2.0 [conda] blas 1.0 mkl [conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch [conda] mkl 2023.1.0 h213fc3f_46344 [conda] mkl-service 2.4.0 py311h5eee18b_1 [conda] mkl_fft 1.3.8 py311h5eee18b_0 [conda] mkl_random 1.2.4 py311hdb19cb5_0 [conda] numpy 1.26.4 py311h08b1b3b_0 [conda] numpy-base 1.26.4 py311hf175353_0 [conda] pytorch 2.2.1 py3.11_cuda12.1_cudnn8.9.2_0 pytorch [conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch [conda] pytorch-lightning 2.1.3 pyhd8ed1ab_0 conda-forge [conda] pytorch-mutex 1.0 cuda pytorch [conda] torch-fidelity 0.3.0 pypi_0 pypi [conda] torchaudio 2.2.1 py311_cu121 pytorch [conda] torchdiffeq 0.2.3 pypi_0 pypi [conda] torchmetrics 1.3.2 pyhd8ed1ab_0 conda-forge [conda] torchtriton 2.2.0 py311 pytorch [conda] torchvision 0.17.1 py311_cu121 pytorch
open
2024-04-19T18:04:08Z
2024-04-29T13:10:51Z
https://github.com/pytorch/vision/issues/8389
[]
treasan
1
JaidedAI/EasyOCR
deep-learning
629
Why are there some pictures with errors?
With the same code, some pictures raise errors while others work fine. Here is the error message:

```
CUDA not available - defaulting to CPU. Note: This module is much faster with a GPU.
Traceback (most recent call last):
  File "D:/code/python/base/test1.py", line 6, in <module>
    result = reader.readtext('invoice.png')
  File "D:\python\base\lib\site-packages\easyocr\easyocr.py", line 397, in readtext
    filter_ths, y_ths, x_ths, False, output_format)
  File "D:\python\base\lib\site-packages\easyocr\easyocr.py", line 325, in recognize
    image_list, max_width = get_image_list(h_list, f_list, img_cv_grey, model_height = imgH)
  File "D:\python\base\lib\site-packages\easyocr\utils.py", line 540, in get_image_list
    maximum_y,maximum_x = img.shape
AttributeError: 'NoneType' object has no attribute 'shape'

Process finished with exit code 1
```

**This is the failing code:**

```python
import easyocr
import time

start = time.time()
reader = easyocr.Reader(['ch_sim','en'])
result = reader.readtext('发票.png')
for word in result:
    print(word[1])
end = time.time()
print(end - start)
```

**And when I change it to this, it works:**

```python
import easyocr
import time

start = time.time()
reader = easyocr.Reader(['ch_sim','en'])
img = open('invoice.png','rb')
source = img.read()
result = reader.readtext(source)
for word in result:
    print(word[1])
end = time.time()
print(end - start)
```
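A likely explanation (an assumption based on the traceback, where `img` is `None` by the time `.shape` is read): OpenCV's `imread`, which EasyOCR uses to load path arguments, cannot open paths containing non-ASCII characters such as `发票.png` on Windows and silently returns `None`, whereas passing raw bytes sidesteps OpenCV's path handling entirely. The bytes-reading half of the workaround is plain Python and easy to verify:

```python
import os
import tempfile

# Simulate the workaround's first half: a filename with non-ASCII characters
# ("发票.png", as in the failing snippet) read as raw bytes via plain Python,
# which does not depend on how OpenCV decodes filesystem paths.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "发票.png")
with open(path, "wb") as f:
    f.write(b"\x89PNG fake image bytes")

with open(path, "rb") as f:
    source = f.read()  # a bytes object, suitable for reader.readtext(source)

print(type(source).__name__, len(source))
```

When using OpenCV directly, the analogous trick is often `cv2.imdecode(np.fromfile(path, dtype=np.uint8), cv2.IMREAD_COLOR)`, since `np.fromfile` goes through Python's own path handling.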
closed
2021-12-24T10:22:22Z
2022-08-07T05:01:56Z
https://github.com/JaidedAI/EasyOCR/issues/629
[]
jinhuiDing
0
pykaldi/pykaldi
numpy
90
Why do I get different results when computing fbank features with pykaldi vs. Kaldi?
I installed pykaldi from source, then compared the fbank features it computes against the output of Kaldi's compute-fbank-feats on the same test.wav.

1. pykaldi code:

    sf_fbank = 16000.0
    m3 = SubVector(mean(s3, axis=0))
    f3 = fbank.compute_features(m3, sf_fbank, 1.00)

Resulting features:

    13.5067 10.8341 10.0170 ... 13.6531 13.5450 13.8087
    6.6261 5.5527 8.3816 ... 10.7848 10.2027 8.7894
    7.0736 8.8376 10.0984 ... 10.7526 9.8521 8.4278
    ... ⋱ ...
    9.5450 10.1321 10.1961 ... 9.6800 9.4741 9.2639
    8.4494 7.7848 7.4247 ... 10.3842 10.7975 9.4761
    4.6003 4.6669 8.1002 ... 10.0861 9.9920 9.1505

2. Kaldi, with sample_frq=16000.0 set in fbank.conf:

    compute-fbank-feats --verbose=0 --config=conf/fbank.conf scp,p:testfile.scp ark,t:- | less

Resulting features:

    13.50692 10.83265 10.02174 ... 13.64816 13.54409 13.81221
    6.617604 5.457501 8.411386 ... 10.66659 10.40139 8.803407
    7.078373 8.840377 10.10209 ... 10.81204 9.477433 8.874421
    ......
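One plausible cause of small mismatches like these (an assumption, not verified against this setup): Kaldi's frame extraction applies dither, i.e. small random noise, with `--dither=1.0` by default, before computing the filterbanks, so two runs or two front ends will never agree to full precision unless dither is disabled on both sides. For example:

```
# Kaldi: turn dither off on the command line (or in fbank.conf)
compute-fbank-feats --dither=0.0 --config=conf/fbank.conf scp,p:testfile.scp ark,t:-

# pykaldi: the mirrored option, assuming FbankOptions mirrors Kaldi's C++ structs
# opts = FbankOptions(); opts.frame_opts.dither = 0.0; fbank = Fbank(opts)
```

If the features still differ after disabling dither, comparing window type, number of mel bins, and snip-edges settings between the two configurations would be the next step.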
closed
2019-03-15T09:43:38Z
2019-03-19T05:53:05Z
https://github.com/pykaldi/pykaldi/issues/90
[]
liuchenbaidu
6
jupyter/docker-stacks
jupyter
2,083
After starting the container with NB_USER=root, NB_UID=0, and NB_GID=0, $HOME environment variable is still /home/jovyan
### What docker image(s) are you using?

datascience-notebook

### Host OS system

Ubuntu 22.04

### Host architecture

x86_64

### What Docker command are you running?

docker run -d --rm --user root -p 8888:8888 -e JUPYTER_TOKEN=123 -e NB_UID=0 -e NB_GID=0 -e NB_USER=root -e NOTEBOOK_ARGS="--allow-root" quay.io/jupyter/datascience-notebook:2024-01-16

### How to Reproduce the problem?

1. Start the container using the command above

```
ubuntu$ docker run -d --rm --user root -p 8888:8888 -e JUPYTER_TOKEN=123 -e NB_UID=0 -e NB_GID=0 -e NB_USER=root -e NOTEBOOK_ARGS="--allow-root" quay.io/jupyter/datascience-notebook:2024-01-16
025ef42b70c01de329f8b6371feb008525a0393284b3d873df7642d396f2ff2d
```

2. Exec into the container

```
ubuntu$ docker ps
CONTAINER ID   IMAGE                                             COMMAND                  CREATED         STATUS                   PORTS                                       NAMES
025ef42b70c0   quay.io/jupyter/datascience-notebook:2024-01-16   "tini -g -- start-no…"   8 seconds ago   Up 7 seconds (healthy)   0.0.0.0:8888->8888/tcp, :::8888->8888/tcp   happy_sutherland
70058a13a501   resero-svc-studies:latest                         "/bin/bash"              27 hours ago    Up 27 hours                                                          resero-devcontainer
ubuntu$ docker exec -u root happy_sutherland bash
```

3. Check the user

```
ubuntu$ whoami
ubuntu
```

4. Check $HOME

    (base) root@025ef42b70c0:~# echo $HOME
    /home/jovyan

NOTE: /etc/passwd is fine

```
(base) root@025ef42b70c0:~# cat /etc/passwd
root:x:0:0:root:/home/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
jovyan:x:1000:100::/home/jovyan:/bin/bash
```

### Command output

_No response_

### Expected behavior

`echo $HOME` should be `/home/root`

### Actual behavior

`echo $HOME` is `/home/jovyan`

### Anything else?

_No response_

### Latest Docker version

- [X] I've updated my Docker version to the latest available, and the issue persists
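Until the image's start-up scripts export the right value for a root NB_USER, overriding the variable explicitly at run time should work around it (untested here; it relies only on standard `docker run -e` behavior, where `-e` values take precedence over the image's environment):

```
docker run -d --rm --user root -p 8888:8888 \
  -e JUPYTER_TOKEN=123 -e NB_UID=0 -e NB_GID=0 -e NB_USER=root \
  -e HOME=/home/root -e NOTEBOOK_ARGS="--allow-root" \
  quay.io/jupyter/datascience-notebook:2024-01-16
```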
closed
2024-01-17T12:12:14Z
2024-01-17T12:31:08Z
https://github.com/jupyter/docker-stacks/issues/2083
[ "type:Bug" ]
anil-resero
3
Zeyi-Lin/HivisionIDPhotos
machine-learning
158
ๆƒณ่ฏทๆ•™ไธ‹ๅ„ไฝๅคงไฝฌ๏ผŒnodejsๆœ‰็›ธๅ…ณ็š„ๅบ“ๅฏไปฅๅฎž็Žฐ่ฏไปถ็…งๅ›พ็‰‡ๅค„็†็š„ๅŠŸ่ƒฝๅ˜›
closed
2024-09-21T11:45:49Z
2024-10-04T12:00:12Z
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/158
[]
Chandie-Zhang
3
deepfakes/faceswap
deep-learning
1,022
Windows installer broken
**Crash reports MUST be included when reporting bugs.** (check) CPU Supports SSE4 Instructions (check) Completed check for installed applications (check) Setting up for: cpu Downloading Miniconda3... Installing Miniconda3. This will take a few minutes... Miniconda3 installed. Initializing Conda... Creating Conda Virtual Environment... Error Creating Conda Virtual Environment Install Aborted **Describe the bug** Installer is broken **To Reproduce** Steps to reproduce the behavior: 1. Download installer (Windows) from github 2. Click "Install" 3. See error **Expected behavior** Just install **Screenshots** If applicable, add screenshots to help explain your problem. **Desktop (please complete the following information):** - OS: Windows 8.1 - Python Version 3.8.3rc1 - Conda Version: hearing about it for the first time, I never installed it. - Commit ID: What? - **Additional context** _--===+ Hardware +===--_ Intel Pentium (R) n3530 Intel HD Graphics (Bay Trail) 4GB (SODIMM) [DDR3] HDD 500GB UEFI **Crash Report** Faceswap is not installing, so there is no directory for storing the crash report
closed
2020-05-11T07:01:49Z
2020-05-13T11:32:29Z
https://github.com/deepfakes/faceswap/issues/1022
[]
BlueONn
1
tflearn/tflearn
data-science
640
a little error in the doc page
At doc page [get_started](http://tflearn.org/getting_started/), the first example of topic 'Layers': ``` with tf.name_scope('conv1'): W = tf.Variable(tf.random_normal([5, 5, 1, 32]), dtype=tf.float32, name='Weights') b = tf.Variable(tf.random_normal([32]), dtype=tf.float32, name='biases') x = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') x = tf.add_bias(W, b) x = tf.nn.relu(x) ``` the fifth line should be `x = tf.add_bias(x, b)`.
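For what it's worth, the confusion is easier to see with shapes. Below is a toy, pure-Python stand-in (not the real TensorFlow API; the actual op is, as far as I know, `tf.nn.bias_add`) making the point that the bias pairs with the conv *output* `x`, not the weights `W`:

```python
# Toy stand-in for a bias-add shape check: the bias vector must match
# the last dimension of the tensor it is added to, and it should be the
# conv output x, not the weight tensor W.
def add_bias(tensor_shape, bias_shape):
    if tensor_shape[-1] != bias_shape[0]:
        raise ValueError("bias does not match last dimension")
    return tensor_shape

x_shape = (1, 28, 28, 32)  # conv output: 32 feature maps
W_shape = (5, 5, 1, 32)    # conv weights
b_shape = (32,)

out_shape = add_bias(x_shape, b_shape)  # correct: bias the activations
# add_bias(W_shape, b_shape) happens to be shape-compatible here too,
# which is why the doc's typo is easy to miss; it is still semantically
# wrong: it would bias the weights, not the layer output.
```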
closed
2017-02-28T16:04:25Z
2017-03-16T22:47:36Z
https://github.com/tflearn/tflearn/issues/640
[]
cooljacket
0
microsoft/nni
pytorch
5,338
Add a parameter in generate_scenario for the file 'scenario.txt' instead of harcoding
<!-- Please only use this template for submitting enhancement requests --> **What would you like to be added**: It would be great in the function https://github.com/microsoft/nni/blob/e85f029bd4e4b1bdf3e679893fb6447e4d6b2c79/nni/algorithms/hpo/smac_tuner/convert_ss_to_scenario.py#L192 to add a parameter for `scenario.txt` instead of hardcoding it. **Why is this needed**: I'm using SMAC tuner in a Databricks compute where I can't write to `.`, I can only write to `/tmp/`. If I could parametrize the file, then I would be able to use SMAC. **Without this feature, how does current nni work**๏ผš SMAC won't work in Databricks and probably on Synapse (I haven't tried, but it is my guess) **Components that may involve changes**: **Brief description of your proposal if any**: Difficult fix: Change the signature of [generate_scenario](https://github.com/microsoft/nni/blob/e85f029bd4e4b1bdf3e679893fb6447e4d6b2c79/nni/algorithms/hpo/smac_tuner/convert_ss_to_scenario.py#L114), and then also the SMAC arguments. Easy fix: Instead of hardcoding `scenario.txt` hardcode `/tmp/scenario.txt`. I'll send this PR in case you want it
closed
2023-02-07T12:28:14Z
2023-02-27T16:05:23Z
https://github.com/microsoft/nni/issues/5338
[]
miguelgfierro
0
ageitgey/face_recognition
python
721
Issue with dLib installation with Cuda and AVX support
* face_recognition version: Latest * Python version: 3.7 * Operating System: Linux with GPU ### Description-- Please help - cmake version 2.8.12 Trying to install dLib with AVX and Cuda support Command tried- Paste the command(s) you ran and the output. python3 setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA File "/usr/local/lib/python3.7/distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File "/usr/local/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "setup.py", line 119, in run self.build_extension(ext) File "setup.py", line 153, in build_extension subprocess.check_call(cmake_setup, cwd=build_folder) File "/usr/local/lib/python3.7/subprocess.py", line 328, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '/root/dlib-19.9/tools/python', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/root/dlib-19.9/build/lib.linux-x86_64-3.7', '-DPYTHON_EXECUTABLE=/usr/local/bin/python3', '-DUSE_AVX_INSTRUCTIONS=yes', '-DDLIB_USE_CUDA=yes', '-DCMAKE_BUILD_TYPE=Release']' returned non-zero exit status 1. Then after i tired builing the dlib to just give a chance in case it builds- command - cd dlib mkdir build; cd build; cmake ..; -- The CXX compiler identification is unknown CMake Error: your CXX compiler: "CMAKE_CXX_COMPILER-NOTFOUND" was not found. Please set CMAKE_CXX_COMPILER to a valid compiler path or name. CMake Error: your CXX compiler: "CMAKE_CXX_COMPILER-NOTFOUND" was not found. Please set CMAKE_CXX_COMPILER to a valid compiler path or name. 
-- Using CMake version: 2.8.12.2 -- Compiling dlib version: 19.16.99 CMake Error at dlib/cmake_utils/set_compiler_specific_options.cmake:130 (if): if given arguments: "Clang" "MATCHES" Unknown arguments specified Call Stack (most recent call first): dlib/CMakeLists.txt:27 (include) -- Configuring incomplete, errors occurred! See also "/root/dlib/build/CMakeFiles/CMakeOutput.log". See also "/root/dlib/build/CMakeFiles/CMakeError.log".
open
2019-01-22T10:02:03Z
2019-01-22T10:02:03Z
https://github.com/ageitgey/face_recognition/issues/721
[]
74981
0
apachecn/ailearning
python
484
ๆ— ็›‘็ฃ็ฎ—ๆณ•ใ€้œ€่ฆๅฎŒๅ–„+่กฅๅ……ใ€‘
ๅŠ็›‘็ฃๅญฆไน (Semi-Supervised Learning,SSL)็ฑปๅฑžไบŽๆœบๅ™จๅญฆไน (Machine Learning,ML)ใ€‚ ## ไธ€ MLๆœ‰ไธค็งๅŸบๆœฌ็ฑปๅž‹็š„ๅญฆไน ไปปๅŠก๏ผš > 1.็›‘็ฃๅญฆไน (Supervised Learning,SL) ย ย ย ย ๆ นๆฎ่พ“ๅ…ฅ-่พ“ๅ‡บๆ ทๆœฌๅฏนL={(x1,y1),ยทยทยท,(xl,yl)}ๅญฆไน ่พ“ๅ…ฅๅˆฐ่พ“ๅ‡บ็š„ๆ˜ ๅฐ„f:X->Y,ๆฅ้ข„ๆต‹ๆต‹่ฏ•ๆ ทไพ‹็š„่พ“ๅ‡บๅ€ผใ€‚SLๅŒ…ๆ‹ฌๅˆ†็ฑป(Classification)ๅ’Œๅ›žๅฝ’(Regression)ไธค็ฑปไปปๅŠก๏ผŒๅˆ†็ฑปไธญ็š„ๆ ทไพ‹xiโˆˆRm(่พ“ๅ…ฅ็ฉบ้—ด)๏ผŒ็ฑปๆ ‡็ญพyiโˆˆ{c1,c2,ยทยทยท,cc},cjโˆˆN;ๅ›žๅฝ’ไธญ็š„่พ“ๅ…ฅxiโˆˆRm๏ผŒ่พ“ๅ‡บyiโˆˆR(่พ“ๅ‡บ็ฉบ้—ด)ใ€‚ > 2. ๆ— ็›‘็ฃๅญฆไน (Unsupervised Learning,UL) ย  ย  ๅˆฉ็”จๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹U={x1,ยทยทยท,xn}ๆ‰€ๅŒ…ๅซ็š„ไฟกๆฏๅญฆไน ๅ…ถๅฏนๅบ”็š„็ฑปๆ ‡็ญพYu=[y1ยทยทยทyn]T,็”ฑๅญฆไน ๅˆฐ็š„็ฑปๆ ‡็ญพไฟกๆฏๆŠŠๆ ทไพ‹ๅˆ’ๅˆ†ๅˆฐไธๅŒ็š„็ฐ‡(Clustering)ๆˆ–ๆ‰พๅˆฐ้ซ˜็ปด่พ“ๅ…ฅๆ•ฐๆฎ็š„ไฝŽ็ปด็ป“ๆž„ใ€‚ULๅŒ…ๆ‹ฌ่š็ฑป(Clistering)ๅ’Œ้™็ปด(Dimensionality Reduction)ไธค็ฑปไปปๅŠกใ€‚ ## ไบŒ ๅŠ็›‘็ฃๅญฆไน (Semi-Supervised Learning,UL) ย ย ย ย ๅœจ่ฎธๅคšML็š„ๅฎž้™…ๅบ”็”จไธญ๏ผŒๅพˆๅฎนๆ˜“ๆ‰พๅˆฐๆตท้‡็š„ๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹๏ผŒไฝ†้œ€่ฆไฝฟ็”จ็‰นๆฎŠ่ฎพๅค‡ๆˆ–็ป่ฟ‡ๆ˜‚่ดตไธ”็”จๆ—ถ้žๅธธ้•ฟ็š„ๅฎž้ชŒ่ฟ‡็จ‹่ฟ›่กŒไบบๅทฅๆ ‡่ฎฐๆ‰่ƒฝๅพ—ๅˆฐๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌ๏ผŒ็”ฑๆญคไบง็”Ÿไบ†ๆžๅฐ‘้‡็š„ๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌๅ’Œ่ฟ‡ๅ‰ฉ็š„ๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹ใ€‚ๅ› ๆญค๏ผŒไบบไปฌๅฐ่ฏ•ๅฐ†ๅคง้‡็š„ๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹ๅŠ ๅ…ฅๅˆฐๆœ‰้™็š„ๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌไธญไธ€่ตท่ฎญ็ปƒๆฅ่ฟ›่กŒๅญฆไน ๏ผŒๆœŸๆœ›่ƒฝๅฏนๅญฆไน ๆ€ง่ƒฝ่ตทๅˆฐๆ”น่ฟ›็š„ไฝœ็”จ๏ผŒ็”ฑๆญคไบง็”Ÿไบ†SSL๏ผŒๅฆ‚ๅฆ‚ๅ›พ๏ผ‘ๆ‰€็คบใ€‚SSL้ฟๅ…ไบ†ๆ•ฐๆฎๅ’Œ่ต„ๆบ็š„ๆตช่ดน๏ผŒๅŒๆ—ถ่งฃๅ†ณไบ†SL็š„ ๆจกๅž‹ๆณ›ๅŒ–่ƒฝๅŠ›ไธๅผบๅ’ŒUL็š„ๆจกๅž‹ไธ็ฒพ็กฎ็ญ‰้—ฎ้ข˜ใ€‚ ![image](https://user-images.githubusercontent.com/9199175/53324336-803ea480-391b-11e9-9887-3cceb9b92959.png) > 1.ๅŠ็›‘็ฃๅญฆไน ไพ่ต–็š„ๅ‡่ฎพ SSL็š„ๆˆ็ซ‹ไพ่ต–ไบŽๆจกๅž‹ๅ‡่ฎพ๏ผŒๅฝ“ๆจกๅž‹ๅ‡่ฎพๆญฃ็กฎๆ—ถ๏ผŒๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹่ƒฝๅคŸๅธฎๅŠฉๆ”น่ฟ›ๅญฆไน ๆ€ง่ƒฝใ€‚SSLไพ่ต–็š„ๅ‡่ฎพๆœ‰ไปฅไธ‹๏ผ“ไธช๏ผš ๏ผˆ๏ผ‘๏ผ‰ๅนณๆป‘ๅ‡่ฎพ๏ผˆSmoothness Assumption๏ผ‰ ย ย ย ย ไฝไบŽ็จ ๅฏ†ๆ•ฐๆฎๅŒบๅŸŸ็š„ไธคไธช่ท็ฆปๅพˆ่ฟ‘็š„ๆ ทไพ‹็š„็ฑปๆ 
‡็ญพ็›ธไผผ๏ผŒไนŸๅฐฑๆ˜ฏ่ฏด๏ผŒๅฝ“ไธคไธชๆ ทไพ‹่ขซ็จ ๅฏ†ๆ•ฐๆฎๅŒบๅŸŸไธญ็š„่พน่ฟžๆŽฅๆ—ถ๏ผŒๅฎƒไปฌๅœจๅพˆๅคง็š„ๆฆ‚็އไธ‹ๆœ‰็›ธๅŒ็š„็ฑปๆ ‡็ญพ๏ผ›็›ธๅๅœฐ๏ผŒๅฝ“ไธคไธชๆ ทไพ‹่ขซ็จ€็–ๆ•ฐๆฎๅŒบๅŸŸๅˆ†ๅผ€ๆ—ถ๏ผŒๅฎƒไปฌ็š„็ฑปๆ ‡็ญพ่ถ‹ไบŽไธๅŒ๏ผŽย  ๏ผˆ๏ผ’๏ผ‰่š็ฑปๅ‡่ฎพ๏ผˆCluster Assumption๏ผ‰ ย ย ย ย ๅฝ“ไธคไธชๆ ทไพ‹ไฝไบŽๅŒไธ€่š็ฑป็ฐ‡ๆ—ถ๏ผŒๅฎƒไปฌๅœจๅพˆๅคง็š„ๆฆ‚็އไธ‹ๆœ‰็›ธๅŒ็š„็ฑปๆ ‡็ญพ๏ผŽ่ฟ™ไธชๅ‡่ฎพ็š„็ญ‰ไปทๅฎšไน‰ไธบไฝŽๅฏ†ๅบฆๅˆ†็ฆปๅ‡่ฎพ๏ผˆLow Sensity Separationย Assumption๏ผ‰๏ผŒๅณๅˆ†็ฑป ๅ†ณ็ญ–่พน็•Œๅบ”่ฏฅ็ฉฟ่ฟ‡็จ€็–ๆ•ฐๆฎๅŒบๅŸŸ๏ผŒ่€Œ้ฟๅ…ๅฐ†็จ ๅฏ†ๆ•ฐ ๆฎๅŒบๅŸŸ็š„ๆ ทไพ‹ๅˆ†ๅˆฐๅ†ณ็ญ–่พน็•Œไธคไพง๏ผŽ ย ๏ผˆ๏ผ“๏ผ‰ๆตๅฝขๅ‡่ฎพ๏ผˆManifoldย Assumption๏ผ‰ ย ย ย ย ๅฐ†้ซ˜็ปดๆ•ฐๆฎๅตŒๅ…ฅๅˆฐไฝŽ็ปดๆตๅฝขไธญ๏ผŒๅฝ“ไธคไธชๆ ทไพ‹ไฝไบŽไฝŽ็ปดๆตๅฝขไธญ็š„ไธ€ไธชๅฐๅฑ€้ƒจ้‚ปๅŸŸๅ†…ๆ—ถ๏ผŒๅฎƒไปฌๅ…ทๆœ‰็›ธไผผ็š„็ฑปๆ ‡็ญพใ€‚่ฎธๅคšๅฎž้ชŒ็ ”็ฉถ่กจๆ˜Žๅฝ“SSLไธๆปก่ถณ่ฟ™ไบ›ๅ‡่ฎพๆˆ–ๆจกๅž‹ๅ‡่ฎพไธๆญฃ็กฎๆ—ถ๏ผŒๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹ไธไป…ไธ่ƒฝๅฏนๅญฆไน ๆ€ง่ƒฝ่ตทๅˆฐๆ”น่ฟ›ไฝœ็”จ๏ผŒๅ่€ŒไผšๆถๅŒ–ๅญฆไน ๆ€ง่ƒฝ๏ผŒๅฏผ่‡ด SSL็š„ๆ€ง่ƒฝไธ‹้™๏ผŽไฝ†ๆ˜ฏ่ฟ˜ๆœ‰ไธ€ไบ›ๅฎž้ชŒ่กจๆ˜Ž๏ผŒๅœจไธ€ไบ›็‰นๆฎŠ็š„ๆƒ…ๅ†ตไธ‹ๅณไฝฟๆจกๅž‹ๅ‡่ฎพๆญฃ็กฎ๏ผŒๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹ไนŸๆœ‰ๅฏ่ƒฝๆŸๅฎณๅญฆไน ๆ€ง่ƒฝใ€‚ > 2.ๅŠ็›‘็ฃๅญฆไน ็š„ๅˆ†็ฑปย  ย ย ย ย SSLๆŒ‰็…ง็ปŸ่ฎกๅญฆไน ็†่ฎบ็š„่ง’ๅบฆๅŒ…ๆ‹ฌ็›ดๆŽจ ๏ผˆTransductive ๏ผ‰SSLๅ’Œๅฝ’็บณ๏ผˆInductive๏ผ‰SSLไธค็ฑปๆจกๅผใ€‚็›ดๆŽจ SSLๅชๅค„็†ๆ ทๆœฌ็ฉบ้—ดๅ†…็ป™ๅฎš็š„่ฎญ็ปƒๆ•ฐๆฎ๏ผŒๅˆฉ็”จ่ฎญ็ปƒๆ•ฐๆฎไธญๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌๅ’Œๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹่ฟ›่กŒ่ฎญ็ปƒ๏ผŒ้ข„ๆต‹่ฎญ็ปƒๆ•ฐๆฎไธญๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹็š„็ฑปๆ ‡็ญพ๏ผ›ๅฝ’็บณSSLๅค„็†ๆ•ดไธชๆ ทๆœฌ็ฉบ้—ดไธญๆ‰€ๆœ‰็ป™ๅฎšๅ’Œๆœช็Ÿฅ็š„ๆ ทไพ‹๏ผŒๅŒๆ—ถๅˆฉ็”จ่ฎญ็ปƒๆ•ฐๆฎไธญๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌๅ’Œๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹๏ผŒไปฅๅŠๆœช็Ÿฅ็š„ๆต‹่ฏ•ๆ ทไพ‹ไธ€่ตท่ฟ›่กŒ่ฎญ็ปƒ๏ผŒไธไป…้ข„ๆต‹่ฎญ็ปƒๆ•ฐๆฎไธญๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹็š„็ฑปๆ ‡็ญพ๏ผŒๆ›ดไธป่ฆ็š„ๆ˜ฏ้ข„ๆต‹ๆœช็Ÿฅ็š„ๆต‹่ฏ•ๆ ทไพ‹็š„็ฑปๆ ‡็ญพใ€‚ไปŽไธๅŒ็š„ๅญฆไน ๅœบๆ™ฏ็œ‹๏ผŒSSLๅฏๅˆ†ไธบ๏ผ”ๅคง็ฑป๏ผšย  ๏ผˆ๏ผ‘๏ผ‰ๅŠ็›‘็ฃๅˆ†็ฑป ๏ผˆSemi-Supervised Classification๏ผ‰ ย ย ย ย ๅœจๆ— ็ฑปๆ ‡็ญพ็š„ๆ ทไพ‹็š„ๅธฎๅŠฉไธ‹่ฎญ็ปƒๆœ‰็ฑปๆ ‡ ็ญพ็š„ๆ 
ทๆœฌ๏ผŒ่Žทๅพ—ๆฏ”ๅช็”จๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌ่ฎญ็ปƒๅพ—ๅˆฐ็š„ๅˆ†็ฑปๅ™จๆ€ง่ƒฝๆ›ดไผ˜็š„ๅˆ†็ฑปๅ™จ๏ผŒๅผฅ่กฅๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌไธ่ถณ็š„็ผบ้™ท๏ผŒๅ…ถไธญ็ฑปๆ ‡็ญพyiๅ–ๆœ‰้™็ฆปๆ•ฃๅ€ผyiโˆˆ{c1,c2,ยทยทยท,cc},cjโˆˆNใ€‚ ๏ผˆ๏ผ’๏ผ‰ๅŠ็›‘็ฃๅ›žๅฝ’๏ผˆSemi-Supervised Regression๏ผ‰ ย ย ย ย ๅœจๆ— ่พ“ๅ‡บ็š„่พ“ๅ…ฅ็š„ๅธฎๅŠฉไธ‹่ฎญ็ปƒๆœ‰่พ“ๅ‡บ็š„่พ“ๅ…ฅ๏ผŒ่Žทๅพ—ๆฏ”ๅช็”จๆœ‰่พ“ๅ‡บ็š„่พ“ๅ…ฅ่ฎญ็ปƒๅพ—ๅˆฐ็š„ๅ›žๅฝ’ๅ™จๆ€ง่ƒฝๆ›ดๅฅฝ็š„ๅ›žๅฝ’ๅ™จ๏ผŒๅ…ถไธญ่พ“ๅ‡บyi ๅ–่ฟž็ปญๅ€ผ yiโˆˆ๏ผฒใ€‚ย  ๏ผˆ๏ผ“๏ผ‰ๅŠ็›‘็ฃ่š็ฑป๏ผˆSemi-Supervised Clustering๏ผ‰ ย ย ย ย ๅœจๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌ็š„ไฟกๆฏๅธฎๅŠฉไธ‹่Žทๅพ—ๆฏ”ๅช็”จๆ— ็ฑปๆ ‡ ็ญพ็š„ๆ ทไพ‹ๅพ—ๅˆฐ็š„็ป“ๆžœๆ›ดๅฅฝ็š„็ฐ‡๏ผŒๆ้ซ˜่š็ฑปๆ–นๆณ•็š„็ฒพๅบฆใ€‚ ๏ผˆ๏ผ”๏ผ‰ๅŠ็›‘็ฃ้™็ปด๏ผˆSemi-Supervised Dimensionality Reduction๏ผ‰ ย ย ย ย ๅœจๆœ‰็ฑปๆ ‡็ญพ็š„ๆ ทๆœฌ็š„ไฟกๆฏๅธฎๅŠฉไธ‹ๆ‰พๅˆฐ้ซ˜็ปด่พ“ๅ…ฅๆ•ฐๆฎ็š„ไฝŽ็ปด็ป“ๆž„๏ผŒๅŒๆ—ถไฟๆŒๅŽŸๅง‹้ซ˜็ปดๆ•ฐๆฎๅ’Œๆˆๅฏน็บฆๆŸ๏ผˆPair-Wise Constraints๏ผ‰็š„็ป“ๆž„ไธๅ˜๏ผŒๅณๅœจ้ซ˜็ปด็ฉบ้—ดไธญๆปก่ถณๆญฃ็บฆๆŸ๏ผˆMust-Link Constraints๏ผ‰็š„ๆ ทไพ‹ๅœจไฝŽ็ปด็ฉบ้—ดไธญ็›ธ่ทๅพˆ่ฟ‘๏ผŒๅœจ้ซ˜็ปด็ฉบ้—ดไธญๆปก่ถณ่ดŸ็บฆๆŸ๏ผˆCannot-Linkย Constraints๏ผ‰็š„ๆ ทไพ‹ๅœจไฝŽ็ปด็ฉบ้—ดไธญ่ท็ฆปๅพˆ่ฟœใ€‚ ![image](https://user-images.githubusercontent.com/9199175/53324326-79b02d00-391b-11e9-8e4d-87c1e59562d2.png) --- ๅŽŸๆ–‡ๅœฐๅ€๏ผšhttps://blog.csdn.net/jiusake/article/details/80016171
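A toy illustration of how the cluster/smoothness assumptions above get exploited in practice: a minimal self-training sketch (hypothetical, not from any library) that fits a 1-D threshold classifier on a few labeled points, then pseudo-labels the most confident unlabeled point each round and refits.

```python
# Self-training on a 1-D toy problem: the "classifier" is the midpoint
# between the two class means, and confidence is distance from the
# decision boundary.
labeled = [(0.1, 0), (0.2, 0), (0.9, 1)]
unlabeled = [0.15, 0.85, 0.8, 0.05]

def fit_threshold(data):
    # midpoint between the two class means
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

for _ in range(3):  # a few self-training rounds
    t = fit_threshold(labeled)
    # most confident = farthest from the current decision boundary
    x = max(unlabeled, key=lambda v: abs(v - t))
    unlabeled.remove(x)
    labeled.append((x, int(x > t)))

final_threshold = fit_threshold(labeled)
```

Under the cluster assumption the pseudo-labels are likely correct, and the boundary settles in the sparse region between the two groups of points.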
closed
2019-02-25T08:38:34Z
2021-09-07T17:45:35Z
https://github.com/apachecn/ailearning/issues/484
[]
jiangzhonglian
0
fbdesignpro/sweetviz
data-visualization
45
Add box plots
version: 1.0.3 date: Jul 22, 2020 Currently "sweetviz" only has bar-charts for visualizations. For medium-size data analysis (such as titanic or Boston housing) it is not much costly to show box-plots as well as bar-plots. For a larger dataset, it can be made optional in `config.ini`file and can also be determined file size to make it true or false. For example: ``` if file size < 50MB: show boxplots and bar plots else: show only bar plots ```
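The size-based switch proposed above could be sketched roughly like this (hypothetical helper names, not part of sweetviz):

```python
import os
import tempfile

# Pick chart kinds based on the input file size, with the 50 MB
# threshold proposed above; this could equally be driven by config.ini.
SIZE_LIMIT = 50 * 1024 * 1024

def chart_kinds(path, limit=SIZE_LIMIT):
    if os.path.getsize(path) < limit:
        return ["bar", "box"]
    return ["bar"]

# Tiny demo file, well under the limit
with tempfile.NamedTemporaryFile(delete=False, suffix=".csv") as f:
    f.write(b"a,b\n1,2\n")
    demo_path = f.name

kinds = chart_kinds(demo_path)
os.unlink(demo_path)
```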
open
2020-07-22T16:04:55Z
2020-07-23T14:56:45Z
https://github.com/fbdesignpro/sweetviz/issues/45
[ "feature request" ]
bhishanpdl
0
graphistry/pygraphistry
jupyter
440
publish cucat
it should be clear how to get a versioned cucat ideas: - [x] pypi / pip For now, instead, git tag so versioned pip install github ... tag <--- for now, use semver: https://semver.org/
open
2023-02-20T23:32:51Z
2023-12-07T06:13:24Z
https://github.com/graphistry/pygraphistry/issues/440
[ "enhancement", "p4", "infra" ]
lmeyerov
1
microsoft/unilm
nlp
1,573
Unable to use finetuned LayoutLMV3 for object detection task model for testing
**Describe** Model I am using (LayoutLMV3): I have sucessfully finetuned LayoutLMV3 model on custom dataset similar to publaynet dataset on object detection task , it saves a .pth model but when I try to use it for eval using this script : python train_net.py --config-file cascade_layoutlmv3.yaml --eval-only --num-gpus 8 \ MODEL.WEIGHTS /path/to/layoutlmv3-base-finetuned-publaynet/model_final.pth \ OUTPUT_DIR /path/to/layoutlmv3-base-finetuned-publaynet I get error : [06/30 11:16:16 detectron2]: Full config saved to /content/output_dir/config.yaml file /content/output_dir/config.json not found Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py", line 558, in get_config_dict user_agent=user_agent, File "/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py", line 1506, in cached_path raise EnvironmentError(f"file {url_or_filename} not found") OSError: file /content/output_dir/config.json not found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/content/unilm/layoutlmv3/examples/object_detection/train_net.py", line 122, in args=(args,), File "/usr/local/lib/python3.7/dist-packages/detectron2/engine/launch.py", line 82, in launch main_func(*args) File "/content/unilm/layoutlmv3/examples/object_detection/train_net.py", line 90, in main model = MyTrainer.build_model(cfg) File "/content/unilm/layoutlmv3/examples/object_detection/ditod/mytrainer.py", line 553, in build_model model = build_model(cfg) File "/usr/local/lib/python3.7/dist-packages/detectron2/modeling/meta_arch/build.py", line 22, in build_model model = META_ARCH_REGISTRY.get(meta_arch)(cfg) File "/usr/local/lib/python3.7/dist-packages/detectron2/config/config.py", line 189, in wrapped explicit_args = _get_args_from_config(from_config_func, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/detectron2/config/config.py", line 245, in _get_args_from_config ret = 
from_config_func(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/detectron2/modeling/meta_arch/rcnn.py", line 72, in from_config backbone = build_backbone(cfg) File "/usr/local/lib/python3.7/dist-packages/detectron2/modeling/backbone/build.py", line 31, in build_backbone backbone = BACKBONE_REGISTRY.get(backbone_name)(cfg, input_shape) File "/content/unilm/layoutlmv3/examples/object_detection/ditod/backbone.py", line 168, in build_vit_fpn_backbone bottom_up = build_VIT_backbone(cfg) File "/content/unilm/layoutlmv3/examples/object_detection/ditod/backbone.py", line 154, in build_VIT_backbone config_path=config_path, image_only=cfg.MODEL.IMAGE_ONLY, cfg=cfg) File "/content/unilm/layoutlmv3/examples/object_detection/ditod/backbone.py", line 84, in init config = AutoConfig.from_pretrained(config_path) File "/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py", line 558, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py", line 575, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for '/content/output_dir/'. Make sure that: '/content/output_dir/' is a correct model identifier listed on 'https://huggingface.co/models' (make sure '/content/output_dir/' is not a path to a local directory with something else, in that case) or '/content/output_dir/' is the correct path to a directory containing a config.json file
open
2024-06-12T10:47:55Z
2024-10-16T02:36:49Z
https://github.com/microsoft/unilm/issues/1573
[]
maniyarsuyash
1
roboflow/supervision
deep-learning
1,376
mAP for small, medium and large objects
### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Description I saw an issue online where the user demanded calculation of MeanAveragePrecision for small, medium and large objects for HBB and OBB detection models. I thought that maybe this will be a good idea to expand this in supervision library. We can discuss about this issue and maybe potential approach on how to achieve this. ### Use case _No response_ ### Additional _No response_ ### Are you willing to submit a PR? - [X] Yes I'd like to help by submitting a PR!
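For reference, the COCO convention buckets detections by pixel area (small < 32², medium < 96², large otherwise), and per-bucket mAP is then computed on each subset. A minimal sketch of the bucketing step (illustrative only, not supervision's API):

```python
# COCO-style size buckets: small < 32^2 px, medium < 96^2 px, else large.
SMALL_MAX, MEDIUM_MAX = 32 ** 2, 96 ** 2

def size_bucket(xyxy):
    x1, y1, x2, y2 = xyxy
    area = (x2 - x1) * (y2 - y1)
    if area < SMALL_MAX:
        return "small"
    if area < MEDIUM_MAX:
        return "medium"
    return "large"

buckets = [size_bucket(b) for b in [(0, 0, 10, 10), (0, 0, 50, 50), (0, 0, 200, 200)]]
```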
closed
2024-07-17T15:15:41Z
2024-07-17T16:16:32Z
https://github.com/roboflow/supervision/issues/1376
[ "enhancement" ]
Bhavay-2001
2
aidlearning/AidLearning-FrameWork
jupyter
74
่ฏท้—ฎๅฏไปฅๆ”ฏๆŒ้ข„่ฃ…ๆœ€ๆ–ฐ็‰ˆ็š„wps office for linux armๅ—๏ผŒๅธŒๆœ›ๅœจๅคงๅฑๅฎ‰ๅ“ๅนณๆฟไธŠ่ƒฝๆœ‰ๅŠžๅ…ฌ่ƒฝๅŠ›
่ฏท้—ฎๅฏไปฅๆ”ฏๆŒ้ข„่ฃ…ๆœ€ๆ–ฐ็‰ˆ็š„wps office for linux armๅ—๏ผŒๅธŒๆœ›ๅœจๅคงๅฑๅฎ‰ๅ“่ƒฝๆœ‰ๅŠžๅ…ฌ่ƒฝๅŠ› ![ๆทฑๅบฆๆˆชๅ›พ_้€‰ๆ‹ฉๅŒบๅŸŸ_20200118220535](https://user-images.githubusercontent.com/58220227/72664976-ce322e80-3a3e-11ea-99a2-8993f8ec8852.png)
closed
2020-01-18T14:07:06Z
2020-02-11T17:38:37Z
https://github.com/aidlearning/AidLearning-FrameWork/issues/74
[]
zihaoxingstudy1
3
LAION-AI/Open-Assistant
machine-learning
3,109
Support user-level OAuth plugin authentication
Will support plugins for which `ai-plugin.json` contains: ``` "auth": { "type": "oauth" }, ``` - [x] Plugin has client ID and secret, securely store encrypted version of client secret, store client ID - [x] Mechanism for plugin author receiving verification token - [x] Redirect user to plugin auth URL - [x] When user is redirected back to OA, exchange received code for an access token See OpenAI spec for [OAuth plugin authentication](https://platform.openai.com/docs/plugins/authentication/oauth). Once backend support is merged, a new issue can be opened for the frontend to support it
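Step 4 above (exchanging the received code for an access token) follows the standard OAuth2 authorization-code flow. A rough sketch of building the token request body, with field names per RFC 6749 (the values and URLs are placeholders, not Open-Assistant's real config):

```python
from urllib.parse import urlencode

# Build the form-encoded body POSTed to the plugin's token endpoint
# after the user is redirected back with an authorization code.
def token_request_body(code, client_id, client_secret, redirect_uri):
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })

body = token_request_body(
    code="abc123",                         # code from the redirect
    client_id="plugin-client-id",          # stored client ID
    client_secret="plugin-client-secret",  # decrypted from secure storage
    redirect_uri="https://example.org/oauth/callback",
)
```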
open
2023-05-09T19:48:03Z
2023-06-06T15:12:53Z
https://github.com/LAION-AI/Open-Assistant/issues/3109
[ "inference" ]
olliestanley
0
dynaconf/dynaconf
django
1,055
Multiple cast validators get discarded
Hi there, thanks for this great project, **Describe the bug** After defining a dynaconf `cast` validator, attempting to define subsequent dynaconf `cast` validators on the same variable get discarded while they should also be taken into account **To Reproduce** 1. Having the following a.toml file: **a.toml** ```toml var = "1" ``` 2. And the following sample code: **a.py** ```python from dynaconf import Dynaconf, Validator settings = Dynaconf(settings_file="a.toml") def a(v): print("called a") return v def b(v): print("called b") return v validators = [ Validator("var", cast=a), Validator("var", cast=b), ] settings.validators.register(*validators) settings.validators.validate_all() ``` 3. Executing `a.py` under a virtualenv with `dynaconf` `3.2.4` installed: ```bash $ python3 a.py called a ``` I would expect to see `called a` and then an additional line `called b` after that. **Additional context** This bug is because of `Validator("var", cast=a)` and `Validator("var", cast=b)` comparing equal with the current definition of [`__eq__`](https://github.com/dynaconf/dynaconf/blob/0390393c27a7ef27104bbda2426b3382dcc7fb9f/dynaconf/validator.py#L155-L169): ```python def __eq__(self, other: object) -> bool: if self is other: return True if type(self).__name__ != type(other).__name__: return False identical_attrs = ( getattr(self, attr) == getattr(other, attr) for attr in EQUALITY_ATTRS ) if all(identical_attrs): return True return False ``` and with ```python EQUALITY_ATTRS = ( "names", "must_exist", "when", "condition", "operations", "envs", ) ``` Since `cast` is not in the `EQUALITY_ATTRS`, validators on the same name with just a cast defined are considered equal and the second validator are skipped altogether (see [this part of the code](https://github.com/dynaconf/dynaconf/blob/0390393c27a7ef27104bbda2426b3382dcc7fb9f/dynaconf/validator.py#L463)) I think a one-line fix for this would be to add the `cast` attribute to the EQUALITY_ATTRS.
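A stripped-down reproduction of the equality bug and the proposed one-line fix (toy stand-in classes, not dynaconf's actual code):

```python
# With "cast" absent from EQUALITY_ATTRS, two validators that differ
# only in their cast compare equal, so the second registration would
# be discarded by the duplicate check.
EQUALITY_ATTRS = ("names",)  # note: no "cast"

class Validator:
    def __init__(self, name, cast=None):
        self.names = (name,)
        self.cast = cast

    def __eq__(self, other):
        return all(getattr(self, a) == getattr(other, a) for a in EQUALITY_ATTRS)

v1 = Validator("var", cast=int)
v2 = Validator("var", cast=str)
broken = (v1 == v2)   # True: the differing casts are ignored

# The proposed one-line fix: compare "cast" too
EQUALITY_ATTRS = ("names", "cast")
fixed = (v1 == v2)    # False: the validators are now distinct
```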
closed
2024-02-20T12:57:19Z
2024-03-25T17:08:52Z
https://github.com/dynaconf/dynaconf/issues/1055
[ "bug" ]
nikoskoukis-slamcore
0
hzwer/ECCV2022-RIFE
computer-vision
203
Ran into a problem when running with Colab
![image](https://user-images.githubusercontent.com/44726668/136382162-fa837865-c6d9-45fb-ad2d-836e1ea9d3b0.png) ``` INFO: pip is looking at multiple versions of <Python from Requires-Python> to determine which version is compatible with other requirements. This could take a while. INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while. ERROR: Cannot install -r requirements.txt (line 7) and torch==1.3.0 because these package versions have conflicting dependencies. The conflict is caused by: The user requested torch==1.3.0 torchvision 0.7.0 depends on torch==1.6.0 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies ```
closed
2021-10-07T12:16:57Z
2021-10-08T05:13:11Z
https://github.com/hzwer/ECCV2022-RIFE/issues/203
[]
Neycrol
1
globaleaks/globaleaks-whistleblowing-software
sqlalchemy
3,216
Regression on Export/Download of Files introduced in 4.9.1
**Describe the bug** On selecting single/multiple reports and clicking "Export", I receive a popup message which says "Error!". When I look at the network tab, I can see that I'm getting an error "Method Not Implemented". Likewise, when I attempt to download an attachment on the report, it opens a new tab, which notifies me that I'm "Not Authenticated" with an error code of 10. **To Reproduce** Steps to reproduce the behavior: 1. Log in as recipient 2. Click on reports tab 3. Select one or multiple records 4. Click export 5. Error appears 6. Click on single report 7. Navigate to the attached files 8. Click Download 9. New tab opens with error. **Log Details** ``` - - [15/Apr/2022:21:03:22 +0000] "POST /api/token HTTP/1.1" 201 127 0ms - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.79 Safari/537.36" - -[15/Apr/2022:21:03:22 +0000] "GET /api/rfile/cdb5480d-2122-4ada-8eff-9ed2f0d6f303?token=9b56570fc8fa7bb929d862a75cc94fc3829cf8dd36671af083bcea9b0f962c58 HTTP/1.1" 412 82 0ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET / HTTP/1.1" 200 1804 1ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /css/styles.min.css HTTP/1.1" 200 53361 26ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /js/scripts.min.js HTTP/1.1" 200 259617 90ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /api/public HTTP/1.1" 200 2886 1ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /data/favicon.ico HTTP/1.1" 200 5703 6ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /lib/js/locale/angular-locale_en.js HTTP/1.1" 200 955 1ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /l10n/en HTTP/1.1" 200 9513 4ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /webfonts/metropolis-all-700-normal.woff2 HTTP/1.1" 200 26456 1ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET 
/webfonts/metropolis-all-400-normal.woff2 HTTP/1.1" 200 24180 2ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /data/logo.png HTTP/1.1" 200 6869 2ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:05:51 +0000] "GET /webfonts/fa-solid-900.woff2 HTTP/1.1" 200 154296 26ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:06:00 +0000] "POST /api/token HTTP/1.1" 201 127 0ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:06:05 +0000] "POST /api/authentication HTTP/1.1" 201 196 4254ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:06:05 +0000] "GET /data/favicon.ico HTTP/1.1" 200 5703 4ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:06:05 +0000] "GET /api/preferences HTTP/1.1" 200 508 13ms - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.79 Safari/537.36" ]- - [15/Apr/2022:21:06:07 +0000] "GET /data/favicon.ico HTTP/1.1" 200 5703 11ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:06:07 +0000] "GET /api/preferences HTTP/1.1" 200 508 27ms - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.79 Safari/537.36" - - [15/Apr/2022:21:06:07 +0000] "GET /api/rtips HTTP/1.1" 200 873 41ms - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.79 Safari/537.36" - - [15/Apr/2022:21:06:10 +0000] "POST /api/token HTTP/1.1" 201 126 0ms - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.79 Safari/537.36" - - [15/Apr/2022:21:06:10 +0000] "GET /api/rtips/133b72f7-b854-4db2-8dcc-4c7b966cc1f8/export?token=1f402e0a739e1065d61c98b640fb87a02c1ac3f81aea3f6e3d81d5d34c637779 HTTP/1.1" 412 82 0ms - "[REMOVED_USER_AGENT]" - - [15/Apr/2022:21:06:15 +0000] "POST /api/rtips/133b72f7-b854-4db2-8dcc-4c7b966cc1f8/export HTTP/1.1" 501 85 -1ms - "[REMOVED_USER_AGENT]" ``` **Expected behavior** Successful export of record or download of file. 
**Desktop (please complete the following information):** - Ubuntu 20.08, accessing the recipient dashboard from Macbook iOS 12.0.1
closed
2022-04-15T20:32:32Z
2022-04-16T10:02:06Z
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3216
[ "T: Bug", "C: Client", "C: Backend" ]
jennycalendar
3
Farama-Foundation/PettingZoo
api
1,150
Passing the Parallel API tests in PettingZoo for custom multi-agent environment?
### Question ``` from pettingzoo.test import ( parallel_api_test, parallel_seed_test, max_cycles_test, performance_benchmark, ) ``` I have a custom multiagent environment that extends **ParallelEnv**, and since I passed the *parallel_api_test*, I plan to pass the other ones as well before starting training: 1. **parallel_seed_test** ``` ... File "D:\anaconda3\Lib\site-packages\pettingzoo\test\seed_test.py", line 139, in parallel_seed_test check_environment_deterministic_parallel(env1, env2, num_cycles) File "D:\anaconda3\Lib\site-packages\pettingzoo\test\seed_test.py", line 108, in check_environment_deterministic_parallel assert data_equivalence(actions1, actions2), "Incorrect action seeding" ``` I have no idea how to pass this one. I tried adding `np.random.seed()` statements in my observation_space and action_space functions, but I don't know how to get deterministic actions. Please advise. Are there any steps I can follow to pass the seed test and make my environment results reproducible? 2. **max_cycles_test** ``` ... File "D:\anaconda3\Lib\site-packages\pettingzoo\test\max_cycles_test.py", line 6, in max_cycles_test parallel_env = mod.parallel_env(max_cycles=max_cycles) ^^^^^^^^^^^^^^^^ AttributeError: 'MultiAgentHighway' object has no attribute 'parallel_env' ``` I'm not sure how to use this? I have `end_of_sim` as the maximum number of steps in the simulation before which the simulation is closed forcefully? 3. **performance_benchmark**: Had to convert my `ParallelEnv` to `AEC` with `parallel_to_aec()` to use this. ``` 2466.7955100048803 turns per second 123.33977550024402 cycles per second Finished performance benchmark ``` How do I evaluate these numbers? Please advise. Thank you in advance :)
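On question 1, the usual pattern (if I understand the seed test correctly) is that `reset(seed=...)` must also reseed whatever RNG drives action/observation sampling; Gymnasium spaces expose `space.seed(seed)` for exactly this. A toy sketch of the idea with stand-in classes (not PettingZoo's API):

```python
import random

# Two identically seeded "envs" must sample identical actions: seed each
# agent's space from the reset seed, with a per-agent offset so agents
# don't mirror each other.
class ToySpace:
    def __init__(self):
        self._rng = random.Random()

    def seed(self, seed):
        self._rng.seed(seed)

    def sample(self):
        return self._rng.randint(0, 10)

def reset_spaces(spaces, seed):
    for i, space in enumerate(spaces.values()):
        space.seed(seed + i)

env1 = {"a0": ToySpace(), "a1": ToySpace()}
env2 = {"a0": ToySpace(), "a1": ToySpace()}
reset_spaces(env1, 42)
reset_spaces(env2, 42)
same = all(env1[a].sample() == env2[a].sample() for a in env1)
```

Seeding via `np.random.seed()` inside the space properties is fragile because global state drifts between the two envs; per-space RNGs seeded from the reset seed keep the sampling deterministic.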
closed
2023-12-25T01:31:37Z
2024-01-09T15:11:35Z
https://github.com/Farama-Foundation/PettingZoo/issues/1150
[ "question" ]
hridayns
3
DistrictDataLabs/yellowbrick
scikit-learn
1,088
AttributeError: 'KMeans' object has no attribute 'show'
**Describe the bug** I am getting this AttributeError: 'KMeans' object has no attribute 'show' when implementing Elbow Method for KMeans clustering. The elbow graph is plotted after showing the error. **To Reproduce** ```python from sklearn.cluster import KMeans from sklearn.datasets import make_blobs from yellowbrick.cluster import KElbowVisualizer # Generate synthetic dataset with 8 random clusters X, y = make_blobs(n_samples=1000, n_features=12, centers=8, random_state=42) # Instantiate the clustering model and visualizer model = KMeans() visualizer = KElbowVisualizer(model, k=(4,12)) visualizer.fit(X) # Fit the data to the visualizer visualizer.show() # Finalize and render the figure ``` **Traceback** ``` <ipython-input-69-f375a0067df6> in <module>() 14 15 visualizer.fit(X) # Fit the data to the visualizer ---> 16 visualizer.show() # Finalize and render the figure /usr/local/lib/python3.6/dist-packages/yellowbrick/utils/wrapper.py in __getattr__(self, attr) 40 def __getattr__(self, attr): 41 # proxy to the wrapped object ---> 42 return getattr(self._wrapped, attr) AttributeError: 'KMeans' object has no attribute 'show' ``` **Desktop (please complete the following information):** - OS: Windows 10 - Python Version 3.6, google colab - Yellowbrick Version 0.9.1 **Additional context** Nope.
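What the traceback shows, in miniature: the visualizer proxies unknown attributes to the wrapped estimator, so `show()` (which, if I recall correctly, only arrived in Yellowbrick 1.0; 0.9.x used `poof()`) is looked up on `KMeans` and fails there. A toy reproduction with stand-in classes (not the real yellowbrick/sklearn ones):

```python
# The wrapper forwards unknown attributes to the wrapped estimator, so a
# method missing from both surfaces as an AttributeError on the estimator.
class KMeansStub:
    pass  # no .show(), like sklearn's KMeans

class VisualizerStub:
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, attr):
        # mirrors the proxy shown in the traceback
        return getattr(self._wrapped, attr)

viz = VisualizerStub(KMeansStub())
try:
    viz.show()
    raised = False
except AttributeError:
    raised = True  # "'KMeansStub' object has no attribute 'show'"
```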
closed
2020-07-28T13:38:10Z
2020-07-28T16:26:30Z
https://github.com/DistrictDataLabs/yellowbrick/issues/1088
[]
Gdkmak
1
ultralytics/ultralytics
deep-learning
19,316
Inquiry Regarding Licensing for Commercial Use of YOLO with Custom Training Tool
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question Hello, I have developed a user-friendly tool (UI) that enables code-free training of YOLO models on custom datasets. I am planning to commercialize this tool and would like to clarify the licensing requirements. Specifically, I would like to know: Do I need to obtain a commercial license to use YOLO within my tool, which will be marketed to customers for training custom models? If my customers use the tool to train models and implement them in their production systems, will they also require a separate license? Your guidance on the licensing implications for both the tool provider (myself) and the end users (my customers) would be highly appreciated. Thank you in advance for your assistance. I look forward to your response. Dorra ### Additional _No response_
open
2025-02-19T16:45:49Z
2025-02-24T07:08:11Z
https://github.com/ultralytics/ultralytics/issues/19316
[ "question", "enterprise" ]
DoBacc
2
xinntao/Real-ESRGAN
pytorch
340
raise ValueError("Number of processes must be at least 1")
### Can anyone help me, why is this? โ”Œโ”€[Michael@code-me] - [~/Real-ESRGAN] - [Thu May 26, 00:40] โ””โ”€[$]> python3 inference_realesrgan_video.py -i /Users/Michael/Downloads/aa.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 Traceback (most recent call last): File "/Users/Michael/Real-ESRGAN/inference_realesrgan_video.py", line 362, in <module> main() File "/Users/Michael/Real-ESRGAN/inference_realesrgan_video.py", line 354, in main run(args) File "/Users/Michael/Real-ESRGAN/inference_realesrgan_video.py", line 272, in run pool = ctx.Pool(num_process) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/pool.py", line 205, in __init__ raise ValueError("Number of processes must be at least 1") ValueError: Number of processes must be at least 1`
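A hedged sketch of a guard for this crash: the worker count is presumably derived from the GPU list and can come out as 0 on CPU-only machines, while `multiprocessing.Pool` requires at least 1 process. This is a hypothetical helper; the real script's computation may differ.

```python
# Clamp the worker count so Pool(num_process) never receives 0.
def safe_num_process(num_gpus, per_gpu=1):
    return max(1, num_gpus * per_gpu)
```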
open
2022-05-25T16:55:29Z
2022-09-07T14:42:14Z
https://github.com/xinntao/Real-ESRGAN/issues/340
[]
smoosex
3
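For context on the record above: `multiprocessing.Pool` raises this `ValueError` whenever it is asked for zero (or negative) workers, so the script's computed `num_process` must have come out as 0. A minimal sketch of a defensive clamp — the helper name is hypothetical, not part of Real-ESRGAN:

```python
import multiprocessing

def safe_num_process(requested: int) -> int:
    # Pool(processes) raises ValueError("Number of processes must be at
    # least 1") for requested <= 0, so clamp to a single worker instead.
    return max(1, requested)

if __name__ == "__main__":
    pool = multiprocessing.Pool(safe_num_process(0))  # would raise without the clamp
    print(pool.map(abs, [-1, 2]))  # prints [1, 2]
    pool.close()
    pool.join()
```

In the actual script the fix would belong where `num_process` is derived (likely from a CPU count or a CLI flag) before `ctx.Pool(num_process)` is called.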
lucidrains/vit-pytorch
computer-vision
230
Attention maps for PiT
@lucidrains maybe it's only a typo, but in the PiT example it is said that the attention maps are also output — in fact they are not; we only get the predictions. Could you help me get those attention maps? Thanks in advance
open
2022-08-09T10:08:08Z
2022-08-09T10:08:08Z
https://github.com/lucidrains/vit-pytorch/issues/230
[]
Maxlanglet
0
geex-arts/django-jet
django
396
persian calendar need
In Iran and many other countries.
open
2019-06-03T19:57:42Z
2019-06-03T19:57:42Z
https://github.com/geex-arts/django-jet/issues/396
[]
shahriardn
0
Yorko/mlcourse.ai
scikit-learn
22
ะ ะตัˆะตะฝะธะต ะฒะพะฟั€ะพัะฐ 5.11 ะฝะต ัั‚ะฐะฑะธะปัŒะฝะพ
ะ”ะฐะถะต ะฟั€ะธ ะฒั‹ัั‚ะฐะฒะปะตะฝะฝั‹ั… random_state ะฟะฐั€ะฐะผะตั‚ั€ะฐั…, best_score ะปัƒั‡ัˆะตะน ะผะพะดะตะปะธ ะพั‚ะปะธั‡ะฐะตั‚ัั ะพั‚ ะฒะฐั€ะธะฐะฝั‚ะพะฒ ะฒ ะพั‚ะฒะตั‚ะฐั…. ะŸะพะดั‚ะฒะตั€ะถะดะตะฝะพ ะทะฐะฟัƒัะบะพะผ ะฝะตัะบะพะปัŒะบะธะผะธ ัƒั‡ะฐัั‚ะฝะธะบะฐะผะธ. ะ’ะพะทะผะพะถะฝะพ ะฒะปะธััŽั‚ ะบะพะฝะบั€ะตั‚ะฝั‹ะต ะฒะตั€ัะธะธ ะฟะฐะบะตั‚ะพะฒ ะฝะฐ ั€ะฐัั‡ะตั‚ั‹. ะœะพะณัƒ ะฟั€ะธะปะพะถะธั‚ัŒ ipynb, ะฝะฐ ะบะพั‚ะพั€ะพะผ ะฒะพัะฟั€ะพะธะทะฒะพะดะธั‚ัั.
closed
2017-04-03T08:43:37Z
2017-04-03T08:52:22Z
https://github.com/Yorko/mlcourse.ai/issues/22
[]
coodix
2
FlareSolverr/FlareSolverr
api
988
[yggtorrent] (testing) Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs
### Have you checked our README? - [X] I have checked the README ### Have you followed our Troubleshooting? - [X] I have followed your Troubleshooting ### Is there already an issue for your problem? - [X] I have checked older issues, open and closed ### Have you checked the discussions? - [X] I have read the Discussions ### Environment ```markdown - FlareSolverr version: 3.3.10 - Last working FlareSolverr version: 3.3.10 - Operating system: OMV - Are you using Docker: [yes/no] yes - FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 - Are you using a VPN: [yes/no] no - Are you using a Proxy: [yes/no] no - Are you using Captcha Solver: [yes/no] no - If using captcha solver, which one: - URL to test this issue: ``` ### Description in jackett I run the test on yggtorrent and get this error ![image](https://github.com/FlareSolverr/FlareSolverr/assets/130233722/8e63b36a-b849-4d47-92ea-95cf6f21609d) ### Logged Error Messages ```text Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. 
Message: unknown error: net::ERR_CONNECTION_REFUSED\n (Session info: chrome=119.0.6045.123)\nStacktrace:\n#0 0x555fc69c5d23 <unknown>\n#1 0x555fc6693a3e <unknown>\n#2 0x555fc668c5a6 <unknown>\n#3 0x555fc667e345 <unknown>\n#4 0x555fc667fd20 <unknown>\n#5 0x555fc667e8b4 <unknown>\n#6 0x555fc667d717 <unknown>\n#7 0x555fc667d5ac <unknown>\n#8 0x555fc667c097 <unknown>\n#9 0x555fc667c4e4 <unknown>\n#10 0x555fc6696ff2 <unknown>\n#11 0x555fc671e8a6 <unknown>\n#12 0x555fc6705c92 <unknown>\n#13 0x555fc671e288 <unknown>\n#14 0x555fc6705a43 <unknown>\n#15 0x555fc66cfbc6 <unknown>\n#16 0x555fc66d0fb2 <unknown>\n#17 0x555fc699ada7 <unknown>\n#18 0x555fc699de8d <unknown>\n#19 0x555fc699d938 <unknown>\n#20 0x555fc699e3a5 <unknown>\n#21 0x555fc698d1df <unknown>\n#22 0x555fc699e732 <unknown>\n#23 0x555fc6977666 <unknown>\n#24 0x555fc69b6e95 <unknown>\n#25 0x555fc69b707b <unknown>\n#26 0x555fc69c52af <unknown>\n#27 0x7f4b8c43dea7 start_thread\n: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Message: unknown error: net::ERR_CONNECTION_REFUSED\n (Session info: chrome=119.0.6045.123)\nStacktrace:\n#0 0x555fc69c5d23 <unknown>\n#1 0x555fc6693a3e <unknown>\n#2 0x555fc668c5a6 <unknown>\n#3 0x555fc667e345 <unknown>\n#4 0x555fc667fd20 <unknown>\n#5 0x555fc667e8b4 <unknown>\n#6 0x555fc667d717 <unknown>\n#7 0x555fc667d5ac <unknown>\n#8 0x555fc667c097 <unknown>\n#9 0x555fc667c4e4 <unknown>\n#10 0x555fc6696ff2 <unknown>\n#11 0x555fc671e8a6 <unknown>\n#12 0x555fc6705c92 <unknown>\n#13 0x555fc671e288 <unknown>\n#14 0x555fc6705a43 <unknown>\n#15 0x555fc66cfbc6 <unknown>\n#16 0x555fc66d0fb2 <unknown>\n#17 0x555fc699ada7 <unknown>\n#18 0x555fc699de8d <unknown>\n#19 0x555 ``` ### Screenshots ![image](https://github.com/FlareSolverr/FlareSolverr/assets/130233722/17628d4c-030a-4b1f-935d-dac54b945cad)
closed
2023-11-28T00:42:44Z
2023-11-28T15:46:21Z
https://github.com/FlareSolverr/FlareSolverr/issues/988
[ "more information needed" ]
tifo71
8
tflearn/tflearn
data-science
1,094
KeyError: "The name 'Momentum' refers to an Operation not in the graph
When I load a pretrained tflearn model, a KeyError is raised:

```
File "C:\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1810, in import_meta_graph
  **kwargs)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 696, in import_scoped_meta_graph
  ops.prepend_name_scope(value, scope_to_prepend_to_names))
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3035, in as_graph_element
  return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3095, in _as_graph_element_locked
  "graph." % repr(name))
KeyError: "The name 'Momentum' refers to an Operation not in the graph."
```
open
2018-10-20T05:46:43Z
2019-03-21T17:26:20Z
https://github.com/tflearn/tflearn/issues/1094
[]
hongminli
2
public-apis/public-apis
api
3,843
NASA API website
I just looked at the NASA API website and there it asks me to register with an API key. Maybe I am getting something wrong here, but if not, I would love to fix this. I haven't really done much open source work, but I would love to change that. Here is the link: https://api.nasa.gov/ Anyone who wants to join and help can click it.
closed
2024-04-28T02:57:55Z
2024-04-28T03:39:27Z
https://github.com/public-apis/public-apis/issues/3843
[]
Fooooooooool
3
svc-develop-team/so-vits-svc
pytorch
89
[Help]: 4.0 ไธๅทฅไฝœใ€‚ๅœจ่ฝฌๆขๅŽ็š„้Ÿณ้ข‘ไธญๅผ•ๅ…ฅไธ้œ€่ฆ็š„ๅคฑ็œŸใ€‚ๆบ้Ÿณ้ซ˜ๆœชๆญฃ็กฎ่ฝฌๆขใ€‚4.0 Not working. introducing unwanted distortion in converted audio. source pitch not properly converted.
### Please check the checkboxes below. - [X] I have read *[README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README.md)* and *[Quick solution in wiki](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution)* carefully. - [X] I have been troubleshooting issues through various search engines. The questions I want to ask are not common. - [X] I am NOT using one click package / environment package. ### OS version Linux e2e-99-151 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux ### GPU +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.60.13 Driver Version: 525.60.13 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA A100 80G... 
Off | 00000000:01:01.0 Off | On | | N/A 36C P0 93W / 300W | 10627MiB / 81920MiB | N/A Default | | | | Enabled | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | MIG devices: | +------------------+----------------------+-----------+-----------------------+ | GPU GI CI MIG | Memory-Usage | Vol| Shared | | ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG| | | | ECC| | |==================+======================+===========+=======================| | 0 3 0 0 | 8324MiB / 19968MiB | 14 0 | 1 0 1 0 0 | | | 2MiB / 32767MiB | | | +------------------+----------------------+-----------+-----------------------+ | 0 4 0 1 | 6MiB / 19968MiB | 14 0 | 1 0 1 0 0 | | | 0MiB / 32767MiB | | | +------------------+----------------------+-----------+-----------------------+ | 0 5 0 2 | 6MiB / 19968MiB | 14 0 | 1 0 1 0 0 | | | 0MiB / 32767MiB | | | +------------------+----------------------+-----------+-----------------------+ | 0 6 0 3 | 2290MiB / 19968MiB | 14 0 | 1 0 1 0 0 | | | 2MiB / 32767MiB | | | +------------------+----------------------+-----------+-----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 3 0 16814 C /root/anaconda3/bin/python 8312MiB | | 0 6 0 5768 C python3 2276MiB | +-----------------------------------------------------------------------------+ ### Python version Python 3.8.16 ### PyTorch version Name: torch Version: 1.13.1 Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration Home-page: https://pytorch.org/ Author: PyTorch Team Author-email: packages@pytorch.org License: BSD-3 Location: /root/anaconda3/envs/SOVITS/lib/python3.8/site-packages Requires: nvidia-cublas-cu11, nvidia-cuda-nvrtc-cu11, 
nvidia-cuda-runtime-cu11, nvidia-cudnn-cu11, typing-extensions Required-by: fairseq, torchaudio, torchvision, triton ### Branch of sovits 4.0(Default) ### Dataset source (Used to judge the dataset quality) Recorded in recording studio ### Where thr problem occurs or what command you executed python inference_main.py -m "logs/44k/G_200000.pth" -c "configs/config.json" -n "source.wav" -t 0 -s "aki" -a -cr 0.5 ### Problem description introducing unwanted distortion in converted audio. source pitch not properly converted. 4.0 version not working properly. Please help me 4.0 ไธๅทฅไฝœใ€‚ๅœจ่ฝฌๆขๅŽ็š„้Ÿณ้ข‘ไธญๅผ•ๅ…ฅไธ้œ€่ฆ็š„ๅคฑ็œŸใ€‚ๆบ้Ÿณ้ซ˜ๆœชๆญฃ็กฎ่ฝฌๆขใ€‚ ่ฏทๅธฎๆˆ‘ ### Log ```python python inference_main.py -m "logs/44k/G_200000.pth" -c "configs/config.json" -n "source.wav" -t 0 -s "aki" -a -cr 0.5 load model(s) from hubert/checkpoint_best_legacy_500.pt INFO:fairseq.tasks.text_to_speech:Please install tensorboardX: pip install tensorboardX INFO:fairseq.tasks.hubert_pretraining:current directory is /root/Experiments/NewExperiments/so-vits-svc-4.0-mean-spk-emb INFO:fairseq.tasks.hubert_pretraining:HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': 'metadata', ' fine_tuning': False, 'labels': ['km'], 'label_dir': 'label', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': Fals e, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False} INFO:fairseq.models.hubert.hubert:HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': default, 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activati on_fn': gelu, 'layer_type': transformer, 'dropout': 0.1, 'attention_dropout': 0.1, 'activation_dropout': 0.0, 'encoder_l ayerdrop': 0.05, 'dropout_input': 0.1, 'dropout_features': 0.1, 'final_dim': 256, 'untie_final_proj': True, 'layer_norm_ first': False, 
'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_tem p': 0.1, 'target_glu': False, 'feature_grad_mult': 0.1, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, ' mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'm ask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1 , 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': Fals e, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', ' pos_enc_type': 'abs', 'fp16': False} load INFO:root:Loaded checkpoint 'logs/44k/G_200000.pth' (iteration 574) spk_list ===> ['sai_dharam_tej'] #=====segment start, 5.82s====== vits use time:1.1171600818634033 #=====segment start, 3.4s====== vits use time:0.19401907920837402 #=====segment start, 0.025s====== jump empty segment #=====segment start, 4.6s====== vits use time:0.269317626953125 #=====segment start, 5.0s====== vits use time:0.1455228328704834 #=====segment start, 4.28s====== vits use time:0.12343764305114746 #=====segment start, 0.012s====== jump empty segment #=====segment start, 8.92s====== vits use time:0.17324042320251465 #=====segment start, 5.66s====== vits use time:0.14951491355895996 #=====segment start, 1.66s====== vits use time:0.1753239631652832 #=====segment start, 0.013s====== jump empty segment #=====segment start, 7.78s====== vits use time:0.14623379707336426 #=====segment start, 6.74s====== vits use time:0.12516403198242188 #=====segment start, 0.009s====== jump empty segment #=====segment start, 5.1s====== vits use time:0.18128347396850586 #=====segment start, 0.002s====== jump empty segment #=====segment start, 6.12s====== vits use time:0.1354503631591797 #=====segment start, 6.46s====== vits use time:0.1435403823852539 #=====segment start, 
7.08s====== vits use time:0.12348651885986328 #=====segment start, 4.66s====== vits use time:0.1376965045928955 #=====segment start, 5.56s====== vits use time:0.1471116542816162 #=====segment start, 0.013s====== jump empty segment #=====segment start, 7.1s====== vits use time:0.18711566925048828 #=====segment start, 0.01s====== jump empty segment #=====segment start, 5.484s====== vits use time:0.13010954856872559 #=====segment start, 6.54s====== vits use time:0.11691427230834961 #=====segment start, 8.96s====== vits use time:0.20655536651611328 #=====segment start, 6.52s====== vits use time:0.17476463317871094 #=====segment start, 5.14s====== vits use time:0.18201518058776855 #=====segment start, 5.56s====== vits use time:0.10942959785461426 #=====segment start, 13.42s====== vits use time:0.26901769638061523 #=====segment start, 0.009s====== jump empty segment ``` ### Screenshot `so-vits-svc` and `logs/44k` folders and paste here # SOURCE ![source_sovits](https://user-images.githubusercontent.com/35978784/227768099-d438d57a-3c35-4cc6-b9bc-5698155d49dd.png) # CONVERTED ![converted_highlighted](https://user-images.githubusercontent.com/35978784/227768599-6fa9d773-be04-4f67-bc07-955f46da813a.jpeg) ### Supplementary description Almost all the converted samples pitch not converted properly. it is a major issue. please check this immediately friends ๅ‡ ไนŽๆ‰€ๆœ‰่ฝฌๆขๅŽ็š„ๆ ทๆœฌ้Ÿณ้ซ˜้ƒฝๆฒกๆœ‰ๆญฃ็กฎ่ฝฌๆขใ€‚่ฟ™ๆ˜ฏไธ€ไธช้‡ๅคง้—ฎ้ข˜ใ€‚่ฏทๆœ‹ๅ‹ไปฌ็ซ‹ๅณๆŸฅ็œ‹
open
2023-03-26T09:57:16Z
2023-04-10T07:46:36Z
https://github.com/svc-develop-team/so-vits-svc/issues/89
[ "help wanted" ]
MuruganR96
6
proplot-dev/proplot
matplotlib
288
Manually specify `title` and `abc` coordinate positions
### Description

Hi, is it possible to make the abc labels slightly offset to the left from the axis? This would probably be a negative position.

<img width="306" alt="image" src="https://user-images.githubusercontent.com/8291800/134991597-a6722340-8aab-4878-a345-928175343d40.png">

I was hoping to have the (b) moved slightly left so that I can center the title without the two texts crashing into each other. I tried the following after having all of my "imshow" and other formatting code run:

```python
ax = axes[2]
aobj = ax._title_dict['abc']
print(aobj.get_position())  # prints (0, 1.0)

# no effect
aobj.set_x(-.25)

# no effect
abc = ax.get_children()[1]
abc.set_position((-.25, 1.0))
```

I couldn't figure out what was running to overwrite these positions, but I assume it's something internal to proplot to make the layout nice and orderly.

### Proplot version

```
>>> import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)
3.3.0
0.9.1
```
open
2021-09-27T22:08:14Z
2022-07-08T15:54:21Z
https://github.com/proplot-dev/proplot/issues/288
[ "feature" ]
scottstanie
6
aiortc/aioquic
asyncio
148
Datagrams only getting sent every 15s
Hello, I'm playing around with forwarding audio/video using aioquic as a server. Since I want the data to be delivered as quickly as possibly and unreliably, I'm using datagrams. aioquic basically receives datagrams from a "sender", and then sends them to however many "subscribers." While the sender -> server datagrams are being received synchronously and the server -> subscriber sends are being called synchronously, I'm seeing on the client / in Wireshark that the server -> subscriber datagrams are only getting bulk sent every 15s. It looks like this is due to the ack_at timer. Instead, I'd like for the datagrams to be sent immediately upon `send_datagram_frame`. Does anyone have any ideas of what I can do to fix this? The code is below (it's based on [this example](https://github.com/GoogleChrome/samples/blob/gh-pages/quictransport/quic_transport_server.py)). ``` import argparse from datetime import datetime import random import asyncio import io import os import struct import urllib.parse from collections import defaultdict from typing import Dict, Optional from aioquic.asyncio import QuicConnectionProtocol, serve from aioquic.quic.configuration import QuicConfiguration from aioquic.quic.connection import QuicConnection, END_STATES from aioquic.quic.events import StreamDataReceived, StreamReset, DatagramFrameReceived, QuicEvent, ConnectionTerminated from aioquic.tls import SessionTicket BIND_ADDRESS = '::1' BIND_PORT = 4433 ALLOWED_ORIGINS = {'localhost'} streams = defaultdict(dict) class SendHandler: def __init__(self, stream_id, connection) -> None: self.stream_id = stream_id print('New send', self.stream_id) self.connection = connection def quic_event_received(self, event: QuicEvent) -> None: if isinstance(event, DatagramFrameReceived): print(' from', self.stream_id) subs = streams[self.stream_id] for sub in subs.values(): sub.handle(event) if isinstance(event, ConnectionTerminated): print('ConnectionTerminated send', self.stream_id) self.close() def 
close(self) -> None: print('Send closed', self.stream_id) class SubHandler: def __init__(self, stream_id, connection) -> None: self.stream_id = stream_id self.sub_id = random.random() print('New sub', self.stream_id, self.sub_id) self.connection = connection streams[self.stream_id][self.sub_id] = self def handle(self, event: QuicEvent) -> None: print(' f to', self.stream_id, self.sub_id) self.connection.send_datagram_frame(event.data) print(self.connection.get_timer()) def close(self) -> None: print('Sub closed', self.stream_id, self.sub_id) del streams[self.stream_id][self.sub_id] def quic_event_received(self, event: QuicEvent) -> None: if isinstance(event, ConnectionTerminated): print('ConnectionTerminated sub', self.stream_id, self.sub_id) self.close() # QuicTransportProtocol handles the beginning of a QuicTransport connection: it # parses the incoming URL, and routes the transport events to a relevant # handler. It does that by waiting for a # client indication (a special stream with protocol headers), and buffering all # unrelated events until the client indication can be fully processed. class QuicTransportProtocol(QuicConnectionProtocol): def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) self.pending_events = [] self.handler = None self.client_indication_data = b'' def quic_event_received(self, event: QuicEvent) -> None: try: if self.is_closing_or_closed(): if self.handler: print('closing?') self.handler.close() self.handler = None return if self.handler is not None: self.handler.quic_event_received(event) return if isinstance(event, StreamDataReceived) and event.stream_id == 2: self.client_indication_data += event.data if event.end_stream: self.process_client_indication() if self.is_closing_or_closed(): return # Pass all buffered events into the handler now that it's # available. 
for e in self.pending_events: self.handler.quic_event_received(e) self.pending_events.clear() else: self.pending_events.append(event) except Exception as e: if self.handler: print('closing bc exception:', e) self.handler.close() self.handler = None self.close() # Client indication follows a "key-length-value" format, where key and # length are 16-bit integers. See # https://tools.ietf.org/html/draft-vvv-webtransport-quic-01#section-3.2 def parse_client_indication(self, bs): while True: prefix = bs.read(4) if len(prefix) == 0: return # End-of-stream reached. if len(prefix) != 4: raise Exception('Truncated key-length tag') key, length = struct.unpack('!HH', prefix) value = bs.read(length) if len(value) != length: raise Exception('Truncated value') yield (key, value) def process_client_indication(self) -> None: KEY_ORIGIN = 0 KEY_PATH = 1 indication = dict( self.parse_client_indication(io.BytesIO( self.client_indication_data))) origin = urllib.parse.urlparse(indication[KEY_ORIGIN].decode()) path = urllib.parse.urlparse(indication[KEY_PATH]).decode() # Verify that the origin host is allowed to talk to this server. This # is similar to the CORS (Cross-Origin Resource Sharing) mechanism in # HTTP. See <https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS>. if origin.hostname not in ALLOWED_ORIGINS: raise Exception('Wrong origin specified') # Dispatch the incoming connection based on the path specified in the # URL. 
pieces = path.path.split("/") print('got a client indication', path.path, pieces) stream_id = pieces[2] if pieces[1] == 'send': self.handler = SendHandler(stream_id, self._quic) elif pieces[1] == 'sub': self.handler = SubHandler(stream_id, self._quic) else: raise Exception('Unknown path') def is_closing_or_closed(self) -> bool: return self._quic._close_pending or self._quic._state in END_STATES if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('certificate') parser.add_argument('key') args = parser.parse_args() configuration = QuicConfiguration( alpn_protocols=['wq-vvv-01'], is_client=False, max_datagram_frame_size=1500, ) configuration.load_cert_chain(args.certificate, args.key) print('Running loop') loop = asyncio.get_event_loop() loop.run_until_complete( serve( BIND_ADDRESS, BIND_PORT, configuration=configuration, create_protocol=QuicTransportProtocol, )) loop.run_forever() ```
closed
2020-11-03T18:45:10Z
2021-11-21T17:55:48Z
https://github.com/aiortc/aioquic/issues/148
[]
dsafreno
7
ITCoders/Human-detection-and-Tracking
numpy
11
Port to C++
**Port this repo to C++**

- The code should be written in C++.
- Make a separate branch for this.
- Functionality should be the same and performance should be better.
- Useful scripts in the scripts folder should also be ported.
- Multiple pull requests are allowed for this, but make sure that you are on the correct path :smile: :smile:
closed
2016-10-21T18:41:38Z
2017-05-18T20:44:20Z
https://github.com/ITCoders/Human-detection-and-Tracking/issues/11
[]
arpit1997
6
OFA-Sys/Chinese-CLIP
computer-vision
162
ๆขฏๅบฆ็ดฏ็งฏไธญ็š„้—ฎ้ข˜
ๅ…ณไบŽๆขฏๅบฆ็ดฏ็งฏ็š„ไปฃ็ ๏ผŒๆœ‰ๅ‡ ็‚นไธๅคชๆ˜Ž็™ฝ๏ผŒๆƒณ่ฏทๆ•™ไธ€ไธ‹ใ€‚ 1. accum_image_features ๅ’Œaccum_text_featuresๅทฒ็ปๅพ—ๅˆฐไบ†๏ผŒไธบไป€ไนˆๅœจget_lossไธญๅˆ่ฆ้‡ๆ–ฐ่ฎก็ฎ—ไธ€ไธชbatch็š„็‰นๅพใ€‚่ฟ™ๆ ท็š„่ฏ๏ผŒๆ‰€ๆœ‰็š„็‰นๅพ้ƒฝ่ขซ่ฎก็ฎ—ไบ†ไธคๆฌก๏ผŒ้€ ๆˆ่ต„ๆบ็š„ๆตช่ดน 2. get_loss่ฟ›่กŒไบ†accum_freqๆฌก๏ผŒไฝ†ๆ˜ฏๆฏๆฌก็š„accum_image_featuresๅ’Œaccum_text_featuresๅนถๆฒกๆœ‰ๅ‘็”Ÿๅ˜ๅŒ–ใ€‚loss็š„่ฎก็ฎ—ๆ˜ฏๅฆๆ˜ฏ้‡ๅคไบ†๏ผŸ ็›ธๅ…ณไปฃ็ ๅฆ‚ไธ‹๏ผš # First, cache the features without any gradient tracking. with torch.no_grad(): with autocast(enabled=(args.precision == "amp")): chunk_image_features, chunk_text_features, _ = model(images, texts) accum_image_features.append(chunk_image_features) accum_text_features.append(chunk_text_features) accum_images.append(images) accum_texts.append(texts) # If (i + 1) % accum_freq is not zero, move on to the next batch. if ((i + 1) % args.accum_freq) > 0: # FIXME this makes data time logging unreliable when accumulating continue # Now, ready to take gradients for the last accum_freq batches. # Re-do the forward pass for those batches, and use the cached features from the other batches as negatives. # Call backwards each time, but only step optimizer at the end. optimizer.zero_grad() for j in range(args.accum_freq): images = accum_images[j] texts = accum_texts[j] with autocast(enabled=(args.precision == "amp")): # `total_loss` and `acc` are coarsely sampled, taking only the last result in the loop. # Although each result should be the same in theory, it will be slightly different in practice total_loss, acc = get_loss(model, images, texts, loss_img, loss_txt, args, accum_image_features, accum_text_features, j) if args.precision == "amp": scaler.scale(total_loss).backward() else: total_loss.backward() if args.precision == "amp": scaler.step(optimizer) scaler.update() else: optimizer.step() `
open
2023-07-14T08:59:21Z
2023-07-29T08:07:41Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/162
[]
ChaoLi977
1
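On question 1 in the record above, a plausible reading (an assumption, not confirmed by the Chinese-CLIP maintainers): the cached features are produced under `torch.no_grad()`, so they carry no autograd graph — the forward pass must be redone for the chunk being back-propagated, while the cached copies only serve as extra negatives. A framework-free sketch of the plain accumulation arithmetic (a toy squared-error model, not the CLIP contrastive loss, which couples samples across chunks):

```python
def grad(w, batch):
    # d/dw of the mean squared error of the linear model y_hat = w * x.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 9.0)]
chunks = [data[:2], data[2:]]  # two equal-sized "micro-batches"

w = 0.5
g_full = grad(w, data)
# Averaging per-chunk gradients reproduces the full-batch gradient,
# which is the point of accumulating before optimizer.step().
g_accum = sum(grad(w, c) for c in chunks) / len(chunks)
print(abs(g_full - g_accum) < 1e-9)  # prints True
```

For a contrastive loss this decomposition no longer holds per-chunk, which is presumably why the repo's code replays each chunk's forward pass against the full cached feature bank rather than back-propagating through the cache.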
feder-cr/Jobs_Applier_AI_Agent_AIHawk
automation
240
Not want to add a section
If I don't have a title blacklist, do I delete "word1" and "word2" and leave that area empty? Do I put "N/A"? Let me know. I also don't want to put in my GPA, can I leave that slot blank as well?
closed
2024-09-02T18:18:45Z
2024-09-03T16:04:40Z
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/240
[]
pacman20011
1
thtrieu/darkflow
tensorflow
996
bash: flow: command not found
```
[root@localhost darkflow]# flow -h
bash: flow: command not found
[root@localhost darkflow]# sudo flow -h
sudo: flow: command not found
[root@localhost darkflow]# python3 setup.py build_ext
running build_ext
[root@localhost darkflow]# sudo pip3 install -e .
Obtaining file:///var/tmp/darkflow
Installing collected packages: darkflow
  Found existing installation: darkflow 1.0.0
    Uninstalling darkflow-1.0.0:
      Successfully uninstalled darkflow-1.0.0
  Running setup.py develop for darkflow
Successfully installed darkflow
You are using pip version 9.0.3, however version 19.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
[root@localhost darkflow]# flow -h
bash: flow: command not found
```

What's wrong?
open
2019-03-09T09:37:56Z
2020-05-13T07:52:46Z
https://github.com/thtrieu/darkflow/issues/996
[]
bewithme
4
BeanieODM/beanie
pydantic
769
[BUG] Pylance strict mode: type of "delete_all" is partially unknown
When using pylance in strict mode, a lot of static document methods have a typing issue:

```
Type of "delete_all" is partially unknown
Type of "delete_all" is "(session: ClientSession | None = None, bulk_writer: BulkWriter | None = None, **pymongo_kwargs: Unknown) -> Coroutine[Any, Any, DeleteResult | None]"Pylance[reportUnknownMemberType](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportUnknownMemberType)
```

This is not only for `delete_all` but also for:

- `delete`
- `insert`
- `insert_many`
- ...

And probably more.

**To Reproduce**

In VSCode, set pylance type checking mode to `strict`

```python
class Sample(Document):
    name: str
    age: int
    is_active: bool

    _id: Optional[PydanticObjectId] = None

Sample.delete_all()  # pylance issue
```

**Expected behavior**

I would like to be able to use beanie document functions while using pylance strict type checking mode. Is there a way to fix this apart from ignoring typing on the whole line with `# type: ignore`?
closed
2023-11-08T15:32:00Z
2024-10-07T18:29:45Z
https://github.com/BeanieODM/beanie/issues/769
[ "Stale" ]
dotKokott
4
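One interim answer to the closing question above (an assumption about pyright behavior, not a beanie fix): pyright accepts rule-scoped suppressions via `# pyright: ignore[rule]`, which silences only the named diagnostic rather than all type checking on the line. A stdlib-only stand-in for the partially-unknown call site:

```python
from typing import Any

def delete_all_stand_in(**pymongo_kwargs: Any) -> None:
    """Stand-in for a method whose **kwargs leave its type partially unknown."""
    return None

# Suppress only the one diagnostic on this line instead of the whole line:
result = delete_all_stand_in(session=None)  # pyright: ignore[reportUnknownMemberType]
print(result)  # prints None
```

The comment is inert at runtime; it only affects pyright/Pylance, so the sketch runs identically with or without it.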
lukas-blecher/LaTeX-OCR
pytorch
323
incompatible error
Cannot mix incompatible Qt library (6.5.3) with this library (6.5.2)
open
2023-10-11T12:59:12Z
2023-10-11T14:27:24Z
https://github.com/lukas-blecher/LaTeX-OCR/issues/323
[]
KarnanBala
1
modin-project/modin
pandas
7,191
Fix ASV after changing default branch: "master" -> "main"
closed
2024-04-16T18:28:16Z
2024-04-16T20:40:16Z
https://github.com/modin-project/modin/issues/7191
[ "Benchmarking ๐Ÿ", "Testing ๐Ÿ“ˆ", "P0" ]
anmyachev
0
scanapi/scanapi
rest-api
435
Remove generic exception and raise a more dedicated exception that isn't very common for Class EndpointNode::run()
The [run method](https://github.com/scanapi/scanapi/blob/main/scanapi/tree/endpoint_node.py#L94-L94) in the `EndpointNode` class catches a very generic `Exception`. We would need to be precise and catch a particular exception; as written, the handler is likely to swallow many unrelated errors too.

```python
def run(self):
    for request in self._get_requests():
        try:
            yield request.run()
        except Exception as e:
            error_message = f"\nError to make request `{request.full_url_path}`. \n{str(e)}\n"
            logger.error(error_message)
            session.exit_code = ExitCode.REQUEST_ERROR
            continue
```
closed
2021-07-29T12:43:27Z
2022-04-10T13:01:17Z
https://github.com/scanapi/scanapi/issues/435
[ "Refactor", "Code Quality" ]
Pradhvan
1
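A minimal sketch of the direction the issue above asks for — narrowing the `except` to a tuple of expected failure types and letting everything else propagate. The exception names and helper here are illustrative assumptions, not scanapi's actual types:

```python
import logging

logger = logging.getLogger(__name__)

class RequestRunError(Exception):
    """Hypothetical dedicated error raised when a single request fails."""

# Only the failure modes we expect from running a request are handled here.
EXPECTED_ERRORS = (ConnectionError, TimeoutError, ValueError)

def run_one(request_fn):
    try:
        return request_fn()
    except EXPECTED_ERRORS as e:
        # Known failures are logged and wrapped; unknown bugs are not swallowed.
        logger.error("Error to make request: %s", e)
        raise RequestRunError(str(e)) from e

def failing():
    raise ValueError("bad response")

try:
    run_one(failing)
except RequestRunError as e:
    print(e)  # prints: bad response
```

An unexpected error such as `KeyError` would escape `run_one` unchanged, which is exactly the behavior the generic `except Exception` currently hides.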
tradingstrategy-ai/web3-ethereum-defi
pytest
5
Get Aave lending and borrow rates directly from on-chain
[Aave has deployed its v3 version on Ethereum mainnet and Polygon](https://docs.aave.com/developers/getting-started/readme). [See Ethereum markets - MetaMask required](https://app.aave.com/?marketName=proto_mainnet) <img width="1310" alt="image" src="https://user-images.githubusercontent.com/49922/177447328-c157fa9e-7a6e-4b0e-9264-c52535f4b101.png"> The market has two sides - What lenders receive as a payment, called supply APR - What borrowers must pay for having a loan, called borrow APR ## Business use case - Aave lending markets can be used to build a short position of tokens - By knowing how much borrowing costs, we know what is the cost of maintaining a short position We need to be able to answer the questions - How much having a loan for a token e.g. [ETH](https://app.aave.com/reserve-overview/?underlyingAsset=0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2&marketName=proto_mainnet) would have cost us historically - Opening amoun - Opening date - Closing date - Interest payments accumulated (how to calculate?) - How much having a loan for a token would cost us right now (current interest rate) Note that the interest rate is paid in the pool native token. E.g. ETH will accrue ETH interest and needs to be converted to US Dollar in some point. ## Task - Create a proof of concept Aave lending and borrow rate fetcher in Python - It must be a self-contained notebook similar to [Uniswap v3 price example](https://web3-ethereum-defi.readthedocs.io/tutorials/uniswap-v3-price-analysis.html) - Study how Aave v3 lending and borrow rates are reported. The rates are variable over time. - Make short README documentation of available events - Use a [event reader](https://web3-ethereum-defi.readthedocs.io/api/_autosummary_block_reader/eth_defi.event_reader.reader.html) Python module to read this data to a CSV file - Plot a graph using Plotly for the lending and borrow rate of one market, e.g. 
[ETH](https://app.aave.com/reserve-overview/?underlyingAsset=0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2&marketName=proto_mainnet) - The graph matches what we have on the Aave website ## References - See https://github.com/PatrickAlphaC/aave_web3_py - See [Aave v2 lending rate query using Subgraph](https://github.com/tradingstrategy-ai/ethlisbon/blob/master/aave-lending-rate-query.py) - See [Aave v2 Python tests](https://github.com/aave/tests-protocol-v2-sigmaprime/blob/3eabcdb79ae7a90d4bd3f1a0cf91e4c6d8f9b2ed/tests/test_lending_pool.py) - [Third party Aave Python client](https://github.com/PathX-Projects/Aave-DeFi-Client) Aave displays borrow rate on [its main UI](https://app.aave.com/reserve-overview/?underlyingAsset=0x0bc529c00c6401aef6d220be8c6ea1667f6ad93e&marketName=proto_mainnet): <img width="853" alt="image" src="https://user-images.githubusercontent.com/49922/177447759-f468749f-577a-4647-a385-2090be9f1e23.png"> Aave is internally displaying this information on its website using [RatesHistory API](https://github.com/aave/aave-api/blob/master/src/services/RatesHistory.ts). [Aave internal MongoDB schema is here](https://github.com/aave/aave-api/blob/70dde8a8119dfbdf33fd0708af18776a794a2b40/src/repositories/mongodb/models/Rate.ts#L4). It is using a [cron job to update rates](https://github.com/aave/aave-api/blob/master/src/cron-jobs/reserveHistory-updates.ts) by reading them from [Subgraph](https://github.com/aave/aave-api/blob/master/src/services/RatesHistory.ts#L158). [Aave Subgraph code is an undocumented mess](https://github.com/aave/aave-api/blob/master/src/repositories/subgraph/v2Client.ts). [The rate is probably determined by this event](https://github.com/aave/aave-v3-core/blob/master/contracts/interfaces/IPool.sol#L197). See [GithubSearch](https://github.com/search?q=ReserveDataUpdated&type=code) for ReserveDataUpdated.
closed
2022-03-06T13:13:40Z
2022-09-25T03:41:26Z
https://github.com/tradingstrategy-ai/web3-ethereum-defi/issues/5
[]
miohtama
0
pallets-eco/flask-sqlalchemy
sqlalchemy
567
Add "first" and "last" to Pagination class?
I'm new to Flask, and was wondering about a feature that I think would be useful to add to the Pagination class. Unless I'm missing it, there doesn't seem to be a built-in way to get the numbers of the items you're viewing on the page itself. That is, if you want to display "387 records found; displaying 26–50", you'd have to do calculations to get the "26" and "50". I think something along these lines would work:

```
@property
def first(self):
    """The number of the first item on the current page"""
    if self.total == 0:
        first = 0
    else:
        first = ((self.page - 1) * self.per_page) + 1
    return first

@property
def last(self):
    """The number of the last item on the current page"""
    if self.page == self.pages:
        last = self.total
    else:
        last = self.page * self.per_page
    return last
```

Then you could do something like (in jinja2):

```
{{ pagination.total }} records found; displaying {{ pagination.first }} – {{ pagination.last }}
```

I'm new to Python programming, and I don't know enough about testing, running changed code, etc., to feel comfortable submitting this as a pull request, so I hope it's OK that I just posted this as an Issue. Thanks.
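The arithmetic in the proposed properties can be checked standalone. The class below is a hypothetical stand-in with just the fields the proposal relies on, not Flask-SQLAlchemy's actual `Pagination`:

```python
import math

class FakePagination:
    """Hypothetical stand-in carrying only page, per_page, total, pages."""
    def __init__(self, page, per_page, total):
        self.page, self.per_page, self.total = page, per_page, total
        self.pages = max(0, math.ceil(total / per_page))

    @property
    def first(self):
        # Number of the first item on the current page
        return 0 if self.total == 0 else (self.page - 1) * self.per_page + 1

    @property
    def last(self):
        # Number of the last item on the current page (clamped on the last page)
        return self.total if self.page == self.pages else self.page * self.per_page

p = FakePagination(page=2, per_page=25, total=387)
print(f"{p.total} records found; displaying {p.first} - {p.last}")  # 26 - 50
```

Page 2 of 387 records at 25 per page gives exactly the "displaying 26–50" from the example, and the final page (16) clamps `last` to 387.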
closed
2017-11-17T16:39:16Z
2022-10-03T00:21:38Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/567
[ "pagination" ]
jessesheidlower
1
deepinsight/insightface
pytorch
2,249
The warning from `onnxruntime::VerifyEachNodeIsAssignedToAnEp`
I got this when I try to run insightface on Windows 10 with CUDA/cuDNN:

```
2023-02-18 00:41:44.1541586 [W:onnxruntime:, session_state.cc:1136 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-02-18 00:41:44.1608231 [W:onnxruntime:, session_state.cc:1138 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
```

The warning msg came from [here](https://github.com/microsoft/onnxruntime/blob/4bb95d7690bac6b25622dbef5b711c15ffb00eee/onnxruntime/core/framework/session_state.cc#L1136). I know this doesn't seem like insightface's business, but according to the comments above the code that generates the warning msg:

> If the user explicitly included the CPU provider anyway, then remain silent, but if it was implicitly added, and unexpected fallback happened to a non-preferred provider, warn the user.

It seems that if I set `['CUDAExecutionProvider', 'CPUExecutionProvider']` as the `providers` arg of `insightface.app.FaceAnalysis()`, this warning msg should not be raised, but it is raised whether or not I remove `'CPUExecutionProvider'` from `providers`. Any tips for checking this issue?
closed
2023-02-17T17:10:32Z
2024-07-12T16:12:45Z
https://github.com/deepinsight/insightface/issues/2249
[]
Chris-fullerton
7
lexiforest/curl_cffi
web-scraping
121
ERROR: Failed building wheel for curl_cffi
When trying to install **curl_cffi** in Termux, via the `pip install curl_cffi` command, I get the error: ``` Collecting curl_cffi Using cached curl_cffi-0.5.7.tar.gz (27 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: cffi>=1.12.0 in /data/data/com.termux/files/usr/lib/python3.11/site-packages (from curl_cffi) (1.15.1) Requirement already satisfied: pycparser in /data/data/com.termux/files/usr/lib/python3.11/site-packages (from cffi>=1.12.0->curl_cffi) (2.21) Building wheels for collected packages: curl_cffi Building wheel for curl_cffi (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for curl_cffi (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [91 lines of output] running bdist_wheel running build running build_py creating build creating build/lib.linux-aarch64-cpython-311 creating build/lib.linux-aarch64-cpython-311/curl_cffi copying curl_cffi/const.py -> build/lib.linux-aarch64-cpython-311/curl_cffi copying curl_cffi/build.py -> build/lib.linux-aarch64-cpython-311/curl_cffi copying curl_cffi/curl.py -> build/lib.linux-aarch64-cpython-311/curl_cffi copying curl_cffi/aio.py -> build/lib.linux-aarch64-cpython-311/curl_cffi copying curl_cffi/__init__.py -> build/lib.linux-aarch64-cpython-311/curl_cffi creating build/lib.linux-aarch64-cpython-311/curl_cffi/requests copying curl_cffi/requests/cookies.py -> build/lib.linux-aarch64-cpython-311/curl_cffi/requests copying curl_cffi/requests/errors.py -> build/lib.linux-aarch64-cpython-311/curl_cffi/requests copying curl_cffi/requests/headers.py -> build/lib.linux-aarch64-cpython-311/curl_cffi/requests copying curl_cffi/requests/__init__.py ->
build/lib.linux-aarch64-cpython-311/curl_cffi/requests copying curl_cffi/requests/session.py -> build/lib.linux-aarch64-cpython-311/curl_cffi/requests running egg_info writing curl_cffi.egg-info/PKG-INFO writing dependency_links to curl_cffi.egg-info/dependency_links.txt writing requirements to curl_cffi.egg-info/requires.txt writing top-level names to curl_cffi.egg-info/top_level.txt reading manifest file 'curl_cffi.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'curl_cffi/cacert.pem' warning: no files found matching 'curl_cffi/_wrapper.*' warning: no files found matching 'curl_cffi/include/curl/*' adding license file 'LICENSE' writing manifest file 'curl_cffi.egg-info/SOURCES.txt' /data/data/com.termux/files/usr/tmp/pip-build-env-wydui0is/overlay/lib/python3.11/site-packages/setuptools/command/build_py.py:204: _Warning: Package 'curl_cffi.ffi' is absent from the packages configuration. !! **************** ############################ # Package would be ignored # ############################ Python recognizes 'curl_cffi.ffi' as an importable package[^1], but it is absent from setuptools' packages configuration. This leads to an ambiguous overall configuration. If you want to distribute this package, please make sure that 'curl_cffi.ffi' is explicitly added to the packages configuration field. Alternatively, you can also rely on setuptools' discovery methods (for example by using find_namespace_packages(...)/find_namespace: instead of find_packages(...)/find:).
You can read more about "package discovery" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html If you don't want 'curl_cffi.ffi' to be distributed and are already explicitly excluding 'curl_cffi.ffi' via find_namespace_packages(...)/find_namespace or find_packages(...)/find, you can try to use exclude_package_data, or include-package-data=False in combination with a more fine grained package-data configuration. You can read more about "package data files" on setuptools documentation page: - https://setuptools.pypa.io/en/latest/userguide/datafiles.html [^1]: For Python, any directory (with suitable naming) can be imported, even if it does not contain any .py files. On the other hand, currently there is no concept of package data directory, all directories are treated like packages. **************** !! check.warn(importable) creating build/lib.linux-aarch64-cpython-311/curl_cffi/ffi copying curl_cffi/ffi/cdef.c -> build/lib.linux-aarch64-cpython-311/curl_cffi/ffi copying curl_cffi/ffi/shim.c -> build/lib.linux-aarch64-cpython-311/curl_cffi/ffi copying curl_cffi/ffi/shim.h -> build/lib.linux-aarch64-cpython-311/curl_cffi/ffi running build_ext generating cffi module 'build/temp.linux-aarch64-cpython-311/curl_cffi._wrapper.c' creating build/temp.linux-aarch64-cpython-311 building 'curl_cffi._wrapper' extension creating build/temp.linux-aarch64-cpython-311/build creating build/temp.linux-aarch64-cpython-311/build/temp.linux-aarch64-cpython-311 creating build/temp.linux-aarch64-cpython-311/curl_cffi creating build/temp.linux-aarch64-cpython-311/curl_cffi/ffi aarch64-linux-android-clang -DNDEBUG -g -fwrapv -O3 -Wall -fstack-protector-strong -O3 -fstack-protector-strong -O3 -fPIC -Icurl_cffi/include -Icurl_cffi/ffi -I/data/data/com.termux/files/usr/include/python3.11 -c build/temp.linux-aarch64-cpython-311/curl_cffi._wrapper.c -o 
build/temp.linux-aarch64-cpython-311/build/temp.linux-aarch64-cpython-311/curl_cffi._wrapper.o build/temp.linux-aarch64-cpython-311/curl_cffi._wrapper.c:884:10: error: call to undeclared function 'curl_easy_impersonate'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] return curl_easy_impersonate(x0, x1, x2); ^ build/temp.linux-aarch64-cpython-311/curl_cffi._wrapper.c:928:14: error: call to undeclared function 'curl_easy_impersonate'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] { result = curl_easy_impersonate(x0, x1, x2); } ^ 2 errors generated. error: command '/data/data/com.termux/files/usr/bin/aarch64-linux-android-clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for curl_cffi Failed to build curl_cffi ERROR: Could not build wheels for curl_cffi, which is required to install pyproject.toml-based projects ```
closed
2023-09-08T09:14:38Z
2023-09-08T09:21:02Z
https://github.com/lexiforest/curl_cffi/issues/121
[ "duplicate" ]
Gertasan
1
AntonOsika/gpt-engineer
python
893
The term 'gpt-engineer' is not recognized
Hey guys, I'm on Windows 11 and I installed via `pip install gpt-engineer`. Everything seems to install fine, but now I get this in PowerShell:

```
gpt-engineer : The term 'gpt-engineer' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
```

How do I add gpt-engineer to the PATH?
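This usually means pip placed the console script in the per-user scripts directory, which is not on PATH. A standard-library sketch to locate that directory (the directory then needs adding to PATH; on Windows the script is `gpt-engineer.exe`):

```python
import os
import sysconfig

# pip installs console scripts into the per-user "scripts" directory when
# site-packages isn't writable. The sysconfig scheme name differs per OS.
scheme = "nt_user" if os.name == "nt" else "posix_user"
scripts_dir = sysconfig.get_path("scripts", scheme)

# Add this directory to PATH, then reopen PowerShell.
print(scripts_dir)
```

`python -m pip show gpt-engineer` also reports where the package landed, which helps confirm which Python installation pip used.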
closed
2023-12-08T21:35:41Z
2023-12-09T01:32:09Z
https://github.com/AntonOsika/gpt-engineer/issues/893
[ "documentation", "triage" ]
wp-coin
2
feder-cr/Jobs_Applier_AI_Agent_AIHawk
automation
486
[FEATURE]: Need an option to hide GPA crap at top of resume
### Feature summary Need an option to hide GPA crap at top of resume ### Feature description Need an option to hide GPA crap at top of resume ### Motivation I graduated from college 30 years ago, GPA is completely irrelevant for me. ### Alternatives considered _No response_ ### Additional context _No response_
closed
2024-10-06T22:58:45Z
2024-10-25T05:29:18Z
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/486
[ "enhancement" ]
ralyodio
0
plotly/dash
dash
3,148
Dash 3.0 feedback
Thanks for making the dash 3.0 pre-release available. ๐ŸŽ‰ Just a couple questions and comments: - Are you still planning on removing the `_timestamp` props? https://github.com/plotly/dash/issues/3055 - The `dcc.Dropdown` is still using `defaultProps` which is causing a warning in the console. There are other console warning with `dcc.Dropdown`, but those aren't new, just wondering if there are plans to fix those too. - In `dcc.Loading`, the `custom_component` shows a typehints warning if you use components other than `html` components. Feature request: - In DMC, we are relying on `renderDashComponents` from the dash-extensions.js library to render components as props defined in `children`. For more details see: https://github.com/plotly/dash/pull/3066#issuecomment-2544729487. Philippe mentioned he could add a `render(component, path)` to `dash_component_api` quite easily, but wanted to wait for another release.
closed
2025-02-03T15:37:26Z
2025-03-06T21:34:49Z
https://github.com/plotly/dash/issues/3148
[ "P1" ]
AnnMarieW
31
killiansheriff/LovelyPlots
matplotlib
3
imshow
Well done for creating this library and getting it compatible with Adobe Illustrator (I've spent too many hours of my life fixing figures!). Are you planning to expand to other plot types like [imshow](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html)? It does export correctly, but the color scheme doesn't seem to change. Also, you may need color schemes appropriate for showing linear gradient changes, like heat maps, Matlab's jet, etc.
closed
2022-07-27T22:09:11Z
2022-07-28T19:29:51Z
https://github.com/killiansheriff/LovelyPlots/issues/3
[]
danchitnis
2
ymcui/Chinese-BERT-wwm
nlp
70
When pretraining RoBERTa, do we need to add CLS SEP SEP as in the original BERT training, or just CLS SEP?
@ymcui
closed
2019-11-07T10:18:28Z
2019-11-08T04:03:21Z
https://github.com/ymcui/Chinese-BERT-wwm/issues/70
[]
xiongma
2
keras-rl/keras-rl
tensorflow
343
Where is the environment specified in DQNAgent
I am trying to understand where exactly in the DQNAgent the environment is specified. I see that there are mentions of self.step, self.recent_observation, self.recent_action, reward and terminal, but I don't see where these are being generated. I am trying to develop my own environment and am trying to understand how it will pass through the system. Also, in the statement `dqn.fit(env, nb_steps=3000, visualize=False, verbose=1)` I don't understand how the environment is passed into the function. Thank you!
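For context, the environment is only supplied at `fit()`/`test()` time; the agent's per-step callbacks are invoked inside the fit loop. A toy sketch of that control flow follows; this is NOT keras-rl's actual code, just simplified stand-ins showing how `fit(env, ...)` drives the forward/backward calls:

```python
class ToyEnv:
    """Minimal gym-style environment: episode ends after 3 steps."""
    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        reward, done = 1.0, self.t >= 3
        return self.t, reward, done, {}

class ToyAgent:
    def __init__(self):
        self.total_reward = 0.0

    def forward(self, observation):
        return 0  # pick an action given the observation

    def backward(self, reward, terminal):
        self.total_reward += reward  # learn from the reward

def fit(agent, env, nb_steps):
    """Sketch of the interaction loop hidden inside Agent.fit(env, ...)."""
    obs = env.reset()
    for _ in range(nb_steps):
        action = agent.forward(obs)
        obs, reward, done, _ = env.step(action)
        agent.backward(reward, done)
        if done:
            obs = env.reset()
    return agent.total_reward

print(fit(ToyAgent(), ToyEnv(), 6))  # 6.0
```

A custom environment only needs the gym-style `reset()`/`step(action)` interface for the loop above to drive it.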
closed
2019-10-18T19:12:50Z
2020-01-24T03:01:03Z
https://github.com/keras-rl/keras-rl/issues/343
[ "wontfix" ]
kdawar1
1
ray-project/ray
tensorflow
50,850
[Dashboard] Serve Grafana panels shows metrics from multiple clusters instead of filtering on SessionName or ray_io_cluster
### What happened + What you expected to happen

**Context**

We are running multiple Ray clusters on version 2.41.0 and sending metrics to a single **common Thanos instance**. Each Ray cluster is launched by the KubeRay operator; we launch multiple Ray clusters by creating RayCluster and RayService custom resources.

**Observed**

The "Cluster Utilization" card in the Ray Dashboard's Overview tab shows CPU, memory and disk metrics for the Ray cluster on which the Dashboard process is running. We noticed that the Dashboard frontend code queries Thanos and includes a filter on SessionName. However, in the Serve tab, the "QPS per application" and "Error QPS per application" panels include applications from multiple clusters; the frontend Thanos queries there do not filter on SessionName or ray_io_cluster.

**Expected**

_The Grafana panels in the Serve tab of the Ray Dashboard should only show applications for the Ray cluster on which the Dashboard is hosted, instead of showing all applications running on every Ray cluster._

**Relevant code**

I dug into the code base and believe these code lines are relevant.

The Cluster Utilization card in the Overview tab has a SessionName filter: https://github.com/ray-project/ray/blob/f100fe8da7875f982ce7487b86984debaca04ee4/python/ray/dashboard/client/src/pages/overview/cards/ClusterUtilizationCard.tsx#L55

The queries in the Serve tab do not: https://github.com/ray-project/ray/blob/f100fe8da7875f982ce7487b86984debaca04ee4/python/ray/dashboard/client/src/pages/serve/ServeDeploymentMetricsSection.tsx#L186

### Versions / Dependencies

- KubeRay operator 1.2.2
- Ray 2.41.0
- Python 3.11.11
- Thanos 0.37
- Grafana 10.4.4
- OpenShift 4.13

### Reproduction script

These env vars are set in the RayCluster and RayService Kubernetes custom resources:

```
- name: RAY_GRAFANA_IFRAME_HOST
  value: "https://grafana.apps.uat/.<redacted_root_domain>"
- name: RAY_GRAFANA_HOST
  value: "http://grafana.monitoring.svc"
- name: RAY_PROMETHEUS_NAME
  value: "Thanos"
- name: RAY_PROMETHEUS_HOST
  value: "http://thanos-query.monitoring.svc.cluster.local:9090"
```

### Issue Severity

Medium: It is a significant difficulty but I can work around it.
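The requested fix amounts to giving the Serve panels' PromQL expressions the same SessionName label filter the Cluster Utilization card already uses. A sketch of what such a query builder might look like; the metric name, label, and session string here are illustrative placeholders, not Ray's exact ones:

```python
def serve_qps_query(metric: str, session_name: str) -> str:
    # Hypothetical helper: wrap a Serve metric with a per-cluster label
    # filter so a panel only shows applications from this cluster.
    return (
        f'sum(rate({metric}{{SessionName="{session_name}"}}[5m])) '
        f"by (application)"
    )

q = serve_qps_query("ray_serve_num_requests", "session_2025_abc")
print(q)
```

With a shared Thanos backend, every per-cluster panel needs such a filter, since all clusters' series share one metric namespace.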
open
2025-02-24T07:26:58Z
2025-02-24T17:24:20Z
https://github.com/ray-project/ray/issues/50850
[ "bug", "dashboard", "triage" ]
frenoid
0
explosion/spaCy
deep-learning
13,039
spacy.cli.download is no longer available
In the last release (3.7.0), the ability to call `spacy.cli.download(MODEL)` is no longer available. Is there another way to download models through a Python script, or can the `spacy.cli` package be reintroduced?

[In our use case](https://github.com/microsoft/presidio/blob/7400dc4b357595406954e13c0ecbdee4b27e5cd8/presidio-analyzer/install_nlp_models.py#L54), we propose the use of downloaded spaCy models by default. If the user inputted a spaCy model name which isn't installed, we wish to install it lazily, therefore the CLI command isn't a good fit for us.

Potentially an outcome of #12962

## How to reproduce the behaviour

```python
import spacy
spacy.cli.download("en_core_web_lg")
# AttributeError: module 'spacy' has no attribute 'cli'
```

## Your Environment

- **spaCy version:** 3.7.0
- **Platform:** macOS-14.0-arm64-arm-64bit
- **Python version:** 3.9.16
- **Pipelines:** el_core_news_lg (3.5.0), en_core_web_lg (3.6.0), sv_core_news_sm (3.6.0), en_core_web_sm (3.6.0), en_core_web_trf (3.5.0), es_core_news_sm (3.5.0), es_core_news_md (3.5.0), en_core_web_md (3.5.0)
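One workaround until the attribute is restored: invoke the documented `python -m spacy download` CLI from the script. The sketch below only constructs the command (actually running it requires spaCy installed); importing the submodule directly (`import spacy.cli`) may also work, depending on how the lazy loading broke:

```python
import subprocess
import sys

def spacy_download_cmd(model: str) -> list:
    """Build the documented CLI invocation for downloading a spaCy model."""
    return [sys.executable, "-m", "spacy", "download", model]

cmd = spacy_download_cmd("en_core_web_lg")
print(cmd)

# To actually run it (requires spaCy installed in this interpreter):
# subprocess.run(cmd, check=True)
```

Using `sys.executable` keeps the download in the same environment as the calling script, which matters when multiple Pythons are installed.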
closed
2023-10-04T10:17:42Z
2023-11-04T00:02:14Z
https://github.com/explosion/spaCy/issues/13039
[ "bug", "feat / cli" ]
omri374
5
pydantic/pydantic-core
pydantic
913
Maximum value of `int` field does not reflect bytes used when subclassing
Noticed this issue when working with documents from MongoDB (using `motor`). It returns integers as [BSON types](https://pymongo.readthedocs.io/en/stable/api/bson/int64.html), which are subclasses of the `int` type.

Minimal reproduction examples (pydantic 2.2.1):

```py
from pydantic import BaseModel

class MyInt(int): ...

class Test(BaseModel):
    id: int

x = 1046862536621953054
a = Test(id=MyInt(x))
print(a.id)
assert x == a.id  # Fails, a.id is equal to 1046862536621953024
```

```py
from pydantic import BaseModel, ConfigDict

class MyInt(int): ...

class Test(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    id: MyInt

x = 1046862536621953054
a = Test(id=MyInt(x))
print(a.id)
assert x == a.id  # Passes
```

```py
from pydantic import BaseModel

class MyInt(int): ...

class Test(BaseModel):
    id: int

x = 1046862536621953054
a = Test(id=int(MyInt(x)))
print(a.id)
assert x == a.id  # Passes
```

The first example should either not allow passing the type in, or it should get the value correctly (the same behaviour as in v1).

Selected Assignee: @samuelcolvin
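The corrupted value matches a round-trip through float exactly, which suggests (an assumption, not confirmed from the pydantic-core source here) that the `int` subclass is being coerced via a float path; integers above 2**53 cannot survive that:

```python
x = 1046862536621953054
assert x > 2**53  # beyond the range where float represents every integer

# A float round-trip loses the low bits of x...
y = int(float(x))
print(y)  # 1046862536621953024

# ...and produces exactly the wrong value reported in the issue.
```

That explains why `int(MyInt(x))` works: the explicit `int(...)` cast means pydantic never sees the subclass and never takes the lossy path.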
closed
2023-08-21T21:59:15Z
2023-08-23T12:50:32Z
https://github.com/pydantic/pydantic-core/issues/913
[ "unconfirmed" ]
NiceAesth
0
OpenInterpreter/open-interpreter
python
1,063
When refreshing screen it stores each snapshopt to scroll history on the terminal
### Describe the bug

When running anything in the open-interpreter terminal (installed from pip on Linux), we get a history of EVERY "screen refresh" that has happened so far. This is annoying for scrolling; let me give you a screenshot.

### Reproduce

Install Open Interpreter on Linux, ask it to code something, then scroll up.

### Expected behavior

When refreshing the screen, it should overwrite what was there before rather than clearing the screen by scrolling down. If that is not possible, it could work à la vim.

### Screenshots

![image](https://github.com/KillianLucas/open-interpreter/assets/2415206/8fb9dc0d-adf0-43a0-8310-e34fb4a5ec4a)

### Open Interpreter version

0.2.0

### Python version

3.10.12

### Operating System name and version

Mint 21.3

### Additional context

_No response_
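For reference, the "works like vim" behaviour comes from the terminal's alternate screen buffer: full-screen programs switch to it on start and back on exit, so redraws never pollute scrollback. A minimal sketch of the xterm-style escape sequences involved (widely supported; this is illustrative, not Open Interpreter's code):

```python
import sys

ENTER_ALT_SCREEN = "\x1b[?1049h"  # switch to the alternate buffer (what vim does)
LEAVE_ALT_SCREEN = "\x1b[?1049l"  # restore the normal buffer and its scrollback

def redraw(text: str) -> None:
    # Inside the alternate buffer: cursor home + clear screen, then repaint,
    # instead of printing a fresh copy below the previous one.
    sys.stdout.write("\x1b[H\x1b[2J" + text)
    sys.stdout.flush()

# Usage in a full-screen app:
# sys.stdout.write(ENTER_ALT_SCREEN); ...; sys.stdout.write(LEAVE_ALT_SCREEN)
```

The simpler alternative, repainting in place with cursor-home plus clear-screen in the normal buffer, also avoids the snapshot history but discards the user's prior scrollback on each clear.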
closed
2024-03-09T12:24:27Z
2024-03-20T01:22:17Z
https://github.com/OpenInterpreter/open-interpreter/issues/1063
[ "Bug" ]
Kreijstal
1
biolab/orange3
scikit-learn
6,100
Basic information about how data objects in Orange3 are handled in memory / tips for profiling add-on memory performance
<!-- Thanks for taking the time to submit a feature request! For the best chance at our team considering your request, please answer the following questions to the best of your ability. -->

## **What's your use case?**

<!-- In other words, what's your pain point? --> <!-- Is your request related to a problem, or perhaps a frustration? --> <!-- Tell us the story that led you to write this request. -->

I'm developing an add-on for Orange, currently mostly to add features for data preparation (e.g. before the data becomes a Data or DataFrame). The ideal scenario would be to have a subset of boring, low-level file operations (such as KNIME or Pentaho Kettle have) for cleaning data before it is in a tabular format good enough to be imported traditionally with Orange.

> **Boring internals, not really needed for this issue**
>> The strategy I'm using to allow raw data preparation before converting to Orange is two types, `FileRAW` and `FileRAWCollection`, which mostly only carry identifiers explaining how to find the real files (or a directory with real files) on disk. In other words, I'm already somewhat using a way to pass information between widgets, but it still follows the philosophy of _"In Linux and UNIX, everything is a file"_ in a literal sense. Thus, even if we eventually add features such as using pandas to convert a FileRAW to another FileRAW, the end result will release memory as soon as it stops. An advantage of this development approach is that the optimizations are mostly generic to how memory is handled with Python (or pandas).
>>
>> For now, the add-on is able to use the abstract low-level `pandas.read_table`, `pandas.read_csv`, `pandas.read_excel`, `pandas.read_feather`, `pandas.read_fwf`, `pandas.read_html`, `pandas.read_json`, `pandas.json_normalize`, `pandas.read_orc`, `pandas.read_parquet`, `pandas.read_sas`, `pandas.read_spss`, `pandas.read_stata`, `pandas.read_xml` to produce a DataFrame, and I discovered a function in your code that converts data frames to the Orange Table format.
>>
>> Note: it is explicitly out of my plans to "reinvent the wheel" of what Orange3 does. For example, one "smart default", if users are reusing workflows from someone else but their data is now much bigger, would be to slice out around 25% or 10% of the data and warn the user to optimize the types so they could learn from the next steps what could be optimized in the previous ones until it fits in memory.

However, my **challenge is knowing how Orange deals with memory** before releasing the add-on for general use. For the sake of this issue: I still need to deal with the interface "freezing" for long downloads (this article is on my todo list: https://orange3.readthedocs.io/projects/orange-development/en/latest/tutorial-responsive-gui.html). **However, as long as the user has disk space, the importer can let the user add gigabyte-size files on disk.** And even for data which would fit 1:1 in memory, using pandas alone without properly optimized data types easily consumes far too much memory.

## **What's your proposed solution?**

With all this context said, I think two questions could resolve it:

1. **Does something like** `import logging; log = logging.getLogger(__name__); log.exception(get_memory_size_of(self.data))` **exist, where `get_memory_size_of` is something I can use to inspect Orange3 internals?** If this alone is not sufficient, maybe there's something you already use which would list all data object sizes and which widgets created them?
2. **Is there a general summary of how Orange manages memory?** I assume it will reuse as much as possible a Data object from a previous widget. I know the low-level way computers work, and I'm comfortable with Python, but not with GUIs or Qt, and I'm aware that long-running scripts (e.g. for Node.js) can leak memory.
   1. I think my main question here is: what happens if I generate different objects (such as Data and DataFrame) as outputs from a widget, but the DataFrame output of my widget is never attached to another widget? Will the way Orange works free the memory of outputs which are not used by anything else? Is `self.Outputs.data_frame.send(self.data_frame)` smart enough to discard memory no widget wants?
   2. This question is relevant because, if that is the case, I will avoid creating too many outputs for all potential widgets that could make use of them. For me it would be easier to work around this (even if it takes tens of hours) than to wait for something to be implemented/tested in Orange3.

## **Are there any alternative solutions?**

Orange3 is actually quite good at preventing errors in specific widgets from blowing up the entire interface, but this doesn't work for memory-related issues. So I think it is better to make the widgets in this add-on that prepare data for Orange aware of memory, to protect Orange. For now, most of what the data preparation steps do is a visual frontend for what would be possible as a one-time operation with Python (not just pandas).

Also, maybe a point for another topic, but since `FileRAW` and `FileRAWCollection` just carry codes representing physical files on disk, this strategy could be used as lazy loading for other widgets. I only started with extension development around 2 weeks ago, so for now I'm mostly dealing with Qt and the basics, but I think it would be feasible to export a `FileRAW` to some Dask object or something you have.
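On the first question, absent a dedicated Orange helper, a rough standard-library estimator can poke at widget outputs (pandas frames additionally have their own `DataFrame.memory_usage(deep=True)`). A sketch; the traversal only knows about common containers, so it underestimates arbitrary objects:

```python
import sys

def deep_getsizeof(obj, seen=None):
    """Rough recursive memory footprint of a Python object graph, in bytes."""
    if seen is None:
        seen = set()
    oid = id(obj)
    if oid in seen:          # don't double-count shared objects
        return 0
    seen.add(oid)
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_getsizeof(k, seen) + deep_getsizeof(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(deep_getsizeof(item, seen) for item in obj)
    return size

print(deep_getsizeof({"rows": [1, 2.5, "abc"]}))
```

Logging this on each widget's outputs gives a quick per-widget memory picture without any Orange internals.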
closed
2022-08-19T00:15:11Z
2022-09-10T06:07:48Z
https://github.com/biolab/orange3/issues/6100
[]
fititnt
3
thp/urlwatch
automation
611
How to install on MacOS in 2021?
Last year I had this working perfectly, but as soon as 2021 started something went wrong with my installation (MacOS Mojave) and I couldn't run urlwatch anymore. I uninstalled it with `python3 -m pip uninstall urlwatch` to see if reinstalling a newer version would help, but after running `python3 -m pip install --upgrade urlwatch` it still did not link properly to my /usr/local/bin and I was unable to run it (not found) (adding the `--user` flag did not help, it just put it in my /library/ for some reason) I attempted to link it myself with `ln -s /usr/local/lib/python3.9/site-packages/urlwatch /usr/local/bin/urlwatch` but I can't get it to work, it keeps saying `zsh: permission denied: urlwatch` even if I use chmod -R to change the permissions on both items. I think the install documentation might need a bit of a touch up.
closed
2021-01-09T17:40:42Z
2021-01-11T19:44:04Z
https://github.com/thp/urlwatch/issues/611
[]
Kezzsim
1
junyanz/pytorch-CycleGAN-and-pix2pix
pytorch
948
Newer versions of CycleGAN?
Hi. I have worked with CycleGAN for the last year and I really like it! But since CycleGAN is from 2017, it is almost 3 years old now. Do you know if there are "better" versions of CycleGAN in 2020? Or other extensions of GANs that do image-to-image translation like CycleGAN, but more optimally?
closed
2020-03-07T12:54:57Z
2020-03-31T06:41:12Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/948
[]
kpagels
4
polakowo/vectorbt
data-visualization
585
Exit after N days?
Using the `from_signals` method, is it possible to have a boolean signal for entries with a timed exit N days after entry? Simply shifting the entries dataframe forward by N rows doesn't work because the dataframe may have superfluous entry signals that are not used to actually enter a position, since an open position already exists. In this situation, shifting the entries dataframe forward creates false exit signals that lead to premature exits.
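One workaround is a stateful pass over the raw signals that produces cleaned entry/exit arrays first (ignoring entries while a position is open), which can then be fed to `from_signals`. A plain-Python sketch of that logic, not vectorbt's API:

```python
def timed_exits(entries, n):
    """Accept each entry only when flat; schedule its exit exactly n bars later.

    Returns (real_entries, exits) as boolean lists. A position whose exit
    would fall past the end of the data simply stays open.
    """
    real_entries = [False] * len(entries)
    exits = [False] * len(entries)
    exit_at = None  # bar index of the pending timed exit, if any
    for i, signal in enumerate(entries):
        if exit_at is not None and i == exit_at:
            exits[i] = True
            exit_at = None  # flat again; a same-bar entry is allowed below
        if signal and exit_at is None:
            real_entries[i] = True
            exit_at = i + n
    return real_entries, exits

raw = [True, True, False, True, False, False, True]
print(timed_exits(raw, 3))
```

The second raw entry is dropped because a position is already open, so shifting these cleaned entries (rather than the raw ones) no longer creates false exits.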
closed
2023-04-17T08:48:20Z
2024-03-16T10:44:15Z
https://github.com/polakowo/vectorbt/issues/585
[]
posidonius
1
JaidedAI/EasyOCR
deep-learning
936
AttributeError: 'numpy.float64' object has no attribute 'lower'
Hi. I am trying to train a model. I have created a dataset as required and tried to run the training script. This is the result:

```
File "Downloads/EasyOCR-master/trainer/start_train.py", line 30, in <module>
  train(opt, amp=False)
File "Downloads/EasyOCR-master/trainer/train.py", line 40, in train
  train_dataset = Batch_Balanced_Dataset(opt)
File "Downloads/EasyOCR-master/trainer/dataset.py", line 56, in __init__
  _dataset, _dataset_log = hierarchical_dataset(root=opt.train_data, opt=opt, select_data=[selected_d])
File "Downloads/EasyOCR-master/trainer/dataset.py", line 132, in hierarchical_dataset
  dataset = OCRDataset(dirpath, opt)
File "Downloads/EasyOCR-master/trainer/dataset.py", line 164, in __init__
  if re.search(out_of_char, label.lower()):
AttributeError: 'numpy.float64' object has no attribute 'lower'
```

I found that this is correlated with my dataset containing only numbers (so the labels.csv contains only numbers). In fact, if I modify my dataset and add some letters, the error is gone. But I don't need to add letters; my dataset is composed of numbers only. I think I need to modify dataset.py in some way. Furthermore, for some reason, when I trained another model a few weeks ago with a similar numbers-only dataset, I didn't have this error. I don't know what happened in the meantime. I only modified some model parameters in opt and nothing more. Thanks.
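The crash happens because a purely numeric labels column gets type-inferred to floats (here `numpy.float64`), which have no `.lower()`. Casting to `str` before the regex sidesteps it, and loading with pandas `read_csv(..., dtype=str)` would keep labels as strings at load time. A minimal reproduction and fix with a plain float (the pattern below is a hypothetical allowed-charset filter, not EasyOCR's actual one):

```python
import re

label = 123.0  # what a numbers-only labels column becomes after type inference
try:
    label.lower()
except AttributeError as exc:
    print(exc)  # 'float' object has no attribute 'lower'

# Possible local patch for dataset.py: coerce to str before filtering.
safe_label = str(label).lower()
out_of_char = r"[^0-9\.]"  # hypothetical allowed-charset pattern
print(re.search(out_of_char, safe_label))  # all chars in charset -> None
```

The same one-character change, `str(label).lower()` instead of `label.lower()`, would apply at dataset.py line 164.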
closed
2023-01-25T09:14:06Z
2024-02-13T11:07:46Z
https://github.com/JaidedAI/EasyOCR/issues/936
[]
proclaim5584
1
modin-project/modin
pandas
6,718
Reimplement the `_axes_lengths` property to avoid materializing both axes at the same time
Source: https://github.com/modin-project/modin/pull/6700#pullrequestreview-1714797952
closed
2023-11-07T12:12:00Z
2023-11-07T14:42:39Z
https://github.com/modin-project/modin/issues/6718
[ "Performance ๐Ÿš€" ]
anmyachev
0
jmcnamara/XlsxWriter
pandas
640
Feature request: String formatting in chart title
While adding text to chart title, natively in Excel, some string elements could be formatted differently, despite 'general' format of the chart title, eg. while having regular font, some letters can be italic/bold, etc. Is this feature somewhere hidden in existing module or taken into consideration for development?
closed
2019-07-01T14:10:56Z
2020-09-19T19:59:59Z
https://github.com/jmcnamara/XlsxWriter/issues/640
[ "feature request", "long term" ]
oskaruchanski
2
ultralytics/yolov5
deep-learning
13,341
Significant Variations in Training Results with Same Dataset and Parameters
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hi everyone, Weโ€™ve encountered a noticeable discrepancy in the performance metrics when training the same model (yolov8n.pt) on the same dataset but with different hardware and similar training parameters. The results, specifically the mAP (50-95), vary significantly across different setups. #### Base Model: yolov8n.pt #### Training Parameters: | No. | Hardware | Epochs | Batch Size | mAP (50-95) | |---|---|---|---|---| | 1 | A6000(48GB vram) | 100 | 16 | 0.961 | | 2 | 4090(24GB vram) | 100 | 12 | 0.93 | | 3 | 4090(24GB vram) | 150 | 12 | 0.92 | | 4 | L20(48GB vram) | 100 | 16 | 0.976 | Weโ€™ve also tried enabling or disabling `coslr`, but it seems to have little to no effect on the outcome. Could anyone shed light on what might be causing this inconsistency? Additionally, what strategies could we adopt to achieve better performance on more limited hardware setups? Thank you in advance for your help! ### Additional _No response_
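Bit-identical results across different GPUs are generally not achievable, but run-to-run variance on the same machine shrinks if every RNG is seeded; for real training that means `random`, NumPy, and `torch.manual_seed`, plus deterministic cuDNN flags. The principle demonstrated with the standard library alone (a toy illustration, not training code):

```python
import random

def sample_run(seed: int) -> list:
    # A seeded, isolated RNG stands in for one "training run".
    rng = random.Random(seed)
    return [round(rng.random(), 6) for _ in range(3)]

print(sample_run(42) == sample_run(42))  # True: same seed reproduces the run
print(sample_run(42) == sample_run(43))  # False: different seed, different run
```

With seeds fixed, remaining differences between the machines are attributable to hardware/library numerics and to the differing batch sizes (12 vs 16), which change the effective optimization dynamics.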
open
2024-10-03T03:47:05Z
2024-11-09T06:29:48Z
https://github.com/ultralytics/yolov5/issues/13341
[ "question" ]
timiil
2
PokemonGoF/PokemonGo-Bot
automation
5,622
virtualenv does not exist (may be exits)
$ ./run.sh Virtualenv does not exits Run: ./setup.sh -i
closed
2016-09-22T21:33:30Z
2016-09-24T17:24:39Z
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5622
[]
PeshBG
8
indico/indico
sqlalchemy
6,416
Custom menu page titles leaked when access is denied
**Describe the bug** The custom pages you can add to event menus all have an id. You can cycle through them by visiting e.g. https://events.example.com/event/12/page/345 which will redirect to https://events.example.com/event/12/page/345-sekrit-page . If the user doesn't have permission for that page, they can still read the title via the URL slug. **To Reproduce** Steps to reproduce the behavior: 1. Create custom pages that have access restrictions 2. Visit the URL in a private window 3. See redirected URL title **Expected behavior** Access restrictions should be applied before the redirect occurs. If the user has no access, the URL should not redirect.
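The shape of the fix is an ordering change: run the access check before computing the slugged URL, so an unauthorized request is rejected without the title ever being emitted. A hypothetical sketch of that ordering (not Indico's actual code; names are illustrative):

```python
class Forbidden(Exception):
    """Stand-in for the framework's 403 response."""

def page_redirect_target(page: dict, user_can_access) -> str:
    # Check access FIRST, before the title-derived slug exists anywhere
    # in the response (including the redirect Location header).
    if not user_can_access(page):
        raise Forbidden(page["id"])
    slug = page["title"].lower().replace(" ", "-")
    return f"/event/{page['event_id']}/page/{page['id']}-{slug}"

page = {"id": 345, "event_id": 12, "title": "Sekrit Page"}
print(page_redirect_target(page, lambda p: True))
# /event/12/page/345-sekrit-page
```

The enumeration surface also matters: the denied response should be identical whether the page id exists or not, so ids can't be probed either.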
closed
2024-06-24T19:31:19Z
2024-06-25T16:08:51Z
https://github.com/indico/indico/issues/6416
[ "bug" ]
kewisch
3
whitphx/streamlit-webrtc
streamlit
1,362
Server to client media playback with frame-based processing
Many of the examples in this repo show client to server media sinks (mic / video capture), which have frame based callback processing. I am looking to do server to client media playback, with frame based callback processing. This would be useful for real-time audio playback with real-time processing. After searching through this discussion https://discuss.streamlit.io/t/new-component-streamlit-webrtc-a-new-way-to-deal-with-real-time-media-streams/8669, and the example pages in streamlit-webrtc, I have not been able to find an example of this. To be specific, I am looking to do the following: 1. Load an audio file (server) 2. Start playback (from server to client), frame by frame 3. Process each frame (before it is sent to the client) via a callback (processing should occur on the server, for example ML inference) 4. Playback processed audio frame to client 5. Continue in real-time This example uses the MediaPlayer class from aiortc: https://github.com/whitphx/streamlit-webrtc/blob/ff697dc0fb85df58dec2307251e57105ebb737bb/pages/8_media_files_streaming.py#L9. However it does not seem that this provides any sort of callback on the stream (at the audio frame level). Digging deeper, the MediaPlayer class has a MediaStreamTrack instance (https://aiortc.readthedocs.io/en/latest/api.html#aiortc.MediaStreamTrack) which has a `recv` callback method for each frame. Would the correct approach be to create a new subclass of MediaStreamTrack and write a custom `recv` for the required processing? I found this related thread: https://github.com/aiortc/aiortc/issues/571 Is this functionality supported currently? I would appreciate any guidance here. Thanks heaps!
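The "subclass MediaStreamTrack and override `recv()`" approach described above can be sketched without pulling in aiortc itself. Here `StubTrack` stands in for the `MediaPlayer`'s audio track and a "frame" is a plain list of samples rather than an `av.AudioFrame`; a real implementation would pull av frames, convert them to ndarrays, run the processing (e.g. ML inference), and rebuild a frame before returning it:

```python
# Rough sketch of per-frame server-side processing before playback.
# StubTrack is a placeholder for aiortc's MediaStreamTrack / MediaPlayer
# source; frames are plain sample lists for illustration only.
import asyncio

class StubTrack:
    """Stands in for the MediaPlayer's audio track."""
    def __init__(self, frames):
        self._frames = iter(frames)

    async def recv(self):
        return next(self._frames)

class ProcessedAudioTrack(StubTrack):
    """Wraps a source track and processes each frame before it is sent."""
    def __init__(self, source, gain=0.5):
        self.source = source
        self.gain = gain

    async def recv(self):
        frame = await self.source.recv()        # pull the next frame
        return [s * self.gain for s in frame]   # processing hook (DSP / ML)

async def main():
    src = StubTrack([[1.0, 2.0], [3.0, 4.0]])
    track = ProcessedAudioTrack(src)
    return [await track.recv(), await track.recv()]

out = asyncio.run(main())
```

In the real pipeline the `ProcessedAudioTrack` instance would be handed to `webrtc_streamer` as the source track, so each frame is processed on the server just before it is streamed to the client.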
open
2023-08-25T00:19:15Z
2024-10-30T10:31:19Z
https://github.com/whitphx/streamlit-webrtc/issues/1362
[]
jamjambles
3
jupyter/nbviewer
jupyter
141
Intelligently handle dropbox link
One of my favorite things to do is share notebooks that I have in my Dropbox using nbviewer. The only mildly annoying thing is that I have to manually change the url that dropbox gives me (`www.dropbox.com/...`) to (`dl.dropbox.com/...`) to force dropbox to cough up the file itself instead of serving their web interface. Since I imagine this is a fairly common usecase, would it be hard to handle this substitution on the nbviewer side?
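The substitution being requested is small enough to sketch directly: rewrite the host of a `www.dropbox.com` share link to `dl.dropbox.com` before fetching the notebook. This is only an illustration of the URL mapping, not nbviewer's actual handler code:

```python
# Sketch of the host substitution described above: map a www.dropbox.com
# share link to the direct dl.dropbox.com form so the raw file is served.
from urllib.parse import urlsplit, urlunsplit

def direct_dropbox_url(url: str) -> str:
    parts = urlsplit(url)
    if parts.netloc == "www.dropbox.com":
        parts = parts._replace(netloc="dl.dropbox.com")
    return urlunsplit(parts)

print(direct_dropbox_url("https://www.dropbox.com/s/abc123/demo.ipynb"))
```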
closed
2013-12-10T23:40:37Z
2014-01-14T17:51:54Z
https://github.com/jupyter/nbviewer/issues/141
[ "type:Enhancement" ]
mwaskom
5
google/trax
numpy
976
TPU deadlock
### Description Hello, I am trying to train reformer model using Trax and JAX. The training seems to be fine on Google Colab, but when I run it on google cloud server + TPU, it hangs on the "trax.supervised.Trainer". The warning is as follows: `2020-08-26 17:46:37.421334: W external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.cc:601] TPU Execute is taking a long time. This might be due to a deadlock between multiple TPU cores or a very slow program.` ### Environment information ``` Ubuntu $ pip freeze | grep trax trax==1.3.4 $ pip freeze | grep tensor mesh-tensorflow==0.1.16 tensor2tensor==1.15.7 tensorboard==2.3.0 tensorboard-plugin-wit==1.7.0 tensorflow==2.3.0 tensorflow-addons==0.11.1 tensorflow-datasets==3.2.1 tensorflow-estimator==2.3.0 tensorflow-gan==2.0.0 tensorflow-hub==0.9.0 tensorflow-metadata==0.23.0 tensorflow-probability==0.7.0 tensorflow-text==2.3.0 $ pip freeze | grep jax jax==0.1.75 jaxlib==0.1.52 $ python -V Python 3.6.10 :: Anaconda, Inc. ``` ### For bugs: reproduction and error logs # Steps to reproduce: ... 
``` import requests import os from jax.config import config config.FLAGS.jax_xla_backend = "tpu_driver" config.FLAGS.jax_backend_target = "grpc://" + "10.206.164.18" print(config.FLAGS.jax_backend_target) from tensorflow.compat.v1.io.gfile import GFile import gin import os import jax import trax from trax.data import inputs import numpy as np import jax.numpy as jnp from scipy.special import softmax import sentencepiece as spm from sentencepiece import SentencePieceProcessor import random, glob, os def fake_data(): with open("vocab.txt",'w') as f: f.write("[MASK]\nL\nA\nG\nV\nE\nS\nI\nK\nR\nD\nT\nP\nN\nQ\nF\nY\nM\nH\nC\nW\nX\nU\nB\nZ\nO") if not os.path.exists('dataset'): os.makedirs('dataset') with open("dataset/train_0.txt",'w') as f: for i in range(50): f.write("M A F S A E D V L K E Y D R R R R M E A L L L S L Y Y P N D R K L L D Y K E W S P P R V Q V E C P K A P V E W N N P P S E K G L I V G H F S G I K Y K G E K A Q A S E V D V N K M C C W V S K F K D A M R R Y Q G I Q T C K I P G K V L S D L D A K I K A Y N L T V E G V E G F V R Y S R V T K Q H V A A F L K E L R H S K Q Y E N V N L I H Y I L T D K R V D I Q H L E K D L V K D F K A L V E S A H R M R Q G H M I N V K Y I L Y Q L L K K H G H G P D G P D I L T V K T G S K G V L Y D D S F R K I Y T D L G W K F T P L\n") f.write("M S I I G A T R L Q N D K S D T Y S A G P C Y A G G C S A F T P R G T C G K D W D L G E Q T C A S G F C T S Q P L C A R I K K T Q V C G L R Y S S K G K D P L V S A E W D S R G A P Y V R C T Y D A D L I D T Q A Q V D Q F V S M F G E S P S L A E R Y C M R G V K N T A G E L V S R V S S D A D P A G G W C R K W Y S A H R G P D Q D A A L G S F C I K N P G A A D C K C I N R A S D P V Y Q K V K T L H A Y P D Q C W Y V P C A A D V G E L K M G T Q R D T P T N C P T Q V C Q I V F N M L D D G S V T M D D V K N T I N C D F S K Y V P P P P P P K P T P P T P P T P P T P P T P P T P P T P P T P R P V H N R K V M F F V A G A V L V A I L I S T V R W\n") f.write("M A S N T V S A Q G G S N R P V R D F S N I Q 
D V A Q F L L F D P I W N E Q P G S I V P W K M N R E Q A L A E R Y P E L Q T S E P S E D Y S G P V E S L E L L P L E I K L D I M Q Y L S W E Q I S W C K H P W L W T R W Y K D N V V R V S A I T F E D F Q R E Y A F P E K I Q E I H F T D T R A E E I K A I L E T T P N V T R L V I R R I D D M N Y N T H G D L G L D D L E F L T H L M V E D A C G F T D F W A P S L T H L T I K N L D M H P R W F G P V M D G I K S M Q S T L K Y L Y I F E T Y G V N K P F V Q W C T D N I E T F Y C T N S Y R Y E N V P R P I Y V W V L F Q E D E W H G Y R V E D N K F H R R Y M Y S T I L H K R D T D W V E N N P L K T P A Q V E M Y K F L L R I S Q L N R D G T G Y E S D S D P E N E H F D D E S F S S G E E D S S D E D D P T W A P D S D D S D W E T E T E E E P S V A A R I L E K G K L T I T N L M K S L G F K P K P K K I Q S I D R Y F C S L D S N Y N S E D E D F E Y D S D S E D D D S D S E D D C\n") f.write("M Y Q A I N P C P Q S W Y G S P Q L E R E I V C K M S G A P H Y P N Y Y P V H P N A L G G A W F D T S L N A R S L T T T P S L T T C T P P S L A A C T P P T S L G M V D S P P H I N P P R R I G T L C F D F G S A K S P Q R C E C V A S D R P S T T S N T A P D T Y R L L I T N S K T R K N N Y G T C R L E P L T Y G I\n") f.write("M A R P L L G K T S S V R R R L E S L S A C S I F F F L R K F C Q K M A S L V F L N S P V Y Q M S N I L L T E R R Q V D R A M G G S D D D G V M V V A L S P S D F K T V L G S A L L A V E R D M V H V V P K Y L Q T P G I L H D M L V L L T P I F G E A L S V D M S G A T D V M V Q Q I A T A G F V D V D P L H S S V S W K D N V S C P V A L L A V S N A V R T M M G Q P C Q V T L I I D V G T Q N I L R D L V N L P V E M S G D L Q V M A Y T K D P L G K V P A V G V S V F D S G S V Q K G D A H S V G A P D G L V S F H T H P V S S A V E L N Y H A G W P S N V D M S S L L T M K N L M H V V V A E E G L W T M A R T L S M Q R L T K V L T D A E K D V M R A A A F N L F L P L N E L R V M G T K D S N N K S L K T Y F E V F E T F T I G A L M K H S G V T P T A F V D R R W L D N T I Y H M G F I P W G R D M R F 
V V E Y D L D G T N P F L N T V P T L M S V K R K A K I Q E M F D N M V S R M V T S\n") f.write("M N A K Y D T D Q G V G R M L F L G T I G L A V V V G G L M A Y G Y Y Y D G K T P S S G T S F H T A S P S F S S R Y R Y\n") f.write("M R Y T V L I A L Q G A L L L L L L I D D G Q G Q S P Y P Y P G M P C N S S R Q C G L G T C V H S R C A H C S S D G T L C S P E D P T M V W P C C P E S S C Q L V V G L P S L V N H Y N C L P N Q C T D S S Q C P G G F G C M T R R S K C E L C K A D G E A C N S P Y L D W R K D K E C C S G Y C H T E A R G L E G V C I D P K K I F C T P K N P W Q L A P Y P P S Y H Q P T T L R P P T S L Y D S W L M S G F L V K S T T A P S T Q E E E D D Y\n") f.write("M Q N P L P E V M S P E H D K R T T T P M S K E A N K F I R E L D K K P G D L A V V S D F V K R N T G K R L P I G K R S N L Y V R I C D L S G T I Y M G E T F I L E S W E E L Y L P E P T K M E V L G T L E S C C G I P P F P E W I V M V G E D Q C V Y A Y G D E E I L L F A Y S V K Q L V E E G I Q E T G I S Y K Y P D D I S D V D E E V L Q Q D E E I Q K I R K K T R E F V D K D A Q E F Q D F L N S L D A S L L S\n") f.write("M D S L N E V C Y E Q I K G T F Y K G L F G D F P L I V D K K T G C F N A T K L C V L G G K R F V D W N K T L R S K K L I Q Y Y E T R C D I K T E S L L Y E I K G D N N D E I T K Q I T G T Y L P K E F I L D I A S W I S V E F Y D K C N N I I I N Y F V N E Y K T M D K K T L Q S K I N E V E E K M Q K L L N E K E E E L Q E K N D K I D E L I L F S K R M E E D R K K D R E M M I K Q E K M L R E L G I H L E D V S S Q N N E L I E K V D E Q V E Q N A V L N F K I D N I Q N K L E I A V E D R A P Q P K Q N L K R E R F I L L K R N D D Y Y P Y Y T I R A Q D I N A R S A L K R Q K N L Y N E V S V L L D L T C H P N S K T L Y V R V K D E L K Q K G V V F N L C K V S I S N S K I N E E E L I K A M E T I N D E K R D V\n") with open("dataset/train_1.txt",'w') as f: for i in range(50): f.write("M A F S A E D V L K E Y D R R R R M E A L L L S L Y Y P N D R K L L D Y K E W S P P R V Q V E C P K A P V E W N N P P S E 
K G L I V G H F S G I K Y K G E K A Q A S E V D V N K M C C W V S K F K D A M R R Y Q G I Q T C K I P G K V L S D L D A K I K A Y N L T V E G V E G F V R Y S R V T K Q H V A A F L K E L R H S K Q Y E N V N L I H Y I L T D K R V D I Q H L E K D L V K D F K A L V E S A H R M R Q G H M I N V K Y I L Y Q L L K K H G H G P D G P D I L T V K T G S K G V L Y D D S F R K I Y T D L G W K F T P L\n") f.write("M S I I G A T R L Q N D K S D T Y S A G P C Y A G G C S A F T P R G T C G K D W D L G E Q T C A S G F C T S Q P L C A R I K K T Q V C G L R Y S S K G K D P L V S A E W D S R G A P Y V R C T Y D A D L I D T Q A Q V D Q F V S M F G E S P S L A E R Y C M R G V K N T A G E L V S R V S S D A D P A G G W C R K W Y S A H R G P D Q D A A L G S F C I K N P G A A D C K C I N R A S D P V Y Q K V K T L H A Y P D Q C W Y V P C A A D V G E L K M G T Q R D T P T N C P T Q V C Q I V F N M L D D G S V T M D D V K N T I N C D F S K Y V P P P P P P K P T P P T P P T P P T P P T P P T P P T P P T P R P V H N R K V M F F V A G A V L V A I L I S T V R W\n") f.write("M A S N T V S A Q G G S N R P V R D F S N I Q D V A Q F L L F D P I W N E Q P G S I V P W K M N R E Q A L A E R Y P E L Q T S E P S E D Y S G P V E S L E L L P L E I K L D I M Q Y L S W E Q I S W C K H P W L W T R W Y K D N V V R V S A I T F E D F Q R E Y A F P E K I Q E I H F T D T R A E E I K A I L E T T P N V T R L V I R R I D D M N Y N T H G D L G L D D L E F L T H L M V E D A C G F T D F W A P S L T H L T I K N L D M H P R W F G P V M D G I K S M Q S T L K Y L Y I F E T Y G V N K P F V Q W C T D N I E T F Y C T N S Y R Y E N V P R P I Y V W V L F Q E D E W H G Y R V E D N K F H R R Y M Y S T I L H K R D T D W V E N N P L K T P A Q V E M Y K F L L R I S Q L N R D G T G Y E S D S D P E N E H F D D E S F S S G E E D S S D E D D P T W A P D S D D S D W E T E T E E E P S V A A R I L E K G K L T I T N L M K S L G F K P K P K K I Q S I D R Y F C S L D S N Y N S E D E D F E Y D S D S E D D D S D S E D D C\n") f.write("M Y Q A I N P 
C P Q S W Y G S P Q L E R E I V C K M S G A P H Y P N Y Y P V H P N A L G G A W F D T S L N A R S L T T T P S L T T C T P P S L A A C T P P T S L G M V D S P P H I N P P R R I G T L C F D F G S A K S P Q R C E C V A S D R P S T T S N T A P D T Y R L L I T N S K T R K N N Y G T C R L E P L T Y G I\n") f.write("M A R P L L G K T S S V R R R L E S L S A C S I F F F L R K F C Q K M A S L V F L N S P V Y Q M S N I L L T E R R Q V D R A M G G S D D D G V M V V A L S P S D F K T V L G S A L L A V E R D M V H V V P K Y L Q T P G I L H D M L V L L T P I F G E A L S V D M S G A T D V M V Q Q I A T A G F V D V D P L H S S V S W K D N V S C P V A L L A V S N A V R T M M G Q P C Q V T L I I D V G T Q N I L R D L V N L P V E M S G D L Q V M A Y T K D P L G K V P A V G V S V F D S G S V Q K G D A H S V G A P D G L V S F H T H P V S S A V E L N Y H A G W P S N V D M S S L L T M K N L M H V V V A E E G L W T M A R T L S M Q R L T K V L T D A E K D V M R A A A F N L F L P L N E L R V M G T K D S N N K S L K T Y F E V F E T F T I G A L M K H S G V T P T A F V D R R W L D N T I Y H M G F I P W G R D M R F V V E Y D L D G T N P F L N T V P T L M S V K R K A K I Q E M F D N M V S R M V T S\n") f.write("M N A K Y D T D Q G V G R M L F L G T I G L A V V V G G L M A Y G Y Y Y D G K T P S S G T S F H T A S P S F S S R Y R Y\n") f.write("M R Y T V L I A L Q G A L L L L L L I D D G Q G Q S P Y P Y P G M P C N S S R Q C G L G T C V H S R C A H C S S D G T L C S P E D P T M V W P C C P E S S C Q L V V G L P S L V N H Y N C L P N Q C T D S S Q C P G G F G C M T R R S K C E L C K A D G E A C N S P Y L D W R K D K E C C S G Y C H T E A R G L E G V C I D P K K I F C T P K N P W Q L A P Y P P S Y H Q P T T L R P P T S L Y D S W L M S G F L V K S T T A P S T Q E E E D D Y\n") f.write("M Q N P L P E V M S P E H D K R T T T P M S K E A N K F I R E L D K K P G D L A V V S D F V K R N T G K R L P I G K R S N L Y V R I C D L S G T I Y M G E T F I L E S W E E L Y L P E P T K M E V L G T L E S C C G I P P F 
P E W I V M V G E D Q C V Y A Y G D E E I L L F A Y S V K Q L V E E G I Q E T G I S Y K Y P D D I S D V D E E V L Q Q D E E I Q K I R K K T R E F V D K D A Q E F Q D F L N S L D A S L L S\n") f.write("M D S L N E V C Y E Q I K G T F Y K G L F G D F P L I V D K K T G C F N A T K L C V L G G K R F V D W N K T L R S K K L I Q Y Y E T R C D I K T E S L L Y E I K G D N N D E I T K Q I T G T Y L P K E F I L D I A S W I S V E F Y D K C N N I I I N Y F V N E Y K T M D K K T L Q S K I N E V E E K M Q K L L N E K E E E L Q E K N D K I D E L I L F S K R M E E D R K K D R E M M I K Q E K M L R E L G I H L E D V S S Q N N E L I E K V D E Q V E Q N A V L N F K I D N I Q N K L E I A V E D R A P Q P K Q N L K R E R F I L L K R N D D Y Y P Y Y T I R A Q D I N A R S A L K R Q K N L Y N E V S V L L D L T C H P N S K T L Y V R V K D E L K Q K G V V F N L C K V S I S N S K I N E E E L I K A M E T I N D E K R D V\n") fake_data() spm.SentencePieceTrainer.train(input='vocab.txt', model_prefix='protein', vocab_size=30, model_type="word", #user_defined_symbols="<MASK>", pad_id=0, unk_id=1, bos_id=2, eos_id=3, pad_piece="[PAD]", unk_piece="[UNK]", bos_piece="[BOS]", eos_piece="[EOS]") tokenizer = spm.SentencePieceProcessor(model_file='protein.model') train_files = glob.glob("dataset/train*",recursive=True) random.shuffle(train_files) def mask_seq(seq,mask_prob=0.15): seq = np.array(seq) minValue = 1 maxValue = len(seq) - 2 max_mask_tokens = int(maxValue * 0.15 + 0.5) randomlist = random.sample(range(minValue, maxValue), max_mask_tokens) seq_masked = seq seq_masked[randomlist] = tokenizer.encode("[MASK]")[0] return seq_masked def get_seq(train_files): while True: for file in train_files: with open(file) as fp: for line in fp: yield line def get_batch(seq_gen, batch_length): batch = [] while True: seq = next(seq_gen) seq_ids = tokenizer.encode(seq,add_bos=True,add_eos=True) new_batch_len = len(batch) + len(seq_ids) if new_batch_len <= batch_length : batch = batch + seq_ids continue next_batch 
= batch batch = seq_ids yield next_batch # Set up the data pipeline. def my_inputs(n_devices): MAX_BATCH_LENGTH = 1024*4 seq_gen = get_seq(train_files) batch_gen = get_batch(seq_gen,MAX_BATCH_LENGTH) while True: inputs = [] targets = [] mask = [] for i in range(n_devices): batch_ids = next(batch_gen) masked_seq_ids = mask_seq(batch_ids) pad_amount = MAX_BATCH_LENGTH - len(batch_ids) inputs.append(np.pad(masked_seq_ids, (0,pad_amount))) targets.append(np.pad(batch_ids, (0,pad_amount))) mask.append(np.pad(np.ones_like(batch_ids, dtype=np.float32), (0,pad_amount), mode='constant')) inputs = np.stack(inputs) targets = np.stack(targets) mask = np.stack(mask) yield (inputs, targets, mask) inp_gen_test = my_inputs(trax.fastmath.device_count()) res = next(inp_gen_test) print(tokenizer.decode(res[0][0].tolist())) print(tokenizer.decode(res[1][0].tolist())) # Configure hyperparameters. gin.parse_config(""" import trax.layers import trax.models import trax.optimizers import trax.data.inputs import trax.supervised.trainer_lib # Parameters that will vary between experiments: # ============================================================================== train.model = @trax.models.Reformer n_layers = 15 n_heads = 16 dropout = 0.1 n_tokens = 40000 # They have used very small n_tokens = 2048 vocab_size= 30 d_model = 1024 d_ff = 4096 # Done # Parameters for MultifactorSchedule: # ============================================================================== multifactor.constant = 0.088 multifactor.decay_factor = 0.5 multifactor.factors = 'constant * linear_warmup * rsqrt_decay' multifactor.steps_per_cycle = 100000 multifactor.steps_per_decay = 20000 multifactor.warmup_steps = 8000 # Done # Parameters for Adam: # ============================================================================== Adam.b1 = 0.9 Adam.b2 = 0.98 Adam.eps = 1e-09 Adam.weight_decay_rate = 1e-05 # Done # Parameters for SelfAttention: # 
============================================================================== #trax.layers.SelfAttention.attention_dropout = 0.05 #trax.layers.SelfAttention.chunk_len = 64 #trax.layers.SelfAttention.n_chunks_before = 1 #trax.layers.SelfAttention.n_parallel_heads = 1 trax.layers.SelfAttention.causal = False trax.layers.SelfAttention.chunk_len = None trax.layers.SelfAttention.masked = False trax.layers.SelfAttention.n_chunks_after = 0 trax.layers.SelfAttention.n_chunks_before = 0 trax.layers.SelfAttention.n_parallel_heads = None trax.layers.SelfAttention.predict_drop_len = 64 trax.layers.SelfAttention.predict_mem_len = 192 trax.layers.SelfAttention.share_qk = False trax.layers.SelfAttention.use_python_loop = False trax.layers.SelfAttention.use_reference_code = False # Done # Parameters for EncDecAttention: # ============================================================================== trax.layers.EncDecAttention.masked = True trax.layers.EncDecAttention.n_parallel_heads = None trax.layers.EncDecAttention.use_python_loop = False trax.layers.EncDecAttention.use_reference_code = False # Done # Parameters for LSHSelfAttention: # ============================================================================== #LSHSelfAttention.attention_dropout = 0.0 #LSHSelfAttention.chunk_len = 64 #LSHSelfAttention.n_buckets = [64, 128] #LSHSelfAttention.n_chunks_after = 0 #LSHSelfAttention.n_chunks_before = 1 #LSHSelfAttention.n_hashes = 1 #LSHSelfAttention.n_parallel_heads = 1 #LSHSelfAttention.predict_drop_len = 128 #LSHSelfAttention.predict_mem_len = 1024 # Done # Parameters for Reformer: # ============================================================================== Reformer.d_model = %d_model Reformer.d_ff = %d_ff Reformer.dropout = %dropout Reformer.ff_activation = @trax.layers.Relu Reformer.max_len = %n_tokens Reformer.mode = 'train' Reformer.n_heads = %n_heads Reformer.n_encoder_layers = %n_layers Reformer.n_decoder_layers = %n_layers Reformer.input_vocab_size = %vocab_size 
""") # Set up a Trainer. output_dir = os.path.expanduser('train_dir/') trainer = trax.supervised.Trainer( model=trax.models.Reformer, loss_fn=trax.layers.CrossEntropyLoss(), optimizer=trax.optimizers.Adam, lr_schedule=trax.lr.multifactor(), inputs=trax.data.inputs.Inputs(my_inputs), output_dir=output_dir) # Run one training step, to make sure the model fits in memory. # The first time trainer.train_epoch is called, it will JIT the entire network # architecture, which takes around 2 minutes. The JIT-compiled model is saved # so subsequent runs will be much faster than the first. trainer.train_epoch(n_steps=1, n_eval_steps=1) ``` # Error logs: ... ``` 2020-08-26 17:46:37.421334: W external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.cc:601] TPU Execute is taking a long time. This might be due to a deadlock between multiple TPU cores or a very slow program. 2020-08-26 17:51:46.613101: W external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.cc:601] TPU Execute is taking a long time. This might be due to a deadlock between multiple TPU cores or a very slow program. ``` Any idea what could be the problem ?
open
2020-08-26T15:58:20Z
2020-08-26T15:58:20Z
https://github.com/google/trax/issues/976
[]
agemagician
0
ultrafunkamsterdam/undetected-chromedriver
automation
1,389
Rotating proxies: I'm receiving selenium.common.exceptions.SessionNotCreatedException: Message: session not created
Im having this in order to rotate proxies, everytime a new_chrome starts it use a new proxy ```python def navigate_selenium(self, doc_id): while True: try: self.chrome = self.new_chrome() self.chrome.get(self.origin + "/document/" + doc_id) self.chrome.wait_for_element_display(60, "#head") except Exception as e: logcolor_warning("EXCEPCION, RECARGANDO") if hasattr(self, "chrome"): detail = Soup(self.chrome.driver.page_source, "html.parser") if "captcha" in detail: logcolor_warning("CAPTCHA EN DETALLE SELENIUM, RECARGANDO") del self.chrome self.navigate_selenium(doc_id) else: detail = Soup(self.chrome.driver.page_source, "html.parser") self._obtain_cookies() return detail finally: logcolor_debug(f"CERRANDO VENTANA Y NAVEGADOR") if hasattr(self, "chrome"): del self.chrome ``` , but sometimes I receive the following error: ``` File "/u02/user/git/modulos/lib/modulos/utils/js_render.py", line 199, in __init__ self._driver = uc.Chrome( File "/home/user/.local/lib/python3.8/site-packages/undetected_chromedriver/__init__.py", line 441, in __init__ super(Chrome, self).__init__( File "/home/user/.local/lib/python3.8/site-packages/selenium/webdriver/chrome/webdriver.py", line 84, in __init__ super().__init__( File "/home/user/.local/lib/python3.8/site-packages/selenium/webdriver/chromium/webdriver.py", line 104, in __init__ super().__init__( File "/home/user/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 286, in __init__ self.start_session(capabilities, browser_profile) File "/home/user/.local/lib/python3.8/site-packages/undetected_chromedriver/__init__.py", line 704, in start_session super(selenium.webdriver.chrome.webdriver.WebDriver, self).start_session( File "/home/user/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 378, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "/home/user/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 440, in 
execute self.error_handler.check_response(response) File "/home/user/.local/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.SessionNotCreatedException: Message: session not created from no such execution context: uniqueContextId not found (Session info: chrome=114.0.5735.106) Stacktrace: #0 0x562bbbd104e3 <unknown> #1 0x562bbba3fc76 <unknown> #2 0x562bbba298c3 <unknown> #3 0x562bbba27bac <unknown> #4 0x562bbba28162 <unknown> #5 0x562bbba43af1 <unknown> #6 0x562bbba4597e <unknown> #7 0x562bbba45a4c <unknown> #8 0x562bbbaa4f77 <unknown> #9 0x562bbbaa347f <unknown> #10 0x562bbba9ade3 <unknown> #11 0x562bbba702dd <unknown> #12 0x562bbba7134e <unknown> #13 0x562bbbcd03e4 <unknown> #14 0x562bbbcd43d7 <unknown> #15 0x562bbbcdeb20 <unknown> #16 0x562bbbcd5023 <unknown> #17 0x562bbbca31aa <unknown> #18 0x562bbbcf96b8 <unknown> #19 0x562bbbcf9847 <unknown> #20 0x562bbbd09243 <unknown> #21 0x7f69f745dea5 start_thread ```
closed
2023-07-12T07:28:02Z
2023-07-19T10:54:21Z
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1389
[]
juanfrilla
2
vitalik/django-ninja
django
585
I need to comment the parameters in the URL address
Here's my view definition: ![image](https://user-images.githubusercontent.com/16644654/194811618-482f73f6-f03e-427f-950b-5a05c9278177.png) This is the generated openapi.json: ![image](https://user-images.githubusercontent.com/16644654/194811885-69d3aa1d-72ab-4a95-9a18-51d7bcf3d712.png) I need to change the title in the schema to Chinese.
closed
2022-10-10T06:56:10Z
2022-10-27T06:32:41Z
https://github.com/vitalik/django-ninja/issues/585
[]
hanbinloop
2
MaartenGr/BERTopic
nlp
1,697
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
Hi, I encountered the following error while performing semi-supervised training: embeddings[indices] = np.average([embeddings[indices], seed_topic_embeddings[seed_topic]], weights=[3, 1]) ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. The clustering corpus is formatted as a one-dimensional list with a length of 50779, and the vector dimensions are (50779, 1024). Importing the seed_topic_list as either a one-dimensional or two-dimensional list will result in this error. Here is a snippet of the code๏ผš reduction_model = BaseDimensionalityReduction() cluster_model = KMeans(n_clusters=num_topics) topic_model = BERTopic(nr_topics=num_topics, top_n_words=10, seed_topic_list=seed_topic_list, embedding_model=sentence_model, umap_model=reduction_model, # min_topic_size=50, calculate_probabilities=False, hdbscan_model=cluster_model, # vectorizer_model=vectorizer, verbose=True) topics, probs = topic_model.fit_transform(documents=recall, embeddings=embeddings) I look forward to your reply.
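One way the error above can arise, sketched with toy shapes: if `embeddings[indices]` is a `(k, d)` array while the seed-topic embedding is a bare `(d,)` vector (or has a different dimensionality), `np.average` over the Python list cannot form a regular array. Broadcasting the seed vector to `(k, d)` first makes the weighted average well defined. The shapes here are illustrative, and this is a sketch of the failure mode, not BERTopic's internal fix:

```python
# Toy reproduction of the shape mismatch and a broadcast-based workaround.
import numpy as np

doc_embs = np.ones((3, 4))    # stand-in for embeddings[indices], shape (k, d)
seed_emb = np.full(4, 2.0)    # stand-in for seed_topic_embeddings[topic], (d,)

# np.average([doc_embs, seed_emb], weights=[3, 1]) raises the
# "inhomogeneous shape" ValueError because the two elements differ in shape.
blended = np.average(
    [doc_embs, np.broadcast_to(seed_emb, doc_embs.shape)],
    axis=0,
    weights=[3, 1],
)
```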
closed
2023-12-15T10:23:51Z
2024-12-20T17:03:24Z
https://github.com/MaartenGr/BERTopic/issues/1697
[]
shyzzz521
4
google-research/bert
nlp
981
"intermediate" hidden layer
Hi, what is the purpose of this "intermediate" hidden layer with activation + output layer w/o activation? https://github.com/google-research/bert/blob/cc7051dc592802f501e8a6f71f8fb3cf9de95dc9/modeling.py#L866
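The two layers referenced above form the Transformer's position-wise feed-forward block: an "intermediate" projection up to a larger width with a GELU non-linearity, then a linear projection back down to the hidden size with no activation (the residual add and layer norm follow it). A toy numpy sketch of that shape flow; the sizes are illustrative, not BERT's actual 768/3072:

```python
# Minimal numpy sketch of the intermediate + output sub-layers in modeling.py.
import numpy as np

def gelu(x):
    # tanh approximation of GELU, as used in the original BERT code
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
hidden, inter = 8, 32
x = rng.normal(size=(4, hidden))             # (seq_len, hidden_size)
w1 = rng.normal(size=(hidden, inter))
w2 = rng.normal(size=(inter, hidden))

intermediate = gelu(x @ w1)                  # expand to intermediate_size + activation
ffn_out = intermediate @ w2                  # project back, no activation
```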
open
2020-01-03T09:34:38Z
2020-01-03T09:34:38Z
https://github.com/google-research/bert/issues/981
[]
congchan
0
sigmavirus24/github3.py
rest-api
1,160
Add support for Deployment Environments
It appears that Deployment Environments ( https://docs.github.com/en/rest/deployments/environments?apiVersion=2022-11-28 ) are not currently supported. Unfortunately I don't have time to implement this myself, but wanted to open an issue to track the feature request.
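Until the library grows first-class support, the endpoint from the linked REST docs can be hit directly. This sketch only assembles the request pieces (no network call is made); the owner, repo, and token are placeholders:

```python
# Build the GET /repos/{owner}/{repo}/environments request per the linked
# GitHub REST docs. Values below are placeholders for illustration.
def environments_request(owner: str, repo: str, token: str):
    url = f"https://api.github.com/repos/{owner}/{repo}/environments"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
        "X-GitHub-Api-Version": "2022-11-28",
    }
    return url, headers

url, headers = environments_request("octocat", "hello-world", "ghp_example")
```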
open
2023-09-15T12:02:13Z
2023-12-15T12:57:55Z
https://github.com/sigmavirus24/github3.py/issues/1160
[]
jantman
1
pyg-team/pytorch_geometric
pytorch
9,792
Install a full PyG environment using only a single pip command
### ๐Ÿ˜ต Describe the installation problem The current installation process requires installing PyTorch first then running a second pip install for all the PyG components. This is because torch is required in the setup of the PyG components: - https://github.com/pyg-team/pytorch_geometric/issues/861 - https://github.com/pyg-team/pytorch_geometric/issues/1440 There are cases where only a single pip command using requirements.txt can be used to set up the full env, for example the following frameworks for queuing jobs: - https://clear.ml/docs/latest/docs/apps/clearml_task - https://docs.wandb.ai/guides/launch/create-launch-job#requirements-file Is there a way of setting up a full Python environment for PyG using a _single_ pip command? ### Environment * PyG version: * PyTorch version: * OS: * Python version: * CUDA/cuDNN version: * How you installed PyTorch and PyG (`conda`, `pip`, source): * Any other relevant information (*e.g.*, version of `torch-scatter`):
open
2024-11-15T21:02:49Z
2025-02-14T11:05:07Z
https://github.com/pyg-team/pytorch_geometric/issues/9792
[ "installation" ]
Anjum48
5
pyeve/eve
flask
1,472
422 UNPROCESSABLE ENTITY when using user_id and data_relation on PATCH
### Expected Behavior: 200 OK ### Actual Behavior: 422 UNPROCESSABLE ENTITY If using `set_request_auth_value` on a schema and having defined a `data_relation` to a schema that is not using any auth value, a PATCH operation will fail with the following error: ``` value 'XXX' must exist in resource 'YYY', field '_id' ``` POST works fine. The only workaround is not to define any data_relation. According to my research, the `_filter` variable contains the `user_id` in an `$and` condition on PATCH but not on POST. https://github.com/pyeve/eve/blob/c62941992b2f66fa02b822581891bd7c18e76d9c/eve/io/mongo/mongo.py#L335 ### Environment * Python version: 3.8.10 * Eve version: 1.1.5
open
2022-03-29T09:33:39Z
2023-08-17T04:41:23Z
https://github.com/pyeve/eve/issues/1472
[]
xibriz
1
comfyanonymous/ComfyUI
pytorch
6,984
How to composite noise per region in regional sampling?
### Your question I want to have the effect equivalent of a "different denoise value" for each regional prompt in a RegionalSampler workflow. Is there an easy way to accomplish this? I was considering manually setting sigma schedules for the sampler in each regional prompt, but it's not clear to me how I'd then prepare the latent noise so that this could work well - or if that even makes sense (since RegionalSampler seems to add noise in various ways as it goes!) Is anything like this possible? I basically just want to have sliders for each region that set the amount of noise/denoise to perform, just as I could with sequential regional prompts. If this were made easy to do it would be an absolutely killer feature - I think I could replace many of my workflows with Impact Regional Sampler.... ### Logs ```powershell ``` ### Other _No response_
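The "per-region sigma schedule" idea can be sketched in isolation: ComfyUI-style samplers emulate `denoise < 1.0` by running only the tail of the full sigma schedule, so a per-region slider could select how many trailing steps each region uses. This is a sketch of that mapping only, not Impact Pack's actual API, and the schedule values are illustrative:

```python
# Per-region denoise as a truncated sigma schedule: keep only the trailing
# portion of the full schedule, proportional to the region's denoise slider.
def region_sigmas(full_sigmas, denoise):
    """Return the trailing portion of the schedule for a denoise in (0, 1]."""
    steps = len(full_sigmas) - 1              # last entry is the final sigma
    keep = max(int(steps * denoise), 1)
    return full_sigmas[-(keep + 1):]

full = [14.6, 7.0, 3.5, 1.7, 0.8, 0.0]        # toy 5-step schedule
half = region_sigmas(full, 0.5)               # region slider at 0.5
```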
open
2025-02-26T20:06:06Z
2025-03-10T06:28:49Z
https://github.com/comfyanonymous/ComfyUI/issues/6984
[ "User Support" ]
dtromb
2
AirtestProject/Airtest
automation
1,264
error: metadata-generation-failed
```bash python --version Python 3.13.0 ``` ```log (.venv) PS xxx.air> pip install -U airtest Looking in indexes: https://mirrors.aliyun.com/pypi/simple/ Collecting airtest Using cached https://mirrors.aliyun.com/pypi/packages/b2/52/62391b32309ce0cbf5e2d2ba5751a6a4a4cf8aec470f3d94ec76f2d85099/airtest-1.3.5.tar.gz (49.5 MB) Preparing metadata (setup.py) ... done Collecting Jinja2>=2.8 (from airtest) Using cached https://mirrors.aliyun.com/pypi/packages/31/80/3a54838c3fb461f6fec263ebf3a3a41771bd05190238de3486aae8540c36/jinja2-3.1.4-py3-none-any.whl (133 kB) Collecting Pillow>=3.4.0 (from airtest) Using cached https://mirrors.aliyun.com/pypi/packages/fb/01/3755ba287dac715e6afdb333cb1f6d69740a7475220b4637b5ce3d78cec2/pillow-11.0.0-cp313-cp313-win_amd64.whl (2.6 MB) Collecting requests>=2.11.1 (from airtest) Using cached https://mirrors.aliyun.com/pypi/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl (64 kB) Collecting six<=1.16.0,>=1.9.0 (from airtest) Using cached https://mirrors.aliyun.com/pypi/packages/d9/5a/e7c31adbe875f2abbb91bd84cf2dc52d792b5a01506781dbcf25c91daf11/six-1.16.0-py2.py3-none-any.whl (11 kB) Collecting mss==6.1.0 (from airtest) Using cached https://mirrors.aliyun.com/pypi/packages/d7/5f/77dece686b8d08a17430e169e936722693712b8cf1ee638caa8b1cb6452b/mss-6.1.0-py3-none-any.whl (76 kB) Collecting numpy<2.0 (from airtest) Using cached https://mirrors.aliyun.com/pypi/packages/65/6e/09db70a523a96d25e115e71cc56a6f9031e7b8cd166c1ac8438307c14058/numpy-1.26.4.tar.gz (15.8 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error ร— Preparing metadata (pyproject.toml) did not run successfully. 
│ exit code: 2 ╰─> [35 lines of output] + xxx.air\.venv\Scripts\python.exe C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\vendored-meson\meson\meson.py setup C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3 C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\.mesonpy-c7quwdt1 -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\.mesonpy-c7quwdt1\meson-python-native-file.ini Traceback (most recent call last): File "C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\vendored-meson\meson\mesonbuild\mesonmain.py", line 194, in run return options.run_func(options) ~~~~~~~~~~~~~~~~^^^^^^^^^ File "C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\vendored-meson\meson\mesonbuild\msetup.py", line 358, in run app.generate() ~~~~~~~~~~~~^^ File "C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\vendored-meson\meson\mesonbuild\msetup.py", line 178, in generate env = environment.Environment(self.source_dir, self.build_dir, self.options) File "C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\vendored-meson\meson\mesonbuild\environment.py", line 571, in __init__ config = coredata.parse_machine_files(self.coredata.config_files) File "C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\vendored-meson\meson\mesonbuild\coredata.py", line 1032, in parse_machine_files parser = MachineFileParser(filenames) File "C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\vendored-meson\meson\mesonbuild\coredata.py", line 973, in __init__ self.parser.read(filenames) ~~~~~~~~~~~~~~~~^^^^^^^^^^^ 
File "C:\Users\admin\AppData\Local\Temp\pip-install-zvzk__iw\numpy_dc9f6a4dc9314a7eb557e738b3d5f2d3\vendored-meson\meson\mesonbuild\coredata.py", line 960, in read return super().read(filenames, encoding) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\Python313\Lib\configparser.py", line 735, in read self._read(fp, filename) ~~~~~~~~~~^^^^^^^^^^^^^^ File "C:\Program Files\Python313\Lib\configparser.py", line 1050, in _read ParsingError._raise_all(self._read_inner(fp, fpname)) ~~~~~~~~~~~~~~~~^^^^^^^^^^^^ File "C:\Program Files\Python313\Lib\configparser.py", line 1058, in _read_inner for st.lineno, line in enumerate(map(Line, fp), start=1): ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen codecs>", line 325, in decode UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc9 in position 46: invalid continuation byte ERROR: Unhandled python exception This is a Meson bug and should be reported! [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. (.venv) PS xxx.air> ```
open
2024-11-23T20:09:06Z
2024-11-23T20:09:06Z
https://github.com/AirtestProject/Airtest/issues/1264
[]
Ran-Xing
0
microsoft/JARVIS
deep-learning
190
{available task list} slot is missing
in the Table 5 of the paper, it designs an injectable slot called {{available task list}} in the prompt for LLM. plz release more details about this slot.
open
2023-04-30T10:51:40Z
2023-04-30T10:51:40Z
https://github.com/microsoft/JARVIS/issues/190
[]
henern
0
yezz123/authx
pydantic
224
Failed to import RedisCacheBackend and JWTBackend class from authx. Version 0.4.0
### First Check - [X] I added a very descriptive title to this issue. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to AuthX but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to AuthX but to [FastAPI](https://github.com/tiangolo/fastapi). ### Example Code ```python from authx import RedisCacheBackend, JWTBackend, RedisBackend from fastapi import Depends, FastAPI from fastapi.security import HTTPBearer from starlette.config import Config app = FastAPI( title="AuthX", description="AuthX authentication system for fastapi.", version="0.1.0", ) config = Config(".env") SecurityConfig = JWTBackend( cache_backend=RedisBackend, private_key=config("PRIVATE_KEY", default="private_key"), public_key=config("PUBLIC_KEY", default="public_key"), access_expiration=3600, refresh_expiration=3600 ) oauth_token_scheme = HTTPBearer() # app.include_router(starlette_app.router) # pygeoapi # Set Anonymous User @app.get("/anonym") def anonym_test(): pass # Set Authenticated User @app.get("/user") async def user_test(token=Depends(oauth_token_scheme)): is_verified = await SecurityConfig.decode_token(token) print(is_verified) pass ``` ### Description as specified in the documentation for jwt implementation [https://authx.yezz.codes/configuration/security/jwt/](https://github.com/yezz123/authx/discussions/url) , from authx import RedisCacheBackend, JWTBackend should not return an error. as shown below ![Screenshot from 2022-04-02 15-06-33](https://user-images.githubusercontent.com/67229938/161563369-cb473886-3a50-47da-8949-455e23f1baf2.png) - i want to use JWTBackend to verify token using FastAPI dependency injection. a token should be passed to the swaggerUI docs from fastapi. and should be accessible via the oauth_token_scheme. 
- is_verified should return a boolean showing the verification state of the token, returns None instead of a boolean Suspected problem RedisCacheBackend is not an exported class from the module - not part of the module, but that is what was shown according to the documentation. using RedisBackend(this is exported from the module) instead, returns TypeError `Expected Type 'RedisBackend', got Type[RedisBackend] instead. in the JWTBackend class ### Operating System Linux ### Operating System Details _No response_ ### FastAPI Version 0.75.1 ### Python Version python 3.8.10 ### Additional Context _No response_
closed
2022-04-04T14:21:11Z
2023-03-06T09:31:41Z
https://github.com/yezz123/authx/issues/224
[ "bug", "question" ]
r-scheele
3
lorien/grab
web-scraping
69
ะžั‡ะตะฟัั‚ะบะฐ ะฒ PyquerySelector
File selector/selector.py, line 296 - `pyquery` is written instead of `query` ``` python class PyquerySelector(LxmlNodeBaseSelector): __slots__ = () def pyquery_node(self): return PyQuery(self.node) def process_query(self, query): return self.pyquery_node().find(pyquery) ``` should be ``` python def process_query(self, query): return self.pyquery_node().find(query) ```
closed
2014-10-08T09:04:06Z
2015-03-26T10:56:57Z
https://github.com/lorien/grab/issues/69
[]
antipooh
1
stanfordnlp/stanza
nlp
1,057
Problem when adding a new language
Hi all! First time posting a question, feel free to correct me if I'm not following conventions. I'm a Python newbie trying to start an NLP project, so all help is welcome! I'm trying to add a new language (Old English) to Stanza, to train a model to automatically annotate OE texts. My data is converted into the corresponding format, I have word2vec word vectors, and I have a tokenized raw text file according to the documentation (https://stanfordnlp.github.io/stanza/new_language.html#data-format). My main issue is that the documentation is not clear for me. The case example used to explain how to add new languages to Stanza, and more concretely the section CharacterLM, assumes that the user is going to use data either from Wikipedia or from conll17 or OSCAR, so the terminal commands examples are fitting those scenarios. As my data is from other source, I'm using this command `python3 -m stanza.utils.charlm.make_lm_data extern_data/charlm_raw extern_data/charlm` from the third bulletpoint, giving my source directory and target directory. This is where my problem starts, when I enter the command in the terminal, the following error appears: <img width="942" alt="Captura de Pantalla 2022-06-23 a las 14 52 40" src="https://user-images.githubusercontent.com/65161098/175506650-ef389dac-e2c7-4715-a325-013911cafbe8.png"> The command automatically looks for a language, although the language parameter is optional in the command. It also appears that the command tries to create the target directory, although the target directory parameter is not optional for the command, and then the error NotADirectoryError appears. Is there anything that I'm doing wrong that prevents me from progressing in this project? Any thoughts on how I can solve this problem? I have tried looking for info in the published issues in this repo and on the internet, but I haven't found any extra info about how to add new languages. Thanks for your help, and sorry for the long post!
closed
2022-06-24T09:31:04Z
2022-09-16T00:46:45Z
https://github.com/stanfordnlp/stanza/issues/1057
[ "question", "stale" ]
dmetola
7
piskvorky/gensim
machine-learning
3,108
For sponsors
## Thank you for your [Gensim sponsorship](https://github.com/sponsors/piskvorky) ❤️ I don't know who you are, I can only see your Github handle. So please leave your desired Twitter handle here as a comment, for a mention in the monthly [@gensim_py](https://twitter.com/gensim_py) tweet. If you chose one of the tiers where I send you stuff, **contact me [via email](mailto:me+sponsorship@radimrehurek.com) with your mailing address and T-shirt sizes / company logo**. Thanks again!
open
2021-04-08T07:18:03Z
2021-04-22T13:42:32Z
https://github.com/piskvorky/gensim/issues/3108
[]
piskvorky
1
ivy-llc/ivy
numpy
28,765
Fix Frontend Failing Test: tensorflow - tensor.torch.Tensor.new_zeros
To-do List: https://github.com/Transpile-AI/ivy/issues/27499
open
2024-06-17T09:46:14Z
2025-03-18T14:49:08Z
https://github.com/ivy-llc/ivy/issues/28765
[ "Sub Task" ]
Mubashirshariq
1
browser-use/browser-use
python
858
Mistook My Instructions and sent the browser to a bad website!
### Bug Description I was trying to get the web browser to help me write some blogs and it took me to a tranny escort service website... WTF MAN??? NOT COOL! Now all three chats that I had started are not loading and there is no way to delete them? I would like a refund for my subscription please... ### Reproduction Steps gave instructions to write me a blog post on wordpress.com and it bugged out on me... ### Code Sample ```python Updated Instructions for Your Tool Navigate to WordPress Go to your WordPress dashboard at yourdomain.com/wp-admin. Log in with the provided credentials (username and password) if required. Start a New Post Click Posts in the left sidebar. Click Add New (the blue button on the top right or left sidebar) to open a new post. Enter the Title Look for the big “Add title” field at the very top of the page (it’s a gray box that says “Add title” in light text). Click inside that box to focus on it. Type exactly: Darn Tough - Socks (including the space and hyphen). Wait 2 seconds after typing to ensure the text is entered correctly. Publish the Post Look for the Publish button. It’s usually on the right sidebar (under “Status & Visibility”) or at the top-right corner of the page (a blue button labeled “Publish” or “Update”). Click the Publish button to publish the post immediately. Wait 3 seconds after clicking to ensure the post is published. Confirm and Move On Check if the post appears in the WordPress “Posts” list with the title “Darn Tough - Socks” and a “Published” status. If successful, show a message in the browser (e.g., a popup or text box) saying: “Published ‘Darn Tough - Socks’ successfully.” If there’s an error (e.g., the title isn’t entered or the post doesn’t publish), show an error message (e.g., “Failed to publish ‘Darn Tough - Socks’—check title or button location”) and pause for user instructions. 
Repeat for Other Products (Optional) If instructed, repeat steps 2–5 for the next product title in the list (e.g., “Vermont Flannel - Flannel”), following the same process. Use the full list of 50 product titles I provided earlier, entering and publishing one at a time. ```
open
2025-02-25T00:24:22Z
2025-02-25T05:28:09Z
https://github.com/browser-use/browser-use/issues/858
[ "bug" ]
SubliminalCoding
1
wkentaro/labelme
computer-vision
749
Problem with python labelme2voc.py data_annotated data_dataset_voc --labels labels.txt in my data
dear all, (tensorflow1x) D:\segmentation\labelme-master\examples\semantic_segmentation>python labelme2voc.py data_annotated data_dataset_voc --labels labels.txt I try them with my data below [labels.txt](https://github.com/wkentaro/labelme/files/5060621/labels.txt) [data_annotated.zip](https://github.com/wkentaro/labelme/files/5060624/data_annotated.zip) Result: Can not show all labels. It only show 1 label is __ignore__. Why? ![10](https://user-images.githubusercontent.com/30711163/89975582-52285e00-dc90-11ea-9753-a69d8c89276d.png) Help me! Thank you so much!
closed
2020-08-12T04:38:31Z
2022-06-25T04:58:21Z
https://github.com/wkentaro/labelme/issues/749
[]
NguyenDangBinh
0
aio-libs-abandoned/aioredis-py
asyncio
1,003
[2.0] Use loop.time() instead of time.time() for health checks
In the 2.0 alpha, health checks are based around `time.time`. Using `asyncio.get_event_loop().time()` has several advantages: - The default implementation uses a monotonic clock, so it won't go wonky if the system time is changed. - It sticks to a consistent source of time relative to things like `async_timeout`, which makes mocking out time easier (e.g. with [async-solipsism](https://github.com/bmerry/async-solipsism)). I'm happy to submit a PR if there is agreement that this is a reasonable change.
closed
2021-06-08T18:49:33Z
2021-07-22T00:37:20Z
https://github.com/aio-libs-abandoned/aioredis-py/issues/1003
[ "enhancement" ]
bmerry
2
biolab/orange3
data-visualization
6,169
Splash screen on Windows with scaling
Splash screen on Widows is pixelated when using scaling. The screenshot below was captured using 125% scaling. The image is nice without scaling. ![image](https://user-images.githubusercontent.com/919223/195833408-aaa6cf2e-0c1a-4402-a649-3ce488b74250.png)
closed
2022-10-14T11:15:24Z
2022-11-25T08:35:04Z
https://github.com/biolab/orange3/issues/6169
[ "bug report" ]
thocevar
0
microsoft/unilm
nlp
1,032
Pre-training code of BEiT-3
Great work! However, the code of [BEiT-3](https://github.com/microsoft/unilm/tree/master/beit3) only includes code for various downstream tasks. Is there any way I can reproduce the pre-training task?
closed
2023-03-14T12:17:58Z
2023-03-15T15:27:28Z
https://github.com/microsoft/unilm/issues/1032
[]
MonsterZhZh
3
serengil/deepface
deep-learning
1,193
'dfs' is not recognized as an internal or external command, operable program or batch file.
Okay great but now i have a new issue i did your command with my specific addins to it but this came back of command prompt ![image](https://github.com/serengil/deepface/assets/124982114/684f7974-8f13-4fe0-9276-c3acbf3bade6) I dont know maybe im just stupid and didnt comprehend something Command Prompt C:\Windows\System32>dfs = DeepFace.find(img_path = "102-1000-2.jpg", db_path = "C:/Users/Manti/Documents") 'dfs' is not recognized as an internal or external command, operable program or batch file. I thought i did it correctly i added the info of where it can find my exact photo by the photo name which is 102-1000-2 i didnt get a chance yet to actually customize it after its default naming by the computer when saving the photo and i added the complete path of where the photo is located
closed
2024-04-18T05:39:12Z
2024-04-18T07:54:51Z
https://github.com/serengil/deepface/issues/1193
[ "invalid" ]
olstice
6
supabase/supabase-py
fastapi
926
Supabase Client Requires Explicit `sign_out()` to Terminate Properly
## Summary The Supabase client currently requires an explicit call to `client.auth.sign_out()` for processes to terminate correctly. Without this, background WebSocket connections and other resources may remain active, leading to incomplete shutdowns and potential resource leaks. ## Problem Explanation: The current behavior of the Supabase client involves establishing WebSocket connections and listening for authentication events. These processes, especially those involving real-time functionality, do not automatically terminate upon the program’s end. Explicitly calling client.auth.sign_out() is necessary to clean up these resources and ensure proper process termination. ``` # From SyncClient class in SyncClient.py class SyncClient: def __init__(self, ...): # ... self.realtime = self._init_realtime_client( realtime_url=self.realtime_url, supabase_key=self.supabase_key, options=options.realtime if options else None, ) # ... @staticmethod def _init_realtime_client( realtime_url: str, supabase_key: str, options: Optional[Dict[str, Any]] ) -> SyncRealtimeClient: """Private method for creating an instance of the realtime-py client.""" return SyncRealtimeClient( realtime_url, token=supabase_key, params=options or {} ) def _listen_to_auth_events( self, event: AuthChangeEvent, session: Union[Session, None] ): # ... self.realtime.set_auth(access_token) # From SyncRealtimeClient in realtime-py class SyncRealtimeClient: def __init__(self, ...): # ... self._endpointWebSocket = None # ... def connect(self): # ... self._endpointWebSocket = websocket.WebSocketApp( # ... ) # ... def set_auth(self, token): # ... self.connect() # This might create a new WebSocket connection # From GoTrueClient in gotrue-py class SyncGoTrueClient: def sign_out(self, options: SignOutOptions = {"scope": "global"}) -> None: # ... self._remove_session() self._notify_all_subscribers("SIGNED_OUT", None) ``` ## Key points: 1. 
Real-time Connections: The WebSocket connections created by SyncRealtimeClient continue running in the background and need to be manually terminated. 2. Authentication Events: Sign-out triggers an event that helps reset real-time client authentication, which won't occur unless sign_out() is called. 3. Resource Management: The sign_out() function ensures proper cleanup of sessions and network connections, preventing potential memory leaks or resource hogging. 4. Daemon Threads: Real-time connections might be running as daemon threads, which do not automatically terminate, leading to hanging processes unless explicitly stopped with sign_out(). Given this behavior, the necessity of an explicit `client.auth.sign_out()` call should be clearly documented and potentially re-evaluated for a more intuitive shutdown process.
open
2024-09-16T07:17:13Z
2025-01-23T16:28:07Z
https://github.com/supabase/supabase-py/issues/926
[ "bug" ]
sigridjineth
7
miguelgrinberg/Flask-SocketIO
flask
1,257
Recursive stack overflow
**Describe the bug** Occasional stack overflow happens, looks similar to #230 but i'm unsure if it's happening when I'm emitting a message. ``` # pip show flask_socketio Name: Flask-SocketIO Version: 4.2.1 Summary: Socket.IO integration for Flask applications Home-page: http://github.com/miguelgrinberg/Flask-SocketIO/ Author: Miguel Grinberg Author-email: miguelgrinberg50@gmail.com License: MIT Location: /usr/local/lib/python3.6/site-packages Requires: python-socketio, Flask Required-by: ``` **Logs** ```(7) accepted ('172.18.0.4', 42154) 109.41.129.0,172.18.0.4 - - [17/Apr/2020 05:03:45] "GET /socket.io/?id=&type=undefined&uid=21169&EIO=3&transport=polling&t=N66YjHM HTTP/1.1" 200 1292 0.034741 (7) accepted ('172.18.0.4', 42172) disconnect handler error Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 545, in _trigger_event return self.handlers[event](*args) File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 725, in _handle_eio_disconnect self._handle_disconnect(sid, '/') File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 632, in _handle_disconnect self._trigger_event('disconnect', '/', sid) File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 680, in _trigger_event return self.handlers[namespace][event](*args) File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 284, in _handler *args) File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 675, in _handle_event with app.request_context(self.server.environ[sid]): File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2358, in request_context return RequestContext(self, environ) RecursionError: maximum recursion depth exceeded while calling a Python object Fatal Python error: Cannot recover from stack overflow. 
Current thread 0x00007fdd71352700 (most recent call first): File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2358 in request_context File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 675 in _handle_event File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 284 in _handler File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 680 in _trigger_event File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 632 in _handle_disconnect File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 725 in _handle_eio_disconnect File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 545 in _trigger_event File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 131 in close File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 80 in check_ping_timeout File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 86 in send File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 217 in send File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 588 in _send_packet File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 577 in _emit_internal File "/usr/local/lib/python3.6/site-packages/socketio/base_manager.py", line 141 in emit File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 286 in emit File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 417 in emit File "/app/backend.py", line 235 in disconnect File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 698 in _handle_event File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 284 in _handler File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 680 in _trigger_event File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 632 in _handle_disconnect File 
"/usr/local/lib/python3.6/site-packages/socketio/server.py", line 725 in _handle_eio_disconnect File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 545 in _trigger_event File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 131 in close File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 80 in check_ping_timeout File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 86 in send File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 217 in send File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 588 in _send_packet File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 577 in _emit_internal File "/usr/local/lib/python3.6/site-packages/socketio/base_manager.py", line 141 in emit File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 286 in emit File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 417 in emit File "/app/backend.py", line 235 in disconnect File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 698 in _handle_event File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 284 in _handler File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 680 in _trigger_event File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 632 in _handle_disconnect File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 725 in _handle_eio_disconnect File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 545 in _trigger_event File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 131 in close File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 80 in check_ping_timeout File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 86 in send File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 217 in send File 
"/usr/local/lib/python3.6/site-packages/socketio/server.py", line 588 in _send_packet File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 577 in _emit_internal File "/usr/local/lib/python3.6/site-packages/socketio/base_manager.py", line 141 in emit File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 286 in emit File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 417 in emit File "/app/backend.py", line 235 in disconnect File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 698 in _handle_event File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 284 in _handler File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 680 in _trigger_event File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 632 in _handle_disconnect File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 725 in _handle_eio_disconnect File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 545 in _trigger_event File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 131 in close File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 80 in check_ping_timeout File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 86 in send File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 217 in send File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 588 in _send_packet File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 577 in _emit_internal File "/usr/local/lib/python3.6/site-packages/socketio/base_manager.py", line 141 in emit File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 286 in emit File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 417 in emit File "/app/backend.py", line 235 in disconnect File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 698 in 
_handle_event File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 284 in _handler File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 680 in _trigger_event File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 632 in _handle_disconnect File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 725 in _handle_eio_disconnect File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 545 in _trigger_event File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 131 in close File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 80 in check_ping_timeout File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 86 in send File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 217 in send File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 588 in _send_packet File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 577 in _emit_internal File "/usr/local/lib/python3.6/site-packages/socketio/base_manager.py", line 141 in emit File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 286 in emit File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 417 in emit File "/app/backend.py", line 235 in disconnect File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 698 in _handle_event File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 284 in _handler File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 680 in _trigger_event File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 632 in _handle_disconnect File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 725 in _handle_eio_disconnect File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 545 in _trigger_event File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 131 in close File 
"/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 80 in check_ping_timeout File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 86 in send File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 217 in send File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 588 in _send_packet File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 577 in _emit_internal File "/usr/local/lib/python3.6/site-packages/socketio/base_manager.py", line 141 in emit File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 286 in emit File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 417 in emit File "/app/backend.py", line 235 in disconnect File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 698 in _handle_event File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 284 in _handler File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 680 in _trigger_event ... Thread 0x00007fdd7a730700 (most recent call first): File "/usr/local/lib/python3.6/site-packages/werkzeug/_reloader.py", line 214 in run File "/usr/local/lib/python3.6/site-packages/werkzeug/_reloader.py", line 337 in run_with_reloader File "/usr/local/lib/python3.6/site-packages/werkzeug/serving.py", line 1060 in run_with_reloader File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 569 in run File "/app/backend.py", line 256 in main File "/app/backend.py", line 261 in <module> ``` I've enabled `logger=True` and `engineio_logger=True` and I'll try to catch any more information that I can ``` socketio = SocketIO(app, async_mode='eventlet', logger=True, engineio_logger=True) ```
closed
2020-04-19T18:23:17Z
2020-06-30T22:53:17Z
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1257
[ "question" ]
dgtlmoon
13
mljar/mercury
jupyter
301
Can you stream the output as the notebook is evaluating?
Currently, the output only displays after the entire notebook evaluates. However, you may have some computationally expensive cells. From a UI/UX perspective, it would be helpful to see the results get stream as they are available
open
2023-05-29T19:02:54Z
2023-05-30T10:07:09Z
https://github.com/mljar/mercury/issues/301
[ "enhancement", "help wanted" ]
kapily
1
matplotlib/matplotlib
matplotlib
29,534
[Bug]: missing graph
### Bug summary Good day, I'm having issues with my graphs showing after running my command, I only get axis but no graph ![Image](https://github.com/user-attachments/assets/c19f90fb-7946-44cb-ab34-3ed69580460e) ### Code for reproduction ```Python Gby_plt.plot() ``` ### Actual outcome ![Image](https://github.com/user-attachments/assets/a17e50ea-3cf0-4ad9-bb38-010441b86300) ### Expected outcome ![Image](https://github.com/user-attachments/assets/642c86ef-0190-46e9-bac4-475632fc6dfa) ### Additional information _No response_ ### Operating system _No response_ ### Matplotlib Version 3.9.2 ### Matplotlib Backend _No response_ ### Python version 3.10.1 ### Jupyter version _No response_ ### Installation None
open
2025-01-28T16:37:05Z
2025-01-29T17:58:27Z
https://github.com/matplotlib/matplotlib/issues/29534
[ "Community support" ]
Gidman21
2
taverntesting/tavern
pytest
640
How to understand and solve the error: "List item(s) not present in response"
Below is the output of my test. How can I understand which item(s) aren't present in the response?

The strict options provided for the whole file:

```yaml
strict:
  - headers:off
  - json:off
```

Format variables:

```
tavern.env_vars.PA_HOST = 'localhost:8080'
```

Source test stage (line 681):

```yaml
- name: Average payment by country
  request:
    url: '{tavern.env_vars.PA_HOST}/v2/data/avg_payment'
    method: POST
    headers:
      X-Auth-Key: faketoken
      content-type: application/json
    json:
      filter: []
      grouping:
        - user_country_name
      ordering:
        - direction: desc
          field: sum_revenue_usd
      aggregation:
        - aggr: avg
          field: avgpayment_usd
        - aggr: sum
          field: revenue_usd
      others:
        count: 6
      time_interval:
        from: 1606813200
        to: 1609405200
  response:
    status_code: 200
    headers:
      content-type: application/json
    json:
      data:
        - data:
            - avg_avgpayment_usd: 34.9
              avg_avgpayment_usd_symbol: $
              sum_revenue_usd: 56608.99
              sum_revenue_usd_symbol: $
              user_country_name: USA
            - avg_avgpayment_usd: 34.75
              avg_avgpayment_usd_symbol: $
              sum_revenue_usd: 9183.92
              sum_revenue_usd_symbol: $
              user_country_name: United Kingdom
            - avg_avgpayment_usd: 40.36
              avg_avgpayment_usd_symbol: $
              sum_revenue_usd: 8213.64
              sum_revenue_usd_symbol: $
              user_country_name: Germany
            - avg_avgpayment_usd: 34.11
              avg_avgpayment_usd_symbol: $
              sum_revenue_usd: 7967.49
              sum_revenue_usd_symbol: $
              user_country_name: Canada
            - avg_avgpayment_usd: 42.52
              avg_avgpayment_usd_symbol: $
              sum_revenue_usd: 4023.08
              sum_revenue_usd_symbol: $
              user_country_name: France
            - avg_avgpayment_usd: 26.75
              avg_avgpayment_usd_symbol: $
              sum_revenue_usd: 3351.44
              sum_revenue_usd_symbol: $
              user_country_name: South Korea
            - avg_avgpayment_usd: 34.4
              avg_avgpayment_usd_symbol: $
              sum_revenue_usd: 29498.04
              sum_revenue_usd_symbol: $
              user_country_name: Others
          name: avg_payment
      error: {}
```

Formatted stage:

```yaml
name: Average payment by country
request:
  headers:
    X-Auth-Key: faketoken
    content-type: application/json
  json:
    aggregation:
      - aggr: avg
        field: avgpayment_usd
      - aggr: sum
        field: revenue_usd
    filter: []
    grouping:
      - user_country_name
    ordering:
      - direction: desc
        field: sum_revenue_usd
    others:
      count: 6
    time_interval:
      from: 1606813200
      to: 1609405200
  method: POST
  url: 'localhost:8080/v2/data/avg_payment'
response:
  headers:
    content-type: application/json
  json:
    data:
      - data:
          - avg_avgpayment_usd: 34.9
            avg_avgpayment_usd_symbol: $
            sum_revenue_usd: 56608.99
            sum_revenue_usd_symbol: $
            user_country_name: USA
          - avg_avgpayment_usd: 34.75
            avg_avgpayment_usd_symbol: $
            sum_revenue_usd: 9183.92
            sum_revenue_usd_symbol: $
            user_country_name: United Kingdom
          - avg_avgpayment_usd: 40.36
            avg_avgpayment_usd_symbol: $
            sum_revenue_usd: 8213.64
            sum_revenue_usd_symbol: $
            user_country_name: Germany
          - avg_avgpayment_usd: 34.11
            avg_avgpayment_usd_symbol: $
            sum_revenue_usd: 7967.49
            sum_revenue_usd_symbol: $
            user_country_name: Canada
          - avg_avgpayment_usd: 42.52
            avg_avgpayment_usd_symbol: $
            sum_revenue_usd: 4023.08
            sum_revenue_usd_symbol: $
            user_country_name: France
          - avg_avgpayment_usd: 26.75
            avg_avgpayment_usd_symbol: $
            sum_revenue_usd: 3351.44
            sum_revenue_usd_symbol: $
            user_country_name: South Korea
          - avg_avgpayment_usd: 34.4
            avg_avgpayment_usd_symbol: $
            sum_revenue_usd: 29498.04
            sum_revenue_usd_symbol: $
            user_country_name: Others
        name: avg_payment
    error: {}
  status_code: 200
```

```
Errors:
E   tavern.util.exceptions.TestFailError: Test 'Average payment by country' failed:
- List item(s) not present in response: [{'data': [{'avg_avgpayment_usd': 34.9, 'avg_avgpayment_usd_symbol': '$', 'sum_revenue_usd': 56608.99, 'sum_revenue_usd_symbol': '$', 'user_country_name': 'USA'}, {'avg_avgpayment_usd': 34.75, 'avg_avgpayment_usd_symbol': '$', 'sum_revenue_usd': 9183.92, 'sum_revenue_usd_symbol': '$', 'user_country_name': 'United Kingdom'}, {'avg_avgpayment_usd': 40.36, 'avg_avgpayment_usd_symbol': '$', 'sum_revenue_usd': 8213.64, 'sum_revenue_usd_symbol': '$', 'user_country_name': 'Germany'}, {'avg_avgpayment_usd': 34.11, 'avg_avgpayment_usd_symbol': '$', 'sum_revenue_usd': 7967.49, 'sum_revenue_usd_symbol': '$', 'user_country_name': 'Canada'}, {'avg_avgpayment_usd': 42.52, 'avg_avgpayment_usd_symbol': '$', 'sum_revenue_usd': 4023.08, 'sum_revenue_usd_symbol': '$', 'user_country_name': 'France'}, {'avg_avgpayment_usd': 26.75, 'avg_avgpayment_usd_symbol': '$', 'sum_revenue_usd': 3351.44, 'sum_revenue_usd_symbol': '$', 'user_country_name': 'South Korea'}, {'avg_avgpayment_usd': 34.4, 'avg_avgpayment_usd_symbol': '$', 'sum_revenue_usd': 29498.04, 'sum_revenue_usd_symbol': '$', 'user_country_name': 'Others'}], 'name': 'avg_payment'}]
```

Formatted JSON I got from the server:

```json
{
  "error": {},
  "data": [
    {
      "name": "avg_payment",
      "data": [
        {
          "avg_avgpayment_usd": 34.9,
          "avg_avgpayment_usd_symbol": "$",
          "sum_revenue_usd": 56608.99,
          "sum_revenue_usd_symbol": "$",
          "user_country_name": "USA"
        },
        {
          "avg_avgpayment_usd": 34.75,
          "avg_avgpayment_usd_symbol": "$",
          "sum_revenue_usd": 9183.92,
          "sum_revenue_usd_symbol": "$",
          "user_country_name": "United Kingdom"
        },
        {
          "avg_avgpayment_usd": 40.36,
          "avg_avgpayment_usd_symbol": "$",
          "sum_revenue_usd": 8213.64,
          "sum_revenue_usd_symbol": "$",
          "user_country_name": "Germany"
        },
        {
          "avg_avgpayment_usd": 34.11,
          "avg_avgpayment_usd_symbol": "$",
          "sum_revenue_usd": 7967.49,
          "sum_revenue_usd_symbol": "$",
          "user_country_name": "Canada"
        },
        {
          "avg_avgpayment_usd": 42.52,
          "avg_avgpayment_usd_symbol": "$",
          "sum_revenue_usd": 4023.08,
          "sum_revenue_usd_symbol": "$",
          "user_country_name": "France"
        },
        {
          "avg_avgpayment_usd": 26.75,
          "avg_avgpayment_usd_symbol": "$",
          "sum_revenue_usd": 3351.44,
          "sum_revenue_usd_symbol": "$",
          "user_country_name": "South Korea"
        },
        {
          "avg_avgpayment_usd": 34.4,
          "avg_avgpayment_usd_symbol": "$",
          "sum_revenue_usd": 29498.04,
          "sum_revenue_usd_symbol": "$",
          "user_country_name": "Others"
        }
      ]
    }
  ]
}
```
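Until the error message itself pinpoints the offending entry, one way to narrow this down (a hypothetical helper, not part of tavern) is to diff the expected list items against the actual response in plain Python:

```python
import json

def missing_items(expected_list, actual_list):
    """Return the expected items that have no exact match in the actual list."""
    actual_dumped = [json.dumps(item, sort_keys=True) for item in actual_list]
    return [item for item in expected_list
            if json.dumps(item, sort_keys=True) not in actual_dumped]

expected = [{"user_country_name": "USA", "sum_revenue_usd": 56608.99}]
actual = [{"user_country_name": "USA", "sum_revenue_usd": 56608.98}]  # off by a cent

print(missing_items(expected, actual))
# [{'user_country_name': 'USA', 'sum_revenue_usd': 56608.99}]
print(missing_items(expected, expected))
# []
```

Running this on the expected `json.data` list versus the server's `data` list would show exactly which dict (and, by inspection, which key or value) fails to match.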
closed
2021-02-07T21:17:37Z
2023-01-09T15:48:24Z
https://github.com/taverntesting/tavern/issues/640
[]
rpoletaev
1
SALib/SALib
numpy
127
morris analysis method requires problem['group'] to exist
The sample method uses `problem.get('groups')` and the analysis method should, too, to avoid requiring a call to set `problem['groups'] = None`.
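The difference is easy to demonstrate with a plain dict: direct indexing raises `KeyError` when the key is absent, while `.get()` quietly returns `None`. This is the behaviour the analyze method would need to mirror (the problem spec below is an illustrative ungrouped example):

```python
problem = {
    "num_vars": 2,
    "names": ["x1", "x2"],
    "bounds": [[0, 1], [0, 1]],
    # note: no 'groups' key, as in a typical ungrouped problem spec
}

groups = problem.get("groups")   # -> None, no error: what sample() does
print(groups)                    # None

try:
    groups = problem["groups"]   # -> KeyError: what analyze() effectively requires
except KeyError as exc:
    print("KeyError:", exc)
```

Switching the analyze method to `.get('groups')` makes an ungrouped problem spec work without the `None` workaround.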
closed
2017-01-05T18:21:53Z
2017-01-07T23:39:59Z
https://github.com/SALib/SALib/issues/127
[ "bug" ]
rjplevin
2
yezz123/authx
pydantic
462
๐Ÿ”’๏ธ Add Security Tests including Examples
To address the security of the package's backend, we need tests that verify each step of the token-creation process is handled correctly.

```py
config = AuthXConfig(
    JWT_ALGORITHM = "HS256",
    JWT_SECRET_KEY = "SECRET_KEY",
    # Here in the JWT Location
    JWT_TOKEN_LOCATION = ["headers"],
)
```
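As an illustration of the kind of invariant such security tests could assert (hand-rolled HMAC-SHA256 signing with the stdlib, not authx's actual implementation): a token signed with one secret must fail verification under any other secret.

```python
import base64
import hashlib
import hmac
import json

def sign(payload: dict, secret: str) -> str:
    """Minimal JWS-like token: base64(payload) + '.' + base64(HMAC-SHA256 sig)."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), body, hashlib.sha256).digest()
    return body.decode() + "." + base64.urlsafe_b64encode(sig).decode()

def verify(token: str, secret: str) -> bool:
    body, sig = token.split(".")
    expected = hmac.new(secret.encode(), body.encode(), hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(base64.urlsafe_b64decode(sig), expected)

token = sign({"sub": "alice"}, "SECRET_KEY")
print(verify(token, "SECRET_KEY"))  # True
print(verify(token, "WRONG_KEY"))   # False
```

A test suite along these lines (wrong secret, wrong algorithm, tampered payload, token in the wrong location) would cover the configuration knobs shown in the `AuthXConfig` snippet above.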
closed
2023-06-01T20:32:33Z
2025-03-21T10:01:02Z
https://github.com/yezz123/authx/issues/462
[ "enhancement", "python" ]
yezz123
0