| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
iperov/DeepFaceLab | machine-learning | 832 | Comparison | Hello, I read the DeepFaceLab paper and would like to ask how the quantitative comparison in it is done (e.g., SSIM, pose, landmarks). For learning purposes, I wrote a face-swapping program based on style transfer, but I don’t know how to make comparisons like yours. Can you answer this for me? Thank you. | open | 2020-07-14T14:23:13Z | 2023-06-08T20:42:29Z | https://github.com/iperov/DeepFaceLab/issues/832 | [] | notknowchild | 1 |
nolar/kopf | asyncio | 774 | CLI forces --log-format | ## Long story short
`kopf run` doesn't run without `--log-format`
## Description
```bash
$ kopf run
Usage: kopf run [OPTIONS] [PATHS]...
Try 'kopf run --help' for help.
Error: Invalid value for '--log-format': <LogFormat.FULL: '[%(asctime)s] %(name)-20.20s [%(levelname)-8.8s] %(message)s'> is not one of 'plain', 'full', 'json'.
```
```bash
$ kopf run --log-format full
...does the magic
```
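The mismatch in the first error can be illustrated with a plain-Python enum sketch (illustrative only; the names and values are assumed, not copied from kopf's internals):

```python
import enum

# Hypothetical stand-in for kopf's LogFormat enum (names/values assumed).
class LogFormat(enum.Enum):
    FULL = "[%(asctime)s] %(name)-20.20s [%(levelname)-8.8s] %(message)s"

choices = ["plain", "full", "json"]   # what the CLI validates against

# A default of LogFormat.FULL (the enum member itself) is not among the
# string choices, reproducing the "Invalid value for '--log-format'" error:
assert LogFormat.FULL not in choices
# Passing the lowercase string explicitly is accepted, which matches why
# `kopf run --log-format full` works:
assert LogFormat.FULL.name.lower() in choices
```

This suggests the CLI default is the enum member where a plain string is expected.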
## Environment
* Kopf version: kopf, version 1.31.0
* Python version: Python 3.7.7
* OS/platform: Ubuntu
| closed | 2021-05-14T11:49:16Z | 2021-05-14T12:01:12Z | https://github.com/nolar/kopf/issues/774 | [
"bug"
] | mnarodovitch | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 294 | encoder_train.py and --low_mem | VoxCeleb2 dataset doesn't work with encoder_preprocess.py on Linux. The m4a files throw an error and nothing progresses. I converted all the m4a files to wav. This passes preprocessing, but doesn't output numpy files, which makes the next step of training fail.
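The m4a-to-wav conversion step mentioned above can be scripted; a minimal sketch (assumes `ffmpeg` is on PATH — any sample-rate or channel flags the encoder expects are not shown):

```python
import pathlib
import subprocess

def ffmpeg_wav_cmd(m4a_path):
    """Build the ffmpeg command that converts one .m4a to a sibling .wav."""
    m4a = pathlib.Path(m4a_path)
    return ["ffmpeg", "-y", "-i", str(m4a), str(m4a.with_suffix(".wav"))]

def convert_all(root):
    """Convert every .m4a found under root (recursively)."""
    for m4a in pathlib.Path(root).rglob("*.m4a"):
        subprocess.run(ffmpeg_wav_cmd(m4a), check=True)
```

For example, `ffmpeg_wav_cmd("clip.m4a")` builds `["ffmpeg", "-y", "-i", "clip.m4a", "clip.wav"]`.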
If I use only LibriSpeech/train-other-500, VoxCeleb1/wav, and VoxCeleb1/vox1_meta.csv, encoder_train.py works until it runs out of memory. I have a GTX 750 ti with only 2GB of VRAM.
I can't figure out how to get --low_mem flag to work with encoder_train.py. | closed | 2020-03-08T05:56:08Z | 2020-07-08T18:12:38Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/294 | [] | unassumingbadger | 1 |
hankcs/HanLP | nlp | 590 | Array index out of bounds in the guess-most-likely-part-of-speech method | In `com.hankcs.hanlp.seg.common.Vertex`:
```java
public Nature guessNature() {
    return attribute.nature[0];
}
```
This method throws an array-index-out-of-bounds error. | closed | 2017-07-23T03:39:43Z | 2017-07-23T11:17:24Z | https://github.com/hankcs/HanLP/issues/590 | [
"invalid"
] | daydayup999 | 2 |
NVlabs/neuralangelo | computer-vision | 107 | TNT dataset church demo result is bad | Has anyone tried the TNT dataset's church demo? The result (50k iters) is not good.



Here are my training parameters:
Training with 1 GPUs.
Using random seed 0
Make folder logs/church
* checkpoint:
* save_epoch: 9999999999
* save_iter: 20000
* save_latest_iter: 9999999999
* save_period: 9999999999
* strict_resume: True
* cudnn:
* benchmark: True
* deterministic: False
* data:
* name: dummy
* num_images: None
* num_workers: 4
* preload: True
* readjust:
* center: [0.0, 0.0, 0.0]
* scale: 1.0
* root: datasets/church
* train:
* batch_size: 2
* image_size: [1086, 1960]
* subset: None
* type: projects.neuralangelo.data
* use_multi_epoch_loader: True
* val:
* batch_size: 1
* image_size: [300, 541]
* max_viz_samples: 16
* subset: 1
* image_save_iter: 9999999999
* inference_args:
* local_rank: 0
* logdir: logs/church
* logging_iter: 9999999999999
* max_epoch: 9999999999
* max_iter: 500000
* metrics_epoch: None
* metrics_iter: None
* model:
* appear_embed:
* dim: 8
* enabled: False
* background:
* enabled: False
* encoding:
* levels: 10
* type: fourier
* encoding_view:
* levels: 3
* type: spherical
* mlp:
* activ: relu
* activ_density: softplus
* activ_density_params:
* activ_params:
* hidden_dim: 256
* hidden_dim_rgb: 128
* num_layers: 8
* num_layers_rgb: 2
* skip: [4]
* skip_rgb: []
* view_dep: True
* white: False
* object:
* rgb:
* encoding_view:
* levels: 3
* type: spherical
* mlp:
* activ: relu_
* activ_params:
* hidden_dim: 256
* num_layers: 4
* skip: []
* weight_norm: True
* mode: idr
* s_var:
* anneal_end: 0.1
* init_val: 3.0
* sdf:
* encoding:
* coarse2fine:
* enabled: True
* init_active_level: 8
* step: 5000
* hashgrid:
* dict_size: 20
* dim: 4
* max_logres: 11
* min_logres: 5
* range: [-2, 2]
* levels: 16
* type: hashgrid
* gradient:
* mode: numerical
* taps: 4
* mlp:
* activ: softplus
* activ_params:
* beta: 100
* geometric_init: True
* hidden_dim: 256
* inside_out: True
* num_layers: 1
* out_bias: 0.5
* skip: []
* weight_norm: True
* render:
* num_sample_hierarchy: 4
* num_samples:
* background: 0
* coarse: 64
* fine: 16
* rand_rays: 512
* stratified: True
* type: projects.neuralangelo.model
* nvtx_profile: False
* optim:
* fused_opt: False
* params:
* lr: 0.001
* weight_decay: 0.01
* sched:
* gamma: 10.0
* iteration_mode: True
* step_size: 9999999999
* two_steps: [300000, 400000]
* type: two_steps_with_warmup
* warm_up_end: 5000
* type: AdamW
* pretrained_weight: None
* source_filename: projects/neuralangelo/configs/custom/church.yaml
* speed_benchmark: False
* test_data:
* name: dummy
* num_workers: 0
* test:
* batch_size: 1
* is_lmdb: False
* roots: None
* type: imaginaire.datasets.images
* timeout_period: 9999999
* trainer:
* amp_config:
* backoff_factor: 0.5
* enabled: False
* growth_factor: 2.0
* growth_interval: 2000
* init_scale: 65536.0
* ddp_config:
* find_unused_parameters: False
* static_graph: True
* depth_vis_scale: 0.5
* ema_config:
* beta: 0.9999
* enabled: False
* load_ema_checkpoint: False
* start_iteration: 0
* grad_accum_iter: 1
* image_to_tensorboard: False
* init:
* gain: None
* type: none
* loss_weight:
* curvature: 0.0005
* eikonal: 0.1
* render: 1.0
* type: projects.neuralangelo.trainer
* validation_iter: 5000
* wandb_image_iter: 10000
* wandb_scalar_iter: 100
cudnn benchmark: True
cudnn deterministic: False
Setup trainer.
Using random seed 0
model parameter count: 53,029,160
Initialize model weights using type: none, gain: None
Using random seed 0
Allow TensorFloat32 operations on supported devices | open | 2023-09-05T10:52:45Z | 2023-09-08T08:35:47Z | https://github.com/NVlabs/neuralangelo/issues/107 | [] | qq297110281 | 2 |
dask/dask | numpy | 11,619 | Using nested keys in array graphs creates large number of unnecessary tasks for higher-dimensional arrays | While investigating https://github.com/dask/distributed/issues/8958, I noticed this:
```
<Task None concrete(<Task None _identity_cast(<Task None _identity_cast(<Task None _identity_cast(<Task None _identity_cast(<Task None _identity_cast(Alias(('getitem-f7fd4f245dfedafeb33a2841a9c414ca', 2, 3, 19, 5, 0)), typ=<class 'list'>)>, <Task None _identity_cast(Alias(('getitem-f7fd4f245dfedafeb33a2841a9c414ca', 2, 3, 19, 5.9, 0)), typ=<class 'list'>)>, typ=<class 'list'>)>, typ=<class 'list'>)>, typ=<class 'list'>)>, typ=<class 'list'>)>)>=
```
Basically, the embedding of keys into nested data structures creates a large overhead of task objects. For the workload I investigated, this appears to have contributed up to 50% of all tasks. (Take that number with a grain of salt.)
We should avoid using these nested data structures for keys entirely. In array-code, I've identified the usage of `concrete` (the example above) as a culprit that can be trivially removed. However, `concatenate3` and related functions are other culprits that require a bit more rewriting. | open | 2024-12-20T17:26:03Z | 2025-02-17T02:01:00Z | https://github.com/dask/dask/issues/11619 | [
"array",
"needs attention"
] | hendrikmakait | 0 |
giotto-ai/giotto-tda | scikit-learn | 170 | Broken windows dev installation due to pybind11 update | #### Description
The changes to `FindPythonLibsNew.cmake` made by https://github.com/pybind/pybind11/commit/07e225932235ccb0db5271b0874d00f086f28423#diff-5d42889ea4f5ea3bb09df0d6cbeceff0 in `pybind11` breaks our Windows builds and dev installations. The issue was fixed temporarily in 3256628 by setting the submodule not to master but to the previous commit. This fix will be pushed to master via (#137).
However, the above fix is only temporary and a more stable solution should be found. | closed | 2020-01-15T10:30:12Z | 2020-02-18T10:00:25Z | https://github.com/giotto-ai/giotto-tda/issues/170 | [
"enhancement"
] | ulupo | 3 |
long2ice/fastapi-cache | fastapi | 298 | How to invalidate the cache for a POST method | I want to invalidate the cache when a user calls the POST method to update data in the database, so that the GET method can return the latest data to the user. | open | 2023-09-10T13:00:50Z | 2024-11-13T13:08:16Z | https://github.com/long2ice/fastapi-cache/issues/298 | [
"question"
] | coolsnake | 7 |
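The general pattern behind the question above can be sketched without the library (illustrative; in fastapi-cache the analogous call is assumed to be something like `FastAPICache.clear(namespace=...)` inside the POST handler — check the project docs before relying on that name):

```python
db = []       # stand-in for the database
cache = {}    # stand-in for the cache backend

def get_items():
    if "items" not in cache:          # cache miss -> read from the DB
        cache["items"] = list(db)
    return cache["items"]

def post_item(item):
    db.append(item)
    cache.pop("items", None)          # invalidate so the next GET re-reads

get_items()                           # warm the cache
post_item("new-row")
assert get_items() == ["new-row"]     # GET now reflects the write
```

The key point is that the write path explicitly drops the cached entry instead of waiting for a TTL to expire.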
tensorpack/tensorpack | tensorflow | 1,391 | How to change graph when evaluating between epochs | I am trying to train a variational autoencoder with tensorpack and I am confused on how it should be done.
After a training epoch I would like to run a InferenceRunner or a Callback with a different graph as there won't be an encoder any more.
In the ideal case I would like to have two VAEs and use the latent code of one to be decoded by the other.
As things stand, I only manage to make the callback hang indefinitely. I think that is caused by the fact that I might be using a different graph from the one I think I am using.
Thanks for the help! | closed | 2020-01-30T17:04:04Z | 2020-02-21T07:15:18Z | https://github.com/tensorpack/tensorpack/issues/1391 | [
"usage"
] | andreanicastro | 3 |
ultralytics/ultralytics | python | 19,709 | Training starting from loaded state dict | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I have two questions:
1. I would like to intialize a YOLOv8 detection model based on a state dict obtained from another YOLOv8 model. To do so, I have tried the following approach:
```
mdl1 = YOLO("yolov8n.yaml")
mdl1.train()
mdl1_state_dict = mdl1.state_dict()
mdl2 = YOLO("yolov8n.yaml")
mdl2.load_state_dict(mdl1_state_dict)
mdl2.train()
```
However, stepping through the code revealed that `mdl2.train()` calls the `get_model()` function of the `DetectionTrainer` class, which is going to initialize the weights from scratch regardless of whether or not a state dict was loaded beforehand. I gather from some other issues opened here that a potential workaround could be to save the state dict of `mdl1` to a `.pt` file, and to load the latter into `mdl2`. Is this the only solution, or can I start training from a state dict without writing it to a file first? And do I then have to set `resume=True`?
2. My second question is whether model initialisation is deterministic - i.e., all new models are initialised with the same weights - and if not, how I can make sure that it is. I see that by default `mdl.train()` sets the random seed to 0, and since model weights are initialised within `mdl.train()`, I would assume that model initialisation is indeed deterministic, but I wanted to make sure.
Thanks a lot
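For question 2, the underlying mechanism can be sketched with plain PyTorch (illustrative only — whatever seeding ultralytics performs inside `.train()` is assumed, not shown): initialisation is deterministic exactly when the RNG seed is fixed before the layers are constructed.

```python
import torch
import torch.nn as nn

def make_net(seed):
    torch.manual_seed(seed)      # fix the RNG before the layers are built
    return nn.Linear(4, 2)

a = make_net(0)
b = make_net(0)
same = all(torch.equal(p, q) for p, q in zip(a.parameters(), b.parameters()))
assert same                      # identical seeds -> identical initial weights
```

If the seed is not set (or set after construction), two models will generally start from different weights.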
### Additional
_No response_ | open | 2025-03-14T23:52:11Z | 2025-03-15T21:46:02Z | https://github.com/ultralytics/ultralytics/issues/19709 | [
"question",
"detect"
] | gigumay | 3 |
onnx/onnx | machine-learning | 6,697 | Shape inference fails when the `pads` in Pad is a constant with value_ints | https://github.com/onnx/onnx/blob/a6b828cdabfb5c0f8795d82e6a3851224acecd10/onnx/defs/tensor/utils.cc#L481-L485
Fails when pads is a constant with value_ints. It can only be `value` right now. | open | 2025-02-12T02:25:23Z | 2025-02-19T17:32:45Z | https://github.com/onnx/onnx/issues/6697 | [
"bug",
"module: shape inference"
] | justinchuby | 0 |
gradio-app/gradio | data-science | 10,458 | Lite: Plotly doesn't work when installed along with altair | ### Describe the bug
In the `outbreak_forecast` demo running on Lite,
Plotly throws the following error.
`plotly==6.0.0` was released and it depends on `narwhals>=1.15.0` (https://github.com/plotly/plotly.py/blob/v6.0.0/packages/python/plotly/recipe/meta.yaml#L28).
However, installing `altair` leads to installing `narwhals==1.10.0` **even after `narwhals>=1.15.0` is installed, so the older version of `narwhals` overrides the already-installed one.** (Pyodide provides `narwhals==1.10.0` [as a native package](https://pyodide.org/en/stable/usage/packages-in-pyodide.html), but `micropip.install("plotly")` installs `narwhals` from PyPI).
Then, the error says Plotly calls non-existing API of `narwhals`.
This poor dependency resolution is a known bug of micropip, but looks like it's not easy to introduce a fix,
so we should add some workaround on our end.
(Ref: https://github.com/pyodide/micropip/issues/103 )
```
webworker.js:368 Python error: Traceback (most recent call last):
File "/lib/python3.12/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/blocks.py", line 2044, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/blocks.py", line 1591, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<exec>", line 3, in mocked_anyio_to_thread_run_sync
File "/lib/python3.12/site-packages/gradio/utils.py", line 883, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "app.py", line 33, in outbreak
fig = px.line(df, x="day", y=countries)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_chart_types.py", line 270, in line
return make_figure(args=locals(), constructor=go.Scatter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_core.py", line 2477, in make_figure
args = build_dataframe(args, constructor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1727, in build_dataframe
df_output, wide_id_vars = process_args_into_dataframe(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1343, in process_args_into_dataframe
df_output[col_name] = to_named_series(
^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1175, in to_named_series
x = nw.from_native(x, series_only=True, pass_through=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: from_native() got an unexpected keyword argument 'pass_through'
```
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
Run the `outbreak_forecast` demo on Lite.
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Lite
```
### Severity
I can work around it | closed | 2025-01-29T07:53:43Z | 2025-01-30T07:20:21Z | https://github.com/gradio-app/gradio/issues/10458 | [
"bug",
"gradio-lite"
] | whitphx | 2 |
jupyter/nbgrader | jupyter | 1,771 | Are the restrictions on the jupyter_client and jupyter_server dependencies necessary? | @brichet: Thanks for the 0.8.2 update!
The dependency part of [pyproject.toml](https://github.com/jupyter/nbgrader/blob/main/pyproject.toml) restricts jupyter_client<8 and jupyter_server<2. Is this necessary because of jupyter v2 compatibility?
Thanks, Nik
| closed | 2023-03-29T11:01:06Z | 2024-03-21T13:16:31Z | https://github.com/jupyter/nbgrader/issues/1771 | [] | nklever | 4 |
K3D-tools/K3D-jupyter | jupyter | 379 | Issue with sparse_voxels Z-buffer | * K3D version:
2.14.5
* Python version:
Python 3.9.5
* Operating System:
Windows
* Using WebGL / GPU accelerated view
### Description
Z buffering seems to fail on some near cubes in this sample.
### What I Did
```python
import k3d
import numpy as np

N = 111220
sparse_voxels = np.random.randint(0, 1115, size=(N, 4), dtype=np.uint16)
sparse_voxels[:, 3] = np.random.randint(1, 5, size=(N,))
plot = k3d.plot(grid_visible=False)
obj = k3d.sparse_voxels(sparse_voxels, [300, 300, 300], compression_level=1, outlines=False)
plot += obj
plot.display()
```

| open | 2022-10-03T12:46:38Z | 2022-12-19T11:44:53Z | https://github.com/K3D-tools/K3D-jupyter/issues/379 | [
"order independent transparency"
] | CoenHordijk | 8 |
pytorch/pytorch | machine-learning | 149,279 | CUDA Assertion Error in Scatter Operation During Training (RTX5090 cu128) | ### 🐛 Describe the bug
Description:
I encountered a CUDA error: device-side assert triggered while training nnUNetv2 using PyTorch Nightly (cu128) on an RTX 5090. The error occurs in ScatterGatherKernel.cu:367, suggesting that an index is out of bounds in a scatter operation. This leads to a crash in the loss calculation.
System Information:
nnUNetv2 Version: Latest (as of submission)
PyTorch Version: Nightly (cu128)
CUDA Version: (12.8)
GPU: RTX 5090
OS: Windows 11
Python Version: 3.11
Environment: Virtualenv (PyCharm)
Error Message (Relevant Excerpt)
❌ Error during training: C:\actions-runner_work\pytorch\pytorch\pytorch\aten\src\ATen\native\cuda\ScatterGatherKernel.cu:367:
block: [4935,0,0], thread: [121,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.
RuntimeError: CUDA error: device-side assert triggered
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
The error originates from the following line in dice.py:
y_onehot.scatter_(1, y.long(), 1)
Steps to Reproduce:
Train a model with nnUNetv2_train using PyTorch Nightly (cu128).
Use a dataset with multiple classes (segmentation task).
Encounter the crash during loss computation.
What I Have Tried:
Verified that target labels are within the expected range.
Checked for potential dataset preprocessing issues.
Ensured that the number of output channels in the model matches the expected number of classes.
The issue persists across multiple training runs.
Expected Behavior:
The training should run without assertion failures, ensuring that the scatter operation does not encounter out-of-bounds indices.
Works on RTX3080 cu126.
Would appreciate any insights or potential fixes!
### Versions
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ptrblck @eqy | open | 2025-03-16T18:51:31Z | 2025-03-19T21:44:05Z | https://github.com/pytorch/pytorch/issues/149279 | [
"needs reproduction",
"module: windows",
"module: cuda",
"triaged",
"module: scatter & gather ops"
] | bstartek | 5 |
PokeAPI/pokeapi | api | 603 | A lot of tutor moves missing (Gen 7) | There seem to be a lot of tutor ~~and egg moves~~ missing.
For tutor moves it seems to be everything (?) from US/UM. ~~For egg moves I've only found the Lycanroc forms so far, but it's possible that there are more missing (probably mostly Gen 7?)~~
Steps to Reproduce:
1. Look in the move data for Fomantis or Lycanroc
2. See that no tutor moves are listed ~~(& egg moves for Lycanroc)~~
| open | 2021-03-23T20:35:51Z | 2021-03-25T03:30:41Z | https://github.com/PokeAPI/pokeapi/issues/603 | [] | theCapypara | 6 |
KrishnaswamyLab/PHATE | data-visualization | 92 | Coloring cells by pseudo-time | Hello! I recently started getting into using PHATE. I am wondering if one can color the cells on a PHATE scatter plot by pseudo-time instead of by a gene or known time-points across the data. Similarly, how can one split the PHATE data into various states across pseudo-time? Thanks! | closed | 2020-05-13T20:51:07Z | 2020-05-21T14:06:21Z | https://github.com/KrishnaswamyLab/PHATE/issues/92 | [
"question"
] | ashwinikumarkulkarni | 4 |
axnsan12/drf-yasg | rest-api | 497 | Import ruamel.yaml issue | ```
from drf_yasg import openapi, views
File "/usr/local/lib/python2.7/site-packages/drf_yasg/views.py", line 13, in <module>
from .renderers import (
File "/usr/local/lib/python2.7/site-packages/drf_yasg/renderers.py", line 11, in <module>
from .codecs import VALIDATORS, OpenAPICodecJson, OpenAPICodecYaml
File "/usr/local/lib/python2.7/site-packages/drf_yasg/codecs.py", line 9, in <module>
from ruamel import yaml
ImportError: No module named ruamel
```
```python
from rest_framework import permissions
from drf_yasg.views import get_schema_view
from drf_yasg import openapi
schema_view = get_schema_view(
openapi.Info(
title="FPAAS APIS",
default_version='v1',
description="Food Personalisation Platform APIS",
terms_of_service="https://spoonshot.com/terms/"
),
public=False,
permission_classes=(permissions.IsAdminUser,),
)
urlpatterns = [
url(r'^swagger(?P<format>\.json|\.yaml)$',
schema_view.without_ui(cache_timeout=0),
name='schema-json'),
url(r'^swagger/$',
schema_view.with_ui('swagger', cache_timeout=0),
name='schema-swagger-ui'),
url(r'^redoc/$',
schema_view.with_ui('redoc', cache_timeout=0),
name='schema-redoc'),
]
```
This is the code I use. I am able to do `from ruamel import yaml` in `python manage.py shell`, but when I run it using the uwsgi-nginx image by tiangolo (python2.7-alpine3.9) it gives an ImportError. | closed | 2019-11-20T13:49:26Z | 2020-10-26T01:02:44Z | https://github.com/axnsan12/drf-yasg/issues/497 | [] | appunni-m | 2 |
marshmallow-code/flask-marshmallow | sqlalchemy | 45 | accomplishing a join | This is accomplished with nesting...
| closed | 2016-06-01T23:46:05Z | 2016-06-03T00:02:36Z | https://github.com/marshmallow-code/flask-marshmallow/issues/45 | [] | tharrington | 0 |
Significant-Gravitas/AutoGPT | python | 8,858 | Allow alphanumeric block module names | https://github.com/Significant-Gravitas/AutoGPT/blob/25912067f2a3778bc85158eb49f68bb78c7772cd/autogpt_platform/backend/backend/blocks/__init__.py#L17-L22
I don't see why we can't have digits in block module names, and it causes ugly naming like `slantthreed` in #8805. | closed | 2024-12-02T12:02:25Z | 2024-12-02T15:54:58Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8858 | [
"platform/backend"
] | Pwuts | 1 |
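The relaxed naming rule requested above can be sketched with a regex (the actual pattern used in the linked `__init__.py` is assumed, not quoted):

```python
import re

# Hypothetical module-name rule permitting digits, so a block module could be
# named "slant3d" instead of "slantthreed". Python still forbids a leading
# digit in an importable module name.
MODULE_NAME = re.compile(r"^[a-z][a-z0-9_]*$")

assert MODULE_NAME.match("slant3d")
assert MODULE_NAME.match("slantthreed")
assert MODULE_NAME.match("3dslant") is None   # leading digit is not importable
```
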
tox-dev/tox | automation | 2,766 | Doc request: how to do explicit invocations with multiple factors | ## What's the problem this feature will solve?
Under tox v3, I was able to pass a string with multiple factors to `tox -e`.
For example, in a project with a `tomli` factor to control the installation of `tomli` for TOML support on py<3.11, the following invocation was valid in v3:
```
tox -e 'py{37,310}{,-tomli}'
```
The above expression would expand to an environment list in the same way that those factors would expand in the `envlist`.
This is particularly useful for declaring CI builds and trying to run a large test matrix efficiently.
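The expansion semantics described above can be sketched in pure Python (illustrative; this is not tox's actual implementation):

```python
import itertools
import re

def expand(spec):
    """Expand tox-v3-style factor braces, e.g. 'py{37,310}{,-tomli}'."""
    parts = re.split(r"(\{[^}]*\})", spec)
    options = [p[1:-1].split(",") if p.startswith("{") else [p]
               for p in parts if p]
    return ["".join(combo) for combo in itertools.product(*options)]

print(expand("py{37,310}{,-tomli}"))
# -> ['py37', 'py37-tomli', 'py310', 'py310-tomli']
```

Each brace group contributes one axis of a Cartesian product, which is exactly why the notation is so compact for CI matrices.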
A simple project with one factor may have an envlist which reads with multiple python versions and the factor enabled/disabled:
```
envlist = py{37,38,39,310,311}{,-tomli}
```
and in CI, `tox` may be invoked with `tox -e 'py{,-tomli}'`.
I have tried to reimplement this kind of logic with tox v4 using labels and `tox r -m`, but for mildly complex cases it becomes very verbose.
The following usage fails with a `ValueError`:
```
tox r -e 'py37{,-tomli}'
```
As does `tox -m ci` with the config
```
labels =
ci = py37{,-tomli}
```
## Describe the solution you'd like
On the assumption that this is possible and I simply haven't figured it out yet
- if someone could share the solution here, that would help
- let's get it into the documentation as part of the tox4 vs tox3 differences (I'm happy to open a PR once I know how to do this)
## Alternative Solutions
On the assumption that this is no longer possible and it isn't considered a desirable feature
- let's document that it's not supported anymore on the tox4 vs tox3 differences page
- some reasonable usage should be defined which covers this kind of usage, and offered in docs as an alternative
I think at a minimum, the docs should say that you used to be able to write `tox -e 'py{,-foo}'` and now you need to have an explicit `labels` config.
## Additional context
For a concrete example of a project where this broke, and the change which fixes it, refer to
https://github.com/python-jsonschema/check-jsonschema/pull/204 | closed | 2022-12-21T17:34:26Z | 2022-12-29T05:20:56Z | https://github.com/tox-dev/tox/issues/2766 | [
"help:wanted",
"enhancement"
] | sirosen | 6 |
graphistry/pygraphistry | jupyter | 265 | [BUG] Notebooks use api=1 auth | **Describe the bug**
Some demos use api=1/2 instead of api=3
**To Reproduce**
See main analyst notebook
**Expected behavior**
Should instead have something like:
```python
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
```
| closed | 2021-09-28T18:51:17Z | 2021-12-04T07:18:09Z | https://github.com/graphistry/pygraphistry/issues/265 | [
"bug",
"docs",
"good-first-issue"
] | lmeyerov | 6 |
babysor/MockingBird | deep-learning | 804 | Beginner question: running the toolbox raises “AttributeError: 'Toolbox' object has no attribute 'selected_source_utterance'” | 

When loading a dataset and running the toolbox, this error is raised:
AttributeError: 'Toolbox' object has no attribute 'selected_source_utterance'
The “Vocode only” button at the top right of the toolbox, below the input box, is also grayed out, so I can only synthesize without any audio output.
At the bottom left of the toolbox, the “Toolbox output” section also fails to load its options.
| open | 2022-12-15T14:02:15Z | 2023-03-30T14:58:39Z | https://github.com/babysor/MockingBird/issues/804 | [] | love530love | 1 |
modin-project/modin | pandas | 6,973 | BUG: The test test_series.py::test_case_when fails on Unidist | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
Run the test test_series.py::test_case_when()
```
### Issue Description
With the new implementation of case_when() (#6972), the test fails on Unidist. Currently, PandasQueryCompiler.case_when() defaults to pandas for the Unidist engine.
### Expected Behavior
The test passes.
### Error Logs
<details>
```python-traceback
File "/usr/share/miniconda3/envs/modin_on_unidist/lib/python3.9/site-packages/unidist/core/backends/mpi/core/controller/common.py", line 138, in pull_data
if info_package["package_type"] == common.MetadataPackage.SHARED_DATA:
TypeError: 'int' object is not subscriptable
```
</details>
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
| open | 2024-02-28T14:13:23Z | 2024-02-28T14:13:23Z | https://github.com/modin-project/modin/issues/6973 | [
"bug 🦗",
"unidist"
] | AndreyPavlenko | 0 |
nteract/testbook | pytest | 144 | Mention in the contributing docs that an `ipykernel` with name `python3` must be present for tests to run locally | It can be created with the following commands:
```shell
python -m pip install ipykernel
python -m ipykernel install --user --name python3
```
| open | 2022-05-23T10:20:12Z | 2022-05-23T10:20:12Z | https://github.com/nteract/testbook/issues/144 | [
"documentation"
] | rohitsanj | 0 |
skypilot-org/skypilot | data-science | 4,098 | [GCP] Launching an instance type with a GPU is not working on GCP | <!-- Describe the bug report / feature request here -->
`sky launch -t a3-highgpu-8g` errors out with
```
sky.exceptions.ResourcesMismatchError: a3-highgpu-8g instance types should be used with H100 GPUs. Either use other instance types or specify the accelerators as H100.
```
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
| open | 2024-10-16T22:59:14Z | 2024-12-19T23:08:45Z | https://github.com/skypilot-org/skypilot/issues/4098 | [] | Michaelvll | 0 |
BeanieODM/beanie | pydantic | 719 | [BUG] RevisionIdWasChanged is always raised when updating through FastAPI `put` route | **Describe the bug**
I am trying to build a simple CRUD application using FastAPI and beanie, with the setting "use_revision" enabled on the model that I am using for this app. However, it seems that I am unable to update items in the database as the RevisionIdWasChanged error is always raised on calling `.save()`.
**To Reproduce**
```python
import uvicorn
from beanie import Document, init_beanie, PydanticObjectId
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient
class Foo(Document):
class Settings:
use_revision = True
name = "foos"
bar: str
app: FastAPI = FastAPI()
@app.post("/create")
async def create() -> PydanticObjectId:
foo = Foo(bar="bar")
result = await foo.insert()
return result.id
@app.put("/update")
async def update(foo: Foo) -> None:
result = await foo.save() # <- this always throws RevisionIdWasChanged
return None
@app.on_event("startup")
async def startup():
app.mongodb_client = AsyncIOMotorClient("mongodb://mongo-0:27117,mongo-1:27118,mongo-2:27119/?replicaSet=replica-set")
app.mongodb = app.mongodb_client["foos"]
await init_beanie(database=app.mongodb, document_models=[Foo])
if __name__ == "__main__":
uvicorn.run(
"main:app",
host="127.0.0.1",
port=8000,
reload=True
)
```
Using the above server, follow these steps:
1. Perform a POST request to the create endpoint
2. Perform a PUT request to the update endpoint using the ID returned in step 1.
3. The RevisionIdWasChanged error will be raised in the server.
The body used for step 2 is the example body provided by the doc generation of FastAPI ("localhost:8000/docs"):
```
{
"_id": "651163927129d9177247c1b7",
"bar": "string"
}
```
----
As a separate question; I would expect `version_id` to be part of the Model that is used for doc generation, but the field is marked as `hidden`. How are we supposed to check if a document has changed since it was retrieved, if the user does not send the revision_id for the object it was editing?
Even with the following body, the request still fails with RevisionIdWasChanged:
```
{
"_id": "651163927129d9177247c1b7",
"revision_id": "69a4b65b-83a8-4874-a129-b237cc51d11b",
"bar": "string"
}
```
where `_id` and `revision_id` were copied directly from the database.
----
**Expected behavior**
I would expect to be able to call the `save` method on an object that has not been changed since retrieving it. Furthermore, I would expect `revision_id` to be part of the expected body type generated when using the `use_revision = True` statement. Lastly, I would expect `foo.save()` to create a document with a filled in `revision_id` field: however, it seems to create document without setting that field (I need to use `foo.insert()` to actually generate a `revision_id`.
**Additional context**
This is using beanie with version 1.22.6, fastAPI version 0.103.1, mongodb version 7.
I recognise that I may be using this `use_revision` parameter in the wrong way, but the documentation on it is very sparse and my interpretation seems intuitive for a CRUD application. | closed | 2023-09-25T10:52:47Z | 2024-11-01T15:28:42Z | https://github.com/BeanieODM/beanie/issues/719 | [] | Ty-Ni | 13 |
microsoft/nni | tensorflow | 5,184 | Problem about saving and loading nni pruned model | **Describe the issue**:
Hi, I use a simple function to prune my model and use `torch.save` to save the pruned model.
But when I load it, I get an error like this:
```
Traceback (most recent call last):
File "/home/puyiwen/fastdepth_org/torch2onnx.py", line 85, in <module>
# device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
File "/home/puyiwen/.conda/envs/puyiwen/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/puyiwen/.conda/envs/puyiwen/lib/python3.9/site-packages/torch/serialization.py", line 885, in _load
result = unpickler.load()
File "/home/puyiwen/.conda/envs/puyiwen/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Conv2d' object has no attribute 'accumulate_params'
```
I don't know why. Can you help me? Thank you very much!
**Environment**:
- NNI version: 2.7
- Python version: 3.9.7
- PyTorch version: 1.10.0
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
my prune code is:
```
pruner = L1FilterPruner(model, config_list)
pruner.compress()
pruner.export_model(args.save_results + '/model_temp.pth', args.save_results + '/mask_temp.pth')
pruner._unwrap_model()
# pruner.show_pruned_weights()
ModelSpeedup(model, dummy_input=torch.rand([16, 3, 224, 224]).cuda(), masks_file=args.save_results + './mask_temp.pth').speedup_model()
torch.save(model,'/home/puyiwen/fastdepth_org/results/nyu_reduced.samples=0.modality=rgb.arch=litedepth_mixdata_2_correct.decoder=nnconv.criterion=l1.lr=0.01.bs=16.pretrained=True/model_best_nni.pth')
```
my load pruned model code is:
```
model = torch.load(input_file) #input_file is saved pruned model path
model = model.cuda()
```
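In case it is relevant: as a workaround I am considering saving only the `state_dict` instead of pickling the whole model, since that avoids unpickling the wrapped module classes. A sketch with a made-up tiny network (not my real model):

```python
import io

import torch
import torch.nn as nn

# Hypothetical stand-in for the pruned network; any nn.Module works the same way.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 4, 3))

# Save only the weights (plain tensors), not the pickled module objects.
buffer = io.BytesIO()  # in real code this would be a file path
torch.save(model.state_dict(), buffer)

# At load time, rebuild the (pruned) architecture first, then restore weights.
buffer.seek(0)
rebuilt = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 4, 3))
rebuilt.load_state_dict(torch.load(buffer))
```

The downside is that I must rebuild the pruned architecture in code before `load_state_dict`, which is what I hoped `torch.save(model, ...)` would let me avoid.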
Thank you very much again!! | closed | 2022-10-26T05:46:46Z | 2022-10-27T01:18:11Z | https://github.com/microsoft/nni/issues/5184 | [] | puyiwen | 2 |
FactoryBoy/factory_boy | django | 301 | Simplifying how to use Faker wrapper within Factories | While creating a Factory with a Django's `ImageField`, I realized that I needed different `filename` values for each `ImageField`. So, I tried this:
``` python
import factory
from faker import Factory as FakerFactory
def gen_filename():
    return FakerFactory.create().file_name()


class BannerFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Banner

    image = ImageField(filename=factory.LazyFunction(gen_filename))
    # ... other fields
```
Then, I thought it'd be simpler to use factory_boy's faker wrapper instead of going directly with Faker, so I changed the gen_filename function to:
``` python
def gen_filename():
    return factory.Faker('file_name').generate({})
```
So, unless I'm missing something, `extra_kwargs` parameter in Faker's generate method could become optional. As a result, you would be able to call `factory.Fake('provider').generate()` without the empty/unintuitive dict.
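The change itself would be tiny; sketched here on a stub class rather than the real `factory.Faker`, just to show the backward-compatible default:

```python
class FakerStub:
    """Toy stand-in for factory.Faker, showing the backward-compatible default."""

    def __init__(self, provider):
        self.provider = provider

    def generate(self, extra_kwargs=None):
        extra_kwargs = extra_kwargs or {}
        # the real implementation would call the underlying faker provider here
        return ("generated", self.provider, extra_kwargs)


# both call styles now behave identically:
with_dict = FakerStub('file_name').generate({})
without_dict = FakerStub('file_name').generate()
```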
I already made this change in my fork https://github.com/jsangilve/factory_boy/commit/4f6be9e4b6c1b39e3433c816e4e8fc6602dc8b55.
| closed | 2016-05-08T14:43:05Z | 2016-05-23T22:25:20Z | https://github.com/FactoryBoy/factory_boy/issues/301 | [
"Q&A"
] | jsangilve | 2 |
howie6879/owllook | asyncio | 46 | Can MongoDB and Redis both be added to Docker? | Add MongoDB and Redis to the Docker setup so that the project can be run directly, without having to install MongoDB and Redis separately. | closed | 2018-10-29T02:07:27Z | 2018-12-21T01:30:26Z | https://github.com/howie6879/owllook/issues/46 | [] | last2win | 1
home-assistant/core | python | 140,453 | Segmentation Fault on launch | ### The problem
I have had Home Assistant Core working for a couple of weeks, so far so good, and I haven't made any changes (that I can think of) in the last couple of days. Today I had a power outage, and after that it seems like there is an issue with the Bluetooth service. Would it be possible to run without the module being loaded?
### What version of Home Assistant Core has the issue?
homeassistant==2025.3.0
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Core
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
2025-03-12 14:48:00.253 INFO (MainThread) [homeassistant.setup] Setup of domain zeroconf took 0.02 seconds
2025-03-12 14:48:00.254 INFO (MainThread) [homeassistant.bootstrap] Setting up stage 1: {'ssdp', 'dhcp', 'bluetooth', 'cloud', 'usb'}
2025-03-12 14:48:00.256 INFO (MainThread) [homeassistant.setup] Setting up webhook
2025-03-12 14:48:00.260 INFO (MainThread) [homeassistant.setup] Setup of domain webhook took 0.00 seconds
2025-03-12 14:48:00.348 INFO (MainThread) [homeassistant.setup] Setting up ssdp
2025-03-12 14:48:00.356 INFO (MainThread) [homeassistant.setup] Setup of domain ssdp took 0.01 seconds
2025-03-12 14:48:00.645 INFO (MainThread) [homeassistant.setup] Setting up dhcp
2025-03-12 14:48:00.646 INFO (MainThread) [homeassistant.setup] Setup of domain dhcp took 0.00 seconds
2025-03-12 14:48:00.670 INFO (MainThread) [homeassistant.setup] Setting up usb
2025-03-12 14:48:00.670 INFO (MainThread) [homeassistant.setup] Setup of domain usb took 0.00 seconds
2025-03-12 14:48:00.844 INFO (MainThread) [homeassistant.components.webhook] Received message for unregistered webhook 3d2d70815af5bc37e34a947adabd76f771b21f20c2b4767e24bc96495e57bdda from 10.8.0.1
2025-03-12 14:48:01.299 INFO (MainThread) [homeassistant.setup] Setting up cloud
2025-03-12 14:48:01.304 INFO (MainThread) [homeassistant.setup] Setting up ffmpeg
2025-03-12 14:48:01.308 INFO (MainThread) [homeassistant.setup] Setup of domain cloud took 0.01 seconds
2025-03-12 14:48:01.460 ERROR (ImportExecutor_0) [homeassistant.loader] Unexpected exception importing component homeassistant.components.bluetooth
Traceback (most recent call last):
File "/srv/homeassistant/lib/python3.13/site-packages/homeassistant/loader.py", line 1074, in _get_component
ComponentProtocol, importlib.import_module(self.pkg_path)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/srv/homeassistant/lib/python3.13/site-packages/homeassistant/util/loop.py", line 201, in protected_loop_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1022, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/srv/homeassistant/lib/python3.13/site-packages/homeassistant/components/bluetooth/__init__.py", line 55, in <module>
from . import passive_update_processor, websocket_api
File "/srv/homeassistant/lib/python3.13/site-packages/homeassistant/components/bluetooth/passive_update_processor.py", line 32, in <module>
from .update_coordinator import BasePassiveBluetoothCoordinator
File "/srv/homeassistant/lib/python3.13/site-packages/homeassistant/components/bluetooth/update_coordinator.py", line 12, in <module>
from .api import (
...<4 lines>...
)
File "/srv/homeassistant/lib/python3.13/site-packages/homeassistant/components/bluetooth/api.py", line 26, in <module>
from .manager import HomeAssistantBluetoothManager
File "/srv/homeassistant/lib/python3.13/site-packages/homeassistant/components/bluetooth/manager.py", line 33, in <module>
from .match import (
...<8 lines>...
)
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1018, in exec_module
File "<frozen importlib._bootstrap_external>", line 1151, in get_code
File "<frozen importlib._bootstrap_external>", line 780, in _compile_bytecode
EOFError: marshal data too short
2025-03-12 14:48:01.474 WARNING (MainThread) [homeassistant.util.loop] Detected blocking call to import_module with args ('homeassistant.components.bluetooth',) in /srv/homeassistant/lib/python3.13/site-packages/homeassistant/loader.py, line 1074: ComponentProtocol, importlib.import_module(self.pkg_path) inside the event loop; This is causing stability issues. Please create an issue....
```
### Additional information
_No response_ | closed | 2025-03-12T14:57:36Z | 2025-03-13T15:30:09Z | https://github.com/home-assistant/core/issues/140453 | [
"problem with file system"
] | mixtoism | 3 |
dadadel/pyment | numpy | 128 | Breaking existing numpy docstring. | When running `pyment` on a file with already existing docstrings, pyment will try to add the arguments again.
Running pyment on the following file
```python
def add(left: int, right: int) -> int:
    """
    Add two integers together.

    Really high tech !

    Parameters
    ----------
    left : int :
        Left element to add
    right : int :
        right element to add

    Returns
    -------

    """
    return left + right
```
will result in this file (the bug still occurs with or without the `-w` arg):
```python
def add(left: int, right: int) -> int:
    """
    Add two integers together.

    Really high tech ! like woa !

    Parameters
    ----------
    left : int :
        Left element to add
    right : int :
        right element to add
    left: int :
    right: int :

    Returns
    -------

    """
    return left + right
```
At first I thought that pyment was hard-coded to check for the presence of those two exact lines, but that does not seem to be the case:
```python
def add(left: int, right: int) -> int:
    """
    Add two integers together.

    Really high tech ! like woa !

    Parameters
    ----------
    left : int :
        Left element to add
    right : int :
        right element to add
    left : int :
    right : int :
    left: int :
    right: int :

    Returns
    -------

    """
    return left + right
```
Tested on python 3.10.8, pyment 0.3.3
| open | 2023-01-17T09:37:36Z | 2023-01-17T09:37:36Z | https://github.com/dadadel/pyment/issues/128 | [] | galyfray | 0 |
plotly/dash | data-science | 2,997 | Default background manager. | Add a default background manager using diskcache, which would write to `~/.cache/dash/${hash_of_app_directory}/` on Linux or the AppData folder on Windows.
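A sketch of how the per-app default location could be derived (the hash function and the exact Windows base directory are assumptions here, not a spec):

```python
import hashlib
import os
import sys
from pathlib import Path


def default_cache_dir(app_dir):
    """Sketch of the proposed per-app default diskcache location (details assumed)."""
    digest = hashlib.sha256(os.path.abspath(app_dir).encode()).hexdigest()[:16]
    if sys.platform == "win32":
        base = Path(os.environ.get("APPDATA", str(Path.home() / "AppData" / "Roaming")))
    else:
        base = Path.home() / ".cache"
    return base / "dash" / digest
```

The default manager would then just wrap `diskcache.Cache(default_cache_dir(app_dir))`, the same way an explicitly constructed `DiskcacheManager` does today.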
This would allow using `@callback(..., background=True)` without having to set up anything. | open | 2024-09-12T14:38:43Z | 2024-09-12T18:11:36Z | https://github.com/plotly/dash/issues/2997 | [
"feature",
"P3"
] | T4rk1n | 0 |
BeanieODM/beanie | pydantic | 140 | Support for tailable cursors | MongoDB supports tailable cursors which allow you to "subscribe" to additions and changes to a collection or document.
https://motor.readthedocs.io/en/stable/examples/tailable-cursors.html
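For context, the shape of API this would wrap can be sketched with a toy async generator; an `asyncio.Queue` stands in for the real `TAILABLE_AWAIT` cursor below, and `None` is only a sentinel so the demo terminates:

```python
import asyncio


async def tail(cursor):
    """Toy stand-in: expose a tailable-style cursor as an async generator."""
    while True:
        item = await cursor.get()   # a real driver would await the cursor itself
        if item is None:            # sentinel purely so this demo terminates
            return
        yield item


async def demo():
    queue = asyncio.Queue()         # stands in for a TAILABLE_AWAIT cursor
    for doc in ({"price": 1}, {"price": 2}, None):
        queue.put_nowait(doc)
    seen = []
    async for result in tail(queue):
        seen.append(result)
    return seen


results = asyncio.run(demo())
```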
It would be neat if you could use this with beanie, perhaps in the form of:
```python
async for result in Product.find(search_criteria, cursor_type=CursorType.TAILABLE_AWAIT):
    print(result)
```
Which would convert the loop into an infinite generator yielding one object at a time. Great for websockets! | open | 2021-11-16T03:46:43Z | 2024-10-25T18:54:21Z | https://github.com/BeanieODM/beanie/issues/140 | [
"feature request"
] | tclasen | 4 |
tox-dev/tox | automation | 3,272 | TOX_DISCOVER not working (micromamba) | ## Issue
I've set `TOX_DISCOVER` to a space-separated list of paths; however, `tox` seems to ignore it.
Even when I pass it directly to `--discover`, only the last path is actually discovered.
<details><summary>Console output from my investigation</summary>
```console
(ansible-lint-empty-lines-between-tasks) ➜ ansible-lint-empty-lines-between-tasks git:(master) ✗ which tox
/home/martin/micromamba/envs/ansible-lint-empty-lines-between-tasks/bin/tox
(ansible-lint-empty-lines-between-tasks) ➜ ansible-lint-empty-lines-between-tasks git:(master) ✗ echo $TOX_DISCOVER
/home/martin/micromamba/envs/py37/bin /home/martin/micromamba/envs/py38/bin /home/martin/micromamba/envs/py39/bin /home/martin/micromamba/envs/py310/bin /home/martin/micromamba/envs/py311/bin /home/martin/micromamba/envs/py312/bin
(ansible-lint-empty-lines-between-tasks) ➜ ansible-lint-empty-lines-between-tasks git:(master) ✗ for p in $(echo $TOX_DISCOVER); do echo -n "$p: "; ${p}/python --version; done
/home/martin/micromamba/envs/py37/bin: Python 3.7.16
/home/martin/micromamba/envs/py38/bin: Python 3.8.17
/home/martin/micromamba/envs/py39/bin: Python 3.9.17
/home/martin/micromamba/envs/py310/bin: Python 3.10.12
/home/martin/micromamba/envs/py311/bin: Python 3.11.4
/home/martin/micromamba/envs/py312/bin: Python 3.12.0
```
</details>
## Environment
I'm using micromamba.
- OS: Ubuntu 20.04
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
Package Version
------------------ ---------------
annotated-types 0.6.0
black 24.4.0
build 1.2.1
bump2version 1.0.1
cachetools 5.3.3
certifi 2024.2.2
cffi 1.16.0
chardet 5.2.0
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
coverage 7.4.4
cryptography 42.0.5
distlib 0.3.8
distro 1.9.0
docutils 0.21.1
filelock 3.13.4
flake8 7.0.0
freezegun 1.4.0
idna 3.7
importlib_metadata 7.1.0
iniconfig 2.0.0
jaraco.classes 3.4.0
jaraco.context 5.3.0
jaraco.functools 4.0.0
jeepney 0.8.0
keyring 25.1.0
maison 1.4.3
markdown-it-py 3.0.0
mccabe 0.7.0
mdurl 0.1.2
more-itertools 10.2.0
mypy 1.9.0
mypy-extensions 1.0.0
nh3 0.2.17
packaging 24.0
pathspec 0.12.1
pip 24.0
pkginfo 1.10.0
platformdirs 4.2.0
pluggy 1.4.0
pycodestyle 2.11.1
pycparser 2.22
pydantic 2.7.0
pydantic_core 2.18.1
pyflakes 3.2.0
Pygments 2.17.2
pyproject-api 1.6.1
pyproject_hooks 1.0.0
pytest 8.1.1
pytest-cov 5.0.0
python-dateutil 2.9.0.post0
readme_renderer 43.0
requests 2.31.0
requests-toolbelt 1.0.0
rfc3986 2.0.0
rich 13.7.1
ruff 0.3.7
ruyaml 0.91.0
SecretStorage 3.3.3
setuptools 69.2.0
six 1.16.0
toml 0.10.2
tox 4.14.2
twine 5.0.0
types-freezegun 1.1.10
types-setuptools 69.2.0.20240317
typing_extensions 4.11.0
urllib3 2.2.1
virtualenv 20.25.1
wheel 0.43.0
yamlfix 1.16.0
zipp 3.18.1
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
(ansible-lint-empty-lines-between-tasks) ➜ ansible-lint-empty-lines-between-tasks git:(master) ✗ tox -rvv
py3.10: 295 W remove tox env folder /home/martin/workspace/ansible-lint-empty-lines-between-tasks/.tox/py3.10 [tox/tox_env/api.py:323]
.pkg: 297 W remove tox env folder /home/martin/workspace/ansible-lint-empty-lines-between-tasks/.tox/.pkg [tox/tox_env/api.py:323]
py3.10: 390 I find interpreter for spec PythonSpec(major=3, minor=10) [virtualenv/discovery/builtin.py:58]
py3.10: 391 D get interpreter info via cmd: / /home/martin/micromamba/envs/ansible-lint-empty-lines-between-tasks/lib/python3.12/site-packages/virtualenv/discovery/py_info.py WRvXQNTlINZPalllgkZop8FdDeClVXMC Rxdhh6bKxVUdHvEI1R1C4qYyZmifFTd0 [virtualenv/discovery/cached_py_info.py:112]
py3.10: 392 W skipped because could not find python interpreter with spec(s): py3.10 [tox/session/cmd/run/single.py:50]
py3.11: 394 W remove tox env folder /home/martin/workspace/ansible-lint-empty-lines-between-tasks/.tox/py3.11 [tox/tox_env/api.py:323]
py3.10: SKIP ⚠ in 0.1 seconds
py3.11: 398 I find interpreter for spec PythonSpec(major=3, minor=11) [virtualenv/discovery/builtin.py:58]
py3.11: 399 W skipped because could not find python interpreter with spec(s): py3.11 [tox/session/cmd/run/single.py:50]
py3.12: 400 W remove tox env folder /home/martin/workspace/ansible-lint-empty-lines-between-tasks/.tox/py3.12 [tox/tox_env/api.py:323]
py3.11: SKIP ⚠ in 0.01 seconds
py3.12: 579 I find interpreter for spec PythonSpec(major=3, minor=12) [virtualenv/discovery/builtin.py:58]
py3.12: 579 W skipped because could not find python interpreter with spec(s): py3.12 [tox/session/cmd/run/single.py:50]
lint: 582 W remove tox env folder /home/martin/workspace/ansible-lint-empty-lines-between-tasks/.tox/lint [tox/tox_env/api.py:323]
py3.12: SKIP ⚠ in 0.18 seconds
lint: 645 I find interpreter for spec PythonSpec(path=/home/martin/micromamba/envs/ansible-lint-empty-lines-between-tasks/bin/python3.12) [virtualenv/discovery/builtin.py:58]
lint: 646 W skipped because could not find python interpreter with spec(s): /home/martin/micromamba/envs/ansible-lint-empty-lines-between-tasks/bin/python3.12 [tox/session/cmd/run/single.py:50]
py3.10: SKIP (0.10 seconds)
py3.11: SKIP (0.01 seconds)
py3.12: SKIP (0.18 seconds)
lint: SKIP (0.06 seconds)
evaluation failed :( (0.47 seconds)
```
</details>
## Minimal example
You can check the repo https://github.com/mimre25/ansible-lint-empty-lines-between-tasks, it's rather small.
I've tried the same with the paths being `:` separated, but to no avail.
Unfortunately, I couldn't find the format required for `--discover`/`TOX_DISCOVER` in the documentation, but the space-separated format used to work in the past when I was using miniconda. | closed | 2024-04-25T10:58:15Z | 2024-04-26T18:56:24Z | https://github.com/tox-dev/tox/issues/3272 | [] | mimre25 | 5
deeppavlov/DeepPavlov | tensorflow | 1,150 | Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize |
python =3.7.3
tensorflow-gpu=1.15.0
cuda=10.0
When I use `KerasClassificationModel` to train a classifier model, training fails with
```
Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize
```
But when I run the following code:
```
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
```
the code runs fine.
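One thing I plan to try (assuming this is the common case of the GPU running out of memory while cuDNN initializes) is asking TensorFlow to allocate GPU memory incrementally; I am not sure this environment variable is honoured by tensorflow-gpu 1.15, so treat it as a guess:

```python
import os

# Must be set before `import tensorflow`; asks TF to grow GPU memory on demand
# instead of grabbing (and possibly failing to get) everything up front.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# The explicit TF 1.x session equivalent would be roughly:
#   config = tf.ConfigProto()
#   config.gpu_options.allow_growth = True
#   tf.keras.backend.set_session(tf.Session(config=config))
```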
So what can I do? | closed | 2020-03-09T14:29:38Z | 2022-04-05T18:16:08Z | https://github.com/deeppavlov/DeepPavlov/issues/1150 | [] | lw3259111 | 3
OFA-Sys/Chinese-CLIP | computer-vision | 51 | Searching with negation words gives wrong results | On the demo page, searching for "戴眼镜的猫" (a cat wearing glasses) and "没戴眼镜的猫" (a cat not wearing glasses) both return cats wearing glasses. Can this problem be solved? | open | 2023-02-09T03:02:04Z | 2023-06-27T03:24:05Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/51 | [
"enhancement"
] | starinskycc | 5 |
RomelTorres/alpha_vantage | pandas | 76 | Another Timeseries import error | I have looked through other sources and have tried the same solutions, but I still get the same outcome every time.
I am able to successfully install and import alpha_vantage, but then the time series function and other functions will not import, stating this error:
ModuleNotFoundError: No module named 'alpha_vantage.timeseries'; 'alpha_vantage' is not a package
I started with Python 3.6.4 and have tried various older versions of Python, such as 3.5.5, and now 3.6.4 again with Anaconda, yet I run into the same error every time.
I greatly appreciate all the help that I could get.
<img width="463" alt="error_message" src="https://user-images.githubusercontent.com/36968051/41615975-95785a2e-73b1-11e8-82a5-48a535c81018.PNG">
| closed | 2018-06-19T18:12:11Z | 2018-07-24T07:13:14Z | https://github.com/RomelTorres/alpha_vantage/issues/76 | [
"duplicate"
] | vincentcortese | 4 |
tensorpack/tensorpack | tensorflow | 1,262 | learning_rate not changing on monitoring. | Hi!
I'm using the FasterRCNN example with these config changes:
```
MODE_FPN=True
FPN.CASCADE=True
BACKBONE.RESNET_NUM_BLOCKS=[3,4,23,3]
FPN.NORM=GN
BACKBONE.NORM=GN
FPN.FRCNN_HEAD_FUNC=fastrcnn_4conv1fc_gn_head
FPN.MRCNN_HEAD_FUNC=maskrcnn_up4conv_gn_head
PREPROC.TRAIN_SHORT_EDGE_SIZE=[640,800]
TRAIN.LR_SCHEDULE=[1250000,1500000,1750000]
BACKBONE.FREEZE_AT=0
TRAIN.STEPS_PER_EPOCH=2000
```
and while the output of the callbacks says that the learning rate will be changed:
```
[0711 12:39:32 @base.py:275] Start Epoch 1250 ...
100%|########################################|2000/2000[17:20<00:00, 1.91it/s]
[0711 12:56:53 @base.py:285] Epoch 1250 (global_step 1538000) finished, time:17 minutes 20 seconds.
[0711 12:56:53 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[0711 12:56:54 @saver.py:79] Model saved to train_log/maskrcnn/model-1538000.
[0711 12:56:54 @param.py:158] [HyperParamSetter] At global_step=1538000, learning_rate is set to 0.001000
[0711 12:56:58 @misc.py:109] Estimated Time Left: 6 days 1 hour 6 minutes 20 seconds
[0711 12:56:58 @eval.py:294] Running evaluation ...
```
The monitoring output and TensorBoard keep showing the previous learning rate:
`[0711 14:14:16 @monitor.py:467] learning_rate: 0.01`
Is that supposed to work this way? Is this a known problem?
Thanks,
Alex | closed | 2019-07-11T14:47:52Z | 2019-07-11T14:57:24Z | https://github.com/tensorpack/tensorpack/issues/1262 | [] | areche | 1 |
jschneier/django-storages | django | 1,090 | Using Django Storages, Directly Upload to S3 Bucket Without Going Through Server | I'm not deeply familiar with Django Storages. I've used it in past projects. I know the basics. I'm now building an app that will ingest video files. I've read that this can put strain on a server's resources. I've read about signed URLs and [pre-signed URLs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.generate_presigned_url).
What I want to do is upload video files directly to an S3 bucket. According to this [Stack Overflow answer](https://stackoverflow.com/a/65535046/1577947), uploaded files [using Django Storages] must go through your server taking up processing power and bandwidth.
Is that true, and if not, how can I use Django Storages to upload directly to an S3 bucket?
> Media uploads are typically large, so **transferring these can represent a large share of network I/O and server CPU time**. You must also manage the state of the transfer to ensure that the entire object is successfully uploaded, and manage retries and errors.
> **By directly uploading these files to Amazon S3, you can avoid proxying these requests through your application server**. This can significantly reduce network traffic and server CPU usage, and enable your application server to handle other requests during busy periods. S3 also is highly available and durable, making it an ideal persistent store for user uploads.
References:
[Uploading to Amazon S3 directly from a web or mobile application](https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/) | closed | 2021-11-09T18:17:42Z | 2021-11-16T21:48:57Z | https://github.com/jschneier/django-storages/issues/1090 | [] | jaradc | 3 |
reloadware/reloadium | pandas | 174 | Can't start debugger on Pycharm 2023.3 EAP | ## Describe the bug*
Start debugger
```
E:\Python\python.exe -m reloadium_launcher pydev_proxy -X pycache_prefix=C:\Users\xxxx\AppData\Local\JetBrains\PyCharm2023.3\cpython-cache "C:/Program Files/JetBrains/PyCharm/plugins/python/helpers/pydev/pydevd.py" --multiprocess --client 127.0.0.1 --port 62520 --file E:\Projects\xxxx\manage.py runserver localhost:8000
Unexpected flag "-X"
Process finished with exit code 1
```
Regular debugger runs fine
```
E:\Python\python.exe -X pycache_prefix=C:\Users\xxxx\AppData\Local\JetBrains\PyCharm2023.3\cpython-cache "C:/Program Files/JetBrains/PyCharm/plugins/python/helpers/pydev/pydevd.py" --multiprocess --client 127.0.0.1 --port 63062 --file E:\Projects\xxxx\manage.py runserver localhost:8000
```
Uninstalled Pycharm and reloadium and tried again. No luck.
Windows 11 Pro
Pycharm 2023.3 EAP 5
Reloadium 1.3.3
Python 3.11.6
| closed | 2023-11-26T15:24:36Z | 2024-01-04T08:28:16Z | https://github.com/reloadware/reloadium/issues/174 | [
"bug"
] | andyp05 | 7 |
axnsan12/drf-yasg | rest-api | 690 | Allow to set `openapi.Schema` manually to be nullable | As far as I understand, there is no way to specify that some field is "nullable" when we use manual `swagger_schema_fields`. https://github.com/axnsan12/drf-yasg/blob/master/src/drf_yasg/inspectors/field.py#L528
It would be great to have something like `nullable: bool = True` option in `openapi.Schema` class and other related code.
Example:
```python
class Meta:
swagger_schema_fields = {
'type': openapi.TYPE_OBJECT,
'properties': {
'optional_field': openapi.Schema(
description='Some optional field',
type=openapi.TYPE_STRING,
nullable=True, # note this
),
}
}
```
Expected result:
```json
"optional_field": {
"description": "Some optional field",
"type": "string",
"x-nullable": true
},
```
The actual result:
```json
"optional_field": {
"description": "Some optional field",
"type": "string",
},
```
If we use the fully auto-generated doc without `swagger_schema_fields`, the result is OK, but when we use `swagger_schema_fields`, all of the serializer's fields become non-nullable.
| closed | 2021-01-08T16:20:44Z | 2021-10-08T18:44:37Z | https://github.com/axnsan12/drf-yasg/issues/690 | [] | d3QUone | 1 |
piskvorky/gensim | data-science | 3,042 | Phraser max NPMI score > 1 | #### Problem description
I trained an NPMI phraser on the latest Wikipedia dump. It is my understanding that scores should be <= 1.0, but I get a higher score.
#### Steps/code/corpus to reproduce
```python
from gensim.corpora import WikiCorpus
from gensim.models import Phrases
from gensim.models.phrases import Phraser
wiki_corpus = WikiCorpus("enwiki-latest-pages-articles-multistream.xml.bz2", dictionary={})
ENGLISH_CONNECTOR_WORDS = frozenset(
    " a an the "  # articles; we never care about these in MWEs
    " for of with without at from to in on by "  # prepositions; incomplete on purpose, to minimize FNs
    " and or "  # conjunctions; incomplete on purpose, to minimize FNs
    .split()
)
phrases = Phrases(wiki_corpus.get_texts(), scoring='npmi', threshold=0.75, min_count=5, common_terms=ENGLISH_CONNECTOR_WORDS, max_vocab_size=80000000)
phraser = Phraser(phrases)
```
Then:
```
In[2]: max(phraser.phrasegrams.values())
Out[2]: 1.2003355030351979
```
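For reference while digging into this: with consistent probabilities NPMI is bounded by 1, because `p(ab) <= min(p(a), p(b))` forces `ln(p(ab)/(p(a)p(b))) <= -ln p(ab)`. So a score above 1 suggests the stored bigram count is too large relative to the unigram counts, which I suspect `max_vocab_size` pruning could cause. A quick sanity check of the definition:

```python
from math import log


def npmi(p_a, p_b, p_ab):
    """Normalized PMI: ln(p_ab / (p_a * p_b)) / -ln(p_ab)."""
    return log(p_ab / (p_a * p_b)) / -log(p_ab)


# consistent probabilities (p_ab <= min(p_a, p_b)): score stays <= 1
consistent = npmi(0.01, 0.02, 0.01)

# inconsistent "probabilities" (p_ab larger than a unigram allows), the kind of
# state pruned counts could produce: the bound is broken
inconsistent = npmi(0.001, 0.001, 0.01)
```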
#### Versions
```python
Linux-3.10.0-1160.6.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
Python 3.7.9 (default, Aug 31 2020, 12:42:55)
[GCC 7.3.0]
Bits 64
NumPy 1.19.2
gensim 3.8.0
FAST_VERSION 1
```
| open | 2021-02-09T12:04:17Z | 2021-02-09T12:25:47Z | https://github.com/piskvorky/gensim/issues/3042 | [
"bug",
"need info"
] | joachimdb | 2 |
pyqtgraph/pyqtgraph | numpy | 2,178 | PR #2011 + anti-aliasing enabled leads to line artifacts | ### Short description
PR #2011 introduced an optimized way to plot thick lines. However, when anti-aliasing is enabled, this
leads to line artifacts.
### Code to reproduce
``` python
import pyqtgraph as pg
pg.setConfigOptions(antialias=True)
pg.setConfigOption('background', 'w')
pg.setConfigOption('foreground', 'k')
import pyqtgraph.Qt.QtWidgets as QtWidgets
import numpy as np
win = pg.GraphicsLayoutWidget(show=True)
win.resize(800,350)
win.setWindowTitle('pyqtgraph example: #2011 test')
plt1 = win.addPlot(title='With anti-aliasing and alpha = 1')
plt2 = win.addPlot(title='With anti-aliasing and alpha != 1')
x = np.linspace(0, 2*np.pi, 25)
y = np.sin(x)
WIDTH = 4.0
color = (0, 113, 188)
pen = pg.mkPen(color + (255,), width=WIDTH)
pen_alpha = pg.mkPen(color + (254,), width=WIDTH)
plt1.plot(x, y, pen=pen)
plt2.plot(x, y, pen=pen_alpha)
QtWidgets.QApplication.instance().exec_()
```
### Expected behavior
Fallback to the old drawing method whenever anti-aliasing is enabled (just as is the case if alpha < 1).
### Real behavior
Line artifacts are seen for thicker lines.
### Tested environment(s)
* PyQtGraph version: commit Id 8436457 and newer
* Qt Python binding: PySide2 5.15.1 Qt 5.15.1
* Python version: 3.8.10
* NumPy version: 1.19.2
* Operating system: XUbuntu 20.04.2 LTS
* Installation method: pip | open | 2022-01-18T19:41:57Z | 2022-01-31T14:47:12Z | https://github.com/pyqtgraph/pyqtgraph/issues/2178 | [] | swvanbuuren | 12 |
aleju/imgaug | machine-learning | 786 | Is there any function to remove bboxes with too small an area after transformation? | Hi, I am new to this library. I want to use transformations that change the location and size of bounding boxes to augment my object-detection dataset.
Now my question is: is there any function to remove bounding boxes whose area becomes too small after transformation? I am looking for something like the `min_visibility` parameter in albumentations, shown below:
```
import albumentations as A
augmentation_pipeline = A.Compose(
[
A.HorizontalFlip(p = 0.5), # apply horizontal flip to 50% of images
A.VerticalFlip(p=0.5),
A.OneOf(
[
#A.CLAHE(clip_limit=1),
A.RandomBrightnessContrast(),
A.RandomGamma(),
A.Blur()
],
p = 1
),
A.OneOf(
[
# apply one of transforms to 50% of images
A.RandomContrast(), # apply random contrast
A.RandomGamma(), # apply random gamma
A.RandomBrightnessContrast(), # apply random brightness
],
p = 0.5
),
A.OneOf(
[
# apply one of transforms to 50% images
A.ElasticTransform(
alpha = 120,
sigma = 120 * 0.05,
alpha_affine = 120 * 0.03,
border_mode = cv2.BORDER_CONSTANT
),
A.GridDistortion(border_mode = cv2.BORDER_CONSTANT),
A.OpticalDistortion(
distort_limit = 3,
shift_limit = 0.6,
border_mode = cv2.BORDER_CONSTANT
),
],
p = 0
),
A.OneOf(
[
A.SafeRotate(limit=10,border_mode=cv2.BORDER_CONSTANT)
],
p = 0
),
],
bbox_params= A.BboxParams('coco', min_visibility= 0.3)
)
```
Because I want to remove bounding boxes with a small area in the transformed image, like this:

| open | 2021-08-26T02:04:54Z | 2021-08-26T02:06:02Z | https://github.com/aleju/imgaug/issues/786 | [] | lantudou | 0 |
piskvorky/gensim | data-science | 2,851 | CalledProcessError: non-zero returned non-zero exit status 1. Gensim Mallet | I was trying to run ldaMallet for modeling, but ran into the CalledProcessError.

Then, when I run the following code:
`model_list, coherence_values = compute_coherence_values(dictionary=words_id2word, corpus=words_corpus, texts=data_words_nonstop_trigrams, start=2, limit=40, step=6)`
I encountered the `CalledProcessError`:
```
CalledProcessError: Command 'C:/mallet-2.0.8/bin/mallet import-file --preserve-case --keep-sequence --remove-stopwords --token-regex "\S+" --input C:\Users\jia\AppData\Local\Temp\aa34be_corpus.txt --output C:\Users\jia\AppData\Local\Temp\aa34be_corpus.mallet' returned non-zero exit status 1.
```
I tried the solutions in https://github.com/RaRe-Technologies/gensim/issues/2163, as well as Stack Overflow solutions, but none of them worked. Please help. Thank you in advance.
#### Versions
```python
Windows-10-10.0.18362-SP0
Python 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)]
NumPy 1.16.4
SciPy 1.4.1
gensim 3.8.0
FAST_VERSION 1
```
| closed | 2020-06-05T18:43:16Z | 2021-01-08T20:42:24Z | https://github.com/piskvorky/gensim/issues/2851 | [] | jhuang12 | 5 |
Johnserf-Seed/TikTokDownload | api | 508 | Generate folders by quarter to store videos | In previous versions, all videos were placed in a single folder, which felt fine. After this update, I found that a separate folder is generated for every downloaded video. If videos are going to be split across multiple folders, please consider grouping them into folders by quarter instead.
| open | 2023-08-12T08:42:33Z | 2023-08-17T08:22:09Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/508 | [
"需求建议(enhancement)"
] | dslyz | 6 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,468 | Message: unknown error: cannot determine loading status from no such window | Currently, I am using uc to bypass Cloudflare and scrape data from 'https://www.topcv.vn/viec-lam-it'. I run multiple threads and sometimes get this error:
WebDriverException: Message: unknown error: cannot determine loading status
from no such window
(Session info: chrome=115.0.5790.171)
Stacktrace:
Backtrace:
GetHandleVerifier [0x00CDA813+48355]
(No symbol) [0x00C6C4B1]
(No symbol) [0x00B75220]
(No symbol) [0x00B688E2]
(No symbol) [0x00B67138]
(No symbol) [0x00B677AA]
(No symbol) [0x00B703E5]
(No symbol) [0x00B7C668]
(No symbol) [0x00B7F566]
(No symbol) [0x00B67BC3]
(No symbol) [0x00B7C37A]
(No symbol) [0x00BCC87D]
(No symbol) [0x00BBA536]
(No symbol) [0x00B982DC]
(No symbol) [0x00B993DD]
GetHandleVerifier [0x00F3AABD+2539405]
GetHandleVerifier [0x00F7A78F+2800735]
GetHandleVerifier [0x00F7456C+2775612]
GetHandleVerifier [0x00D651E0+616112]
(No symbol) [0x00C75F8C]
(No symbol) [0x00C72328]
(No symbol) [0x00C7240B]
(No symbol) [0x00C64FF7]
BaseThreadInitThunk [0x760E00C9+25]
RtlGetAppContainerNamedObjectPath [0x779D7B1E+286]
RtlGetAppContainerNamedObjectPath [0x779D7AEE+238]
Is there any solution for this? Thank you.
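Not a confirmed fix, but one commonly suggested workaround: undetected-chromedriver patches the chromedriver binary while constructing `uc.Chrome()`, which can race when several threads create drivers at once, so serializing just the construction step sometimes helps (sketch; `factory` stands in for `uc.Chrome`):

```python
import threading

_driver_init_lock = threading.Lock()

def make_driver(factory):
    # Only driver *construction* is serialized; the drivers themselves
    # can still be used concurrently, one per thread.
    with _driver_init_lock:
        return factory()

def run_in_threads(factory, worker, n):
    results = []

    def task():
        driver = make_driver(factory)
        results.append(worker(driver))

    threads = [threading.Thread(target=task) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```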
-> Here is the code: https://github.com/VQHieu1012/Issue.git
selenium version 4.11.2
python version 3.10.7 | open | 2023-08-13T07:19:41Z | 2023-08-23T09:42:20Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1468 | [] | VQHieu1012 | 1 |
kizniche/Mycodo | automation | 930 | Add scripts directory that is preserved during upgrade | Problem: Currently, user-created scripts associated with Mycodo (e.g. those created for functions/conditionals/actions) must be manually backed up and moved, and must live outside the Mycodo install directory because upgrades only carry over specific files marked for preservation.
Solution: Adding a directory under the Mycodo install directory where users can put scripts would let them be preserved across upgrades, included in backups, and carried along by the export/import feature, ensuring they remain associated with your Mycodo instance. | closed | 2021-02-05T15:35:11Z | 2021-03-09T18:07:49Z | https://github.com/kizniche/Mycodo/issues/930 | [
"enhancement",
"Implemented"
] | kizniche | 0 |
plotly/dash | data-visualization | 2,233 | Include type hints in function parameters | Hi,
Static type checkers (mypy, pyright/Pylance) are becoming increasingly popular in the Python world.
For now they don't work with Dash, as no type annotations are included with the library.
From what I understand, the Python code for Dash components, as well as their docstrings, is somehow transpiled from JavaScript? If so, adding type hints should be relatively straightforward, since the docstrings already contain very detailed typing information. This would significantly improve QOL for users running type checkers (which includes the majority of the VS Code user base, since VS Code uses Pylance by default).
For reference, Plotly has also started adding typing information to their functions - see for instance https://github.com/plotly/plotly.py/pull/3425#issuecomment-1117210067.
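Since the component generator already knows each prop's JavaScript type, mapping it to Python hints is mostly mechanical. A rough sketch of such a mapping (the function name and coverage are mine, not Dash's):

```python
def prop_type_to_hint(prop_type: dict) -> str:
    """Map a React PropTypes-style descriptor to a Python type-hint string."""
    name = prop_type.get("name")
    simple = {
        "string": "str",
        "number": "typing.Union[int, float]",
        "bool": "bool",
        "object": "dict",
        "node": "typing.Any",     # renderable children
        "element": "typing.Any",
    }
    if name in simple:
        return simple[name]
    if name == "enum":
        values = ", ".join(v["value"] for v in prop_type["value"])
        return f"typing.Literal[{values}]"
    if name == "arrayOf":
        return f"typing.List[{prop_type_to_hint(prop_type['value'])}]"
    if name == "union":
        inner = ", ".join(prop_type_to_hint(t) for t in prop_type["value"])
        return f"typing.Union[{inner}]"
    return "typing.Any"
```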
| open | 2022-09-17T10:09:59Z | 2024-08-13T19:19:30Z | https://github.com/plotly/dash/issues/2233 | [
"feature",
"P3"
] | ldorigo | 4 |
postmanlabs/httpbin | api | 409 | Feature request | hanging connections / timeouts tests | Option to add a delay between the time of the request to the response in order to test how the client handles hanging requests (i.e. timeout of waiting for response, timeout for connect, etc) | closed | 2017-12-08T22:02:47Z | 2018-04-26T17:51:16Z | https://github.com/postmanlabs/httpbin/issues/409 | [] | AlmogBaku | 1 |
adap/flower | tensorflow | 4,300 | Too many pings and one client always disconnects | ### Describe the bug
```
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Too many pings"
debug_error_string = "UNKNOWN:Error received from peer ipv4:192.168.229.99:5040 {grpc_message:"Too many pings", grpc_status:14, created_time:"2024-10-07T15:40:46.164225255+02:00"}"
>
```
I've got my grpc server settings as:
```
("grpc.http2.max_pings_without_data", 0),
# Is it permissible to send keepalive pings from the client without
# any outstanding streams. More explanation here:
# https://github.com/adap/flower/pull/2197
("grpc.keepalive_permit_without_calls", 0),
```
but it does not help though
Later, I added two more options:
```
("grpc.http2.max_ping_strikes", 0),
("grpc.http2.min_ping_interval_without_data_ms", 10)
```
That let me get past the initial error, but then I get:
```
raise GrpcBridgeClosed()
flwr.server.superlink.fleet.grpc_bidi.grpc_bridge.GrpcBridgeClosed
```
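For reference, these server options are just (key, value) tuples handed to the gRPC server; collecting them in one helper makes experimenting easier. This is a sketch mirroring the values tried above, not a verified fix (note that `keepalive_permit_without_calls=1` permits client pings without active calls, whereas 0 forbids them):

```python
def grpc_server_options(permit_client_pings: bool = True):
    # Passed as: grpc.server(..., options=grpc_server_options())
    return [
        # allow any number of pings between data frames
        ("grpc.http2.max_pings_without_data", 0),
        # 1 = accept keepalive pings even with no active RPC
        ("grpc.keepalive_permit_without_calls", 1 if permit_client_pings else 0),
        # never terminate the connection for "too many pings"
        ("grpc.http2.max_ping_strikes", 0),
        # minimum ping spacing the server tolerates, in ms
        ("grpc.http2.min_ping_interval_without_data_ms", 10),
    ]
```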
### Steps/Code to Reproduce
I use the basic FedAvg strategy, except that I send an additional evaluation round to each client during aggregate_fit:
`EvaluateRes = client_proxy.evaluate(ins = evaluate_ins, timeout = None, group_id=rnd)`. Sometimes when I rerun the clients and server, the error happens after one successful round, so it does not always happen at the same moment.
### Expected Results
Client stays alive
### Actual Results
Client disconnects | open | 2024-10-07T13:43:34Z | 2025-03-12T20:02:19Z | https://github.com/adap/flower/issues/4300 | [
"bug",
"stale",
"part: communication"
] | ajulyav | 7 |
coqui-ai/TTS | pytorch | 4,172 | The XTTS autoregressive problem? | ### Describe the bug
When I use XTTS to continuously generate German audio clips (no more than 10 s each), it always stops briefly after the normal output and then emits extra words, even though I've increased length_penalty, repetition_penalty, top_p, and top_k tenfold. Could it be related to the reference audio I'm providing?
### To Reproduce
inference code referenced:

```python
import os
import torch
import torchaudio
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

print("Loading model...")
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", use_deepspeed=True)
model.cuda()

print("Computing speaker latents...")
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(audio_path=["reference.wav"])

print("Inference...")
out = model.inference(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    "de",
    gpt_cond_latent,
    speaker_embedding,
    length_penalty=10.0,
    repetition_penalty=50.0,
    top_k=500,
    top_p=8.0,
)
torchaudio.save("xtts.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000)
```
### Expected behavior
_No response_
### Logs
```shell
```
### Environment
```shell
ubuntu 20.04
```
### Additional context
_No response_ | open | 2025-03-14T12:21:16Z | 2025-03-17T03:08:09Z | https://github.com/coqui-ai/TTS/issues/4172 | [
"bug"
] | lllmx-GH | 2 |
nerfstudio-project/nerfstudio | computer-vision | 3,564 | Questions regarding how to process data into correct data format | Hi, thank you very much for the wonderful work!
When I was using your method to train a nerf on my data, I found the image shown in the visualizer is empty in the rgb channel. However, there are contents in the depth and accumulation channels.
I have checked the camera pose using other methods and it is correct. I wonder if there are some mistakes in my data format?
Here is an example of my data. I am using Blender to render the images; the width, height, and focal parameters are retrieved via the Blender Python API.
```json
{
    "fl_x": 262.5,
    "fl_y": 262.5,
    "k1": 0,
    "k2": 0,
    "p1": 0,
    "p2": 0,
    "cx": 128,
    "cy": 128,
    "w": 256,
    "h": 256,
    "aabb_scale": 16,
    "frames": [
        {
            "file_path": "images/0001.png",
            "transform_matrix": [
                [-0.8327592015266418, -0.4579005837440491, 0.3111884593963623, 0.40454572439193726],
                [-3.7252868878567824e-07, 0.5620833039283752, 0.827080488204956, 1.0752044916152954],
                [-0.553634524345398, 0.6887590885162354, -0.4680800437927246, -0.6085042953491211],
                [0.0, 0.0, -0.0, 1.0]
            ]
        }
    ]
}
```
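One thing worth checking programmatically is that each `transform_matrix` is a proper camera-to-world pose in the OpenGL/Blender convention nerfstudio expects: the rotation block should be orthonormal with determinant +1 (no reflection). A pure-Python check (the helper name is mine):

```python
def is_proper_rotation(c2w, tol=1e-3):
    """True if the upper-left 3x3 of a 4x4 pose is orthonormal with det +1."""
    R = [row[:3] for row in c2w[:3]]
    for i in range(3):
        for j in range(3):
            dot = sum(R[i][k] * R[j][k] for k in range(3))
            expected = 1.0 if i == j else 0.0
            if abs(dot - expected) > tol:  # rows must be orthonormal
                return False
    det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
           - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
           + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
    return abs(det - 1.0) < tol  # +1 = rotation; -1 would mean a reflection
```

The frame above passes this check, so if the poses really are fine, the empty RGB alongside non-empty depth/accumulation may instead point at scene scale or units (e.g. translations far from the region the near/far planes cover).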
I would really appreciate your reply and assistance. Thank you once again. | open | 2025-01-09T15:08:55Z | 2025-01-09T15:08:55Z | https://github.com/nerfstudio-project/nerfstudio/issues/3564 | [] | Yushi-Du | 0
amdegroot/ssd.pytorch | computer-vision | 477 | KeyError: Caught KeyError in DataLoader worker process 0. | open | 2020-05-12T09:38:47Z | 2022-04-02T04:39:22Z | https://github.com/amdegroot/ssd.pytorch/issues/477 | [] | GeLee-Q | 6 | |
flasgger/flasgger | rest-api | 230 | AttributeError: 'NoneType' object has no attribute 'Str' | After apispec 0.38.0, apispec/apispec/ext/marshmallow/swagger.py was moved and replaced with openapi.py and common.py, so flasgger/marshmallow_apispec.py can no longer import schema2jsonschema and schema2parameters from apispec.ext.marshmallow.swagger.
Changing the import to apispec.ext.marshmallow.openapi also fails: ImportError: cannot import name schema2jsonschema.
Installing apispec pinned to 0.38.0 (pip install marshmallow apispec==0.38.0) works.
So I think it's time either to update the documentation from "pip install marshmallow apispec" to "pip install marshmallow apispec==0.38.0", or to upgrade the code to match the latest apispec version.
However, I'm sorry, I don't know how to do it myself. I hope somebody can fix it when they have time. | closed | 2018-08-26T06:03:11Z | 2018-08-26T22:16:49Z | https://github.com/flasgger/flasgger/issues/230 | [
"duplicate"
] | suifengpiao14 | 2 |
slackapi/bolt-python | fastapi | 1,172 | Trouble using an InstallationStore other than the default | Hello 👋 ,
I am having trouble using an `InstallationStore` other than the default. Specifically, it looks like the installation data is not being installed when using other installation stores.
I have a Django app that integrates with a new Slack application I am building. Our org does not have a Slack enterprise plan. Our org uses Okta OAuth for users to sign into Slack.
In the Slack Apps web platform (`api.slack.com`) I have added the proper Request URL ending in `/slack/events` to
```
Features > Interactivity & Shortcuts > Interactivity [ON] > Request URL
```
As well as to a slash command's Request URL.
When working with the default installation store, the installation works. This is on my local machine and leveraging an ngrok tunnel.
The default installation store cannot be used in production environment because the Django app is hosted on Heroku, which means the server's filesystem is ephemeral.
So I looked to the `AmazonS3InstallationStore`, and confirmed that my AWS credentials are correct and that the bucket's permissions are correct.
For example, running
```python
try:
    s3_client.put_object(Bucket=settings.AWS_BUCKET, Key='test-file.txt', Body='test content')
print('File written successfully')
except Exception as e:
print(f"Failed to write file to S3: {str(e)}")
```
properly creates the object in the S3 bucket.
But when using this s3_client with an instance of `AmazonS3InstallationStore`, it does not look like the Slack Bolt `App()` instance ever writes the installation data to the s3 bucket. Instead I ultimately get the following:
```
Failed to find an installation data for enterprise: none, team: <teamID>: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.
```
I kept the `AmazonS3InstallationStore` installation store but reverted to localhost + ngrok, and here again nothing is being written to the S3 bucket. So I thought it was due to the Heroku environment at first, but now I am wondering if the installation store's `save()` method is not running?
I even tried the `SQLite3InstallationStore` on localhost + ngrok but that is not writing any records to the sqlite3 database either.
Here is my current code:
```python
import logging
from boto3 import Session
from django.conf import settings
from slack_bolt import App
from slack_sdk.oauth.installation_store.amazon_s3 import AmazonS3InstallationStore
logger = logging.getLogger(__name__)
aws_session = session = Session(
aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
region_name=settings.AWS_DEFAULT_REGION,
)
s3_client = session.client("s3")
s3_store = AmazonS3InstallationStore(
bucket_name=settings.AWS_BUCKET,
client_id=settings.SLACK_CLIENT_ID,
s3_client=s3_client,
logger=logger,
historical_data_enabled=False, # Keep old versions if True, False for simpler setup
)
app = App(
signing_secret=settings.SLACK_SIGNING_SECRET,
installation_store=s3_store,
logger=logger,
)
```
Would love your take on this, all thoughts appreciated.
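One quick way to answer the "is `save()` ever running?" question is to wrap the store method before handing the store to `App`. This is a generic tracing helper, nothing Slack-specific (sketch):

```python
import functools
import logging

def trace_method(obj, name, logger=None):
    """Replace obj.<name> with a wrapper that logs every call and its outcome."""
    log = logger or logging.getLogger(__name__)
    original = getattr(obj, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        log.warning("%s called with args=%r kwargs=%r", name, args, kwargs)
        try:
            result = original(*args, **kwargs)
            log.warning("%s returned %r", name, result)
            return result
        except Exception:
            log.exception("%s raised", name)
            raise

    setattr(obj, name, wrapper)
    return wrapper

# Hypothetical usage: trace_method(s3_store, "save") before creating App(...)
```

If nothing gets logged during an install, Bolt never attempted the write; a frequent cause is that the OAuth flow (e.g. `oauth_settings` with `client_id`/`client_secret`/`scopes`) is not configured on `App`, since the installation store is only written from the OAuth success path.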
### Reproducible in:
#### The `slack_bolt` version
```
slack_bolt==1.20.1
slack_sdk==3.32.0
```
#### Python runtime version
```
Python 3.10.9
```
#### OS info
```
ProductName: macOS
ProductVersion: 14.7
BuildVersion: 23H124
Darwin Kernel Version 23.6.0: Wed Jul 31 20:49:39 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6000
```
But also in Heroku server.
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
On local: Django's `runserver` & `ngrok http`
Heroku Procfile command is: `daphne app_name.asgi:application --port $PORT --bind 0.0.0.0 -v2`
### Expected result:
Similar to the default installation store automatically adding the installation data to its storage target (eg filesystem), I expect the S3 installation store to automatically store the data in the s3 bucket, or the sqlite3 store to insert into the sqlite3 database.
### Actual result:
Focusing on the s3 store -- the installation data is never uploaded to the s3 bucket, ultimately leading to
```
Failed to find an installation data for enterprise: none, team: <teamID>: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.
```
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-10-01T18:09:30Z | 2024-10-04T17:59:39Z | https://github.com/slackapi/bolt-python/issues/1172 | [
"question"
] | brunogarciagonzalez | 6 |
pmaji/crypto-whale-watching-app | dash | 91 | Make hover state stay up until you mouseout | Right now when you hover over a dot, the tooltip disappears before you have a chance to fully read what it says. It should stay up until you move the mouse off of it. | closed | 2018-03-03T23:12:51Z | 2018-03-06T04:00:32Z | https://github.com/pmaji/crypto-whale-watching-app/issues/91 | [] | ccampbell | 6 |
graphistry/pygraphistry | jupyter | 295 | [FEA] Control the nodes and relationship properties displayed in the graphistry graphs | Request to include a function that filters the properties of a node or a relationship: we just pass the property names, and only the properties named in the call are displayed when the graph is shown.

Taking this image as an example
Mentioning -
"Color"
"id"
"name"
would show only those three properties in the output when the particular node is selected.
If there are nodes with different labels and properties, we could also specify the required properties for each node label.
Default value is showing all properties | open | 2021-12-27T12:14:31Z | 2021-12-27T18:24:19Z | https://github.com/graphistry/pygraphistry/issues/295 | [
"enhancement"
] | Parth-Joshi-6669 | 1 |
Kitware/trame | data-visualization | 474 | vtkRemoteView not working. wslink ConnectionResetError: Cannot write to closing transport. Memory usage grows indefinitely. |
**Describe the bug**
vtkLocalView works well, but changing to vtkRemoteView doesn't.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://github.com/Kitware/trame-tutorial.git
2. Use 04_vtk/solution.py
3. Swap to vtkRemoteView (the default is vtkLocalView) or, even clearer, to vtkRemoteLocalView
***Code***
```python
import os
from trame.app import get_server
from trame.ui.vuetify import SinglePageWithDrawerLayout
from trame.widgets import vtk, vuetify
from trame_vtk.modules.vtk.serializers import configure_serializer
from vtkmodules.vtkCommonDataModel import vtkDataObject
from vtkmodules.vtkFiltersCore import vtkContourFilter
from vtkmodules.vtkIOXML import vtkXMLUnstructuredGridReader
from vtkmodules.vtkRenderingAnnotation import vtkCubeAxesActor
from vtkmodules.vtkRenderingCore import (
vtkActor,
vtkDataSetMapper,
vtkRenderer,
vtkRenderWindow,
vtkRenderWindowInteractor,
)
# Required for interactor initialization
from vtkmodules.vtkInteractionStyle import vtkInteractorStyleSwitch # noqa
# Required for rendering initialization, not necessary for
# local rendering, but doesn't hurt to include it
import vtkmodules.vtkRenderingOpenGL2 # noqa
CURRENT_DIRECTORY = os.path.abspath(os.path.dirname(__file__))
# Configure scene encoder
configure_serializer(encode_lut=True, skip_light=True)
# -----------------------------------------------------------------------------
# Constants
# -----------------------------------------------------------------------------
class Representation:
Points = 0
Wireframe = 1
Surface = 2
SurfaceWithEdges = 3
class LookupTable:
Rainbow = 0
Inverted_Rainbow = 1
Greyscale = 2
Inverted_Greyscale = 3
# -----------------------------------------------------------------------------
# VTK pipeline
# -----------------------------------------------------------------------------
renderer = vtkRenderer()
renderWindow = vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindowInteractor = vtkRenderWindowInteractor()
renderWindowInteractor.SetRenderWindow(renderWindow)
renderWindowInteractor.GetInteractorStyle().SetCurrentStyleToTrackballCamera()
# Read Data
reader = vtkXMLUnstructuredGridReader()
reader.SetFileName(os.path.join(CURRENT_DIRECTORY, "../../data/disk_out_ref.vtu"))
reader.Update()
# Extract Array/Field information
dataset_arrays = []
fields = [
(reader.GetOutput().GetPointData(), vtkDataObject.FIELD_ASSOCIATION_POINTS),
(reader.GetOutput().GetCellData(), vtkDataObject.FIELD_ASSOCIATION_CELLS),
]
for field in fields:
field_arrays, association = field
for i in range(field_arrays.GetNumberOfArrays()):
array = field_arrays.GetArray(i)
array_range = array.GetRange()
dataset_arrays.append(
{
"text": array.GetName(),
"value": i,
"range": list(array_range),
"type": association,
}
)
default_array = dataset_arrays[0]
default_min, default_max = default_array.get("range")
# Mesh
mesh_mapper = vtkDataSetMapper()
mesh_mapper.SetInputConnection(reader.GetOutputPort())
mesh_actor = vtkActor()
mesh_actor.SetMapper(mesh_mapper)
renderer.AddActor(mesh_actor)
# Mesh: Setup default representation to surface
mesh_actor.GetProperty().SetRepresentationToSurface()
mesh_actor.GetProperty().SetPointSize(1)
mesh_actor.GetProperty().EdgeVisibilityOff()
# Mesh: Apply rainbow color map
mesh_lut = mesh_mapper.GetLookupTable()
mesh_lut.SetHueRange(0.666, 0.0)
mesh_lut.SetSaturationRange(1.0, 1.0)
mesh_lut.SetValueRange(1.0, 1.0)
mesh_lut.Build()
# Mesh: Color by default array
mesh_mapper.SelectColorArray(default_array.get("text"))
mesh_mapper.GetLookupTable().SetRange(default_min, default_max)
if default_array.get("type") == vtkDataObject.FIELD_ASSOCIATION_POINTS:
mesh_mapper.SetScalarModeToUsePointFieldData()
else:
mesh_mapper.SetScalarModeToUseCellFieldData()
mesh_mapper.SetScalarVisibility(True)
mesh_mapper.SetUseLookupTableScalarRange(True)
# Contour
contour = vtkContourFilter()
contour.SetInputConnection(reader.GetOutputPort())
contour_mapper = vtkDataSetMapper()
contour_mapper.SetInputConnection(contour.GetOutputPort())
contour_actor = vtkActor()
contour_actor.SetMapper(contour_mapper)
renderer.AddActor(contour_actor)
# Contour: ContourBy default array
contour_value = 0.5 * (default_max + default_min)
contour.SetInputArrayToProcess(
0, 0, 0, default_array.get("type"), default_array.get("text")
)
contour.SetValue(0, contour_value)
# Contour: Setup default representation to surface
contour_actor.GetProperty().SetRepresentationToSurface()
contour_actor.GetProperty().SetPointSize(1)
contour_actor.GetProperty().EdgeVisibilityOff()
# Contour: Apply rainbow color map
contour_lut = contour_mapper.GetLookupTable()
contour_lut.SetHueRange(0.666, 0.0)
contour_lut.SetSaturationRange(1.0, 1.0)
contour_lut.SetValueRange(1.0, 1.0)
contour_lut.Build()
# Contour: Color by default array
contour_mapper.SelectColorArray(default_array.get("text"))
contour_mapper.GetLookupTable().SetRange(default_min, default_max)
if default_array.get("type") == vtkDataObject.FIELD_ASSOCIATION_POINTS:
contour_mapper.SetScalarModeToUsePointFieldData()
else:
contour_mapper.SetScalarModeToUseCellFieldData()
contour_mapper.SetScalarVisibility(True)
contour_mapper.SetUseLookupTableScalarRange(True)
# Cube Axes
cube_axes = vtkCubeAxesActor()
renderer.AddActor(cube_axes)
# Cube Axes: Boundaries, camera, and styling
cube_axes.SetBounds(mesh_actor.GetBounds())
cube_axes.SetCamera(renderer.GetActiveCamera())
cube_axes.SetXLabelFormat("%6.1f")
cube_axes.SetYLabelFormat("%6.1f")
cube_axes.SetZLabelFormat("%6.1f")
cube_axes.SetFlyModeToOuterEdges()
renderer.ResetCamera()
# -----------------------------------------------------------------------------
# Trame setup
# -----------------------------------------------------------------------------
server = get_server(client_type="vue2")
state, ctrl = server.state, server.controller
state.setdefault("active_ui", None)
# -----------------------------------------------------------------------------
# Callbacks
# -----------------------------------------------------------------------------
@state.change("cube_axes_visibility")
def update_cube_axes_visibility(cube_axes_visibility, **kwargs):
cube_axes.SetVisibility(cube_axes_visibility)
ctrl.view_update()
# Selection Change
def actives_change(ids):
_id = ids[0]
if _id == "1": # Mesh
state.active_ui = "mesh"
elif _id == "2": # Contour
state.active_ui = "contour"
else:
state.active_ui = "nothing"
# Visibility Change
def visibility_change(event):
_id = event["id"]
_visibility = event["visible"]
if _id == "1": # Mesh
mesh_actor.SetVisibility(_visibility)
elif _id == "2": # Contour
contour_actor.SetVisibility(_visibility)
ctrl.view_update()
# Representation Callbacks
def update_representation(actor, mode):
property = actor.GetProperty()
if mode == Representation.Points:
property.SetRepresentationToPoints()
property.SetPointSize(5)
property.EdgeVisibilityOff()
elif mode == Representation.Wireframe:
property.SetRepresentationToWireframe()
property.SetPointSize(1)
property.EdgeVisibilityOff()
elif mode == Representation.Surface:
property.SetRepresentationToSurface()
property.SetPointSize(1)
property.EdgeVisibilityOff()
elif mode == Representation.SurfaceWithEdges:
property.SetRepresentationToSurface()
property.SetPointSize(1)
property.EdgeVisibilityOn()
@state.change("mesh_representation")
def update_mesh_representation(mesh_representation, **kwargs):
update_representation(mesh_actor, mesh_representation)
ctrl.view_update()
@state.change("contour_representation")
def update_contour_representation(contour_representation, **kwargs):
update_representation(contour_actor, contour_representation)
ctrl.view_update()
# Color By Callbacks
def color_by_array(actor, array):
_min, _max = array.get("range")
mapper = actor.GetMapper()
mapper.SelectColorArray(array.get("text"))
mapper.GetLookupTable().SetRange(_min, _max)
if array.get("type") == vtkDataObject.FIELD_ASSOCIATION_POINTS:
mesh_mapper.SetScalarModeToUsePointFieldData()
else:
mesh_mapper.SetScalarModeToUseCellFieldData()
mapper.SetScalarVisibility(True)
mapper.SetUseLookupTableScalarRange(True)
@state.change("mesh_color_array_idx")
def update_mesh_color_by_name(mesh_color_array_idx, **kwargs):
array = dataset_arrays[mesh_color_array_idx]
color_by_array(mesh_actor, array)
ctrl.view_update()
@state.change("contour_color_array_idx")
def update_contour_color_by_name(contour_color_array_idx, **kwargs):
array = dataset_arrays[contour_color_array_idx]
color_by_array(contour_actor, array)
ctrl.view_update()
# Color Map Callbacks
def use_preset(actor, preset):
lut = actor.GetMapper().GetLookupTable()
if preset == LookupTable.Rainbow:
lut.SetHueRange(0.666, 0.0)
lut.SetSaturationRange(1.0, 1.0)
lut.SetValueRange(1.0, 1.0)
elif preset == LookupTable.Inverted_Rainbow:
lut.SetHueRange(0.0, 0.666)
lut.SetSaturationRange(1.0, 1.0)
lut.SetValueRange(1.0, 1.0)
elif preset == LookupTable.Greyscale:
lut.SetHueRange(0.0, 0.0)
lut.SetSaturationRange(0.0, 0.0)
lut.SetValueRange(0.0, 1.0)
elif preset == LookupTable.Inverted_Greyscale:
lut.SetHueRange(0.0, 0.666)
lut.SetSaturationRange(0.0, 0.0)
lut.SetValueRange(1.0, 0.0)
lut.Build()
@state.change("mesh_color_preset")
def update_mesh_color_preset(mesh_color_preset, **kwargs):
use_preset(mesh_actor, mesh_color_preset)
ctrl.view_update()
@state.change("contour_color_preset")
def update_contour_color_preset(contour_color_preset, **kwargs):
use_preset(contour_actor, contour_color_preset)
ctrl.view_update()
# Opacity Callbacks
@state.change("mesh_opacity")
def update_mesh_opacity(mesh_opacity, **kwargs):
mesh_actor.GetProperty().SetOpacity(mesh_opacity)
ctrl.view_update()
@state.change("contour_opacity")
def update_contour_opacity(contour_opacity, **kwargs):
contour_actor.GetProperty().SetOpacity(contour_opacity)
ctrl.view_update()
# Contour Callbacks
@state.change("contour_by_array_idx")
def update_contour_by(contour_by_array_idx, **kwargs):
array = dataset_arrays[contour_by_array_idx]
contour_min, contour_max = array.get("range")
contour_step = 0.01 * (contour_max - contour_min)
contour_value = 0.5 * (contour_max + contour_min)
contour.SetInputArrayToProcess(0, 0, 0, array.get("type"), array.get("text"))
contour.SetValue(0, contour_value)
# Update UI
state.contour_min = contour_min
state.contour_max = contour_max
state.contour_value = contour_value
state.contour_step = contour_step
# Update View
ctrl.view_update()
@state.change("contour_value")
def update_contour_value(contour_value, **kwargs):
contour.SetValue(0, float(contour_value))
ctrl.view_update()
# -----------------------------------------------------------------------------
# GUI elements
# -----------------------------------------------------------------------------
def standard_buttons():
vuetify.VCheckbox(
v_model=("cube_axes_visibility", True),
on_icon="mdi-cube-outline",
off_icon="mdi-cube-off-outline",
classes="mx-1",
hide_details=True,
dense=True,
)
vuetify.VCheckbox(
v_model="$vuetify.theme.dark",
on_icon="mdi-lightbulb-off-outline",
off_icon="mdi-lightbulb-outline",
classes="mx-1",
hide_details=True,
dense=True,
)
vuetify.VCheckbox(
v_model=("viewMode", "local"),
on_icon="mdi-lan-disconnect",
off_icon="mdi-lan-connect",
true_value="local",
false_value="remote",
classes="mx-1",
hide_details=True,
dense=True,
)
with vuetify.VBtn(icon=True, click="$refs.view.resetCamera()"):
vuetify.VIcon("mdi-crop-free")
def pipeline_widget():
return
trame.GitTree(
sources=(
"pipeline",
[
{"id": "1", "parent": "0", "visible": 1, "name": "Mesh"},
{"id": "2", "parent": "1", "visible": 1, "name": "Contour"},
],
),
actives_change=(actives_change, "[$event]"),
visibility_change=(visibility_change, "[$event]"),
)
def ui_card(title, ui_name):
with vuetify.VCard(v_show=f"active_ui == '{ui_name}'"):
vuetify.VCardTitle(
title,
classes="grey lighten-1 py-1 grey--text text--darken-3",
style="user-select: none; cursor: pointer",
hide_details=True,
dense=True,
)
content = vuetify.VCardText(classes="py-2")
return content
def mesh_card():
with ui_card(title="Mesh", ui_name="mesh"):
vuetify.VSelect(
# Representation
v_model=("mesh_representation", Representation.Surface),
items=(
"representations",
[
{"text": "Points", "value": 0},
{"text": "Wireframe", "value": 1},
{"text": "Surface", "value": 2},
{"text": "SurfaceWithEdges", "value": 3},
],
),
label="Representation",
hide_details=True,
dense=True,
outlined=True,
classes="pt-1",
)
with vuetify.VRow(classes="pt-2", dense=True):
with vuetify.VCol(cols="6"):
vuetify.VSelect(
# Color By
label="Color by",
v_model=("mesh_color_array_idx", 0),
items=("array_list", dataset_arrays),
hide_details=True,
dense=True,
outlined=True,
classes="pt-1",
)
with vuetify.VCol(cols="6"):
vuetify.VSelect(
# Color Map
label="Colormap",
v_model=("mesh_color_preset", LookupTable.Rainbow),
items=(
"colormaps",
[
{"text": "Rainbow", "value": 0},
{"text": "Inv Rainbow", "value": 1},
{"text": "Greyscale", "value": 2},
{"text": "Inv Greyscale", "value": 3},
],
),
hide_details=True,
dense=True,
outlined=True,
classes="pt-1",
)
vuetify.VSlider(
# Opacity
v_model=("mesh_opacity", 1.0),
min=0,
max=1,
step=0.1,
label="Opacity",
classes="mt-1",
hide_details=True,
dense=True,
)
def contour_card():
with ui_card(title="Contour", ui_name="contour"):
vuetify.VSelect(
# Contour By
label="Contour by",
v_model=("contour_by_array_idx", 0),
items=("array_list", dataset_arrays),
hide_details=True,
dense=True,
outlined=True,
classes="pt-1",
)
vuetify.VSlider(
# Contour Value
v_model=("contour_value", contour_value),
min=("contour_min", default_min),
max=("contour_max", default_max),
step=("contour_step", 0.01 * (default_max - default_min)),
label="Value",
classes="my-1",
hide_details=True,
dense=True,
)
vuetify.VSelect(
# Representation
v_model=("contour_representation", Representation.Surface),
items=(
"representations",
[
{"text": "Points", "value": 0},
{"text": "Wireframe", "value": 1},
{"text": "Surface", "value": 2},
{"text": "SurfaceWithEdges", "value": 3},
],
),
label="Representation",
hide_details=True,
dense=True,
outlined=True,
classes="pt-1",
)
with vuetify.VRow(classes="pt-2", dense=True):
with vuetify.VCol(cols="6"):
vuetify.VSelect(
# Color By
label="Color by",
v_model=("contour_color_array_idx", 0),
items=("array_list", dataset_arrays),
hide_details=True,
dense=True,
outlined=True,
classes="pt-1",
)
with vuetify.VCol(cols="6"):
vuetify.VSelect(
# Color Map
label="Colormap",
v_model=("contour_color_preset", LookupTable.Rainbow),
items=(
"colormaps",
[
{"text": "Rainbow", "value": 0},
{"text": "Inv Rainbow", "value": 1},
{"text": "Greyscale", "value": 2},
{"text": "Inv Greyscale", "value": 3},
],
),
hide_details=True,
dense=True,
outlined=True,
classes="pt-1",
)
vuetify.VSlider(
# Opacity
v_model=("contour_opacity", 1.0),
min=0,
max=1,
step=0.1,
label="Opacity",
classes="mt-1",
hide_details=True,
dense=True,
)
# -----------------------------------------------------------------------------
# GUI
# -----------------------------------------------------------------------------
with SinglePageWithDrawerLayout(server) as layout:
layout.title.set_text("Viewer")
with layout.toolbar:
# toolbar components
vuetify.VSpacer()
vuetify.VDivider(vertical=True, classes="mx-2")
standard_buttons()
with layout.drawer as drawer:
# drawer components
drawer.width = 325
pipeline_widget()
vuetify.VDivider(classes="mb-2")
mesh_card()
contour_card()
with layout.content:
# content components
with vuetify.VContainer(
fluid=True,
classes="pa-0 fill-height",
):
# view = vtk.VtkRemoteView(renderWindow, interactive_ratio=1)
# view = vtk.VtkLocalView(renderWindow)
view = vtk.VtkRemoteLocalView(
renderWindow, namespace="view", mode="local", interactive_ratio=1
)
ctrl.view_update = view.update
ctrl.view_reset_camera = view.reset_camera
```
**Expected behavior**
The remote-view render is shown inside the webpage; memory usage is stable and proportional to the VTK data.
**Screenshots**
https://github.com/Kitware/trame/assets/3021667/48d1489c-552a-45c9-a89d-96b8a8e45339
Please note that when using the Remote view, nothing is displayed.
Another window pops up on startup with correct rendering, but its interactor doesn't work (I bet this is expected).
RAM grows indefinitely. It's as if new renderers are added continuously; the size of each increase is constant.
The wslink connection is broken from the start; the console displays:
```txt
App running at:
- Local: http://localhost:8080/
- Network: http://127.0.0.1:8080/
Note that for multi-users you need to use and configure a launcher.
And to prevent your browser from opening, add '--server' to your command line.
Opening in existing browser session.
Task exception was never retrieved
future: <Task finished name='Task-86' coro=<WslinkHandler.sendWrappedMessage() done, defined at /path/lib/python3.9/site-packages/wslink/protocol.py:423> exception=ConnectionResetError('Cannot write to closing transport')>
Traceback (most recent call last):
File "/path/lib/python3.9/site-packages/wslink/protocol.py", line 475, in sendWrappedMessage
await ws.send_str(json_header)
File "/path/lib/python3.9/site-packages/aiohttp/web_ws.py", line 335, in send_str
await self._writer.send(data, binary=False, compress=compress)
File "/path/lib/python3.9/site-packages/aiohttp/http_websocket.py", line 729, in send
await self._send_frame(message, WSMsgType.TEXT, compress)
File "/path/lib/python3.9/site-packages/aiohttp/http_websocket.py", line 682, in _send_frame
self._write(header + message)
File "/path/lib/python3.9/site-packages/aiohttp/http_websocket.py", line 702, in _write
raise ConnectionResetError("Cannot write to closing transport")
ConnectionResetError: Cannot write to closing transport
Exception raised
ConnectionResetError('Cannot write to closing transport')
Traceback (most recent call last):
File "/path/lib/python3.9/site-packages/wslink/protocol.py", line 340, in onMessage
await self.sendWrappedMessage(
File "/path/lib/python3.9/site-packages/wslink/protocol.py", line 484, in sendWrappedMessage
await ws.send_str(encMsg)
File "/path/lib/python3.9/site-packages/aiohttp/web_ws.py", line 335, in send_str
await self._writer.send(data, binary=False, compress=compress)
File "/path/lib/python3.9/site-packages/aiohttp/http_websocket.py", line 729, in send
await self._send_frame(message, WSMsgType.TEXT, compress)
File "/path/lib/python3.9/site-packages/aiohttp/http_websocket.py", line 682, in _send_frame
self._write(header + message)
File "/path/lib/python3.9/site-packages/aiohttp/http_websocket.py", line 702, in _write
raise ConnectionResetError("Cannot write to closing transport")
ConnectionResetError: Cannot write to closing transport
```
This is the log with `TRAME_LOG_NETWORK=./log.log` (a different session than the one recorded, only with vtkRemoteView)
[log.log](https://github.com/Kitware/trame/files/14551854/log.log)
It seems there are many calls to addRenderer?
These logs are with Python 3.11, but I also tried Python 3.9 with the same behavior.
Paraview 5.12 binary works fine.
**Platform:**
***Device:***
<!-- Check all that apply -->
- [x] Desktop
- [ ] Mobile
***OS:***
<!-- Check all that apply -->
- [ ] Windows
- [ ] MacOS
- [x] Linux. kernel 6.7.9.arch1-1, cuda 12.4.0-2
- [ ] Android
- [ ] iOS
***Browsers Affected:***
<!-- Check all that apply -->
- [ ] Chrome
- [ ] Firefox
- [ ] Microsoft Edge
- [ ] Safari
- [ ] Opera
- [x] Brave
- [ ] IE 11
| closed | 2024-03-10T21:38:29Z | 2024-03-16T16:50:44Z | https://github.com/Kitware/trame/issues/474 | [] | phcerdan | 18 |
babysor/MockingBird | deep-learning | 51 | deploy as webservice | is there anyway to deploy it as http service ,we can call it remote
I have two computer~ | closed | 2021-08-26T19:48:50Z | 2021-09-22T08:24:25Z | https://github.com/babysor/MockingBird/issues/51 | [] | wanghaisheng | 2 |
modelscope/data-juicer | streamlit | 199 | [MM] speed up OPs using hf models (clip, ...) | Currently, with np=28, CLIP (vit-base-p32) takes over 1h to compute similarities for a 558k-sample dataset, and vit-large-p14-336 takes tens of hours.



Perhaps the following can help:
1. loading on GPU (implemented)
2. <del>using batched computing (not easy to implement, as batching is closely related to the internal logic of operators)</del>
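On point 2, a generic sketch of the batching idea (NumPy stand-in over pre-extracted embeddings — the names are illustrative; in data-juicer the real win would come from batching the CLIP forward pass itself on GPU):

```python
import numpy as np

def batched_cosine_similarity(img_emb, txt_emb, batch_size=256):
    """Row-wise cosine similarity computed one mini-batch at a time.

    img_emb, txt_emb: (N, D) arrays of pre-extracted CLIP embeddings.
    One vectorized call per batch amortizes the per-sample overhead
    that dominates a pure sample-by-sample loop.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    out = np.empty(len(img))
    for start in range(0, len(img), batch_size):
        stop = start + batch_size
        # row-wise dot products for the whole batch in one call
        out[start:stop] = np.einsum("ij,ij->i", img[start:stop], txt[start:stop])
    return out
```

The same batching structure applies when the inner call is a CLIP forward pass instead of a dot product.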
| closed | 2024-01-26T09:26:00Z | 2024-02-22T04:01:42Z | https://github.com/modelscope/data-juicer/issues/199 | [
"enhancement",
"dj:multimodal"
] | drcege | 1 |
aio-libs/aiopg | sqlalchemy | 110 | Exception in Connection.__del__ when database connection has failed. | Try the following code snippet:
```
import asyncio
import aiopg
import psycopg2
async def run():
try:
pool = await aiopg.create_pool('dbname=foo host=bar')
except psycopg2.OperationalError:
pass
def main():
loop = asyncio.get_event_loop()
loop.run_until_complete(run())
if __name__ == '__main__':
main()
```
If `foo` is a database that **does** exist and `bar` a host that **does not**, you'll receive the following error:
```
Exception ignored in: <bound method Connection.__del__ of <aiopg.connection.Connection object at 0x7ffff4d90358>>
Traceback (most recent call last):
File "/home/mostafa/source/salesman/venv/lib/python3.5/site-packages/aiopg/connection.py", line 452, in __del__
if not self._conn.closed:
AttributeError: 'Connection' object has no attribute '_conn'
```
We should probably either check if `_conn` exists, or make sure it does exist even if the `connect` method fails.
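A minimal sketch of the first option — the class below is a stand-in, not the real aiopg `Connection`; the point is the `getattr` guard in `__del__`:

```python
class Connection:
    """Stand-in for aiopg's Connection (illustrative, not the real class)."""

    def __init__(self, fail=False):
        if fail:
            # Simulates psycopg2.OperationalError escaping connect()
            # before self._conn is ever assigned.
            raise ConnectionError("could not connect")
        self._conn = self      # placeholder for the psycopg2 connection
        self.closed = False

    def __del__(self):
        # getattr with a default tolerates half-initialized instances, so
        # collecting a failed connection no longer raises AttributeError.
        conn = getattr(self, "_conn", None)
        if conn is not None and not conn.closed:
            conn.closed = True  # placeholder for conn.close()
```

With this guard, garbage-collecting an instance whose `__init__` raised before `_conn` was assigned is a silent no-op.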
| closed | 2016-03-21T00:53:29Z | 2016-07-16T15:23:11Z | https://github.com/aio-libs/aiopg/issues/110 | [] | elektito | 1 |
jupyter/nbviewer | jupyter | 662 | Consecutive mathjax `$$` equations are ignored :-P | The following:
```
$$
a^2
$$
$$
b^2
$$
```
... will only be rendered as the second equation, displaying as if one had written only:
```
$$
b^2
$$
```
This is a dangerous bug! There is no warning or message given about it. One can easily send notebooks to friends, and so on, and not notice that what they read is missing 75% of your equations :-P
A workaround is to put `&nbsp;` between the equations; it's ugly, but better than missing equations...
```
$$
a^2
$$
&nbsp;
$$
b^2
$$
```
Example of this effect: https://github.com/hughperkins/selfstudy-IBP/blob/9b9173d16542ee4846d272053c52f10dc1933f97/ibp_section3.ipynb
e.g., there should be two equations between the first sentence `Latent feature values for N objects` and the next sentence `where:`
| closed | 2017-01-11T10:21:47Z | 2018-07-10T03:30:44Z | https://github.com/jupyter/nbviewer/issues/662 | [
"type:Bug",
"tag:GitHub"
] | hughperkins | 1 |
open-mmlab/mmdetection | pytorch | 11,939 | ValueError: need at least one array to concatenate | {'joints_vis': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'joints': [[620.0, 394.0], [616.0, 269.0], [573.0, 185.0], [647.0, 188.0], [661.0, 221.0], [656.0, 231.0], [610.0, 187.0], [647.0, 176.0], [637.0201, 189.8183], [695.9799, 108.1817], [606.0, 217.0], [553.0, 161.0], [601.0, 167.0], [692.0, 185.0], [693.0, 240.0], [688.0, 313.0]], 'image': '015601864.jpg', 'scale': 3.021046, 'center': [594.0, 257.0]}
1111111111111111111111111111111111111111111111111111111111111111111
09/04 22:25:35 - mmengine - INFO -
------------------------------------------------------------
System environment:
sys.platform: win32
Python: 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)]
CUDA available: True
MUSA available: False
numpy_random_seed: 801145617
GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3
NVCC: Cuda compilation tools, release 11.3, V11.3.58
MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30154 for x64
GCC: n/a
PyTorch: 1.10.2+cu113
PyTorch compiling details: PyTorch built with:
- C++ Version: 199711
- MSVC 192829337
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
- OpenMP 2019
- LAPACK is enabled (usually provided by MKL)
- CPU capability usage: AVX2
- CUDA Runtime 11.3
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.2
- Magma 2.5.4
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=C:/w/b/windows/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/w/b/windows/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON,
TorchVision: 0.11.3+cu113
OpenCV: 4.10.0
MMEngine: 0.10.4
Runtime environment:
cudnn_benchmark: False
mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
dist_cfg: {'backend': 'nccl'}
seed: 801145617
Distributed launcher: none
Distributed training: False
GPU number: 1
------------------------------------------------------------
09/04 22:25:35 - mmengine - INFO - Config:
auto_scale_lr = dict(base_batch_size=512)
backend_args = dict(backend='local')
codec = dict(
input_size=(
256,
256,
), type='RegressionLabel')
custom_hooks = [
dict(type='SyncBuffersHook'),
]
data_mode = 'topdown'
data_root = 'data/mpii/'
dataset_type = 'MpiiDataset'
default_hooks = dict(
badcase=dict(
badcase_thr=5,
enable=False,
metric_type='loss',
out_dir='badcase',
type='BadCaseAnalysisHook'),
checkpoint=dict(
interval=10, rule='greater', save_best='PCK', type='CheckpointHook'),
logger=dict(interval=50, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(enable=False, type='PoseVisualizationHook'))
default_scope = 'mmpose'
env_cfg = dict(
cudnn_benchmark=False,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(
by_epoch=True, num_digits=6, type='LogProcessor', window_size=50)
model = dict(
backbone=dict(
depth=50,
init_cfg=dict(checkpoint='torchvision://resnet50', type='Pretrained'),
type='ResNet'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
123.675,
116.28,
103.53,
],
std=[
58.395,
57.12,
57.375,
],
type='PoseDataPreprocessor'),
head=dict(
decoder=dict(input_size=(
256,
256,
), type='RegressionLabel'),
in_channels=2048,
loss=dict(type='RLELoss', use_target_weight=True),
num_joints=16,
type='RLEHead'),
neck=dict(type='GlobalAveragePooling'),
test_cfg=dict(flip_test=True, shift_coords=True),
type='TopdownPoseEstimator')
optim_wrapper = dict(optimizer=dict(lr=0.0005, type='Adam'))
param_scheduler = [
dict(
begin=0, by_epoch=False, end=500, start_factor=0.001, type='LinearLR'),
dict(
begin=0,
by_epoch=True,
end=210,
gamma=0.1,
milestones=[
170,
200,
],
type='MultiStepLR'),
]
resume = False
test_cfg = dict()
test_dataloader = dict(
batch_size=32,
dataset=dict(
ann_file='annotations/mpii_val.json',
data_mode='topdown',
data_prefix=dict(img='images/'),
data_root='data/mpii/',
headbox_file='data/mpii//annotations/mpii_gt_val.mat',
pipeline=[
dict(type='LoadImage'),
dict(type='GetBBoxCenterScale'),
dict(input_size=(
256,
256,
), type='TopdownAffine'),
dict(type='PackPoseInputs'),
],
test_mode=True,
type='MpiiDataset'),
drop_last=False,
num_workers=2,
persistent_workers=True,
sampler=dict(round_up=False, shuffle=False, type='DefaultSampler'))
test_evaluator = dict(type='MpiiPCKAccuracy')
train_cfg = dict(by_epoch=True, max_epochs=210, val_interval=10)
train_dataloader = dict(
batch_size=64,
dataset=dict(
ann_file='annotations/mpii_train.json',
data_mode='topdown',
data_prefix=dict(img='images/'),
data_root='data/mpii/',
pipeline=[
dict(type='LoadImage'),
dict(type='GetBBoxCenterScale'),
dict(direction='horizontal', type='RandomFlip'),
dict(shift_prob=0, type='RandomBBoxTransform'),
dict(input_size=(
256,
256,
), type='TopdownAffine'),
dict(
encoder=dict(input_size=(
256,
256,
), type='RegressionLabel'),
type='GenerateTarget'),
dict(type='PackPoseInputs'),
],
type='MpiiDataset'),
num_workers=2,
persistent_workers=True,
sampler=dict(shuffle=True, type='DefaultSampler'))
train_pipeline = [
dict(type='LoadImage'),
dict(type='GetBBoxCenterScale'),
dict(direction='horizontal', type='RandomFlip'),
dict(shift_prob=0, type='RandomBBoxTransform'),
dict(input_size=(
256,
256,
), type='TopdownAffine'),
dict(
encoder=dict(input_size=(
256,
256,
), type='RegressionLabel'),
type='GenerateTarget'),
dict(type='PackPoseInputs'),
]
val_cfg = dict()
val_dataloader = dict(
batch_size=32,
dataset=dict(
ann_file='annotations/mpii_val.json',
data_mode='topdown',
data_prefix=dict(img='images/'),
data_root='data/mpii/',
headbox_file='data/mpii//annotations/mpii_gt_val.mat',
pipeline=[
dict(type='LoadImage'),
dict(type='GetBBoxCenterScale'),
dict(input_size=(
256,
256,
), type='TopdownAffine'),
dict(type='PackPoseInputs'),
],
test_mode=True,
type='MpiiDataset'),
drop_last=False,
num_workers=2,
persistent_workers=True,
sampler=dict(round_up=False, shuffle=False, type='DefaultSampler'))
val_evaluator = dict(type='MpiiPCKAccuracy')
val_pipeline = [
dict(type='LoadImage'),
dict(type='GetBBoxCenterScale'),
dict(input_size=(
256,
256,
), type='TopdownAffine'),
dict(type='PackPoseInputs'),
]
vis_backends = [
dict(type='LocalVisBackend'),
]
visualizer = dict(
name='visualizer',
type='PoseLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
])
work_dir = './work_dirs\\td-reg_res50_rle-8xb64-210e_mpii-256x256'
09/04 22:25:36 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
09/04 22:25:36 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) SyncBuffersHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_val:
(VERY_HIGH ) RuntimeInfoHook
--------------------
before_val_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) SyncBuffersHook
--------------------
before_val_iter:
(NORMAL ) IterTimerHook
--------------------
after_val_iter:
(NORMAL ) IterTimerHook
(NORMAL ) PoseVisualizationHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_val:
(VERY_HIGH ) RuntimeInfoHook
--------------------
after_train:
(VERY_HIGH ) RuntimeInfoHook
(VERY_LOW ) CheckpointHook
--------------------
before_test:
(VERY_HIGH ) RuntimeInfoHook
--------------------
before_test_epoch:
(NORMAL ) IterTimerHook
--------------------
before_test_iter:
(NORMAL ) IterTimerHook
--------------------
after_test_iter:
(NORMAL ) IterTimerHook
(NORMAL ) PoseVisualizationHook
(NORMAL ) BadCaseAnalysisHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) BadCaseAnalysisHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test:
(VERY_HIGH ) RuntimeInfoHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
Traceback (most recent call last):
File "tools/train.py", line 162, in <module>
main()
File "tools/train.py", line 158, in main
runner.train()
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\runner\runner.py", line 1728, in train
self._train_loop = self.build_train_loop(
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\runner\runner.py", line 1527, in build_train_loop
loop = EpochBasedTrainLoop(
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\runner\loops.py", line 44, in __init__
super().__init__(runner, dataloader)
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\runner\base_loop.py", line 26, in __init__
self.dataloader = runner.build_dataloader(
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\runner\runner.py", line 1370, in build_dataloader
dataset = DATASETS.build(dataset_cfg)
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\registry\registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\registry\build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "D:\mkvirtualenv\w38\lib\site-packages\mmpose\datasets\datasets\body\mpii_dataset.py", line 122, in __init__
super().__init__(
File "D:\mkvirtualenv\w38\lib\site-packages\mmpose\datasets\datasets\base\base_coco_style_dataset.py", line 103, in __init__
super().__init__(
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\dataset\base_dataset.py", line 247, in __init__
self.full_init()
File "D:\mkvirtualenv\w38\lib\site-packages\mmengine\dataset\base_dataset.py", line 298, in full_init
self.data_list = self.load_data_list()
File "D:\mkvirtualenv\w38\lib\site-packages\mmpose\datasets\datasets\base\base_coco_style_dataset.py", line 205, in load_data_list
instance_list, image_list = self._load_annotations()
File "D:\mkvirtualenv\w38\lib\site-packages\mmpose\datasets\datasets\body\mpii_dataset.py", line 166, in _load_annotations
assert 'center' in ann, f"Annotation at index {idx} is missing 'center': {ann}"
AssertionError: Annotation at index 0 is missing 'center': joints_vis | closed | 2024-09-04T14:31:01Z | 2024-09-05T09:27:09Z | https://github.com/open-mmlab/mmdetection/issues/11939 | [] | liangzzzz233 | 1 |
ray-project/ray | machine-learning | 50,879 | CI test linux://python/ray/train/v2:test_v2_api is flaky | CI test **linux://python/ray/train/v2:test_v2_api** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8535#01953acf-0951-40b4-a2b5-36341549fc9b
- https://buildkite.com/ray-project/postmerge/builds/8535#01953aa3-599c-481c-9dda-6fbc27388a8d
DataCaseName-linux://python/ray/train/v2:test_v2_api-END
Managed by OSS Test Policy | closed | 2025-02-25T02:25:37Z | 2025-03-01T01:45:51Z | https://github.com/ray-project/ray/issues/50879 | [
"bug",
"triage",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability",
"ml"
] | can-anyscale | 20 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 724 | Stuck on "WARNING:root:Setting up a new session" | I downloaded the `facades` dataset. I then ran `python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA`, but I'm stuck at `WARNING:root:Setting up a new session`
Even after a few hours, it stays there and doesn't seem to progress. Why is this? | closed | 2019-08-06T04:45:16Z | 2019-09-21T22:33:10Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/724 | [] | za13 | 1
cookiecutter/cookiecutter-django | django | 5,288 | [Update Django] Django 5.1 | 5.1 requirements tables
## base.txt
| Name | Version in Master | 5.1 Compatible Version | OK |
| ---- | :---------------: | :-----------------------------: | :-: |
| [python-slugify](https://github.com/un33k/python-slugify) | 8.0.4 | n/a | ✅ |
| [Pillow](https://pypi.org/project/pillow/) | 11.1.0 | n/a | ✅ |
| [rcssmin](https://opensource.perlig.de/rcssmin/) | 1.1.2 | n/a | ✅ |
| [argon2-cffi](https://pypi.org/project/argon2-cffi/) | 23.1.0 | n/a | ✅ |
| [whitenoise](https://pypi.org/project/whitenoise/) | 6.9.0 | 6.7.0 | ✅ |
| [redis](https://github.com/redis/redis-py) | 5.2.1 | n/a | ✅ |
| [hiredis](https://github.com/redis/hiredis-py) | 3.1.0 | n/a | ✅ |
| [celery](https://docs.celeryq.dev/) | 5.4.0 | n/a | ✅ |
| [django-celery-beat](https://github.com/celery/django-celery-beat) | 2.7.0 | 2.7.0 | ✅ |
| [flower](https://github.com/mher/flower) | 2.0.1 | n/a | ✅ |
| [uvicorn](https://pypi.org/project/uvicorn/) | 0.34.0 | n/a | ✅ |
| [uvicorn-worker](https://pypi.org/project/uvicorn-worker/) | 0.3.0 | n/a | ✅ |
| [django](https://pypi.org/project/Django/) | 5.0.13 | 2.7.0 | ✅ |
| [django-environ](https://django-environ.readthedocs.org) | 0.12.0 | | ❓ |
| [django-model-utils](https://github.com/jazzband/django-model-utils) | 5.0.0 | 5.0.0 | ✅ |
| [django-allauth](https://allauth.org) | 65.4.1 | 64.1.0 | ✅ |
| [django-crispy-forms](https://pypi.org/project/django-crispy-forms/) | 2.3 | 2.3 | ✅ |
| [crispy-bootstrap5](https://pypi.org/project/crispy-bootstrap5/) | 2024.10 | 2024.10 | ✅ |
| [django-compressor](https://django-compressor.readthedocs.io/en/latest/) | 4.5.1 | 4.5.1 | ✅ |
| [django-redis](https://github.com/jazzband/django-redis) | 5.4.0 | | ❌ |
| [djangorestframework](https://www.django-rest-framework.org/) | 3.15.2 | | ❌ |
| [django-cors-headers](https://pypi.org/project/django-cors-headers/) | 4.7.0 | 4.4.0 | ✅ |
| [drf-spectacular](https://github.com/tfranzel/drf-spectacular) | 0.28.0 | 0.28.0 | ✅ |
| [django-webpack-loader](https://github.com/django-webpack/django-webpack-loader) | 3.1.1 | 3.1.1 | ✅ |
## local.txt
| Name | Version in Master | 5.1 Compatible Version | OK |
| ---- | :---------------: | :-----------------------------: | :-: |
| [Werkzeug](https://pypi.org/project/Werkzeug/) | 3.1.3 | n/a | ✅ |
| [ipdb](https://github.com/gotcha/ipdb) | 0.13.13 | n/a | ✅ |
| [psycopg](https://psycopg.org/psycopg3/) | 3.2.6 | n/a | ✅ |
| [watchfiles](https://github.com/samuelcolvin/watchfiles) | 1.0.4 | n/a | ✅ |
| [mypy](https://pypi.org/project/mypy/) | 1.15.0 | n/a | ✅ |
| [django-stubs](https://github.com/typeddjango/django-stubs) | 5.1.3 | 5.1.1 | ✅ |
| [pytest](https://pypi.org/project/pytest/) | 8.3.5 | n/a | ✅ |
| [pytest-sugar](https://github.com/Teemu/pytest-sugar) | 1.0.0 | n/a | ✅ |
| [djangorestframework-stubs](https://github.com/typeddjango/djangorestframework-stubs) | 3.15.3 | n/a | ✅ |
| [sphinx](https://pypi.org/project/Sphinx/) | 8.3.0 | n/a | ✅ |
| [sphinx-autobuild](https://pypi.org/project/sphinx-autobuild/) | 2024.10.3 | n/a | ✅ |
| [ruff](https://docs.astral.sh/ruff) | 0.11.2 | n/a | ✅ |
| [coverage](https://github.com/nedbat/coveragepy) | 7.7.1 | n/a | ✅ |
| [djlint](https://pypi.org/project/djlint/) | 1.36.4 | n/a | ✅ |
| [pre-commit](https://github.com/pre-commit/pre-commit) | 4.2.0 | n/a | ✅ |
| [factory-boy](https://github.com/FactoryBoy/factory_boy) | 3.3.2 | 3.3.1 | ✅ |
| [django-debug-toolbar](https://pypi.org/project/django-debug-toolbar/) | 5.1.0 | 5.0.1 | ✅ |
| [django-extensions](https://github.com/django-extensions/django-extensions) | 3.2.3 | | ❌ |
| [django-coverage-plugin](https://github.com/nedbat/django_coverage_plugin) | 3.1.0 | | ❌ |
| [pytest-django](https://pypi.org/project/pytest-django/) | 4.10.0 | 4.9.0 | ✅ |
## production.txt
| Name | Version in Master | 5.1 Compatible Version | OK |
| ---- | :---------------: | :-----------------------------: | :-: |
| [gunicorn](https://pypi.org/project/gunicorn/) | 23.0.0 | n/a | ✅ |
| [psycopg](https://psycopg.org/psycopg3/) | 3.2.6 | n/a | ✅ |
| [Collectfasta](https://github.com/jasongi/collectfasta/) | 3.2.1 | n/a | ✅ |
| [sentry-sdk](https://github.com/getsentry/sentry-python) | 2.24.0 | n/a | ✅ |
| [hiredis](https://github.com/redis/hiredis-py) | 3.1.0 | n/a | ✅ |
| [django-storages](https://pypi.org/project/django-storages/) | 1.14.5 | 1.14.5 | ✅ |
| [django-anymail](https://pypi.org/project/django-anymail/) | 12.0 | 12.0 | ✅ |
| open | 2024-08-08T05:30:53Z | 2025-03-24T15:15:44Z | https://github.com/cookiecutter/cookiecutter-django/issues/5288 | [
"django5.1"
] | github-actions[bot] | 12 |
gunthercox/ChatterBot | machine-learning | 1,598 | error: chatbot object has no attribute train |
runfile('C:/Users/gnane/OneDrive/Documents/PhD/phd2018 Implementation/GGBot/reflexivemodel.py', wdir='C:/Users/gnane/OneDrive/Documents/PhD/phd2018 Implementation/GGBot')
Training the Reflexive Layer
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] C:\Users\gnane\AppData\Roaming\nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\gnane\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\gnane\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Traceback (most recent call last):
File "<ipython-input-5-5e98e236c263>", line 1, in <module>
runfile('C:/Users/gnane/OneDrive/Documents/PhD/phd2018 Implementation/GGBot/reflexivemodel.py', wdir='C:/Users/gnane/OneDrive/Documents/PhD/phd2018 Implementation/GGBot')
File "C:\Users\gnane\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 704, in runfile
execfile(filename, namespace)
File "C:\Users\gnane\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/gnane/OneDrive/Documents/PhD/phd2018 Implementation/GGBot/reflexivemodel.py", line 12, in <module>
refbot.train(conversation)
AttributeError: 'ChatBot' object has no attribute 'train'
This is the error I get after installing on my new system and trying to execute my code.
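For reference, in ChatterBot 1.0+ training moved off the `ChatBot` class into trainer classes under `chatterbot.trainers`, which is exactly what this AttributeError suggests. A small compatibility sketch (the `ListTrainer` import path is the documented one in recent releases; `get_train_callable` is an illustrative helper name):

```python
def get_train_callable(bot):
    """Return a train(...) callable for both old and new ChatterBot APIs."""
    if hasattr(bot, "train"):
        # Pre-1.0 API: bot.train(conversation) existed directly.
        return bot.train
    # 1.0+ API: training lives in trainer classes.
    from chatterbot.trainers import ListTrainer
    return ListTrainer(bot).train
```

With ChatterBot 1.0+, `refbot.train(conversation)` becomes `ListTrainer(refbot).train(conversation)`.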
| closed | 2019-01-31T16:13:11Z | 2019-11-28T07:23:56Z | https://github.com/gunthercox/ChatterBot/issues/1598 | [] | ggkar | 3 |
ExpDev07/coronavirus-tracker-api | fastapi | 139 | help needed | hi
I can't seem to be able to call:
/v2/locations/:id
it always returns 404
can you give me an example of a request with full path?
Thanks | closed | 2020-03-22T13:34:03Z | 2020-03-22T13:50:41Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/139 | [] | gianpaolof | 2 |
davidsandberg/facenet | tensorflow | 938 | why REGULARIZATION_LOSSES add prelogits_norm? |
```python
# Norm for the prelogits
eps = 1e-5
prelogits_norm = tf.reduce_mean(tf.norm(tf.abs(prelogits) + eps, ord=args.prelogits_norm_p, axis=1))
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, prelogits_norm * args.prelogits_norm_loss_factor)
```
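For context, a framework-free sketch of what registering a value into `REGULARIZATION_LOSSES` does: in a typical TF1 training loop, everything in that collection is summed into the total loss, so `prelogits_norm` acts as an L_p penalty on embedding magnitude (toy numbers; variable names mirror the snippet above):

```python
import numpy as np

REGULARIZATION_LOSSES = []  # stand-in for tf.GraphKeys.REGULARIZATION_LOSSES

def add_to_collection(value):
    REGULARIZATION_LOSSES.append(value)

prelogits = np.array([[3.0, 4.0], [0.0, 5.0]])  # toy embedding batch
eps = 1e-5
norm_p = 1.0        # args.prelogits_norm_p
loss_factor = 5e-4  # args.prelogits_norm_loss_factor

# Mirrors the snippet: mean L_p norm of |prelogits| + eps over the batch.
prelogits_norm = np.mean(np.linalg.norm(np.abs(prelogits) + eps, ord=norm_p, axis=1))
add_to_collection(prelogits_norm * loss_factor)

cross_entropy = 1.234  # placeholder for the main classification loss
# What the train loop effectively does with the collection:
total_loss = cross_entropy + sum(REGULARIZATION_LOSSES)
```

Penalizing the norm keeps the pre-logit embeddings from growing without bound, which is why it is appended to the regularization losses rather than the main loss.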
| open | 2018-12-21T03:55:05Z | 2018-12-21T03:58:41Z | https://github.com/davidsandberg/facenet/issues/938 | [] | ninerui | 1 |
ranaroussi/yfinance | pandas | 1338 | Request: dynamically extract decryption key(s) from HTML | Probably you're aware of the back-and-forth tussle with Yahoo to decrypt their data. The latest fix is to use hardcoded keys. Better would be for `yfinance` to extract them dynamically from the HTML it's parsing. See below for important info, hopefully someone can implement.
From @ValueRaider:
> Can you look into the feasibility of fetching & parsing the JS file to extract keys? Just to assess how difficult implementation would be, no need to actually do it.
From @ifel:
> Basically, we need to look into the html and find a version of the main.js is in use, now it's https://s.yimg.com/uc/finance/dd-site/js/main.92af8b3fe8fc9dac750f.min.js. Then load it, and look for a string like this:
```
(t.context.dispatcher.stores=JSON.parse(c.default.decrypt(t.context.dispatcher.stores,"".concat(t.d742f5c4a0a6).concat(t["3bbecdde7968"]).concat(t["5ad3aeab21a5"]).concat(t["08d0c11ddeba"])).toString(l.default))
```
> note they use several ways to access the keys. The last 2 strings were similar:
```
(t.context.dispatcher.stores = JSON.parse(c().decrypt(n, t["2f28c82aa38ad3e4dc1a"]).toString(d())))
```
```
(t.context.dispatcher.stores = JSON.parse(c().decrypt(n, `${t["5e971f17f196"]}${t["04b4af022e08"]}${t.f550b3f47950}${t["1e0c85f35401"]}`).toString(d()))
``` | closed | 2023-01-25T14:59:10Z | 2023-03-27T15:42:22Z | https://github.com/ranaroussi/yfinance/issues/1338 | [] | ValueRaider | 4 |
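A regex sketch of the extraction described above (the patterns are illustrative and will need updating whenever Yahoo changes the minified bundle; the key parts observed so far are long hex strings, which keeps `t.context` etc. from matching):

```python
import re

# Matches t.d742f5c4a0a6-style and t["3bbecdde7968"]-style accesses.
KEY_TOKEN = re.compile(r't(?:\.([0-9a-f]{6,})\b|\["([0-9a-f]{6,})"\])')

def extract_key_names(js_source):
    """Pull the obfuscated store-key names out of Yahoo's decrypt() call.

    Handles all three call shapes quoted above: concat() chains, a
    single bracket access, and a template literal.
    """
    call = re.search(r'\.decrypt\((.*?)\.toString\(', js_source, re.DOTALL)
    if not call:
        return []
    return [dot or bracket for dot, bracket in KEY_TOKEN.findall(call.group(1))]
```

The extracted names would then be looked up in the page's JSON data and concatenated, in order, to form the decryption key.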
pywinauto/pywinauto | automation | 542 | .drag_mouse() doesn't works | win32 backend, hwndwrapper.
1. `.drag_mouse()` is unable to drag the mouse.
2. Mouse cursor stays on the left side of the screen, seemingly doing nothing.
3. `.drag_mouse_input()` works flawlessly on the other hand.
**Example code where dragging won't work using `.drag_mouse()`**
```
import pywinauto
from pywinauto import actionlogger
actionlogger.enable()
APP = pywinauto.Application(backend="win32").start("notepad.exe")
elem = APP.Notepad.Edit
elem.type_keys("12345678901234567890")
elem.drag_mouse(button="left", press_coords=(elem.rectangle().left + 10, elem.rectangle().top), release_coords=(elem.rectangle().left + 600, elem.rectangle().top))
```
**Result:**
```
2018-08-10 16:43:52,998 INFO: Started notepad.exe application.
2018-08-10 16:43:53,498 INFO: Typed text to the Edit: 12345678901234567890
2018-08-10 16:43:53,622 INFO: Clicked Edit "12345678901234567890" by left button event (342, 584)
2018-08-10 16:43:53,622 INFO: Moving mouse to relative (client) coordinates (342, 584)
2018-08-10 16:43:53,730 INFO: Moved mouse over Edit "12345678901234567890" to screen point (342, 584) by WM_MOUSEMOVE
2018-08-10 16:43:53,838 INFO: Moving mouse to relative (client) coordinates (343, 584)
2018-08-10 16:43:53,947 INFO: Moved mouse over Edit "12345678901234567890" to screen point (343, 584) by WM_MOUSEMOVE
2018-08-10 16:43:54,056 INFO: Moving mouse to relative (client) coordinates (344, 584)
2018-08-10 16:43:54,164 INFO: Moved mouse over Edit "12345678901234567890" to screen point (344, 584) by WM_MOUSEMOVE
2018-08-10 16:43:54,273 INFO: Moving mouse to relative (client) coordinates (345, 584)
2018-08-10 16:43:54,381 INFO: Moved mouse over Edit "12345678901234567890" to screen point (345, 584) by WM_MOUSEMOVE
2018-08-10 16:43:54,490 INFO: Moving mouse to relative (client) coordinates (346, 584)
2018-08-10 16:43:54,598 INFO: Moved mouse over Edit "12345678901234567890" to screen point (346, 584) by WM_MOUSEMOVE
2018-08-10 16:43:54,707 INFO: Moving mouse to relative (client) coordinates (932, 584)
2018-08-10 16:43:54,815 INFO: Moved mouse over Edit "12345678901234567890" to screen point (932, 584) by WM_MOUSEMOVE
2018-08-10 16:43:55,032 INFO: Clicked Edit "12345678901234567890" by left button event (932, 584)
```
Result after replacing the `.drag_mouse()` line with `elem.drag_mouse_input(button="left", src=(elem.rectangle().left + 10, elem.rectangle().top), dst=(elem.rectangle().left + 600, elem.rectangle().top))`:
```
2018-08-10 16:52:12,800 INFO: Started notepad.exe application.
2018-08-10 16:52:13,311 INFO: Typed text to the Edit: 12345678901234567890
2018-08-10 16:52:13,324 INFO: Drag mouse from coordinates (277, 670) to (867, 670)
2018-08-10 16:52:13,977 INFO: Clicked Edit "12345678901234567890" by left button mouse click at (277, 670)
2018-08-10 16:52:14,287 INFO: Moved mouse over Edit "12345678901234567890" to screen point ((277, 670)
2018-08-10 16:52:14,504 INFO: Moved mouse over Edit "12345678901234567890" to screen point ((278, 670)
2018-08-10 16:52:14,721 INFO: Moved mouse over Edit "12345678901234567890" to screen point ((279, 670)
2018-08-10 16:52:14,938 INFO: Moved mouse over Edit "12345678901234567890" to screen point ((280, 670)
2018-08-10 16:52:15,155 INFO: Moved mouse over Edit "12345678901234567890" to screen point ((281, 670)
2018-08-10 16:52:15,373 INFO: Moved mouse over Edit "12345678901234567890" to screen point ((867, 670)
2018-08-10 16:52:15,590 INFO: Clicked Edit "12345678901234567890" by left button mouse click at (867, 670)
``` | open | 2018-08-10T14:57:47Z | 2018-08-13T18:30:01Z | https://github.com/pywinauto/pywinauto/issues/542 | [
"bug",
"Priority-Low"
] | meshuggahtas | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,072 | Transport websocket does not work and will result in 400 Return codes | I am currently facing the following problem:
I basically followed [this](https://blog.miguelgrinberg.com/post/easy-websockets-with-flask-and-gevent) tutorial to establish a socketio connection between a flask server and the browser. This does work, but the socket.io implementation only uses the polling transport method. When I try to force the transport to websocket by `io.connect({transports: ['websocket']});`, I do see websocket requests from the browser resulting in Status 400 responses from the server.
```
from flask import Flask, render_template, send_from_directory,request
import logging
from flask_socketio import SocketIO, emit
from utils import repeated_timer
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
app.logger.setLevel(logging.DEBUG)
socketio = SocketIO(app)
@app.route('/')
def main_page():
return render_template('main_page.html')
@socketio.on('connect')
def test_connect():
print("Client connected")
socketio.emit('res','answer')
@socketio.on('disconnect')
def test_connect():
print("Client disconnect")
@socketio.on_error_default
def default_error_handler(e):
print(request.event["message"]) # "my error event"
print(request.event["args"]) # (data,)
def send_update():
socketio.emit('dataupdate', {'data': 'no-data'})
if __name__ == '__main__':
repeated_timer.RepeatedTimer(1, send_update)
socketio.run(app)
```
On the server side I do get some "client connnected" and "Client disconnect", but there is no data transfer through the websocket.
| closed | 2019-10-01T14:41:33Z | 2019-10-01T15:02:46Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1072 | [
"question"
] | Tyde | 4 |
sammchardy/python-binance | api | 1,031 | [testnet] Something wrong with client.futures_create_order() | I use my testnet account and run futures_create_order() for the symbol BTCUSDT; it works fine. The code is as follows:
`client.futures_create_order(symbol='BTCUSDT', side='BUY', positionSide='LONG',type='MARKET',quantity=20)`
But when I use it for the symbol SOLUSDT:
`client.futures_create_order(symbol='SOLUSDT', side='BUY', positionSide='LONG',type='MARKET',quantity=20)`
Error message appears:
binance.exceptions.BinanceAPIException: APIError(code=-4131): The counterparty's best price does not meet the PERCENT_PRICE filter limit.
How to solve the problem? thanks!!! | open | 2021-09-15T07:35:51Z | 2023-02-18T08:27:40Z | https://github.com/sammchardy/python-binance/issues/1031 | [] | bmw7 | 8 |
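Error -4131 means the order would execute outside the symbol's PERCENT_PRICE bounds around the mark price. One way to diagnose it is to fetch the symbol's filters (e.g. via `client.futures_exchange_info()`) and check the bounds before ordering. Below is a minimal, library-free sketch over exchangeInfo-shaped data — the field names `multiplierUp`/`multiplierDown` come from Binance's public filter documentation and should be verified against a live response:

```python
def get_filter(exchange_info: dict, symbol: str, filter_type: str) -> dict:
    """Look up a filter (e.g. PERCENT_PRICE) for a symbol in exchangeInfo-shaped data."""
    for sym in exchange_info["symbols"]:
        if sym["symbol"] == symbol:
            for f in sym["filters"]:
                if f["filterType"] == filter_type:
                    return f
    raise KeyError(f"{filter_type} not found for {symbol}")

def price_within_percent_filter(price: float, mark_price: float, pct_filter: dict) -> bool:
    """Check a candidate price against the PERCENT_PRICE bounds around the mark price."""
    up = float(pct_filter["multiplierUp"])
    down = float(pct_filter["multiplierDown"])
    return mark_price * down <= price <= mark_price * up

# Hand-written exchangeInfo fragment for illustration:
info = {"symbols": [{"symbol": "SOLUSDT", "filters": [
    {"filterType": "PERCENT_PRICE", "multiplierUp": "1.05", "multiplierDown": "0.95"}]}]}
f = get_filter(info, "SOLUSDT", "PERCENT_PRICE")
print(price_within_percent_filter(102.0, 100.0, f))  # True
print(price_within_percent_filter(110.0, 100.0, f))  # False
```

On the testnet, thin order books make this filter trip far more often than on production, which is likely why BTCUSDT succeeds while SOLUSDT fails.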
graphql-python/graphene-sqlalchemy | sqlalchemy | 269 | PostgreSQL specific fields (DATERANGE, TSTZRANGE, etc.) break when SQLAlchemy-Utils is installed | Due to the following lines:
https://github.com/graphql-python/graphene-sqlalchemy/blob/421f8e48d169a91e20328108c6f56ae0987d21b8/graphene_sqlalchemy/converter.py#L19-L22
Graphene will break with the following message when `SQLAlchemy-Utils` is installed:
```
Exception: Don't know how to convert the SQLAlchemy field Foo.bar (<class 'sqlalchemy.sql.schema.Column'>)
```
where Foo.bar is a TSTZRANGE type as follows:
```python
In [11]: from script import base_script
...: from sqlalchemy.inspection import inspect as sqlalchemyinspect
...: from graphene_sqlalchemy.converter import convert_sqlalchemy_type
...: from singledispatch import _compose_mro
...: from foo import Foo
...:
...: inspected_model = sqlalchemyinspect(Foo)
...: print(inspected_model.columns.items()[5])
...: name, column = inspected_model.columns.items()[5]
...: model_hierarchy = _compose_mro(
...: column.type.__class__, convert_sqlalchemy_type.registry.keys()
...: )
...: reg = convert_sqlalchemy_type.registry
...: for model in model_hierarchy:
...: if model in reg:
...: print(model, reg[model])
...:
...: print(model_hierarchy)
...:
('bar', Column('bar', TSTZRANGE(), table=<foos>, nullable=False))
<class 'object'> <function convert_sqlalchemy_type at 0x108f88560>
[<class 'sqlalchemy.dialects.postgresql.ranges.TSTZRANGE'>, <class 'sqlalchemy.dialects.postgresql.ranges.RangeOperators'>, <class 'sqlalchemy.sql.type_api.TypeEngine'>, <class 'sqlalchemy.sql.visitors.Visitable'>, <class 'object'>]
``` | closed | 2020-02-27T23:23:03Z | 2023-02-24T14:55:29Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/269 | [
"bug"
] | lame | 2 |
ultralytics/yolov5 | machine-learning | 12,528 | why training map is 0.9 ,but validation map is 0.01. | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Training, Validation
### Bug
training mAP is about 0.9, but validation mAP is very small
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2023-12-20T06:08:41Z | 2024-10-20T19:34:54Z | https://github.com/ultralytics/yolov5/issues/12528 | [
"bug"
] | dongdong2023 | 3 |
wagtail/wagtail | django | 12,444 | jQuery Prototype pollution on docs website | ### Issue Summary
Thank you to [Devansh Chauhan](https://www.linkedin.com/in/devansh-chauhan-b36b6a1b1/) for reporting this. There is a prototype pollution vulnerability in a jQuery version in use on the docs.wagtail.org website.
### Steps to Reproduce
1. Visit the website.
2. Right-click, select "Inspect Element," and paste this payload into the console:
```js
$.extend(true, {}, JSON.parse('{"__proto__": {"devMode": true}}'))
```
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
Anyone can contribute to this, the first step will be agreeing here in the comments on what to do. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| closed | 2024-10-22T09:01:52Z | 2024-10-24T14:02:19Z | https://github.com/wagtail/wagtail/issues/12444 | [
"type:Bug",
"component:Security",
"status:Won't Fix"
] | thibaudcolas | 7 |
SYSTRAN/faster-whisper | deep-learning | 1,214 | model output asr often lost fragment text | After running the model for ASR recognition, some content is often missing
audio link:(https://share-github.tos-cn-beijing.volces.com/test.mp3)
```python
import whisperx
from faster_whisper import WhisperModel
mp3_audio = whisperx.load_audio('test.mp3')
prompt = ' 新闻今日谈 林秀芹 李炜 时事评论员 '
language = 'zh'
asr_model = WhisperModel("large-v2", device='cuda', compute_type='float16')
segments, info = asr_model.transcribe(mp3_audio,
beam_size=5,
vad_filter=True,
language=language,
initial_prompt=prompt,
hotwords=prompt,
)
tmp_segments = []
add_time = 0  # timestamp offset; undefined in the original snippet, assumed 0 here
for segment in segments:
simplified_text = segment.text
if hasattr(segment, 'words') and segment.words:
tmp_segments.append(
{"start": add_time + segment.start, "end": add_time + segment.end,
"text": simplified_text, "words": segment.words})
else:
tmp_segments.append(
{"start": add_time + segment.start, "end": add_time + segment.end,
"text": simplified_text}) # , "words": segment.words
asr_result = {'segments': tmp_segments, 'language': language}
```
current output:
```text
{
'language': 'zh',
'segments': [
{'end': 21.89, 'start': 17.49, 'text': '我是林秀芹 首先联合话题关注的是中德关系的新的进展'}, ......
{'end': 755.53, 'start': 748.93, 'text': '当然 谢谢李伟先生带来的分析 我们先休息下来 但关注的是世界经济论坛非洲峰会的相关话题 稍后再见'},
{'end': 787.29, 'start': 781.09, 'text': '谈非洲峰会呢 六号在南非闭幕 这一次的非洲峰会可以说是吸引全世界一个关注目光'},
... ... ]}
```
correct output:
```text
{
'language': 'zh',
'segments': [
{'end': 17.49, 'start': 14.8, 'text': '大家好 欢迎收看今天的 新闻今日谈'}, # lost content
{'end': 21.89, 'start': 17.49, 'text': '我是林秀芹 首先联合话题关注的是中德关系的新的进展'}, ......
{'end': 755.53, 'start': 748.93, 'text': '当然 谢谢李伟先生带来的分析 我们先休息下来 但关注的是世界经济论坛非洲峰会的相关话题 稍后再见'},
{'end': 781, 'start': 778, 'text': '欢迎回来 世界经济论坛'}, # lost content
{'end': 787.29, 'start': 781.09, 'text': '非洲峰会呢 六号在南非闭幕 这一次的非洲峰会可以说是吸引全世界一个关注目光'},
... ... ]}
```
env:
```
faster-whisper 1.1.0
```
How to adjust parameters or modify code to ensure normal output
help plz.
| open | 2024-12-24T09:08:54Z | 2024-12-29T16:09:48Z | https://github.com/SYSTRAN/faster-whisper/issues/1214 | [] | RichardQin1 | 4 |
autokey/autokey | automation | 319 | Mouse stops working on using HotKey | ## Classification:
Crash/Hang/Data Loss
## Reproducibility:
Sometimes
## Version
AutoKey version:
Used GUI (Gtk, Qt, or both): GTK
Installed via: Arch AUR repository (https://aur.archlinux.org/packages/autokey/)
Linux Distribution: Arch
## Summary
On using the shortcut ctrl + space to display the 'My Phrases' menu, every three to four times the mouse click will stop working. That is to say I can use the keyboard to interact with the desktop, I can move the mouse, but I cannot click anything.
It appears to be the same or similar issue described in https://github.com/autokey/autokey/issues/264, except I am using the latest version of AutoKey.
## Steps to Reproduce (if applicable)
Press ctrl + space keys together
## Expected Results
'My Phrases' menu is displayed.
## Actual Results
No longer able to click any desktop objects using mouse.
| open | 2019-11-06T09:51:53Z | 2020-04-06T08:47:34Z | https://github.com/autokey/autokey/issues/319 | [
"bug",
"duplicate",
"autokey-gtk"
] | zombieramboz | 4 |
lundberg/respx | pytest | 239 | Add type based matching | I was thinking that it would be useful to have type-based matching. One of the use cases would be when the app sends a generated UUID and all you care about is whether the type is correct, not the specific value.
i.e.
```python
import uuid
import respx
import httpx
respx_mock.post("http://example.com", json={"id": uuid.UUID})
httpx.post("http://example.com", json={"id": uuid.uuid4()})  # should be mocked by respx
```
This could be extended to other types like simple types, types from typing module, etc.
WDYT? | open | 2023-06-21T10:24:51Z | 2024-03-18T18:57:31Z | https://github.com/lundberg/respx/issues/239 | [
"enhancement"
] | macieyng | 3 |
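The proposed semantics — treating a class inside the expected JSON as an `isinstance` check — can be prototyped as a small recursive matcher before wiring it into respx's pattern classes. This is a standalone sketch, not respx API:

```python
import uuid

def json_matches(expected, actual) -> bool:
    """Match `actual` against `expected`, where any class in `expected`
    (e.g. uuid.UUID, int) is treated as a type check instead of an equality check."""
    if isinstance(expected, type):
        return isinstance(actual, expected)
    if isinstance(expected, dict):
        return (isinstance(actual, dict)
                and expected.keys() == actual.keys()
                and all(json_matches(v, actual[k]) for k, v in expected.items()))
    if isinstance(expected, list):
        return (isinstance(actual, list)
                and len(expected) == len(actual)
                and all(json_matches(e, a) for e, a in zip(expected, actual)))
    return expected == actual

print(json_matches({"id": uuid.UUID}, {"id": uuid.uuid4()}))   # True
print(json_matches({"id": uuid.UUID}, {"id": "not-a-uuid"}))   # False
```

Extending this to `typing` constructs (`Optional`, `Union`, …) would need `typing.get_origin`/`get_args` handling on top of the plain `isinstance` branch.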
matplotlib/matplotlib | data-visualization | 29,350 | [Bug]: Matplotlib causes segmentation fault when hovering mouse over graph | ### Bug summary
When hovering over a graph created with a GTK4 backend, it causes a segmentation fault.
### Code for reproduction
```Python
import matplotlib.pyplot as plt
plt.plot(1, 2)
plt.show()
```
### Actual outcome
A graph window shows up. However, when you hover your mouse over the window, some cryptic errors are thrown, and then it causes a segmentation fault:
```
/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/backends/backend_gtk4.py:160: Warning: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed
guiEvent=controller.get_current_event(),
/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/backends/backend_gtk4.py:160: Warning: g_object_is_floating: assertion 'G_IS_OBJECT (object)' failed
guiEvent=controller.get_current_event(),
/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/backends/backend_gtk4.py:160: Warning: g_object_ref_sink: assertion 'G_IS_OBJECT (object)' failed
guiEvent=controller.get_current_event(),
** (python3:238467): CRITICAL **: 20:20:20.387: pygobject_register_wrapper: assertion 'PyObject_TypeCheck(self, &PyGObject_Type)' failed
Segmentation fault (core dumped)
```
### Expected outcome
An empty interactive graph window shows up, which you can hover your mouse over without causing a segmentation fault.
### Additional information
This bug started to happen when I recently updated Matplotlib to the latest version. In earlier versions, things worked as expected. However, I updated several libraries at the same time, so it might be one of them that is the problem.
It only happens when using a GTK4 backend (both Cairo and Agg cause problems). GTK3 works though.
Using faulthandler:
```
import matplotlib.pyplot as plt
import faulthandler
faulthandler.enable()
plt.plot(1,2)
plt.show()
```
You get the following output:
```
/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/backends/backend_gtk4.py:160: Warning: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed
guiEvent=controller.get_current_event(),
/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/backends/backend_gtk4.py:160: Warning: g_object_is_floating: assertion 'G_IS_OBJECT (object)' failed
guiEvent=controller.get_current_event(),
/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/backends/backend_gtk4.py:160: Warning: g_object_ref_sink: assertion 'G_IS_OBJECT (object)' failed
guiEvent=controller.get_current_event(),
** (python3:239212): CRITICAL **: 20:31:10.373: pygobject_register_wrapper: assertion 'PyObject_TypeCheck(self, &PyGObject_Type)' failed
Fatal Python error: Segmentation fault
Current thread 0x00007029b9993000 (most recent call first):
File "/usr/lib/python3/dist-packages/gi/overrides/Gio.py", line 42 in run
File "/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/backends/_backend_gtk.py", line 206 in start_main_loop
File "/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/backend_bases.py", line 3547 in show
File "/home/enprogrammerare/.local/lib/python3.10/site-packages/matplotlib/pyplot.py", line 614 in show
File "<stdin>", line 1 in <module>
Extension modules: numpy._core._multiarray_umath, numpy.linalg._umath_linalg, PIL._imaging, kiwisolver._cext, gi._gi, cairo._cairo, gi._gi_cairo (total: 7)
Segmentation fault (core dumped)
```
### Operating system
Ubuntu 22.04.2
### Matplotlib Version
3.10.0
### Matplotlib Backend
GTK4
### Python version
3.10.12
### Jupyter version
_No response_
### Installation
pip | closed | 2024-12-19T19:34:32Z | 2025-02-13T13:36:53Z | https://github.com/matplotlib/matplotlib/issues/29350 | [] | en-programmerare | 8 |
2noise/ChatTTS | python | 173 | UnboundLocalError: local variable 'Normalizer' referenced before assignment | closed | 2024-06-01T10:01:02Z | 2024-06-24T08:29:18Z | https://github.com/2noise/ChatTTS/issues/173 | [
"bug"
] | wdyyyyyy | 10 | |
pydantic/pydantic-ai | pydantic | 973 | Dynamic agent creation (i.e. persisting system_prompt, result_type, etc in DB) | I have a use case where the users are able to create a custom agent with desired system_prompt, result_type, etc which get stored in a relational database (Postgres in my case). During the chat with the agent, the frontend just sends the agent_id and the backend grabs the rest of the information from the DB to instantiate a Pydantic AI Agent.
But I learned that Pydantic models cannot be easily serialized to JSON to be persisted in DB. What is the recommended strategy here to enable my dynamic agent creation use case?
Any guidance is appreciated! | open | 2025-02-23T17:07:49Z | 2025-02-25T17:25:17Z | https://github.com/pydantic/pydantic-ai/issues/973 | [
"need confirmation"
] | seunggs | 4 |
pydata/bottleneck | numpy | 393 | [BUG] Segmentation fault when working on a transposed numpy array with first dimension 1. | Issue raised as follow-up to: https://github.com/pydata/xarray/issues/6002
**Reproduce**
```python
import numpy as np
import bottleneck as bn
n_time = 1
spec_data = np.random.random(size=(n_time,192,121))
bn.nanmax(spec_data.transpose(0, 2, 1))
---> Segmentation fault
```
numpy.transpose returns a view, so I guess that's what causes bottleneck to segfault? Not sure, though, especially since changing the order does not trigger the segfault: spec_data.transpose(1, 0, 2)... maybe bottleneck doesn't like views with a first dimension of size 1?
**Expected behaviour**
Should not crash
**Environment**
Confirmed Windows & Linux , P37 and P39.
confirmed with:
bottleneck: 1.3.2
numpy: 1.21.4
| open | 2021-11-19T13:27:13Z | 2021-11-24T16:06:52Z | https://github.com/pydata/bottleneck/issues/393 | [
"bug"
] | RubendeBruin | 2 |
521xueweihan/HelloGitHub | python | 1,931 | Project self-recommendation | awesome-flutter-plugins | ## Project recommendation
- Project URL: https://github.com/jahnli/awesome-flutter-plugins
- Category: Flutter
- Future update plans: add new categories, keep updating
- Project description:
  - Required: collects as many useful Flutter plugins as possible to make development more efficient
  - Description length (excluding sample code): Flutter, Dart, Flutter desktop plugins
- Why recommended: easily find the Flutter plugin you want; covers most basic needs
- Screenshot:
![image](https://user-images.githubusercontent.com/21006203/137661170-9a683023-d433-4818-8c87-3eee17b80b98.png)
| closed | 2021-10-18T01:04:17Z | 2022-12-28T08:50:03Z | https://github.com/521xueweihan/HelloGitHub/issues/1931 | [] | jahnli | 4 |
mlfoundations/open_clip | computer-vision | 590 | BeamSearchScorer.process() got an unexpected keyword argument 'beam_indices | When I try to run the CoCa generation code given in the README (or colab)
I get the following error in model.generate(im)
"TypeError: BeamSearchScorer.process() got an unexpected keyword argument 'beam_indices'" | closed | 2023-08-05T04:10:05Z | 2023-08-05T23:16:12Z | https://github.com/mlfoundations/open_clip/issues/590 | [] | SachinG007 | 0 |
marshmallow-code/flask-smorest | rest-api | 51 | The error handler intercepts trailing slash redirect | I had an issue with URLs with trailing slash where flask would normally redirect.
e.g. `@blp.route('/things/')` , a request for `/things` should redirect to `/things/` by default.
Example:
```python
from flask import Flask
from flask_rest_api import Api, Blueprint
app = Flask(__name__)
app.config['OPENAPI_VERSION'] = '3.0.2'
api = Api(app)
blp = Blueprint('blp', __name__)
@blp.route('/things/')
def get_things():
return 'things'
api.register_blueprint(blp)
```
When calling the endpoint without the trailing slash you get a serialized 308 response with no Location header instead of a proper redirect response:
```
curl -i http://localhost:5001/things
HTTP/1.0 308 PERMANENT REDIRECT
Content-Type: application/json
Content-Length: 42
Server: Werkzeug/0.15.1 Python/3.6.7
Date: Tue, 26 Mar 2019 20:57:42 GMT
{"status":"308 Permanent Redirect: None"}
```
Is this expected?
I looked at the errorhandler code and saw it captures any type of `HTTPException`, but the redirect triggers a `werkzeug.routing.RequestRedirect` which is a subclass of `HTTPException` and `RoutingException`
I worked around this in my app by overriding `handle_http_exception` like this
```python
from flask_rest_api import Api as BaseApi
from werkzeug.routing import RoutingException
class Api(BaseApi):
def handle_http_exception(self, error):
# Don't serialize redirects
if isinstance(error, RoutingException):
return error
return super().handle_http_exception(error)
```
But the framework probably should only register an error handler for codes above 400.
Cheers | closed | 2019-03-26T21:08:45Z | 2020-03-10T09:37:07Z | https://github.com/marshmallow-code/flask-smorest/issues/51 | [] | steinitzu | 5 |
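The last suggestion — only registering handlers for codes of 400 and above — can be sketched with the stdlib: enumerate the 4xx/5xx codes and register handlers for just those, leaving redirect exceptions like `RequestRedirect` untouched. The `register_error_handler` loop in the comment is illustrative, not the actual flask-smorest code:

```python
from http import HTTPStatus

# Only client/server error codes; redirects (3xx) such as 308 stay with werkzeug,
# so they keep their Location header instead of being serialized.
error_codes = sorted(s.value for s in HTTPStatus if s.value >= 400)

print(308 in error_codes)  # False -> 308 PERMANENT REDIRECT is not intercepted
print(404 in error_codes)  # True
# Inside the Api class, roughly (hypothetical placement):
# for code in error_codes:
#     app.register_error_handler(code, self.handle_http_exception)
```

Flask's `app.register_error_handler(code, func)` accepts status codes directly, so per-code registration avoids catching the `HTTPException` base class altogether.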
alteryx/featuretools | scikit-learn | 2,086 | Add tests that confirm primitive input_types are the expected shapes | There are a number of assumptions we make about the shape of Primitive `input_types` lists:
- It's either a list of ColumnSchema objects or a list of lists of ColumnSchema objects (and not a combination)
- All sub-lists are the same length
- No `input_types` list or sublist is empty
As we may need to rely on these assumptions at some point, we should add tests that confirm these assumptions for all primitives, so that if we add a Primitive that breaks any of these assumptions in the future, we are notified. | open | 2022-05-19T17:48:59Z | 2024-03-31T17:06:55Z | https://github.com/alteryx/featuretools/issues/2086 | [
"good first issue"
] | tamargrey | 5 |
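The three assumptions above can be checked mechanically. A hedged sketch of such a test, written against a generic list (the real test would iterate the primitives in `featuretools.primitives` and check woodwork `ColumnSchema` instances instead of the `Schema` placeholder used here):

```python
def validate_input_types(input_types) -> None:
    """Assert the shape invariants described above for one primitive's input_types."""
    assert input_types, "input_types must not be empty"
    is_nested = [isinstance(x, list) for x in input_types]
    # Either all entries are sub-lists, or none are (no mixing).
    assert all(is_nested) or not any(is_nested), "mixed flat/nested input_types"
    if all(is_nested):
        assert all(sub for sub in input_types), "empty sub-list in input_types"
        lengths = {len(sub) for sub in input_types}
        assert len(lengths) == 1, "sub-lists have differing lengths"

class Schema:  # stand-in for woodwork's ColumnSchema
    pass

validate_input_types([Schema(), Schema()])        # flat list: ok
validate_input_types([[Schema()], [Schema()]])    # nested, equal lengths: ok
try:
    validate_input_types([[Schema()], [Schema(), Schema()]])
except AssertionError as e:
    print(e)  # sub-lists have differing lengths
```

Parametrizing this over every registered primitive would make the suite fail loudly the moment a new primitive breaks any of the invariants.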
PeterL1n/BackgroundMattingV2 | computer-vision | 70 | How to create matting datasets? | HI, What method did you use to create the dataset from raw green screen videos? Thanks | closed | 2021-03-12T03:05:09Z | 2021-03-14T07:07:47Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/70 | [] | YaoooLiang | 1 |
eriklindernoren/ML-From-Scratch | data-science | 3 | k-means: TypeError: make_blobs() got an unexpected keyword argument 'noise' | # Reproduction
```python
python unsupervised_learning/k_means.py
```
# Diagnosis
This line caused the error:
```python
X, y = datasets.make_blobs(noise=0.1)
```
Checked all scikit documentation for make_blobs:
* http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.html
* http://scikit-learn.org/0.17/modules/generated/sklearn.datasets.make_blobs.html
* http://scikit-learn.org/0.16/modules/generated/sklearn.datasets.make_blobs.html
* http://scikit-learn.org/0.15/modules/generated/sklearn.datasets.make_blobs.html
None of them has `noise` parameter.
Perhaps what's needed is `cluster_std`? | closed | 2017-02-26T11:53:06Z | 2017-02-26T11:58:27Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/3 | [] | gyosh | 1 |
lux-org/lux | pandas | 310 | [Feature Request] Geographic data support for Matplotlib | # Feature Request Description
Via #253, we now have support for geographic data types using Vegalite as the backend. We would like to extend support for this feature on [Matplotlib](https://matplotlib.org/) as well.
For example, running
```
lux.config.plotting_backend = "vegalite" # default
df = pd.read_csv("https://github.com/covidvis/covid19-vis/blob/master/data/interventionFootprintByState.csv?raw=True",index_col=0)
df
```
yields the following result:

When we configure the backend with
```
lux.config.plotting_backend = "matplotlib"
```
the output does not display for a Matplotlib backend.
## Solution
To resolve for now, we will call upon the `AltairRenderer` and warn users that the Choropleths are rendered via the Altair backend.
If users indicate enough interest, we will add a mirroring `Choropleth.py` file in `lux/vislib/matplotlib`, which will be called upon by the same `univariate` action when the backend is set to `matplotlib`. | closed | 2021-03-17T21:27:38Z | 2021-04-10T17:45:01Z | https://github.com/lux-org/lux/issues/310 | [] | micahtyong | 0 |
LibreTranslate/LibreTranslate | api | 63 | Issue with docker-compose | Hey,
Thanks for the good job! But I have an issue with docker-compose: when I use `docker-compose up -d --build` the container launches perfectly:

but when I go to http://localhost:5000 there is nothing.
However, with `docker run -ti --rm -p 5000:5000 libretranslate/libretranslate` it works.
I would like to know if I made something wrong knowing that I have the latest version. Docker-compose is very important for my deployment configuration.
Thank you in advance ! | closed | 2021-03-14T19:03:29Z | 2024-01-25T09:20:35Z | https://github.com/LibreTranslate/LibreTranslate/issues/63 | [] | ThomasBossuat | 9 |
jackzhenguo/python-small-examples | tensorflow | 21 | 关于python之基第四个例子**ascii展示对象** | # 问题:python之基第四个例子**ascii展示对象**,在定义Student类后,在第二步直接使用print对新手不友好。
# 建议:把创建实例的步骤补上
` xiaoming = Student('001', 'xiaoming')` | closed | 2019-12-18T03:01:08Z | 2019-12-19T09:33:03Z | https://github.com/jackzhenguo/python-small-examples/issues/21 | [] | 0xffm1 | 3 |
huggingface/transformers | tensorflow | 36,134 | 'MERTConfig' object has no attribute 'conv_pos_batch_norm' | ### System Info
https://huggingface.co/m-a-p/MERT-v1-95M
This model works fine on transformers==4.47.1
But starting with 4.48.0 (tried up to 4.48.3) this error is seen:
'MERTConfig' object has no attribute 'conv_pos_batch_norm'
@ylacombe, @eustlb could you please take a look..
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to repro as given in the page https://huggingface.co/m-a-p/MERT-v1-95M :
```
# from transformers import Wav2Vec2Processor
from transformers import Wav2Vec2FeatureExtractor
from transformers import AutoModel
import torch
from torch import nn
import torchaudio.transforms as T
from datasets import load_dataset
# loading our model weights
model = AutoModel.from_pretrained("m-a-p/MERT-v1-95M", trust_remote_code=True)
# loading the corresponding preprocessor config
processor = Wav2Vec2FeatureExtractor.from_pretrained("m-a-p/MERT-v1-95M",trust_remote_code=True)
# load demo audio and set processor
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
resample_rate = processor.sampling_rate
# make sure the sample_rate aligned
if resample_rate != sampling_rate:
print(f'setting rate from {sampling_rate} to {resample_rate}')
resampler = T.Resample(sampling_rate, resample_rate)
else:
resampler = None
# audio file is decoded on the fly
if resampler is None:
input_audio = dataset[0]["audio"]["array"]
else:
input_audio = resampler(torch.from_numpy(dataset[0]["audio"]["array"]))
inputs = processor(input_audio, sampling_rate=resample_rate, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs, output_hidden_states=True)
# take a look at the output shape, there are 13 layers of representation
# each layer performs differently in different downstream tasks, you should choose empirically
all_layer_hidden_states = torch.stack(outputs.hidden_states).squeeze()
print(all_layer_hidden_states.shape) # [13 layer, Time steps, 768 feature_dim]
# for utterance level classification tasks, you can simply reduce the representation in time
time_reduced_hidden_states = all_layer_hidden_states.mean(-2)
print(time_reduced_hidden_states.shape) # [13, 768]
# you can even use a learnable weighted average representation
aggregator = nn.Conv1d(in_channels=13, out_channels=1, kernel_size=1)
weighted_avg_hidden_states = aggregator(time_reduced_hidden_states.unsqueeze(0)).squeeze()
print(weighted_avg_hidden_states.shape) # [768]
```
### Expected behavior
Should give outputs as mentioned in the official page | closed | 2025-02-11T14:43:25Z | 2025-03-14T12:59:29Z | https://github.com/huggingface/transformers/issues/36134 | [
"bug"
] | Timothy-John | 2 |
zappa/Zappa | flask | 1,041 | Lambda update fails with ResourceConflictException | ## Context
Since today the `zappa update` method fails with the following error:
```
Downloading and installing dependencies..
Packaging project as zip.
Uploading ***********-1631808391.tar.gz (50.2MiB)..
100% 52.6M/52.6M [00:00<00:00, 101MB/s]
Uploading handler_***********-1631808473.zip (14.5MiB)..
100% 15.2M/15.2M [00:00<00:00, 43.1MB/s]
Updating Lambda function code..
Updating Lambda function configuration..
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "/root/repo/venv/lib/python3.6/site-packages/zappa/cli.py", line 2785, in handle
sys.exit(cli.handle())
File "/root/repo/venv/lib/python3.6/site-packages/zappa/cli.py", line 510, in handle
self.dispatch_command(self.command, stage)
File "/root/repo/venv/lib/python3.6/site-packages/zappa/cli.py", line 557, in dispatch_command
self.update(self.vargs['zip'], self.vargs['no_upload'])
File "/root/repo/venv/lib/python3.6/site-packages/zappa/cli.py", line 975, in update
aws_kms_key_arn=self.aws_kms_key_arn,
File "/root/repo/venv/lib/python3.6/site-packages/zappa/core.py", line 1203, in update_lambda_configuration
'Mode': 'Active' if self.xray_tracing else 'PassThrough'
File "/root/repo/venv/lib/python3.6/site-packages/botocore/client.py", line 386, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/root/repo/venv/lib/python3.6/site-packages/botocore/client.py", line 705, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceConflictException: An error occurred (ResourceConflictException) when calling the UpdateFunctionConfiguration operation: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-********:function:***********
==============
```
It seems like lambda introduced `function states` and the state needs to be checked during deployment.
https://forums.aws.amazon.com/thread.jspa?messageID=995846󳈆
In the error above this means that the state needs to be checked **after** _Updating Lambda function code_ and **before** _Updating Lambda function configuration_
Is there already a fix for this issue?
| closed | 2021-09-16T16:27:34Z | 2022-03-31T03:28:41Z | https://github.com/zappa/Zappa/issues/1041 | [] | illing2005 | 18 |
mckinsey/vizro | data-visualization | 473 | Consider adding a file uploader widget | ### Which package?
vizro
### What's the problem this feature will solve?
I am building a small pet project on Vizro now and ran into a limitation that there is no file upload widget. Per [this conversation (private chat)](https://mckinsey-hub.slack.com/archives/C02HQNRQYF2/p1710428365966559) I see that it is possible to achieve this functionality even now, but the workaround implies writing a lot of custom code.
I'm opening this issue to test the hypothesis that maybe such widget is worth being added to the package.
Hard to say without user research, but my feeling is that this feature might expand Vizro usability quite a lot. Because it will allow Vizro to be used in a whole new type of applications which are about **_displaying user-supplied data of known schema_**. In addition to a use case where Vizro already shines at, which is **_displaying pre-defined data from known source_**
### Describe the solution you'd like
Something as simple as [Streamlit version of that](https://docs.streamlit.io/develop/api-reference/widgets/st.file_uploader).
### Alternative Solutions
However I understand that Streamlit can provide such simple widget definition because of its script-like syntax where a widget is just assigned to a variable that should be collected from it. While Vizro features a completely different design pattern that requires users to explicitly build a `Page` object etc. And what makes it even more complex is that Vizro solution should have a YAML API to define that widget in addition to the Python workflow.
### Additional context
NA
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | open | 2024-05-12T17:49:40Z | 2024-05-17T01:31:14Z | https://github.com/mckinsey/vizro/issues/473 | [
"Feature Request :nerd_face:"
] | yury-fedotov | 2 |
plotly/dash | plotly | 2,348 | [Feature Request] Callback Errors on production environments with Debug=False | **Is your feature request related to a problem? Please describe.**
When errors occur during callbacks on a production environment, with debug=False. This breaks the production and more than likely restarts the worker.
**Describe the solution you'd like**
When starting up the server, allow for a mailbox item like smtp to be passed in order to send error alerts with traceback messages to the identified email address. ie `app.run(mail=mail, error_to=emailaddress)` Then, when an error occurs, raise PreventUpdate or some other standard error message, along the lines of "IT has been notified of the error that just occurred."
**Describe alternatives you've considered**
Wrapping all callbacks with the try except clause where an alert is sent to my email.
| closed | 2022-12-01T18:22:09Z | 2024-07-11T14:23:02Z | https://github.com/plotly/dash/issues/2348 | [] | BSd3v | 2 |
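The try/except workaround described above generalizes to one decorator shared by every callback. Below is a sketch with the notifier injected as a callable — in production it would send mail via `smtplib`/Flask-Mail, and the fallback could instead raise `dash.exceptions.PreventUpdate`:

```python
import functools
import traceback

def notify_on_error(notifier, fallback=None):
    """Wrap a callback: on any exception, send the traceback to `notifier`
    and return `fallback` instead of crashing the worker."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                notifier(traceback.format_exc())
                return fallback
        return wrapper
    return decorator

sent = []  # stand-in for an email outbox

@notify_on_error(notifier=sent.append, fallback="IT has been notified of the error.")
def update_figure(value):
    return 1 / value  # raises ZeroDivisionError for value == 0

print(update_figure(2))   # 0.5
print(update_figure(0))   # IT has been notified of the error.
print("ZeroDivisionError" in sent[0])  # True
```

Because `functools.wraps` preserves the function's name and signature metadata, the decorator stacks cleanly under `@app.callback(...)`.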
autokey/autokey | automation | 250 | Autokey could not start because can't create local folder | ## Classification:
Bug
## Reproducibility:
Always
## Version
AutoKey version: `0.95.2`
Used GUI (Gtk, Qt, or both): both
Installed via: PPA
- http://mxrepo.com/mx/testrepo/pool/test/a/autokey
Linux Distribution: MX Linux MX-17.1 (aka Debian 9.x Stretch)
## Summary
*Autokey* can't create folder `~/.local/share/autokey/` and can't create file `~/.local/share/autokey/autokey.log`
## Actual Results
### Terminal log
```
$ autokey-qt
Traceback (most recent call last):
File "/usr/bin/autokey-qt", line 11, in <module>
load_entry_point('autokey==0.95.2', 'console_scripts', 'autokey-qt')()
File "/usr/lib/python3/dist-packages/autokey/qtapp.py", line 110, in __init__
self._configure_root_logger()
File "/usr/lib/python3/dist-packages/autokey/qtapp.py", line 169, in _configure_root_logger
backupCount=common.MAX_LOG_COUNT
File "/usr/lib/python3.5/logging/handlers.py", line 150, in __init__
BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
File "/usr/lib/python3.5/logging/handlers.py", line 57, in __init__
logging.FileHandler.__init__(self, filename, mode, encoding, delay)
File "/usr/lib/python3.5/logging/__init__.py", line 1009, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib/python3.5/logging/__init__.py", line 1038, in _open
return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: '/home/me/.local/share/autokey/autokey.log'
```
## Notes
If manually create folder `~/.local/share/autokey/` — then *Autokey* started normally. | closed | 2019-02-07T20:25:15Z | 2019-02-10T22:38:35Z | https://github.com/autokey/autokey/issues/250 | [
"duplicate"
] | ghost | 7 |
yzhao062/pyod | data-science | 446 | How many supervised algorithms are? | Hi everyone,
I have a dataset labeled with normal samples and outliers.
As far as I can see, only `pyod.models.xgbod.XGBOD` supports this configuration.
Are there more supervised algorithms supported?
Thanks in advance! 😄 | open | 2022-10-19T10:51:53Z | 2022-10-19T14:08:12Z | https://github.com/yzhao062/pyod/issues/446 | [] | JNaranjo-Alcazar | 1 |