repo_name: string, length 9–75
topic: string, 30 classes
issue_number: int64, 1–203k
title: string, length 1–976
body: string, length 0–254k
state: string, 2 classes
created_at: string, length 20
updated_at: string, length 20
url: string, length 38–105
labels: list, length 0–9
user_login: string, length 1–39
comments_count: int64, 0–452
dynaconf/dynaconf
flask
203
Document the use of pytest with dynaconf
For testing in my project I want to add something like this to my conftest.py:

```
import pytest
import os

@pytest.fixture(scope='session', autouse=True)
def settings():
    os.environ['ENV_FOR_DYNACONF'] = 'testing'
```

But this does not work ;-(. What can you advise? I don't want to start my tests like `ENV_FOR_DYNACONF=testing pytest`, because somebody could miss that command prefix and mess up some dev data.
closed
2019-08-08T10:41:39Z
2020-02-26T18:04:26Z
https://github.com/dynaconf/dynaconf/issues/203
[ "enhancement", "question", "Docs", "good first issue" ]
dyens
12
ading2210/poe-api
graphql
27
custom bots?
Poe added a new feature, custom bots. How can I access them with the API? ![image](https://user-images.githubusercontent.com/80632449/230710993-b72f9379-ebd2-4bd0-ac02-890a446b4629.png)
closed
2023-04-08T08:09:13Z
2023-04-10T06:47:31Z
https://github.com/ading2210/poe-api/issues/27
[ "bug" ]
dm-vev
4
Evil0ctal/Douyin_TikTok_Download_API
api
547
After `pip install f2`, the `f2` command is not available on the command line. Do I need to configure PATH?
After `pip install f2`, the `f2` command is not available on the command line. Do I need to configure PATH?
closed
2025-02-07T03:05:21Z
2025-02-07T05:42:21Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/547
[]
louis1027
1
serengil/deepface
deep-learning
1,419
[BUG]: is this an issue with deepface running as a container?
### Before You Report a Bug, Please Confirm You Have Done The Following...

- [X] I have updated to the latest version of the packages.
- [X] I have searched for both [existing issues](https://github.com/serengil/deepface/issues) and [closed issues](https://github.com/serengil/deepface/issues?q=is%3Aissue+is%3Aclosed) and found none that matched my issue.

### DeepFace's version

0.0.94

### Python version

3.8.12

### Operating System

debian 12

### Dependencies

```
root@1cfe87051553:/app/deepface/api/src# pip freeze
absl-py==2.1.0 astunparse==1.6.3 beautifulsoup4==4.12.3 blinker==1.8.2 cachetools==5.5.0
certifi==2024.12.14 charset-normalizer==3.4.1 click==8.1.8
# Editable install with no version control (deepface==0.0.94)
-e /app
filelock==3.16.1 fire==0.7.0 Flask==3.0.3 Flask-Cors==5.0.0 flatbuffers==24.12.23 gast==0.4.0
gdown==5.2.0 google-auth==2.37.0 google-auth-oauthlib==1.0.0 google-pasta==0.2.0 grpcio==1.68.1
gunicorn==23.0.0 h5py==3.11.0 idna==3.10 importlib_metadata==8.5.0 itsdangerous==2.2.0 Jinja2==3.1.5
keras==2.13.1 libclang==18.1.1 Markdown==3.7 MarkupSafe==2.1.5 mtcnn==0.1.1 numpy==1.22.3
oauthlib==3.2.2 opencv-python==4.9.0.80 opt_einsum==3.4.0 packaging==24.2 pandas==2.0.3 Pillow==9.0.0
protobuf==4.25.5 pyasn1==0.6.1 pyasn1_modules==0.4.1 PySocks==1.7.1 python-dateutil==2.9.0.post0
pytz==2024.2 requests==2.32.3 requests-oauthlib==2.0.0 retina-face==0.0.17 rsa==4.9 six==1.17.0
soupsieve==2.6 tensorboard==2.13.0 tensorboard-data-server==0.7.2 tensorflow==2.13.1
tensorflow-estimator==2.13.0 tensorflow-io-gcs-filesystem==0.34.0 termcolor==2.4.0 tqdm==4.67.1
typing_extensions==4.5.0 tzdata==2024.2 urllib3==2.2.3 Werkzeug==3.0.6 wrapt==1.17.0 zipp==3.20.2
root@1cfe87051553:/app/deepface/api/src#
```

### Reproducible example

```Dockerfile
# Base image
FROM python:3.8.12

LABEL org.opencontainers.image.source https://github.com/serengil/deepface

# Create required folder
RUN mkdir -p /app && chown -R 1001:0 /app
RUN mkdir /app/deepface

# Switch to application directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    wget \
    ffmpeg \
    libsm6 \
    libxext6 \
    libhdf5-dev \
    && rm -rf /var/lib/apt/lists/*

# Add NVIDIA repositories and install CUDA toolkit and cuDNN
RUN wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb && \
    dpkg -i cuda-keyring_1.1-1_all.deb && \
    apt-get update && apt-get install -y \
    cuda-toolkit-12-6 \
    cudnn \
    cudnn-cuda-12 \
    && rm -rf /var/lib/apt/lists/*

# Copy application files
COPY ./deepface /app/deepface
COPY ./requirements.txt /app/requirements.txt
COPY ./requirements_local /app/requirements_local.txt
COPY ./package_info.json /app/
COPY ./setup.py /app/
COPY ./README.md /app/
COPY ./entrypoint.sh /app/deepface/api/src/entrypoint.sh

# Upgrade pip
RUN pip install --no-cache-dir --upgrade pip

# Install TensorFlow with GPU support
RUN pip install --no-cache-dir tensorflow[and-cuda]

# Install DeepFace dependencies
RUN pip install --no-cache-dir -r /app/requirements_local.txt
RUN pip install --no-cache-dir -e .

# Environment variables
ENV PYTHONUNBUFFERED=1

# Configure app directory and port
WORKDIR /app/deepface/api/src
EXPOSE 5100
ENTRYPOINT [ "sh", "entrypoint.sh" ]
```

### Relevant Log Output

```
junjun@i-c-u:/opt/deepface$ docker exec -it deepface-gpu bash
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
root@1cfe87051553:/app/deepface/api/src# python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2025-01-04 17:28:39.830945: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2025-01-04 17:28:39.883713: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2025-01-04 17:28:39.884144: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-04 17:28:41.048806: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2025-01-04 17:28:42.174258: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2025-01-04 17:28:42.227964: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices...
[]
```

### Expected Result

gpu detected and used

### What happened instead?

container will use the cpu

### Additional Info

other containers are happily using the gpu.

```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.05              Driver Version: 560.35.05      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     Off |   00000000:0E:00.0 Off |                  N/A |
| 45%   63C    P2             82W /  280W |    7805MiB /  11264MiB |     23%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      9727      C   ...in/ubuntu/python38/venv/bin/python3       1334MiB |
|    0   N/A  N/A     10034      C   ...in/ubuntu/python38/venv/bin/python3        728MiB |
|    0   N/A  N/A     48866      C   ...in/ubuntu/python38/venv/bin/python3        890MiB |
|    0   N/A  N/A   1349422      C   /usr/bin/python                               830MiB |
|    0   N/A  N/A   1710368      C   ffmpeg                                        424MiB |
|    0   N/A  N/A   1710373      C   ffmpeg                                        424MiB |
|    0   N/A  N/A   1710381      C   ffmpeg                                        528MiB |
|    0   N/A  N/A   1710384      C   ffmpeg                                        528MiB |
|    0   N/A  N/A   2037053      C   ffmpeg                                        418MiB |
|    0   N/A  N/A   2037056      C   ffmpeg                                        424MiB |
|    0   N/A  N/A   2037058      C   ffmpeg                                        424MiB |
|    0   N/A  N/A   2037243      C   ffmpeg                                        424MiB |
|    0   N/A  N/A   2037246      C   ffmpeg                                        424MiB |
+-----------------------------------------------------------------------------------------+
```

is this issue present in baremetal installs or lxd? just trying to understand if this is an issue with deepface running in a container.
closed
2025-01-04T17:40:37Z
2025-01-04T17:54:51Z
https://github.com/serengil/deepface/issues/1419
[ "bug", "dependencies" ]
levski
1
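The "Could not find cuda drivers" line in the log above is also the symptom one sees when a container is simply not started with GPU access. A hedged check, assuming the NVIDIA Container Toolkit is installed on the host; the image tag `deepface-gpu` is an assumption taken from the container name in the log:

```shell
# Verify the container can see the GPU at all; --gpus all requires the
# NVIDIA Container Toolkit on the host (a plain `docker run` exposes no GPU).
docker run --rm --gpus all deepface-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

# Sanity check that the runtime passes the device through:
docker run --rm --gpus all deepface-gpu nvidia-smi
```

If `nvidia-smi` fails inside the container while working on the host, the problem is the container runtime configuration rather than deepface itself.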
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,163
cal_gradient_penalty for WGANGP not used?
Hello, I have a question regarding your implementation of WGAN-GP based pix2pix. It seems you just used the WGAN loss:

```
elif self.gan_mode == 'wgangp':
    if target_is_real:
        loss = -prediction.mean()
    else:
        loss = prediction.mean()
```

but the gradient penalty term has not been used, i.e., the `cal_gradient_penalty` function in networks.py has never been called for the pix2pix implementation. Would you please let me know why? Thank you very much!
open
2020-10-11T18:13:27Z
2020-10-15T22:25:08Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1163
[]
zhulingchen
7
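For context, the penalty the question refers to is the gradient-penalty term from the WGAN-GP formulation (Gulrajani et al.); the full critic loss adds it to the Wasserstein terms quoted above:

```latex
% WGAN-GP critic objective: Wasserstein loss plus gradient penalty,
% with \hat{x} sampled on lines between real and generated pairs.
L_D = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\left[D(\tilde{x})\right]
    - \mathbb{E}_{x\sim\mathbb{P}_r}\left[D(x)\right]
    + \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}
      \left[\left(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\right)^2\right]
```

Without the third term, the snippet in the issue implements only the first two (plain WGAN), which is exactly what the reporter observed.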
gradio-app/gradio
machine-learning
9,910
`allow_preview=False` not working as expected in custom Gallery class
### Describe the bug

I'm using a custom `Gallery` class that inherits from `gr.Gallery` in my Gradio application. I've set the `allow_preview` parameter to `False` when creating the `Gallery` instance, but the preview functionality is still being triggered when I click on the images.

### Have you searched existing issues? 🔎

- [X] I have searched and found no existing issues

### Reproduction

1. Create a Gradio application with the following code:

```python
if __name__ == "__main__":
    class Gallery(gr.Gallery):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)

    with gr.Blocks() as demo:
        gallery = Gallery(allow_preview=False)
        imgs = gr.State()
        print(gallery.allow_preview)
        demo.load(lambda: ["hello.png"], None, gallery)
    demo.launch()
```

2. Run it, and observe that the `print(gallery.allow_preview)` statement outputs `False`, indicating that the `allow_preview` parameter is being set correctly.
3. Click on one of the images in the gallery.
4. Observe that the image is still being previewed, despite the `allow_preview=False` setting.

### Screenshot

_No response_

### Logs

_No response_

### System Info

```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.1
pandas: 2.2.3
pillow: 10.4.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.2
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.12.5
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.

gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```

### Severity

I can work around it
closed
2024-11-06T14:56:23Z
2024-11-13T13:19:31Z
https://github.com/gradio-app/gradio/issues/9910
[ "bug", "pending clarification" ]
zhristophe
3
quantumlib/Cirq
api
7,028
The prerelease CI has been failing for the past couple of days
The error doesn't happen while building cirq but at the last step, when it tries to upload to PyPI: https://github.com/quantumlib/Cirq/actions/runs/13083990280/job/36512589142

![Image](https://github.com/user-attachments/assets/8b39f3a7-5ec3-4185-a27b-7a99b0deaacb)
![Image](https://github.com/user-attachments/assets/f5faea1a-ee71-4321-96d6-f41286df18e3)

cc: @pavoljuhas
closed
2025-02-04T03:00:44Z
2025-02-05T19:42:36Z
https://github.com/quantumlib/Cirq/issues/7028
[ "kind/health", "triage/accepted", "priority/p1" ]
NoureldinYosri
1
litl/backoff
asyncio
206
Add support for globally disabling retries
It would be very useful to be able to disable retries (and sleeping, etc.) in tests, similar to how it's supported by stamina: https://stamina.hynek.me/en/stable/tutorial.html#deactivating-retries-globally
open
2023-08-23T16:56:25Z
2024-09-25T07:27:20Z
https://github.com/litl/backoff/issues/206
[]
rouge8
2
deepinsight/insightface
pytorch
2,293
Partial FC v1&v2
Hi, does Partial FC v2 have any improvement in "training speed" or "GPU memory cost" over v1? Thanks!
open
2023-05-09T13:37:11Z
2023-05-09T13:37:11Z
https://github.com/deepinsight/insightface/issues/2293
[]
abcsimple
0
InstaPy/InstaPy
automation
6,307
Blocking users with InstaPy
I want to create an Instagram bot that blocks and/or removes a bunch of accounts under certain circumstances. I've noticed that my engagement rate goes up if I remove users without profile pictures or with profile pictures that are just the color black. I'd like to block them so they can't follow me back. Is there a way I can do this? Thanks for your time!
open
2021-09-06T19:50:34Z
2021-09-06T19:50:34Z
https://github.com/InstaPy/InstaPy/issues/6307
[]
Amartell15
0
openapi-generators/openapi-python-client
rest-api
680
Support for caching
Documentation is sparse and I do not see a way to enable caching for clients generated with openapi-python-client. Many APIs that leverage OpenAPI will also support content caching and send headers for TTL. When using the same openapi-python-client object to make the same calls, I notice it is making a new request and not serving from a cache. Does this client have support for caching at all? What would be the best way to enable caching with the Cache-Control and Expires headers from the responses?
closed
2022-10-02T08:06:19Z
2022-10-02T18:26:30Z
https://github.com/openapi-generators/openapi-python-client/issues/680
[ "👎 wont do" ]
ekrekeler
2
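openapi-python-client does not advertise built-in caching, but the Cache-Control handling the question asks about is simple to layer on top of the generated client. A minimal sketch; the names `parse_max_age` and `TTLCache` are hypothetical helpers, not part of the generated client:

```python
import re
import time


def parse_max_age(cache_control):
    """Extract the max-age value (in seconds) from a Cache-Control header."""
    m = re.search(r"max-age=(\d+)", cache_control or "")
    return int(m.group(1)) if m else None


class TTLCache:
    """Tiny in-memory response cache keyed by URL, honoring a TTL."""

    def __init__(self):
        self._store = {}

    def get(self, key, now=None):
        """Return the cached value if it has not expired, else None."""
        now = time.monotonic() if now is None else now
        hit = self._store.get(key)
        return hit[0] if hit is not None and hit[1] > now else None

    def put(self, key, value, ttl_seconds, now=None):
        """Store a value that expires ttl_seconds from now."""
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now + ttl_seconds)
```

A thin wrapper around the generated client's call sites could consult the cache first, and `put` any response whose headers carry `max-age`, which gives the TTL behavior the reporter describes without changes to the generator.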
dagster-io/dagster
data-science
27,888
Asset partition [partition] depends on invalid partition keys. Issue with using multiple dynamic partitions definitions.
### What's the issue?

I get the error `dagster._core.errors.DagsterInvariantViolationError: Asset partition AssetKeyPartitionKey(asset_key=AssetKey(['processed_file_chunk']), partition_key='file_1_chunk_2') depends on invalid partition keys {AssetKeyPartitionKey(asset_key=AssetKey(['file_chunks']), partition_key='file_1_chunk_2')}` for the provided reproduction code.

### What did you expect to happen?

Proper processing of data from partitions in downstream assets.

### How to reproduce?

```python
import dagster as dg
from dagster import AssetExecutionContext, DynamicPartitionsDefinition

partition_a = DynamicPartitionsDefinition(name="partition_a")
partition_b = DynamicPartitionsDefinition(name="partition_b")


@dg.asset
def files(context: AssetExecutionContext) -> dict:
    """Creates partitions dynamically for files."""
    data = {"file_1": "content_1", "file_2": "content_2"}
    # Dynamically register files as partitions
    context.instance.add_dynamic_partitions(
        partitions_def_name="partition_a",
        partition_keys=list(data.keys()),
    )
    return data  # {file_1: content_1, file_2: content_2}


@dg.asset(partitions_def=partition_a)
def file_chunks(context: AssetExecutionContext, files) -> dict:
    """Processes a file partition into multiple chunks."""
    file_key = context.partition_key  # Example: "file_1"
    file_content = files[file_key]  # Get file data
    print(f"Processing file: {file_key} -> {file_content}")
    # Simulate breaking file into chunks
    chunks = {f"{file_key}_chunk_1": "chunk_data_1", f"{file_key}_chunk_2": "chunk_data_2"}
    # Dynamically register chunk partitions
    context.instance.add_dynamic_partitions(
        partitions_def_name="partition_b",
        partition_keys=list(chunks.keys()),
    )
    return chunks  # Example: {"file_1_chunk_1": "chunk_data_1", "file_1_chunk_2": "chunk_data_2"}


@dg.asset(partitions_def=partition_b)
def processed_file_chunk(context: AssetExecutionContext, file_chunks) -> None:
    """Processes chunks dynamically."""
    chunk_key = context.partition_key  # Example: "file_1_chunk_1"
    print(f"Processing chunk: {chunk_key} -> {file_chunks[chunk_key]}")
```

### Dagster version

1.10.1

### Deployment type

Local

### Deployment details

_No response_

### Additional information

When the partitioned data is identical in each asset there is no error.

### Message from the maintainers

Impacted by this issue? Give it a 👍! We factor engagement into prioritization.
open
2025-02-18T10:57:14Z
2025-02-18T15:45:06Z
https://github.com/dagster-io/dagster/issues/27888
[ "type: bug" ]
vlreinier
0
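The error message itself shows what goes wrong: the downstream partition `file_1_chunk_2` is looked up under the same key in the upstream asset's partitions, where it does not exist, so the default identity mapping between the two dynamic partition sets fails. Whatever mapping mechanism one ends up using, the chunk-to-file relationship in this repro is recoverable from the key convention alone; a sketch (the function name is hypothetical):

```python
def parent_file_key(chunk_key):
    """Map a chunk partition key back to the file partition that produced it,
    assuming the "<file>_chunk_<n>" naming convention from the repro above."""
    return chunk_key.rsplit("_chunk_", 1)[0]
```

Any non-identity partition mapping between `partition_b` and `partition_a` would need to encode exactly this relationship.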
newpanjing/simpleui
django
146
Custom application name not taking effect?
**Bug description**

After registering the model, the application name shown in the admin is Otc, and I don't know how to change it. I tried setting the app's verbose_name, but it did not take effect.

**Steps to reproduce**

class OtcConfig(AppConfig):
    name = 'otc'
    verbose_name = '场外交易'

**Environment**

1. Operating system: macOS
2. Python version: 3.6.7
3. Django version: 2.2
4. simpleui version: latest

**Other notes**
closed
2019-09-04T08:51:51Z
2020-10-26T08:08:45Z
https://github.com/newpanjing/simpleui/issues/146
[ "bug" ]
danvinhe
2
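In Django 2.2, a `verbose_name` defined on an `AppConfig` only takes effect if Django actually uses that config class. A sketch of the usual wiring, assuming the app package is named `otc` as in the report:

```python
# otc/apps.py
from django.apps import AppConfig


class OtcConfig(AppConfig):
    name = 'otc'
    verbose_name = '场外交易'


# otc/__init__.py -- point Django at the config class (the pre-3.2 mechanism):
# default_app_config = 'otc.apps.OtcConfig'

# ...or reference the config class explicitly in settings.py instead:
# INSTALLED_APPS = [..., 'otc.apps.OtcConfig', ...]
```

If `INSTALLED_APPS` lists only `'otc'` and neither mechanism is in place, Django falls back to a default `AppConfig` and the admin shows the auto-generated name, which matches the behavior described.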
Anjok07/ultimatevocalremovergui
pytorch
1,786
I don't even know what this error is, I'm just going to report it
Last Error Received:

Process: VR Architecture

If this error persists, please contact the developers with the error details.

Raw Error Details:

```
MemoryError: "Unable to allocate 1.58 GiB for an array with shape (212462880,) and data type float64"

Traceback Error: "
  File "UVR.py", line 9217, in process_start
  File "separate.py", line 1297, in seperate
  File "separate.py", line 1427, in spec_to_wav
  File "lib_v5\spec_utils.py", line 333, in cmb_spectrogram_to_wave
  File "lib_v5\spec_utils.py", line 289, in spectrogram_to_wave
  File "librosa\util\decorators.py", line 88, in inner_f
    return f(*args, **kwargs)
  File "librosa\core\spectrum.py", line 399, in istft
    y = np.zeros(shape, dtype=dtype)
"
```

Error Time Stamp [2025-03-20 19:42:11]

Full Application Settings:

```
vr_model: UVR-DeNoise
aggression_setting: 5
window_size: 320
mdx_segment_size: Default
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 2
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: True
is_use_torch_inference_mode: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_save_to_input_path: False
apollo_overlap: 2
apollo_chunk_size: 5
apollo_model: Choose Model
is_task_complete: False
is_normalization: False
is_use_directml: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: FLAC
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
```

Patch Version: UVR_Patch_1_15_25_22_30_BETA
open
2025-03-20T13:28:08Z
2025-03-20T13:28:08Z
https://github.com/Anjok07/ultimatevocalremovergui/issues/1786
[]
nwordexe
0
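The reported allocation size is internally consistent, which suggests a genuine out-of-memory condition rather than a corrupted request: 212,462,880 float64 values at 8 bytes each is 1.58 GiB. A quick check of the arithmetic:

```python
# Verify the MemoryError's numbers: shape (212462880,), dtype float64.
n_values = 212_462_880
bytes_needed = n_values * 8   # float64 is 8 bytes per value
gib = bytes_needed / 2**30    # 1 GiB = 2**30 bytes
print(round(gib, 2))          # matches the 1.58 GiB in the error
```

Halving precision (float32) or processing the audio in shorter windows would halve or bound this allocation, which is the usual workaround for ISTFT memory errors like this one.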
pytorch/pytorch
deep-learning
149,799
bug in pytorch/torch/nn/parameter:
### 🐛 Describe the bug

```python
class UninitializedBuffer(UninitializedTensorMixin, torch.Tensor):
    r"""A buffer that is not initialized.

    Uninitialized Buffer is a special case of :class:`torch.Tensor`
    where the shape of the data is still unknown.

    Unlike a :class:`torch.Tensor`, uninitialized parameters hold no data
    and attempting to access some properties, like their shape, will throw
    a runtime error. The only operations that can be performed on a
    uninitialized parameter are changing its datatype, moving it to a
    different device and converting it to a regular :class:`torch.Tensor`.

    The default device or dtype to use when the buffer is materialized can
    be set during construction using e.g. ``device='cuda'``.
    """

    cls_to_become = torch.Tensor

    def __new__(
        cls, requires_grad=False, device=None, dtype=None, persistent=True
    ) -> None:
        factory_kwargs = {"device": device, "dtype": dtype}
        data = torch.empty(0, **factory_kwargs)
        ret = torch.Tensor._make_subclass(cls, data, requires_grad)
        ret.persistent = persistent
        ret._is_buffer = True
        return ret  # ret is not None -- probable issue here


# suggested fix:
class UninitializedBuffer(UninitializedTensorMixin, torch.Tensor):
    # as it is
    def __new__(cls, requires_grad=False, device=None, dtype=None, persistent=True):
        factory_kwargs = {"device": device, "dtype": dtype}
        data = torch.empty(0, **factory_kwargs)
        # Ensure we are subclassing correctly
        ret = super().__new__(cls, data, requires_grad)
        # Set attributes
        ret.persistent = persistent
        ret._is_buffer = True
        return ret  # avoid annotating __new__ as returning None
```

### Versions

```
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
closed
2025-03-22T08:46:52Z
2025-03-24T16:21:31Z
https://github.com/pytorch/pytorch/issues/149799
[]
said-ml
1
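For what it's worth, the `-> None` annotation on `__new__` flagged above is a static-typing inaccuracy only; at runtime Python uses whatever `__new__` returns, regardless of the annotation. A minimal self-contained demonstration (the class is illustrative, not from torch):

```python
class Meters(float):
    """Tiny subclass showing that __new__ determines the constructed object."""

    def __new__(cls, value):
        # The object returned here, not any annotation, is what the
        # constructor call evaluates to.
        ret = super().__new__(cls, value)
        ret.unit = "m"  # extra attribute set on the instance before returning
        return ret
```

So the reported mismatch would confuse type checkers but not change behavior, which is consistent with the issue being closed as a non-bug.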
PrefectHQ/prefect
data-science
17,481
Flow Run time is displaying (1m 60s) instead of (2m 00s)
### Bug summary In the flow run, I am getting 1 minute and 60 seconds instead of 2 minutes. Small thing but figured I should share. ### Version info ```Text Version: 3.2.12 API version: 0.8.4 Python version: 3.12.3 Git commit: 826eb1a7 Built: Mon, Mar 10, 2025 4:36 PM OS/Arch: darwin/arm64 Profile: default Server type: cloud Pydantic version: 2.10.6 Integrations: prefect-gcp: 0.6.4 ``` ### Additional context _No response_
closed
2025-03-14T20:10:38Z
2025-03-14T20:16:31Z
https://github.com/PrefectHQ/prefect/issues/17481
[ "bug" ]
matthewkrausse
1
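The "1m 60s" symptom above is the classic round-then-split ordering bug: rounding the seconds remainder after splitting out minutes can yield 60. A sketch of both orderings (the function names are hypothetical, not Prefect's code):

```python
def format_duration_buggy(seconds):
    """Rounds the remainder after splitting: 119.6 s renders as '1m 60s'."""
    minutes = int(seconds // 60)
    secs = round(seconds % 60)   # 59.6 rounds up to 60
    return f"{minutes}m {secs}s"


def format_duration_fixed(seconds):
    """Rounds first, then splits: 119.6 s renders as '2m 0s'."""
    minutes, secs = divmod(round(seconds), 60)
    return f"{minutes}m {secs}s"
```

Rounding to whole seconds before `divmod` guarantees the seconds field stays in 0..59.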
jupyter/nbviewer
jupyter
513
Unable to export to pdf
When I try to run `ipython nbconvert —to pdf Creating_and_configuring_CERN_VM_for_ATLAS.ipynb` ([gist](http://nbviewer.ipython.org/gist/wsfreund/7e57bde9e08a9f7c4c46/)), I get the following error:

```
Underfull \hbox (badness 10000) in paragraph at lines 597--599
[]\OT1/cmr/m/n/10 ======================\OML/cmm/m/it/10 > \OT1/cmr/m/n/10 TeX Live in-stal-la-tion pro-ce-dure [6] [7] [8] [9]
! Missing \endcsname inserted.
<to be read again>
\&
l.913 ...erref[Latex-\&\#x28;TexLive-2015\&\#x29;]
{explained before here}. I
?
! Emergency stop.
<to be read again>
\&
l.913 ...erref[Latex-\&\#x28;TexLive-2015\&\#x29;]
{explained before here}. I
! ==> Fatal error occurred, no output PDF file produced!
Transcript written on notebook.log.
Traceback (most recent call last):
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/bin/ipython", line 11, in <module>
    sys.exit(start_ipython())
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/IPython/__init__.py", line 118, in start_ipython
    return launch_new_instance(argv=argv, **kwargs)
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/traitlets/config/application.py", line 592, in launch_instance
    app.start()
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/IPython/terminal/ipapp.py", line 349, in start
    return self.subapp.start()
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/nbconvert/nbconvertapp.py", line 286, in start
    self.convert_notebooks()
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/nbconvert/nbconvertapp.py", line 409, in convert_notebooks
    self.convert_single_notebook(notebook_filename)
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/nbconvert/nbconvertapp.py", line 380, in convert_single_notebook
    output, resources = self.export_single_notebook(notebook_filename, resources)
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/nbconvert/nbconvertapp.py", line 332, in export_single_notebook
    output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/nbconvert/exporters/exporter.py", line 166, in from_filename
    return self.from_notebook_node(nbformat.read(f, as_version=4), resources=resources, **kw)
  File "/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/nbconvert/exporters/pdf.py", line 139, in from_notebook_node
    raise RuntimeError("PDF creating failed")
RuntimeError: PDF creating failed
```

Debug information:

```
[NbConvertApp] Config changed:
[NbConvertApp] {'NbConvertApp': {'export_format': u'pdf', 'log_level': 10}}
[NbConvertApp] Searching [u'/afs/cern.ch/user/w/wsfreund/RingerProjectNBs', '/afs/cern.ch/user/w/wsfreund/.jupyter', '/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/etc/jupyter', '/usr/local/etc/jupyter', '/etc/jupyter'] for config files
[NbConvertApp] Attempting to load config file jupyter_config.py in path /etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_config.json in path /etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_config.py in path /usr/local/etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_config.json in path /usr/local/etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_config.py in path /afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_config.json in path /afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_config.py in path /afs/cern.ch/user/w/wsfreund/.jupyter
[NbConvertApp] Attempting to load config file jupyter_config.json in path /afs/cern.ch/user/w/wsfreund/.jupyter
[NbConvertApp] Attempting to load config file jupyter_config.py in path /afs/cern.ch/user/w/wsfreund/RingerProjectNBs
[NbConvertApp] Attempting to load config file jupyter_config.json in path /afs/cern.ch/user/w/wsfreund/RingerProjectNBs
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.py in path /etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.json in path /etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.py in path /usr/local/etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.json in path /usr/local/etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.py in path /afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.json in path /afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/etc/jupyter
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.py in path /afs/cern.ch/user/w/wsfreund/.jupyter
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.json in path /afs/cern.ch/user/w/wsfreund/.jupyter
[NbConvertApp] Loaded config file: /afs/cern.ch/user/w/wsfreund/.jupyter/jupyter_nbconvert_config.py
[NbConvertApp] Config changed:
[NbConvertApp] {'NbConvertApp': {'export_format': u'pdf', 'log_level': 10}, 'Exporter': {'template_path': ['/afs/cern.ch/user/w/wsfreund/.local/share/jupyter/templates']}}
[NbConvertApp] Loaded config file: /afs/cern.ch/user/w/wsfreund/.jupyter/jupyter_nbconvert_config.json
[NbConvertApp] Config changed:
[NbConvertApp] {'NbConvertApp': {'export_format': u'pdf', 'log_level': 10, u'postprocessor_class': u'post_embedhtml.EmbedPostProcessor'}, 'Exporter': {'template_path': [u'/afs/cern.ch/user/w/wsfreund/.local/share/jupyter/templates'], u'preprocessors': [u'pre_codefolding.CodeFoldingPreprocessor', u'pre_pymarkdown.PyMarkdownPreprocessor']}}
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.py in path /afs/cern.ch/user/w/wsfreund/RingerProjectNBs
[NbConvertApp] Attempting to load config file jupyter_nbconvert_config.json in path /afs/cern.ch/user/w/wsfreund/RingerProjectNBs
/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/IPython/nbconvert.py:13: ShimWarning: The `IPython.nbconvert` package has been deprecated. You should import from ipython_nbconvert instead.
  "You should import from ipython_nbconvert instead.", ShimWarning)
[NbConvertApp] Converting notebook Creating_and_configuring_CERN_VM_for_ATLAS.ipynb to pdf
[NbConvertApp] Notebook name is 'Creating_and_configuring_CERN_VM_for_ATLAS'
[NbConvertApp] Applying preprocessor: coalesce_streams
[NbConvertApp] Applying preprocessor: SVG2PDFPreprocessor
[NbConvertApp] Applying preprocessor: LatexPreprocessor
[NbConvertApp] Applying preprocessor: HighlightMagicsPreprocessor
[NbConvertApp] Applying preprocessor: ExtractOutputPreprocessor
[NbConvertApp] Applying preprocessor: CodeFoldingPreprocessor
[NbConvertApp] Applying preprocessor: PyMarkdownPreprocessor
[NbConvertApp] Attempting to load template article.tplx
[NbConvertApp] Loaded template article.tplx
[NbConvertApp] Attempting to load template article.tplx
[NbConvertApp] Loaded template article.tplx
/afs/cern.ch/user/w/wsfreund/.pyenv/versions/2.7.4/lib/python2.7/site-packages/nbconvert/utils/pandoc.py:49: RuntimeWarning: You are using an old version of pandoc (1.9.4.1) Recommended version is 1.12.1. Try updating. http://johnmacfarlane.net/pandoc/installing.html. Continuing with doubts...
  check_pandoc_version()
[NbConvertApp] Writing 67163 bytes to notebook.tex
[NbConvertApp] Building PDF
[NbConvertApp] Running pdflatex 3 times: [u'pdflatex', u'notebook.tex']
[NbConvertApp] CRITICAL | pdflatex failed: [u'pdflatex', u'notebook.tex']
This is pdfTeX, Version 3.14159265-2.6-1.40.16 (TeX Live 2015) (preloaded format=pdflatex)
# Continues with pdfTex until error.
```

I am using:

```
ipython --version
4.0.0
```

```
python --version
Python 2.7.4
```

and

```
pdfTeX 3.14159265-2.6-1.40.16 (TeX Live 2015)
kpathsea version 6.2.1
Copyright 2015 Peter Breitenlohner (eTeX)/Han The Thanh (pdfTeX).
There is NO warranty. Redistribution of this software is covered by the
terms of both the pdfTeX copyright and the Lesser GNU General Public License.
For more information about these matters, see the file named COPYING and
the pdfTeX source.
Primary author of pdfTeX: Peter Breitenlohner (eTeX)/Han The Thanh (pdfTeX).
Compiled with libpng 1.6.17; using libpng 1.6.17
Compiled with zlib 1.2.8; using zlib 1.2.8
Compiled with xpdf version 3.04
```

I would appreciate any help on this.
open
2015-10-08T20:48:40Z
2018-07-15T21:16:34Z
https://github.com/jupyter/nbviewer/issues/513
[ "type:Bug", "tag:Upstream", "status:Needs Reproduction" ]
wsfreund
5
miguelgrinberg/python-socketio
asyncio
144
CancelledError cannot be handled properly in socketio
We can't handle CancelledError correctly, therefore our applications receive many CancelledErrors in the error log, and the service sometimes shuts down. We asked experts in aiohttp and aiojobs, and we were told that the actual cause could only be related to python-socketio. [More information is in the thread which was just moved from aiojobs](https://github.com/aio-libs/aiojobs/issues/18)
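As background for reproducing this outside the stack: a minimal plain-asyncio sketch (none of this is python-socketio's actual code) showing the conventional way to handle CancelledError in a task. Catch it for cleanup, then re-raise so the cancellation still propagates to the awaiter:

```python
import asyncio

async def handler():
    try:
        await asyncio.sleep(10)          # stands in for request/session work
    except asyncio.CancelledError:
        # do cleanup here, then re-raise; swallowing the error instead is
        # what typically floods error logs with stray CancelledErrors
        raise

async def main():
    task = asyncio.create_task(handler())
    await asyncio.sleep(0)               # let the handler start running
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled handled"
    return "not cancelled"

result = asyncio.run(main())
```

If the `raise` inside `handler` is removed, the task exits normally instead of propagating the cancellation, which is the kind of mishandling being discussed.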
closed
2017-11-06T10:57:00Z
2017-11-09T02:34:12Z
https://github.com/miguelgrinberg/python-socketio/issues/144
[ "investigate" ]
larryclean
9
graphistry/pygraphistry
jupyter
487
[BUG] Login with token throws an error; there is no handling of the refresh function for login with token
**Describe the bug**

Reported by Leo when trying to log in with a JWT token in the Louie integration

```
graphistry.register(api=3, token=xyz)
...
g.plot()
```

is failing with an org_name KeyError bug, and trying to specify `graphistry.register(api=3, token=xyz, org_name=user)` does not help (https://graphistry.slack.com/archives/D01SYERTH9A/p1684136101863949)

I'm now trying:

```
print('redoing gauth')
graphistry.api_token(tok)
graphistry.org_name(user)
graphistry.register(api=3, token=tok, org_name=user)
graphistry.PyGraphistry.relogin = lambda: 1
```

as a workaround

**To Reproduce**

Code, including data, that can be run without editing:

```python
import pandas as pd
import graphistry

#graphistry.register(api=3, username='...', password='...')
graphistry.edges(pd.read_csv('https://data.csv'), 's', 'd').plot()
```

**Expected behavior**

No error, plot() runs successfully

**Actual behavior**

What did happen

**Screenshots**

If applicable, any screenshots to help explain the issue

**PyGraphistry API client environment**

Latest version

**Additional context**

After debugging, found that when logging in with a token, the relogin function is not implemented at all. The correct implementation should fetch the refresh_token at login and use the refresh token to get a new access token.
closed
2023-05-20T05:06:12Z
2023-07-23T06:45:14Z
https://github.com/graphistry/pygraphistry/issues/487
[ "bug" ]
vaimdev
0
aleju/imgaug
machine-learning
758
Cropping with Center Points from Bounding Boxes
Hey folks, I have an object detection problem involving high resolution images (16:9 aspect ratio, 5280x2970) where it would be desirable to apply a `Crop` augmentation but use the center point of one of the bounding boxes to do so. I couldn't find an augmentation in the API that fit this use case exactly so I created a working version using `CropToFixedSize` and its `position` argument via

```python
import random
from typing import List, Optional

import numpy as np
import imgaug.augmenters as iaa
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage


def crop_centered_on_bounding_box(
    image: np.ndarray,
    bboxes: List[BoundingBox],
    width: int,
    height: int,
    bbox_idx_to_center_on: Optional[int] = None,
):
    """Crop `image` to size `width` x `height` centered on the bounding box in
    `bboxes` at index `bbox_idx_to_center_on`. If `bbox_idx_to_center_on` is
    None, a random bounding box will be selected from the list

    Parameters
    ----------
    image : np.ndarray
        Image with shape (height, width, channels)
    bboxes : List[BoundingBox]
        List of `imgaug` `BoundingBox`
    width : int
        Width of cropped image
    height : int
        Height of cropped image
    bbox_idx_to_center_on: int, optional
        An id / index for `bboxes` which will specify which bounding box the
        resulting crop will center around
        By default, one will be selected at random

    Returns
    -------
    image : np.ndarray
        Resized and cropped image
    bboxes : List[BoundingBox]
        Bounding boxes that are within the resized and cropped image
    """
    # Cropped region will be roughly centered on one of the bounding boxes at
    # random if no `bbox_idx_to_center_on` is specified
    if bbox_idx_to_center_on is None:
        bbox_to_center_on: BoundingBox = random.choice(bboxes)
    else:
        bbox_to_center_on: BoundingBox = bboxes[bbox_idx_to_center_on]

    bboxes = BoundingBoxesOnImage(bboxes, shape=image.shape)

    curr_height, curr_width, _ = image.shape
    center_x = bbox_to_center_on.center_x / curr_width
    center_y = bbox_to_center_on.center_y / curr_height

    aug = iaa.CropToFixedSize(width, height, position=(1 - center_x, 1 - center_y))
    image, bboxes = aug(image=image, bounding_boxes=bboxes)
    bboxes = bboxes.remove_out_of_image().clip_out_of_image()
    return image, bboxes
```

I'd be interested in contributing this to the API but am unsure if this use case is aligned with the design of `imgaug` since it is dependent on a bounding box to accomplish the augmentation. Any guidance on whether it makes sense to contribute this or a pointer to an existing augmentation that already does this would be appreciated. Thanks!
open
2021-04-05T18:19:36Z
2021-04-05T18:20:05Z
https://github.com/aleju/imgaug/issues/758
[]
seanytak
0
iterative/dvc
machine-learning
9,817
`dvc exp run --dry --allow-missing` modifies dvc files
Reproduction steps:

```bash
cd "$(mktemp -d)"
git init && dvc init
echo -e "outs:\n- path: data" > data.dvc
dvc stage add -n test -d data ls
cat data.dvc
echo "data" >> .gitignore
git add .
git commit -m "init"
dvc exp run --dry --allow-missing -vv
cat data.dvc
```
closed
2023-08-08T04:01:13Z
2023-09-13T06:06:48Z
https://github.com/iterative/dvc/issues/9817
[ "bug", "p1-important", "A: experiments" ]
skshetry
9
Guovin/iptv-api
api
830
[Bug]: Merge duplicate channels, e.g. put CCTV-1's 10 links behind one CCTV-1 entry instead of 10 separate CCTV-1 lines
See this: https://mp.weixin.qq.com/s/38opj8gzxDEldWNMpnOfHg For example, the actual output currently looks like this:

```
央视频道,#genre#
CCTV-1,http://116.128.242.83:9901/tsfile/live/0001_1.m3u8?key=txiptv&playlive=1&authid=0
CCTV-1,http://113.195.45.40:9901/tsfile/live/0001_1.m3u8?key=txiptv&playlive=0&authid=0
CCTV-1,http://58.19.38.162:9901/tsfile/live/1000_1.m3u8?key=txiptv&playlive=1&authid=0
CCTV-1,http://101.66.199.54:9901/tsfile/live/0001_1.m3u8?key=txiptv&playlive=0&authid=0
CCTV-1,http://101.66.199.235:9901/tsfile/live/0001_1.m3u8?key=txiptv&playlive=0&authid=0
CCTV-1,http://[2409:8087:7008:20::8]:80/dbiptv.sn.chinamobile.com/PLTV/88888888/224/3221226231/index.m3u8
CCTV-1,http://[2409:8087:1a0a:df::4038]:80/ottrrs.hl.chinamobile.com/TVOD/88888888/224/3221226559/index.m3u8
CCTV-1,http://[2409:8087:1a01:df::7005]:80/ottrrs.hl.chinamobile.com/PLTV/2/224/3221226016/2.m3u8
CCTV-1,http://[2409:8087:1a01:df::4077]:80/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226016/index.m3u8
CCTV-1,http://[2409:8087:1a0a:df::404b]:80/ottrrs.hl.chinamobile.com/PLTV/88888888/224/3221226016/index.m3u8
```

After merging, it would look like this:

```
央视频道,#genre#
closed
2025-01-14T10:41:45Z
2025-02-23T15:57:25Z
https://github.com/Guovin/iptv-api/issues/830
[ "enhancement", "wontfix" ]
xxl6097
16
viewflow/viewflow
django
395
List View Boolean Field Icon
![image](https://github.com/viewflow/viewflow/assets/36492073/f71070f5-fe92-4b4c-92ba-ebf94b871d16)

Currently checkboxes are used to represent boolean values. This is confusing, as they look like they can be ticked by the user, like the checkboxes on the very left can be. I suggest switching these to different icons, as in V1. These are the ones I chose for my modification.

```
# list.py
def format_value(self, obj, value):
    if getattr(self.model_field, "flatchoices", None):
        return dict(self.model_field.flatchoices).get(value, "")
    elif isinstance(self.model_field, ModelFieldColumn.BOOLEAN_FIELD_TYPES):
        if value is None:
            return Icon("check_indeterminate_small")
        elif value is True:
            return Icon("check")
        else:
            return Icon("close")
    else:
        return super().format_value(obj, value)
```
closed
2023-09-12T00:35:08Z
2024-02-13T05:26:08Z
https://github.com/viewflow/viewflow/issues/395
[ "request/enhancement", "dev/site" ]
SamuelLayNZ
3
tortoise/tortoise-orm
asyncio
952
Isolation level support
Hey there, Suggesting we add the isolation-level support that asyncpg already provides. On lines 157 and 229, we could pass an isolation parameter and let asyncpg handle the rest, but personally I don't know how everything integrates. https://github.com/tortoise/tortoise-orm/blob/850a2cae961bd5a79c61ca635cbe8175aff0d5c0/tortoise/backends/asyncpg/client.py
closed
2021-10-13T02:33:36Z
2024-09-01T22:06:17Z
https://github.com/tortoise/tortoise-orm/issues/952
[ "enhancement" ]
tinducvo
5
netbox-community/netbox
django
17,939
netbox-rqworker failed to start on fresh installation of v4.1.4
### Deployment Type

Self-hosted

### Triage priority

N/A

### NetBox Version

v4.1.4

### Python Version

3.11

### Steps to Reproduce

Follow the netbox installation procedure. The procedure installs the newest "rq" version.

### Expected Behavior

netbox-rqworker service should start

### Observed Behavior

netbox-rqworker failed.

```
$ /opt/netbox/venv/bin/python3 /opt/netbox/netbox/manage.py rqworker high default low
Traceback (most recent call last):
  File "/opt/netbox/netbox/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/opt/netbox/venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "/opt/netbox/venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/netbox/venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 275, in fetch_command
    klass = load_command_class(app_name, subcommand)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/netbox/venv/lib/python3.11/site-packages/django/core/management/__init__.py", line 48, in load_command_class
    module = import_module("%s.management.commands.%s" % (app_name, name))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/opt/netbox/netbox/core/management/commands/rqworker.py", line 3, in <module>
    from django_rq.management.commands.rqworker import Command as _Command
  File "/opt/netbox/venv/lib/python3.11/site-packages/django_rq/management/commands/rqworker.py", line 5, in <module>
    from rq import Connection
ImportError: cannot import name 'Connection' from 'rq' (/opt/netbox/venv/lib/python3.11/site-packages/rq/__init__.py)
```

After uninstalling the rq pip package from the venv and installing rq==1.16.2, the service started. I read about the same rq worker issue in v3.5; this might be related.
closed
2024-11-06T09:09:54Z
2024-11-06T13:05:40Z
https://github.com/netbox-community/netbox/issues/17939
[]
boxstep
0
nvbn/thefuck
python
1,181
Computer
<!-- If you have any issue with The Fuck, sorry about that, but we will do what we can to fix that. Actually, maybe we already have, so first thing to do is to update The Fuck and see if the bug is still there. --> <!-- If it is (sorry again), check if the problem has not already been reported and if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with the following basic information: --> The output of `thefuck --version` (something like `The Fuck 3.1 using Python 3.5.0 and Bash 4.4.12(1)-release`): FILL THIS IN Your system (Debian 7, ArchLinux, Windows, etc.): FILL THIS IN How to reproduce the bug: FILL THIS IN The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck): FILL THIS IN If the bug only appears with a specific application, the output of that application and its version: FILL THIS IN Anything else you think is relevant: FILL THIS IN <!-- It's only with enough information that we can do something to fix the problem. -->
closed
2021-04-02T01:49:17Z
2021-06-23T21:42:29Z
https://github.com/nvbn/thefuck/issues/1181
[]
Electronick79
0
HIT-SCIR/ltp
nlp
200
The Baidu Cloud model download link is inaccessible
Hello, the Baidu Cloud model download link http://pan.baidu.com/share/link?shareid=1988562907&uk=2738088569 is inaccessible. Is there another address where the models can be downloaded? Or could you fix the link? Thanks.
closed
2016-12-22T02:04:06Z
2016-12-23T09:11:00Z
https://github.com/HIT-SCIR/ltp/issues/200
[]
lzhibin
2
tensorflow/datasets
numpy
5,335
[data request] <dataset on higher education in Brazil>
* Name of dataset: <name> * URL of dataset: <url> * License of dataset: <license type> * Short description of dataset and use case(s): <description> Folks who would also like to see this dataset in `tensorflow/datasets`, please thumbs-up so the developers can know which requests to prioritize. And if you'd like to contribute the dataset (thank you!), see our [guide to adding a dataset](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md).
open
2024-03-24T16:00:36Z
2024-03-25T13:37:21Z
https://github.com/tensorflow/datasets/issues/5335
[ "dataset request" ]
karhyne
1
babysor/MockingBird
deep-learning
53
How can CPU and GPU utilization be adjusted properly?
GPU and CPU utilization is only around 13%. How should I adjust the training parameters?
closed
2021-08-27T08:46:42Z
2021-10-01T03:18:35Z
https://github.com/babysor/MockingBird/issues/53
[]
TypicalSpider
3
microsoft/unilm
nlp
1,432
BEIT-3 Pre-trained model and code
Nice job! But when I was reading the code, I didn't notice any of the pretraining code. When will you release it?
closed
2024-01-16T01:17:30Z
2024-01-16T02:58:26Z
https://github.com/microsoft/unilm/issues/1432
[]
Tzx11
1
tiangolo/uvicorn-gunicorn-fastapi-docker
fastapi
97
[QUESTION] How to stop the container from inside
Hi, I'm trying to stop the container when a certain scenario happens. The app stops when it runs outside docker with a simple `sys.exit()`, but if I run the app inside the docker container, it automatically restarts. Is there a way to prevent the reload?
closed
2021-06-30T09:29:19Z
2021-06-30T10:14:33Z
https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/97
[]
Riccorl
1
deezer/spleeter
tensorflow
437
[Bug] Spleeter adding small padding to output audio files
## Description

During an effort to reduce memory footprint by splitting input files in chunks of 30 seconds, discussed on [this thread](https://github.com/deezer/spleeter/issues/391#issuecomment-652202433), we noticed that Spleeter is adding a tiny padding after each output stem file, which makes a small gap when stitching the 30 s chunks back into one single stem. Sometimes this gap can be unnoticeable, but when processing a song and mixing it back, it is easy to spot the hiccup in the song. Also, after analyzing the waveform, it's clear that a gap is added by Spleeter:

![image](https://user-images.githubusercontent.com/5693297/86246711-ab1cb680-bb68-11ea-9c83-33b11020481b.png)

In order to make sure it is related to Spleeter, I've tried separating and stitching other files not processed via Spleeter and the stitching was flawless. During the entire experiment, I've used only lossless (wav) files to avoid issues with padding that some lossy files would cause.

[Here](https://www.dropbox.com/s/1f0qz92yaoqedhl/gap.mp3?dl=0) is the file that generated the waveform above; you can notice a hiccup (gap) every 30 seconds when listening carefully.

## Steps to reproduce

#### 1 - Use an example wav file that is longer than 30 seconds and split it into 30 s chunks using FFmpeg or Sox.

You can rename your file to myfile.wav to reuse the code below:

FFmpeg: `ffmpeg -i myfile.wav -f segment -segment_time 30 -c copy myfile-%03d.wav`

Sox: `sox myfile.wav myfile-.wav trim 0 30 : newfile : restart`

#### 2 - Process all the chunks using Spleeter:

`spleeter separate -i myfile-* -p spleeter:2stems -B tensorflow -o out`

#### 3 - Move the first 2 accompaniment stems together for stitching:

```
mv ./out/myfile-002/accompaniment.wav ./out/myfile-001/accompaniment2.wav
cd ./out/myfile-001
```

#### 4 - Stitch accompaniment.wav and accompaniment2.wav using Sox or FFmpeg:

FFmpeg: `ffmpeg -f concat -safe 0 -i <(for f in ./accompaniment*.wav; do echo "file '$PWD/$f'"; done) -c copy output.wav`

Sox: `sox accompaniment.wav accompaniment2.wav output.wav`

#### 5 - Listen to output.wav and notice the hiccup during the transition at ~30s.

You can also use this [shell script](https://www.dropbox.com/s/3gimj0hstutsts5/separate.sh?dl=0) by @amo13

## Environment

| | |
| ----------------- | ------------------------------- |
| OS | Linux using Docker |
| Installation type | Conda |
| RAM available | 6GB |
| Hardware spec | Docker using 8 CPUs |

## Additional context

[Stitching discussion](https://github.com/deezer/spleeter/issues/391)
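To make the effect concrete, here is a small sketch of how a tiny trailing pad per chunk accumulates into an audible timing drift. The numbers are my own assumptions for illustration; `PAD_SAMPLES` is a hypothetical value, not Spleeter's measured padding:

```python
# Sketch of the reported symptom: each 30 s chunk comes back slightly longer
# than 30 s, so concatenating chunks drifts away from the original timeline.
SAMPLE_RATE = 44100          # CD-quality wav, as in the lossless test files
CHUNK_SECONDS = 30
PAD_SAMPLES = 1024           # hypothetical per-chunk trailing padding

def stitched_length(n_chunks, pad=PAD_SAMPLES):
    """Total samples after concatenating n padded 30 s chunks."""
    return n_chunks * (CHUNK_SECONDS * SAMPLE_RATE + pad)

def drift_ms(n_chunks, pad=PAD_SAMPLES):
    """Cumulative offset in milliseconds versus the unsplit original."""
    return n_chunks * pad / SAMPLE_RATE * 1000.0
```

With these assumed numbers, ten chunks (about five minutes of audio) would already sit a couple of hundred milliseconds behind the original timeline, which is consistent with the kind of audible hiccup described above.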
closed
2020-07-01T13:46:28Z
2021-05-22T10:21:16Z
https://github.com/deezer/spleeter/issues/437
[ "bug", "invalid" ]
geraldoramos
6
geex-arts/django-jet
django
196
sidebar requires perfect mouse movement to keep a application open
Hi,

I've been using Jet while developing for a while, and it has a small annoyance when selecting a model. When hovering over an application and wanting to select a model, you often hover over another application, which opens another list of models. This happens frequently since the list of models is much taller than the list of applications, so you instinctively move your mouse up to select a model. See image.

![image](https://cloud.githubusercontent.com/assets/333846/24454976/4777d486-148e-11e7-9f4b-a97652461c43.png)

This happens on the latest Chrome.
open
2017-03-29T12:46:13Z
2018-01-25T15:05:18Z
https://github.com/geex-arts/django-jet/issues/196
[]
stitch
7
MagicStack/asyncpg
asyncio
837
Feature request: 'get_attributes' without prepared statements
Hello! Right now I'm trying out the `AsyncSession` in the `SQLAlchemy@1.4` which uses `asyncpg` under the hood and as it turns out that specific configuration does not work well with `pgbouncer` even if you disable statement caching both in `asyncpg` and `SQLAlchemy`. As discussed in the corresponding issue — https://github.com/sqlalchemy/sqlalchemy/issues/6467 — `SQLAlchemy` uses `Connection.prepare()` regardless of user's decision on whether or not to use prepared statements. It is required to fetch the query attributes. It looks like if there would be an ability to fetch attributes of the query without preparing it first, the issue could be mitigated. Therefore, I wanted to ask if there is a way to fetch those attributes without calling `Connection.prepare()` first? cc @zzzeek
closed
2021-10-12T15:29:33Z
2021-11-16T06:13:21Z
https://github.com/MagicStack/asyncpg/issues/837
[]
nikitagashkov
7
ansible/awx
automation
15,271
AWX Receptor Node: IPv6 flapping (working, broken)
### Please confirm the following

- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)

### Bug Summary

I deployed AWX (v24.5.0) in k8s via helm. I added a receptor node (execution). If the server I want to provision has an IPv4 address, everything works normally. If the server I want to run my playbooks against has an IPv6 address, the job sometimes finishes successfully and sometimes fails. It is exactly the same job template. If I just click on re-run, it sometimes works and sometimes doesn't. I cannot see any pattern for when it works and when it doesn't ...

### AWX version

24.5.0

### Select the relevant components

- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [X] Other

### Installation method

kubernetes

### Modifications

no

### Ansible version

_No response_

### Operating system

Debian

### Web browser

Firefox

### Steps to reproduce

- deploy awx in k8s via helm
- add debian (11.9) receptor execution node
- provision servers with ipv6 address

### Expected results

**Working**

```
Identity added: /runner/artifacts/1145/ssh_key_data (/runner/artifacts/1145/ssh_key_data)
ansible-playbook [core 2.15.12]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  ansible collection location = /runner/requirements_collections:/root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.18 (main, Jan 24 2024, 00:00:00)
[GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3) jinja version = 3.1.4 libyaml = True No config file found; using defaults setting up inventory plugins Loading collection ansible.builtin from host_list declined parsing /runner/inventory/hosts as it did not pass its verify_file() method Parsed /runner/inventory/hosts inventory source with script plugin Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.9/site-packages/ansible/plugins/callback/default.py Loading callback plugin awx_display of type stdout, v2.0 from /runner/artifacts/1145/callback/awx_display.py Skipping callback 'awx_display', as we already have a stdout callback. Skipping callback 'default', as we already have a stdout callback. Skipping callback 'minimal', as we already have a stdout callback. Skipping callback 'oneline', as we already have a stdout callback. PLAYBOOK: CloudCheckSSH.yml **************************************************** Positional arguments: CloudCheckSSH/CloudCheckSSH.yml verbosity: 4 remote_user: shell connection: smart timeout: 10 become_method: sudo tags: ('all',) inventory: ('/runner/inventory',) subset: 2001:4178:6:1416:0000:000b:a:14 extra_vars: ('@/runner/env/extravars',) forks: 5 1 plays in CloudCheckSSH/CloudCheckSSH.yml PLAY [--> CloudCheckSSH] ******************************************************* TASK [Gathering Facts] ********************************************************* task path: /runner/project/CloudCheckSSH/CloudCheckSSH.yml:4 <2001:4178:6:1416:0000:000b:a:14> ESTABLISH SSH CONNECTION FOR USER: root <2001:4178:6:1416:0000:000b:a:14> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/d04603209b"' 2001:4178:6:1416:0000:000b:a:14 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' 
<2001:4178:6:1416:0000:000b:a:14> (0, b'/root\\n', b'OpenSSH_8.7p1, OpenSSL 3.2.1 30 Jan 2024\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug3: /etc/ssh/ssh_config line 55: Including file /etc/ssh/ssh_config.d/50-redhat.conf depth 0\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf\\r\\ndebug2: checking match for \\'final all\\' host 2001:4178:6:1416:0000:000b:a:14 originally 2001:4178:6:1416:0000:000b:a:14\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 3: not matched \\'final\\'\\r\\ndebug2: match not found\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1 (parse only)\\r\\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\\r\\ndebug3: gss kex names ok: [gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-]\\r\\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512]\\r\\ndebug1: configuration requests final Match pass\\r\\ndebug2: resolve_canonicalize: hostname 2001:4178:6:1416:0000:000b:a:14 is address\\r\\ndebug2: resolve_canonicalize: canonicalised address "2001:4178:6:1416:0000:000b:a:14" => "2001:4178:6:1416:0:b:a:14"\\r\\ndebug1: re-parsing configuration\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug3: /etc/ssh/ssh_config line 55: Including file /etc/ssh/ssh_config.d/50-redhat.conf depth 0\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf\\r\\ndebug2: checking match for \\'final all\\' host 2001:4178:6:1416:0:b:a:14 originally 2001:4178:6:1416:0000:000b:a:14\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 3: matched \\'final\\'\\r\\ndebug2: match found\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 5: 
Including file /etc/crypto-policies/back-ends/openssh.config depth 1\\r\\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\\r\\ndebug3: gss kex names ok: [gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-]\\r\\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512]\\r\\ndebug3: expanded UserKnownHostsFile \\'~/.ssh/known_hosts\\' -> \\'/root/.ssh/known_hosts\\'\\r\\ndebug3: expanded UserKnownHostsFile \\'~/.ssh/known_hosts2\\' -> \\'/root/.ssh/known_hosts2\\'\\r\\ndebug1: auto-mux: Trying existing master\\r\\ndebug1: Control socket "/runner/cp/d04603209b" does not exist\\r\\ndebug3: ssh_connect_direct: entering\\r\\ndebug1: Connecting to 2001:4178:6:1416:0:b:a:14 [2001:4178:6:1416:0:b:a:14] port 22.\\r\\ndebug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x48\\r\\ndebug2: fd 3 setting O_NONBLOCK\\r\\ndebug1: fd 3 clearing O_NONBLOCK\\r\\ndebug1: Connection established.\\r\\ndebug3: timeout: 9997 ms remain after connect\\r\\ndebug1: identity file /root/.ssh/id_rsa type -1\\r\\ndebug1: identity file /root/.ssh/id_rsa-cert type -1\\r\\ndebug1: identity file /root/.ssh/id_dsa type -1\\r\\ndebug1: identity file /root/.ssh/id_dsa-cert type -1\\r\\ndebug1: identity file /root/.ssh/id_ecdsa type -1\\r\\ndebug1: identity file /root/.ssh/id_ecdsa-cert type -1\\r\\ndebug1: identity file /root/.ssh/id_ecdsa_sk type -1\\r\\ndebug1: identity file /root/.ssh/id_ecdsa_sk-cert type -1\\r\\ndebug1: identity file /root/.ssh/id_ed25519 type -1\\r\\ndebug1: identity file /root/.ssh/id_ed25519-cert type -1\\r\\ndebug1: identity file /root/.ssh/id_ed25519_sk type -1\\r\\ndebug1: identity file /root/.ssh/id_ed25519_sk-cert type -1\\r\\ndebug1: identity file /root/.ssh/id_xmss type -1\\r\\ndebug1: identity file 
/root/.ssh/id_xmss-cert type -1\\r\\ndebug1: Local version string SSH-2.0-OpenSSH_8.7\\r\\ndebug1: Remote protocol version 2.0, remote software version OpenSSH_9.2p1 Debian-2+deb12u2\\r\\ndebug1: compat_banner: match: OpenSSH_9.2p1 Debian-2+deb12u2 pat OpenSSH* compat 0x04000000\\r\\ndebug2: fd 3 setting O_NONBLOCK\\r\\ndebug1: Authenticating to 2001:4178:6:1416:0:b:a:14:22 as \\'root\\'\\r\\ndebug1: load_hostkeys: fopen /root/.ssh/known_hosts: No such file or directory\\r\\ndebug1: load_hostkeys: fopen /root/.ssh/known_hosts2: No such file or directory\\r\\ndebug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory\\r\\ndebug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory\\r\\ndebug3: order_hostkeyalgs: no algorithms matched; accept original\\r\\ndebug3: send packet: type 20\\r\\ndebug1: SSH2_MSG_KEXINIT sent\\r\\ndebug3: receive packet: type 20\\r\\ndebug1: SSH2_MSG_KEXINIT received\\r\\ndebug2: local client KEXINIT proposal\\r\\ndebug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,ext-info-c,kex-strict-c-v00@openssh.com\\r\\ndebug2: host key algorithms: ssh-ed25519-cert-v01@openssh.com,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,sk-ssh-ed25519-cert-v01@openssh.com,sk-ecdsa-sha2-nistp256-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ssh-ed25519@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com,rsa-sha2-512,rsa-sha2-256\\r\\ndebug2: ciphers ctos: aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes128-gcm@openssh.com,aes128-ctr\\r\\ndebug2: ciphers stoc: 
aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes128-gcm@openssh.com,aes128-ctr\\r\\ndebug2: MACs ctos: hmac-sha2-256-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha2-256,hmac-sha1,umac-128@openssh.com,hmac-sha2-512\\r\\ndebug2: MACs stoc: hmac-sha2-256-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha2-256,hmac-sha1,umac-128@openssh.com,hmac-sha2-512\\r\\ndebug2: compression ctos: zlib@openssh.com,zlib,none\\r\\ndebug2: compression stoc: zlib@openssh.com,zlib,none\\r\\ndebug2: languages ctos: \\r\\ndebug2: languages stoc: \\r\\ndebug2: first_kex_follows 0 \\r\\ndebug2: reserved 0 \\r\\ndebug2: peer server KEXINIT proposal\\r\\ndebug2: KEX algorithms: sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,kex-strict-s-v00@openssh.com\\r\\ndebug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519\\r\\ndebug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com\\r\\ndebug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com\\r\\ndebug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1\\r\\ndebug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1\\r\\ndebug2: compression ctos: none,zlib@openssh.com\\r\\ndebug2: 
compression stoc: none,zlib@openssh.com\\r\\ndebug2: languages ctos: \\r\\ndebug2: languages stoc: \\r\\ndebug2: first_kex_follows 0 \\r\\ndebug2: reserved 0 \\r\\ndebug3: kex_choose_conf: will use strict KEX ordering\\r\\ndebug1: kex: algorithm: curve25519-sha256\\r\\ndebug1: kex: host key algorithm: ssh-ed25519\\r\\ndebug1: kex: server->client cipher: aes256-gcm@openssh.com MAC: <implicit> compression: zlib@openssh.com\\r\\ndebug1: kex: client->server cipher: aes256-gcm@openssh.com MAC: <implicit> compression: zlib@openssh.com\\r\\ndebug1: kex: curve25519-sha256 need=32 dh_need=32\\r\\ndebug1: kex: curve25519-sha256 need=32 dh_need=32\\r\\ndebug3: send packet: type 30\\r\\ndebug1: expecting SSH2_MSG_KEX_ECDH_REPLY\\r\\ndebug3: receive packet: type 31\\r\\ndebug1: SSH2_MSG_KEX_ECDH_REPLY received\\r\\ndebug1: Server host key: ssh-ed25519 SHA256:atqPBuFfxO6Sc9gPTvfkVqgaxu03/rGssvmMhSUg00U\\r\\ndebug1: load_hostkeys: fopen /root/.ssh/known_hosts: No such file or directory\\r\\ndebug1: load_hostkeys: fopen /root/.ssh/known_hosts2: No such file or directory\\r\\ndebug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory\\r\\ndebug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory\\r\\nWarning: Permanently added \\'2001:4178:6:1416:0:b:a:14\\' (ED25519) to the list of known hosts.\\r\\ndebug1: check_host_key: hostkey not known or explicitly trusted: disabling UpdateHostkeys\\r\\ndebug3: send packet: type 21\\r\\ndebug1: ssh_packet_send2_wrapped: resetting send seqnr 3\\r\\ndebug2: set_newkeys: mode 1\\r\\ndebug1: rekey out after 4294967296 blocks\\r\\ndebug1: SSH2_MSG_NEWKEYS sent\\r\\ndebug1: expecting SSH2_MSG_NEWKEYS\\r\\ndebug3: receive packet: type 21\\r\\ndebug1: ssh_packet_read_poll2: resetting read seqnr 3\\r\\ndebug1: SSH2_MSG_NEWKEYS received\\r\\ndebug2: set_newkeys: mode 0\\r\\ndebug1: rekey in after 4294967296 blocks\\r\\ndebug1: Will attempt key: /runner/artifacts/1145/ssh_key_data RSA 
SHA256:+mnZxEhj4lNVhNfzvE860S9Yz7I5nMkYNBJe8HaGOnI agent\\r\\ndebug1: Will attempt key: /root/.ssh/id_rsa \\r\\ndebug1: Will attempt key: /root/.ssh/id_dsa \\r\\ndebug1: Will attempt key: /root/.ssh/id_ecdsa \\r\\ndebug1: Will attempt key: /root/.ssh/id_ecdsa_sk \\r\\ndebug1: Will attempt key: /root/.ssh/id_ed25519 \\r\\ndebug1: Will attempt key: /root/.ssh/id_ed25519_sk \\r\\ndebug1: Will attempt key: /root/.ssh/id_xmss \\r\\ndebug2: pubkey_prepare: done\\r\\ndebug3: send packet: type 5\\r\\ndebug3: receive packet: type 7\\r\\ndebug1: SSH2_MSG_EXT_INFO received\\r\\ndebug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,sk-ssh-ed25519@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com,webauthn-sk-ecdsa-sha2-nistp256@openssh.com,ssh-dss,ssh-rsa,rsa-sha2-256,rsa-sha2-512>\\r\\ndebug1: kex_input_ext_info: publickey-hostbound@openssh.com (unrecognised)\\r\\ndebug3: receive packet: type 6\\r\\ndebug2: service_accept: ssh-userauth\\r\\ndebug1: SSH2_MSG_SERVICE_ACCEPT received\\r\\ndebug3: send packet: type 50\\r\\ndebug3: receive packet: type 51\\r\\ndebug1: Authentications that can continue: publickey,password\\r\\ndebug3: start over, passed a different list publickey,password\\r\\ndebug3: preferred gssapi-with-mic,gssapi-keyex,hostbased,publickey\\r\\ndebug3: authmethod_lookup publickey\\r\\ndebug3: remaining preferred: ,gssapi-keyex,hostbased,publickey\\r\\ndebug3: authmethod_is_enabled publickey\\r\\ndebug1: Next authentication method: publickey\\r\\ndebug1: Offering public key: /runner/artifacts/1145/ssh_key_data RSA SHA256:+mnZxEhj4lNVhNfzvE860S9Yz7I5nMkYNBJe8HaGOnI agent\\r\\ndebug3: send packet: type 50\\r\\ndebug2: we sent a publickey packet, wait for reply\\r\\ndebug3: receive packet: type 60\\r\\ndebug1: Server accepts key: /runner/artifacts/1145/ssh_key_data RSA SHA256:+mnZxEhj4lNVhNfzvE860S9Yz7I5nMkYNBJe8HaGOnI agent\\r\\ndebug3: sign_and_send_pubkey: RSA 
SHA256:+mnZxEhj4lNVhNfzvE860S9Yz7I5nMkYNBJe8HaGOnI\\r\\ndebug3: sign_and_send_pubkey: signing using rsa-sha2-256 SHA256:+mnZxEhj4lNVhNfzvE860S9Yz7I5nMkYNBJe8HaGOnI\\r\\ndebug3: send packet: type 50\\r\\ndebug3: receive packet: type 52\\r\\ndebug1: Enabling compression at level 6.\\r\\nAuthenticated to 2001:4178:6:1416:0:b:a:14 ([2001:4178:6:1416:0:b:a:14]:22) using "publickey".\\r\\ndebug1: pkcs11_del_provider: called, provider_id = (null)\\r\\ndebug1: setting up multiplex master socket\\r\\ndebug3: muxserver_listen: temporary control path /runner/cp/d04603209b.Q9xNMF20HvKiTMbY\\r\\ndebug2: fd 4 setting O_NONBLOCK\\r\\ndebug3: fd 4 is O_NONBLOCK\\r\\ndebug3: fd 4 is O_NONBLOCK\\r\\ndebug1: channel 0: new [/runner/cp/d04603209b]\\r\\ndebug3: muxserver_listen: mux listener channel 0 fd 4\\r\\ndebug2: fd 3 setting TCP_NODELAY\\r\\ndebug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x20\\r\\ndebug1: control_persist_detach: backgrounding master process\\r\\ndebug2: control_persist_detach: background process is 22\\r\\ndebug2: fd 4 setting O_NONBLOCK\\r\\ndebug1: forking to background\\r\\ndebug1: Entering interactive session.\\r\\ndebug1: pledge: id\\r\\ndebug2: set_control_persist_exit_time: schedule exit in 60 seconds\\r\\ndebug1: multiplexing control connection\\r\\ndebug2: fd 5 setting O_NONBLOCK\\r\\ndebug3: fd 5 is O_NONBLOCK\\r\\ndebug1: channel 1: new [mux-control]\\r\\ndebug3: channel_post_mux_listener: new mux channel 1 fd 5\\r\\ndebug3: mux_master_read_cb: channel 1: hello sent\\r\\ndebug2: set_control_persist_exit_time: cancel scheduled exit\\r\\ndebug3: mux_master_read_cb: channel 1 packet type 0x00000001 len 4\\r\\ndebug2: mux_master_process_hello: channel 1 client version 4\\r\\ndebug2: mux_client_hello_exchange: master version 4\\r\\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\\r\\ndebug3: mux_client_request_session: entering\\r\\ndebug3: mux_client_request_alive: entering\\r\\ndebug3: mux_master_read_cb: channel 1 packet type 
0x10000004 len 4\\r\\ndebug2: mux_master_process_alive_check: channel 1: alive check\\r\\ndebug3: mux_client_request_alive: done pid = 24\\r\\ndebug3: mux_master_read_cb: channel 1 packet type 0x10000002 len 75\\r\\ndebug2: mux_master_process_new_session: channel 1: request tty 0, X 0, agent 0, subsys 0, term "xterm", cmd "/bin/sh -c \\'echo ~root && sleep 0\\'", env 0\\r\\ndebug3: mux_client_request_session: session request sent\\r\\ndebug3: mux_master_process_new_session: got fds stdin 6, stdout 7, stderr 8\\r\\ndebug1: channel 2: new [client-session]\\r\\ndebug2: mux_master_process_new_session: channel_new: 2 linked to control channel 1\\r\\ndebug2: channel 2: send open\\r\\ndebug3: send packet: type 90\\r\\ndebug3: receive packet: type 80\\r\\ndebug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0\\r\\ndebug3: receive packet: type 4\\r\\ndebug1: Remote: /root/.ssh/authorized_keys:2: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding\\r\\ndebug3: receive packet: type 4\\r\\ndebug1: Remote: /root/.ssh/authorized_keys:2: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding\\r\\ndebug3: receive packet: type 91\\r\\ndebug2: channel_input_open_confirmation: channel 2: callback start\\r\\ndebug2: client_session2_setup: id 2\\r\\ndebug1: Sending command: /bin/sh -c \\'echo ~root && sleep 0\\'\\r\\ndebug2: channel 2: request exec confirm 1\\r\\ndebug3: send packet: type 98\\r\\ndebug3: mux_session_confirm: sending success reply\\r\\ndebug2: channel_input_open_confirmation: channel 2: callback done\\r\\ndebug2: channel 2: open confirm rwindow 0 rmax 32768\\r\\ndebug1: mux_client_request_session: master session id: 2\\r\\ndebug2: channel 2: rcvd adjust 2097152\\r\\ndebug3: receive packet: type 99\\r\\ndebug2: channel_input_status_confirm: type 99 id 2\\r\\ndebug2: exec request accepted on channel 2\\r\\ndebug3: receive packet: type 96\\r\\ndebug2: channel 2: rcvd eof\\r\\ndebug2: channel 2: output open -> 
drain\\r\\ndebug2: channel 2: obuf empty\\r\\ndebug2: chan_shutdown_write: channel 2: (i0 o1 sock -1 wfd 7 efd 8 [write])\\r\\ndebug2: channel 2: output drain -> closed\\r\\ndebug3: receive packet: type 98\\r\\ndebug1: client_input_channel_req: channel 2 rtype exit-status reply 0\\r\\ndebug3: mux_exit_message: channel 2: exit message, exitval 0\\r\\ndebug3: receive packet: type 98\\r\\ndebug1: client_input_channel_req: channel 2 rtype eow@openssh.com reply 0\\r\\ndebug2: channel 2: rcvd eow\\r\\ndebug2: chan_shutdown_read: channel 2: (i0 o3 sock -1 wfd 6 efd 8 [write])\\r\\ndebug2: channel 2: input open -> closed\\r\\ndebug3: receive packet: type 97\\r\\ndebug2: channel 2: rcvd close\\r\\ndebug3: channel 2: will not send data after close\\r\\ndebug2: channel 2: send close\\r\\ndebug3: send packet: type 97\\r\\ndebug2: channel 2: is dead\\r\\ndebug2: channel 2: gc: notify user\\r\\ndebug3: mux_master_session_cleanup_cb: entering for channel 2\\r\\ndebug2: channel 1: rcvd close\\r\\ndebug2: channel 1: output open -> drain\\r\\ndebug2: chan_shutdown_read: channel 1: (i0 o1 sock 5 wfd 5 efd -1 [closed])\\r\\ndebug2: channel 1: input open -> closed\\r\\ndebug2: channel 2: gc: user detached\\r\\ndebug2: channel 2: is dead\\r\\ndebug2: channel 2: garbage collecting\\r\\ndebug1: channel 2: free: client-session, nchannels 3\\r\\ndebug3: channel 2: status: The following connections are open:\\r\\n #1 mux-control (t16 nr0 i3/0 o1/16 e[closed]/0 fd 5/5/-1 sock 5 cc -1)\\r\\n #2 client-session (t4 r0 i3/0 o3/0 e[write]/0 fd -1/-1/8 sock -1 cc -1)\\r\\n\\r\\ndebug2: channel 1: obuf empty\\r\\ndebug2: chan_shutdown_write: channel 1: (i3 o1 sock 5 wfd 5 efd -1 [closed])\\r\\ndebug2: channel 1: output drain -> closed\\r\\ndebug2: channel 1: is dead (local)\\r\\ndebug2: channel 1: gc: notify user\\r\\ndebug3: mux_master_control_cleanup_cb: entering for channel 1\\r\\ndebug2: channel 1: gc: user detached\\r\\ndebug2: channel 1: is dead (local)\\r\\ndebug3: mux_client_read_packet: 
read header failed: Broken pipe\\r\\ndebug2: channel 1: garbage collecting\\r\\ndebug1: channel 1: free: mux-control, nchannels 2\\r\\ndebug2: Received exit status from master 0\\r\\ndebug3: channel 1: status: The following connections are open:\\r\\n #1 mux-control (t16 nr0 i3/0 o3/0 e[closed]/0 fd 5/5/-1 sock 5 cc -1)\\r\\n\\r\\ndebug2: set_control_persist_exit_time: schedule exit in 60 seconds\\r\\n') <2001:4178:6:1416:0000:000b:a:14> ESTABLISH SSH CONNECTION FOR USER: root <2001:4178:6:1416:0000:000b:a:14> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/d04603209b"' 2001:4178:6:1416:0000:000b:a:14 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1718316863.213383-19-63963168782651 `" && echo ansible-tmp-1718316863.213383-19-63963168782651="` echo /root/.ansible/tmp/ansible-tmp-1718316863.213383-19-63963168782651 `" ) && sleep 0'"'"'' ``` ### Actual results **Broken / Network unreachable** ``` Identity added: /runner/artifacts/1144/ssh_key_data (/runner/artifacts/1144/ssh_key_data) ansible-playbook [core 2.15.12] config file = None configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python3.9/site-packages/ansible ansible collection location = /runner/requirements_collections:/root/.ansible/collections:/usr/share/ansible/collections executable location = /usr/local/bin/ansible-playbook python version = 3.9.18 (main, Jan 24 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (/usr/bin/python3) jinja version = 3.1.4 libyaml = True No config file found; using defaults setting up inventory plugins Loading collection ansible.builtin from host_list 
declined parsing /runner/inventory/hosts as it did not pass its verify_file() method Parsed /runner/inventory/hosts inventory source with script plugin Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python3.9/site-packages/ansible/plugins/callback/default.py Loading callback plugin awx_display of type stdout, v2.0 from /runner/artifacts/1144/callback/awx_display.py Skipping callback 'awx_display', as we already have a stdout callback. Skipping callback 'default', as we already have a stdout callback. Skipping callback 'minimal', as we already have a stdout callback. Skipping callback 'oneline', as we already have a stdout callback. PLAYBOOK: CloudCheckSSH.yml **************************************************** Positional arguments: CloudCheckSSH/CloudCheckSSH.yml verbosity: 4 remote_user: shell connection: smart timeout: 10 become_method: sudo tags: ('all',) inventory: ('/runner/inventory',) subset: 2001:4178:6:1416:0000:000b:a:14 extra_vars: ('@/runner/env/extravars',) forks: 5 1 plays in CloudCheckSSH/CloudCheckSSH.yml PLAY [--> CloudCheckSSH] ******************************************************* TASK [Gathering Facts] ********************************************************* task path: /runner/project/CloudCheckSSH/CloudCheckSSH.yml:4 <2001:4178:6:1416:0000:000b:a:14> ESTABLISH SSH CONNECTION FOR USER: root <2001:4178:6:1416:0000:000b:a:14> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/d04603209b"' 2001:4178:6:1416:0000:000b:a:14 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"'' <2001:4178:6:1416:0000:000b:a:14> (255, b'', b'OpenSSH_8.7p1, OpenSSL 3.2.1 30 Jan 2024\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug3: /etc/ssh/ssh_config line 55: Including file 
/etc/ssh/ssh_config.d/50-redhat.conf depth 0\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf\\r\\ndebug2: checking match for \\'final all\\' host 2001:4178:6:1416:0000:000b:a:14 originally 2001:4178:6:1416:0000:000b:a:14\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 3: not matched \\'final\\'\\r\\ndebug2: match not found\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1 (parse only)\\r\\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\\r\\ndebug3: gss kex names ok: [gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-]\\r\\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512]\\r\\ndebug1: configuration requests final Match pass\\r\\ndebug2: resolve_canonicalize: hostname 2001:4178:6:1416:0000:000b:a:14 is address\\r\\ndebug2: resolve_canonicalize: canonicalised address "2001:4178:6:1416:0000:000b:a:14" => "2001:4178:6:1416:0:b:a:14"\\r\\ndebug1: re-parsing configuration\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug3: /etc/ssh/ssh_config line 55: Including file /etc/ssh/ssh_config.d/50-redhat.conf depth 0\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf\\r\\ndebug2: checking match for \\'final all\\' host 2001:4178:6:1416:0:b:a:14 originally 2001:4178:6:1416:0000:000b:a:14\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 3: matched \\'final\\'\\r\\ndebug2: match found\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1\\r\\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\\r\\ndebug3: gss kex names ok: 
[gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-]\\r\\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512]\\r\\ndebug3: expanded UserKnownHostsFile \\'~/.ssh/known_hosts\\' -> \\'/root/.ssh/known_hosts\\'\\r\\ndebug3: expanded UserKnownHostsFile \\'~/.ssh/known_hosts2\\' -> \\'/root/.ssh/known_hosts2\\'\\r\\ndebug1: auto-mux: Trying existing master\\r\\ndebug1: Control socket "/runner/cp/d04603209b" does not exist\\r\\ndebug3: ssh_connect_direct: entering\\r\\ndebug1: Connecting to 2001:4178:6:1416:0:b:a:14 [2001:4178:6:1416:0:b:a:14] port 22.\\r\\ndebug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x48\\r\\ndebug2: fd 3 setting O_NONBLOCK\\r\\ndebug1: connect to address 2001:4178:6:1416:0:b:a:14 port 22: Network is unreachable\\r\\nssh: connect to host 2001:4178:6:1416:0:b:a:14 port 22: Network is unreachable\\r\\n') fatal: [2001:4178:6:1416:0000:000b:a:14]: UNREACHABLE! 
=> { "changed": false, "msg": "Failed to connect to the host via ssh: OpenSSH_8.7p1, OpenSSL 3.2.1 30 Jan 2024\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug3: /etc/ssh/ssh_config line 55: Including file /etc/ssh/ssh_config.d/50-redhat.conf depth 0\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf\\r\\ndebug2: checking match for 'final all' host 2001:4178:6:1416:0000:000b:a:14 originally 2001:4178:6:1416:0000:000b:a:14\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 3: not matched 'final'\\r\\ndebug2: match not found\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1 (parse only)\\r\\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\\r\\ndebug3: gss kex names ok: [gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-]\\r\\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512]\\r\\ndebug1: configuration requests final Match pass\\r\\ndebug2: resolve_canonicalize: hostname 2001:4178:6:1416:0000:000b:a:14 is address\\r\\ndebug2: resolve_canonicalize: canonicalised address \\"2001:4178:6:1416:0000:000b:a:14\\" => \\"2001:4178:6:1416:0:b:a:14\\"\\r\\ndebug1: re-parsing configuration\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug3: /etc/ssh/ssh_config line 55: Including file /etc/ssh/ssh_config.d/50-redhat.conf depth 0\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf\\r\\ndebug2: checking match for 'final all' host 2001:4178:6:1416:0:b:a:14 originally 2001:4178:6:1416:0000:000b:a:14\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf line 3: matched 'final'\\r\\ndebug2: match found\\r\\ndebug3: /etc/ssh/ssh_config.d/50-redhat.conf 
line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1\\r\\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\\r\\ndebug3: gss kex names ok: [gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-]\\r\\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512]\\r\\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/root/.ssh/known_hosts'\\r\\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/root/.ssh/known_hosts2'\\r\\ndebug1: auto-mux: Trying existing master\\r\\ndebug1: Control socket \\"/runner/cp/d04603209b\\" does not exist\\r\\ndebug3: ssh_connect_direct: entering\\r\\ndebug1: Connecting to 2001:4178:6:1416:0:b:a:14 [2001:4178:6:1416:0:b:a:14] port 22.\\r\\ndebug3: set_sock_tos: set socket 3 IPV6_TCLASS 0x48\\r\\ndebug2: fd 3 setting O_NONBLOCK\\r\\ndebug1: connect to address 2001:4178:6:1416:0:b:a:14 port 22: Network is unreachable\\r\\nssh: connect to host 2001:4178:6:1416:0:b:a:14 port 22: Network is unreachable", "unreachable": true } PLAY RECAP ********************************************************************* 2001:4178:6:1416:0000:000b:a:14 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0 ``` ### Additional information I even did a tcpdump on the interface on my receptor node and i can see that when awx says `Network is unreachable` it DOES NOT try to connect. I can not see any tcp packet on the interface. When it is working i see normal tcp packets on the interface. I did some ICMP / manual SSH connection attempts and it always works. I run a ICMP test for about 60 minutes from the receptor node to the target server and no single package got dropped. Network is table and running without any issues.
closed
2024-06-13T22:37:18Z
2024-06-18T10:59:54Z
https://github.com/ansible/awx/issues/15271
[ "type:bug", "needs_triage", "community" ]
discostur
1
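The ssh debug output above shows OpenSSH canonicalising the zero-padded inventory address `2001:4178:6:1416:0000:000b:a:14` to the compressed form `2001:4178:6:1416:0:b:a:14` before connecting. When cross-checking inventories, logs and firewall rules that mix the two spellings, the stdlib `ipaddress` module can confirm they name the same host. This is only a sanity check; it does not diagnose the intermittent `Network is unreachable` itself:

```python
import ipaddress

# The inventory uses the zero-padded spelling; OpenSSH canonicalises it
# to the compressed form seen in the debug output above.
padded = "2001:4178:6:1416:0000:000b:a:14"
compressed = "2001:4178:6:1416:0:b:a:14"

def same_host(a: str, b: str) -> bool:
    """Compare two textual IP addresses by value, not by spelling."""
    return ipaddress.ip_address(a) == ipaddress.ip_address(b)

print(same_host(padded, compressed))   # → True
print(ipaddress.ip_address(padded))    # → 2001:4178:6:1416:0:b:a:14
```

`str()` of an `IPv6Address` always yields the compressed form, which is exactly what OpenSSH prints after canonicalisation.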
twopirllc/pandas-ta
pandas
322
stoch and stochrsi not working
python 3.9 0.2.97b0 **Describe the bug** ```sh Traceback (most recent call last): File "C:\Users\equal\AppData\Roaming\JetBrains\PyCharm2021.1\scratches\scratch_2.py", line 15, in <module> df["fastk"] = ta.stoch(high=df["high"], low=df["low"], close=df["close"], smooth_k=14) File "C:\Users\equal\PycharmProjects\event_que\venv\lib\site-packages\pandas\core\frame.py", line 3163, in __setitem__ self._set_item(key, value) File "C:\Users\equal\PycharmProjects\event_que\venv\lib\site-packages\pandas\core\frame.py", line 3243, in _set_item NDFrame._set_item(self, key, value) File "C:\Users\equal\PycharmProjects\event_que\venv\lib\site-packages\pandas\core\generic.py", line 3829, in _set_item self._mgr.insert(len(self._info_axis), key, value) File "C:\Users\equal\PycharmProjects\event_que\venv\lib\site-packages\pandas\core\internals\managers.py", line 1203, in insert block = make_block(values=value, ndim=self.ndim, placement=slice(loc, loc + 1)) File "C:\Users\equal\PycharmProjects\event_que\venv\lib\site-packages\pandas\core\internals\blocks.py", line 2751, in make_block return klass(values, ndim=ndim, placement=placement) File "C:\Users\equal\PycharmProjects\event_que\venv\lib\site-packages\pandas\core\internals\blocks.py", line 142, in __init__ raise ValueError( ValueError: Wrong number of items passed 2, placement implies 1 ``` **To Reproduce** ```python import ccxt import pandas as pd import pandas_ta as ta exchange = ccxt.binance() bars = exchange.fetch_ohlcv('BTC/USDT', timeframe='1m', limit=100) df = pd.DataFrame(bars[:-1], columns=['timestamp', 'open', 'high', 'low', 'close', 'volume']) df.set_index(pd.DatetimeIndex(df["timestamp"]), inplace=True) df.ta.cores = 16 df["fastk"] = ta.stoch(high=df["high"], low=df["low"], close=df["close"], smooth_k=14) print(df) ``` **Expected behavior** i would expect to see a dataframe with pandas data series of the stoch
closed
2021-06-30T14:30:06Z
2021-07-08T22:49:55Z
https://github.com/twopirllc/pandas-ta/issues/322
[ "wontfix", "info" ]
ReubenHawley
4
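The `ValueError: Wrong number of items passed 2` above is the usual symptom of assigning a two-column result to a single DataFrame column: `ta.stoch()` returns both %K and %D. A minimal sketch of the shape problem and two ways around it, using a toy two-column frame in place of the real `ta.stoch()` output so the example does not depend on pandas-ta:

```python
import pandas as pd

df = pd.DataFrame({"close": [10.0, 11.0, 12.0]})
# Stand-in for ta.stoch(...), which returns a two-column DataFrame (%K, %D).
stoch = pd.DataFrame(
    {"STOCHk": [1.0, 2.0, 3.0], "STOCHd": [4.0, 5.0, 6.0]},
    index=df.index,
)

# Option 1: take just the column you want.
df["fastk"] = stoch.iloc[:, 0]

# Option 2: assign both columns at once (positional, via numpy).
df[["k", "d"]] = stoch.to_numpy()

print(df.columns.tolist())  # → ['close', 'fastk', 'k', 'd']
```

Either way avoids handing pandas a two-column object for a one-column slot.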
RobertCraigie/prisma-client-py
pydantic
268
Add support for inline conditionals for query building
## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Currently it is annoying to write update mutation data with conditional inputs, for example: ```py data: UserUpdateInput = {} if name: data["name"] = name if colour != 'red': data["colour"] = colour await User.prisma().update( data=data, where={"id": user_id}, ) ``` ## Suggested solution <!-- A clear and concise description of what you want to happen. --> We should support something like this: ```py await User.prisma().update( data={ 'name': name if name else prisma.omit, 'colour': colour if colour != 'red' else prisma.omit, }, where={ 'id': user_id, }, ) ``` We may also want to export the `prisma.omit` special value to the client instance as well to avoid an extra import.
open
2022-02-03T23:28:29Z
2022-02-03T23:28:43Z
https://github.com/RobertCraigie/prisma-client-py/issues/268
[ "kind/improvement", "topic: types", "topic: client", "level/advanced", "priority/medium" ]
RobertCraigie
0
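Until something like `prisma.omit` exists, the same ergonomics can be approximated in user code with a module-level sentinel. `omit` and `build_payload` below are illustrative names only, not part of the Prisma Client Python API:

```python
# A hypothetical "omit" sentinel: any key whose value is `omit` is
# dropped before the payload is handed to the client.
omit = object()

def build_payload(**fields):
    """Drop every field whose value is the omit sentinel."""
    return {k: v for k, v in fields.items() if v is not omit}

name = "alice"
colour = "red"
data = build_payload(
    name=name if name else omit,
    colour=colour if colour != "red" else omit,
)
print(data)  # → {'name': 'alice'}
```

Using `is not omit` (identity, not equality) means falsy-but-valid values like `0` or `""` still make it into the payload.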
tiangolo/uvicorn-gunicorn-fastapi-docker
fastapi
52
How to run as a user and not root
Hello, I need to run as a user other than root ``` FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-alpine3.10 COPY ./app /app COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt \ && addgroup -S appgroup && adduser -S appuser -G appgroup USER appuser ``` Yet when I run image from above dockerfile it errors with below msg Same code works without issues when I remove `USER` line above and run as root user ``` Checking for script in /app/prestart.sh Running script /app/prestart.sh Running inside /app/prestart.sh, you could add migrations to this file, e.g.: #! /usr/bin/env bash # Let the DB start sleep 10; # Run migrations alembic upgrade head {"loglevel": "info", "workers": 2, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 120, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"} [2020-06-18 19:16:07 +0000] [1] [INFO] Starting gunicorn 20.0.4 [2020-06-18 19:16:07 +0000] [1] [ERROR] Retrying in 1 second. [2020-06-18 19:16:08 +0000] [1] [ERROR] Retrying in 1 second. [2020-06-18 19:16:09 +0000] [1] [ERROR] Retrying in 1 second. [2020-06-18 19:16:10 +0000] [1] [ERROR] Retrying in 1 second. [2020-06-18 19:16:11 +0000] [1] [ERROR] Retrying in 1 second. [2020-06-18 19:16:12 +0000] [1] [ERROR] Can't connect to ('0.0.0.0', 80) ``` what am I missing?
closed
2020-06-18T19:19:02Z
2020-11-03T16:33:49Z
https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/52
[]
hopenbr
3
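The `[ERROR] Can't connect to ('0.0.0.0', 80)` retries are gunicorn failing to *bind* port 80: on Linux, ports below 1024 normally require root (or `CAP_NET_BIND_SERVICE`), so dropping to `appuser` makes the bind fail. The usual workaround is to listen on a high port (e.g. via the image's `PORT` environment variable, if I read its docs correctly) and map it from the host. A small sketch of the underlying restriction:

```python
import socket

def can_bind(port: int) -> bool:
    """Return True if this process is allowed to bind the given TCP port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("0.0.0.0", port))
        return True
    except PermissionError:
        # On Linux, ports < 1024 need root or CAP_NET_BIND_SERVICE.
        return False
    finally:
        s.close()

print(can_bind(0))   # ephemeral high port → True
```

For an unprivileged user on a stock kernel, `can_bind(80)` would return False for the same reason gunicorn keeps retrying above.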
slackapi/python-slack-sdk
asyncio
1,520
Curl Request for Slack File Upload using files.completeUploadExternal
I have tried python slack sdk for files upload v2 it is working fine but this is not related to slack sdk I have followed those 3 steps but the file not uploaded please take this request thankyou. curl -F filename="a.txt" -F length="581" -H "Authorization: Bearer [TOKEN]" https://slack.com/api/files.getUploadURLExternal curl -F filename="@a.txt" -H "Authorization: Bearer [TOKEN]" -v POST https://files.slack.com/upload/v1/CwABAAAA4woAAbyqtuwP6glQCgACF9zc-uR07lAMAAMLAAEAAAAJRTI3U0ZHUzJXCwACAAAAC1UwNzk5RDdNVzY4CwADAAAAC0YwN0E0NTlUWjdFAAoABAAAAAAAAAJFCwAFAAAAMGV5SlBJam9pUlRJM1UwWkhVekpYSWl3aVJpSTZJa1l3TjBFME5UbFVXamRGSW4wPQsABgAAAEthcm46YXdzOmttczp1cy1lYXN0LTE6MjIyNjg1NTU5NTg4OmtleS8zYzA4YzViOS04OTM0LTRjNjItYmIzZC1lNjQwYjhhNmIyOWIACwACAAAAFHZ7csHyC81im6bPYPgNlxom4mtvAA curl -F 'files=[{"id":"F07A459TZ7E","title":"a.txt"}]' -F channels=C123456 -H "Authorization: Bearer [TOKEN]" -v POST https://slack.com/api/files.completeUploadExternal this is simple text file of 1 line
closed
2024-06-27T13:19:16Z
2024-06-27T20:34:04Z
https://github.com/slackapi/python-slack-sdk/issues/1520
[ "question" ]
Bhargava1999
38
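Two details of `files.completeUploadExternal` are easy to get wrong from raw curl, if I read the Web API docs correctly: the channel argument is named `channel_id` (not `channels`, as the last command above uses), and `files` must be a JSON-encoded array. A hypothetical helper that builds those form fields:

```python
import json

def complete_upload_fields(file_id: str, title: str, channel_id: str) -> dict:
    """Build form fields for files.completeUploadExternal.

    Illustrative helper, not part of any SDK; `channel_id` is the
    parameter name as documented for this Web API method.
    """
    return {
        "files": json.dumps([{"id": file_id, "title": title}]),
        "channel_id": channel_id,
    }

fields = complete_upload_fields("F07A459TZ7E", "a.txt", "C123456")
print(fields["files"])  # → [{"id": "F07A459TZ7E", "title": "a.txt"}]
```

Passing a shell literal like `files=[{"id":...}]` without JSON quoting is a common cause of a silently unshared file.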
encode/httpx
asyncio
2,694
Square brackets no longer work
A GET request such as `/?arr[]=1&arr[]=2` is perfectly legal, and it means that you are sending an array of values to the endpoint. Unfortunately, after 0.23.3 this stopped working, because requests are now sent as `/?arr%5B%5D=1&arr%5B%5D=2`.
closed
2023-05-04T09:48:59Z
2023-05-09T13:20:14Z
https://github.com/encode/httpx/issues/2694
[]
sathia-musso
3
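The behaviour change is visible with nothing but the stdlib: by default `urllib.parse.urlencode` percent-escapes the brackets (what newer httpx sends), while `safe="[]"` keeps them literal (what 0.23.3 sent and what PHP/Rails-style servers expect). Strictly speaking RFC 3986 reserves `[` and `]`, so both spellings occur in the wild:

```python
from urllib.parse import urlencode

params = {"arr[]": [1, 2]}

# Default: brackets are percent-escaped, as newer httpx sends them.
print(urlencode(params, doseq=True))             # → arr%5B%5D=1&arr%5B%5D=2

# safe="[]" keeps the brackets literal, matching the pre-0.23.3 output.
print(urlencode(params, doseq=True, safe="[]"))  # → arr[]=1&arr[]=2
```

`doseq=True` is what expands the list value into repeated `arr[]=...` pairs.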
sanic-org/sanic
asyncio
2,527
Add `os.sendfile` and `os.splice` support
Basically `os.sendfile` can be used to copy bytes from a file descriptor to a socket (e.g. to serve static files) and `os.splice` can be used to copy bytes from a pipe to a socket. In both cases copying the bytes to/from the userspace Python process is bypassed. This is just an example snippet of what I would like to do with Sanic: ```python import os from http.server import BaseHTTPRequestHandler from subprocess import PIPE, Popen from prometheus_client import Counter counter_streamed_bytes = Counter( "backup_service_streamed_bytes", "Bytes transmitted") counter_finished_streams = Counter( "backup_service_finished_streams", "Seemingly successfully finished streamings", ["exitcode"]) class Handler(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header('Content-type', 'application/gzip') self.end_headers() cmd = "dd", "if=/dev/urandom", "bs=1024", "count=1024" proc = Popen(cmd, stdout=PIPE, stdin=PIPE) while True: # kernel-space copy from the pipe to the socket bytes_sent = os.splice(proc.stdout.fileno(), self.wfile.fileno(), 65536) if not bytes_sent: break counter_streamed_bytes.inc(bytes_sent) proc.stdin.close() while True: # drain any remainder left in the pipe buf = proc.stdout.read() if not buf: break proc.stdout.close() proc.wait() counter_finished_streams.labels(proc.returncode).inc() ```
closed
2022-08-16T07:28:16Z
2022-08-19T13:28:12Z
https://github.com/sanic-org/sanic/issues/2527
[ "question" ]
laurivosandi
8
benbusby/whoogle-search
flask
870
Is this a temporary problem?
Hi, I'm deploying on Portainer but I receive this deployment error: > failed to deploy a stack: whoogle Pulling whoogle Error Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io: Temporary failure in name resolution Is there a known problem? Thank you
closed
2022-10-29T09:28:26Z
2022-10-29T10:36:55Z
https://github.com/benbusby/whoogle-search/issues/870
[ "question" ]
banphi
2
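`Temporary failure in name resolution` means the Docker daemon could not resolve `registry-1.docker.io` at all, which points to a DNS problem on the host/Portainer side rather than a whoogle or registry outage. A quick check of the system resolver from Python (a sketch; run it on the affected host):

```python
import socket

def can_resolve(host: str) -> bool:
    """True if `host` resolves via the system resolver."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# On the failing host this would presumably return False:
# can_resolve("registry-1.docker.io")
print(can_resolve("localhost"))  # → True
```

If `can_resolve` fails for the registry host but works for `localhost`, the fix lies in the host's DNS configuration (e.g. `/etc/resolv.conf` or the Docker daemon's `dns` setting), not in the stack file.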
christabor/flask_jsondash
plotly
7
Need a better way to configure which charts and libs to load
Currently, it loads anything and everything, including obscure javascript libraries for certain scenarios. It would make more sense to load them dynamically based on a user config, which would populate both the library link and the select field data. The actual libs could still live inside the repo.
closed
2016-05-02T19:54:20Z
2016-05-03T18:24:08Z
https://github.com/christabor/flask_jsondash/issues/7
[]
christabor
0
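One way to structure this is a single mapping from library name to its script URLs and chart types, from which both the script tags and the select-field options are derived. The names below (`CHART_LIBS`, `enabled_assets`) are purely illustrative, not flask_jsondash code:

```python
# Hypothetical config: each entry lists the JS assets a library needs
# and the chart types it contributes to the select field.
CHART_LIBS = {
    "d3": {"js": ["d3.min.js"], "charts": ["treemap", "dendrogram"]},
    "plotly": {"js": ["plotly.min.js"], "charts": ["plotly-any"]},
    "c3": {"js": ["d3.min.js", "c3.min.js"], "charts": ["line", "bar"]},
}

def enabled_assets(user_config):
    """Collect script URLs and chart choices for the libs a user enables."""
    scripts, charts = [], []
    for lib in user_config:
        spec = CHART_LIBS[lib]
        for js in spec["js"]:
            if js not in scripts:  # dedupe shared deps like d3
                scripts.append(js)
        charts.extend(spec["charts"])
    return scripts, charts

scripts, charts = enabled_assets(["d3", "c3"])
print(scripts)  # → ['d3.min.js', 'c3.min.js']
```

The template would then emit one `<script>` per entry in `scripts` and populate the chart-type select from `charts`, so obscure libraries load only when a user opts in.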
plotly/plotly.py
plotly
4,425
Mapbox stamen styles not working
It appears that the stamen styles (stamen-terrain, stamen-toner, stamen-watercolor) no longer work on mapbox maps. Using one of these styles, just gives a blank map under the data provided to the plotting function. I could provide an example of trying this myself, but it appears that the example pages using this style also have this problem such as https://plotly.com/python/mapbox-density-heatmaps/. Is this a bug or is there some issue with mapbox no longer supporting stamen styles?
closed
2023-11-15T22:27:00Z
2024-05-23T16:45:25Z
https://github.com/plotly/plotly.py/issues/4425
[]
shawnrosofsky
12
deepfakes/faceswap
machine-learning
1,482
No TensorFlow 2.10 version on MacBook
02/01/2025 17:54:06 ERROR The maximum supported Tensorflow is version (2, 10) but you have version (2, 18) installed. Please downgrade Tensorflow. (faceswap) ll@MacBook-Pro faceswap % pip install tensorflow==2.10 ERROR: Could not find a version that satisfies the requirement tensorflow==2.10 (from versions: 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0) ERROR: No matching distribution found for tensorflow==2.10
open
2025-02-01T10:01:13Z
2025-02-01T10:01:13Z
https://github.com/deepfakes/faceswap/issues/1482
[]
isold23
0
open-mmlab/mmdetection
pytorch
11,299
How can I extract the post-processing in mmdetection-main?
For example, for RetinaNet: the device I use does not support the NMS operator, so I want to extract the post-processing part that contains NMS, convert the part before it into a standalone ONNX file, and convert the later part into another ONNX file. That way my device can run the ONNX model that excludes the post-processing and NMS operator on its own.
open
2023-12-20T06:43:53Z
2023-12-20T06:44:20Z
https://github.com/open-mmlab/mmdetection/issues/11299
[]
Jasonlaiya
0
biolab/orange3
scikit-learn
6,587
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
orange can not be started ``` export QT_DEBUG_PLUGINS=1 python -m Orange.canvas Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway. QFactoryLoader::QFactoryLoader() checking directory path "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms" ... QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqeglfs.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqeglfs.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "eglfs" ] }, "archreq": 0, "className": "QEglFSIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("eglfs") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqlinuxfb.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqlinuxfb.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "linuxfb" ] }, "archreq": 0, "className": "QLinuxFbIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("linuxfb") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqminimal.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqminimal.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "minimal" ] }, "archreq": 0, "className": "QMinimalIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("minimal") QFactoryLoader::QFactoryLoader() looking at 
"/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqminimalegl.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqminimalegl.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "minimalegl" ] }, "archreq": 0, "className": "QMinimalEglIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("minimalegl") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqoffscreen.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqoffscreen.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "offscreen" ] }, "archreq": 0, "className": "QOffscreenIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("offscreen") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqvnc.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqvnc.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "vnc" ] }, "archreq": 0, "className": "QVncIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("vnc") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwayland-egl.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwayland-egl.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "wayland-egl" ] }, "archreq": 0, "className": "QWaylandEglPlatformIntegrationPlugin", "debug": false, "version": 
331520 } Got keys from plugin meta data ("wayland-egl") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwayland-generic.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwayland-generic.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "wayland" ] }, "archreq": 0, "className": "QWaylandIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("wayland") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwayland-xcomposite-egl.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwayland-xcomposite-egl.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "wayland-xcomposite-egl" ] }, "archreq": 0, "className": "QWaylandXCompositeEglPlatformIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("wayland-xcomposite-egl") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwayland-xcomposite-glx.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwayland-xcomposite-glx.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "wayland-xcomposite-glx" ] }, "archreq": 0, "className": "QWaylandXCompositeGlxPlatformIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("wayland-xcomposite-glx") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwebgl.so" Found metadata in lib 
/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqwebgl.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "webgl" ] }, "archreq": 0, "className": "QWebGLIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("webgl") QFactoryLoader::QFactoryLoader() looking at "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqxcb.so" Found metadata in lib /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqxcb.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "xcb" ] }, "archreq": 0, "className": "QXcbIntegrationPlugin", "debug": false, "version": 331520 } Got keys from plugin meta data ("xcb") QFactoryLoader::QFactoryLoader() checking directory path "/home/dust/miniconda3/bin/platforms" ... Cannot load library /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqxcb.so: (/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/../../lib/libQt5XcbQpa.so.5: undefined symbol: _ZN23QPlatformVulkanInstance22presentAboutToBeQueuedEP7QWindow, version Qt_5_PRIVATE_API) QLibraryPrivate::loadPlugin failed on "/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqxcb.so" : "Cannot load library /home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/libqxcb.so: (/home/dust/miniconda3/lib/python3.10/site-packages/PyQt5/Qt5/plugins/platforms/../../lib/libQt5XcbQpa.so.5: undefined symbol: _ZN23QPlatformVulkanInstance22presentAboutToBeQueuedEP7QWindow, version Qt_5_PRIVATE_API)" qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. 
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb. Aborted (core dumped) ``` ubuntu 23.04 please offer orange at: https://flathub.org/ https://snapcraft.io/
closed
2023-09-25T19:09:26Z
2023-10-05T12:34:36Z
https://github.com/biolab/orange3/issues/6587
[ "bug report" ]
dustofdust
2
aminalaee/sqladmin
sqlalchemy
726
Not able to edit fields in sql admin when using s3 filetype
### Checklist - [X] The bug is reproducible against the latest release or `master`. - [X] There are no similar issues or pull requests to fix it yet. ### Describe the bug Hey, I raised this earlier in fastapi-storages as well. I am not able to edit other fields once I have saved an entry in my table with a column of file type. It asks me to upload an image to save the model. My model looks like this: `class PartnerDashboardPlaceholder1(Base): __tablename__ = "Carousel1" id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) title = Column(String(255), nullable=True) subtitle = Column(String(255), nullable=True) image = Column(FileType(), nullable=True) # image_url = Column(String(255), nullable=True) read_more_link: Mapped[str] = mapped_column(String(), default=None, nullable=True) is_active: Mapped[bool] = mapped_column(Boolean(), default=False) updated_at = Column(DateTime, server_default=func.now(), onupdate=func.now())` I get this error when I try to save it: ` (builtins.AttributeError) 'NoneType' object has no attribute 'read' [SQL: UPDATE "Carousel1" SET title=$1::VARCHAR, subtitle=$2::VARCHAR, image=$3::VARCHAR, updated_at=$4::TIMESTAMP WITHOUT TIME ZONE WHERE "Carousel1".id = $5::INTEGER] [parameters: [{'image': <starlette.datastructures.UploadFile object at 0x7fb2383e9110>, 'title': 'Test data', 'subtitle': 'Test123123123', 'updated_at': datetime.datetime(2024, 1, 16, 11, 13, 55), 'Carousel1_id': 1}]] ` ![Capture](https://github.com/aminalaee/fastapi-storages/assets/10103234/371fb537-c262-447c-a70a-cef5fe33bb77) Also, the field doesn't show anything even though there is a file saved in it. Can someone help if I am doing something wrong? 
### Steps to reproduce the bug _No response_ ### Expected behavior _No response_ ### Actual behavior _No response_ ### Debugging material _No response_ ### Environment Python - 3.11.4 sqladmin - 0.16.1 fastapi-storages - 0.3.0 ### Additional context Raised this earlier in fastapi storages as well and it was pointed out that this may be an issue with sqladmin.
closed
2024-03-11T10:27:59Z
2024-10-03T15:10:53Z
https://github.com/aminalaee/sqladmin/issues/726
[]
Shauryadhaka
4
slackapi/python-slack-sdk
asyncio
672
File with multi-byte characters in the filename can not be uploaded.
### Description File with multi-byte characters in the filename can not be uploaded. ### What type of issue is this? (place an `x` in one of the `[ ]`) - [x] bug - [ ] enhancement (feature request) - [ ] question - [ ] documentation related - [ ] testing related - [ ] discussion ### Requirements (place an `x` in each of the `[ ]`) * [x] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slackclient/blob/master/.github/contributing.md) and have done my best effort to follow them. * [x] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct). * [x] I've searched for any related issues and avoided creating a duplicate issue. --- ### Bug Report #### Reproducible in: slackclient version: 2.5.0 python version: 3.8.1 OS version(s): Xubuntu 18.04 #### Steps to reproduce: 1. Run the following python code. ```python client = slack.WebClient(token=MY_TOKEN) client.files_upload(channels="#random", file="あ.txt") ``` `あ.txt` exists in the current directory. #### Expected result: I expected that `あ.txt` is uploaded. #### Actual result: The following error occurred. ``` File "/home/vagrant/.pyenv/versions/3.8.1/lib/python3.8/site-packages/slack/web/client.py", line 970, in files_upload return self.api_call("files.upload", files={"file": file}, data=kwargs) File "/home/vagrant/.pyenv/versions/3.8.1/lib/python3.8/site-packages/slack/web/base_client.py", line 171, in api_call return self._event_loop.run_until_complete(future) File "/home/vagrant/.pyenv/versions/3.8.1/lib/python3.8/asyncio/base_events.py", line 612, in run_until_complete return future.result() File "/home/vagrant/.pyenv/versions/3.8.1/lib/python3.8/site-packages/slack/web/base_client.py", line 207, in _send f = open(v.encode("ascii", "ignore"), "rb") FileNotFoundError: [Errno 2] No such file or directory: b'/home/vagrant/Downloads/.txt' ```
closed
2020-05-07T18:30:14Z
2020-05-15T04:02:14Z
https://github.com/slackapi/python-slack-sdk/issues/672
[ "Version: 2x", "bug", "web-client" ]
yuji38kwmt
2
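The failing line in the traceback above, `v.encode("ascii", "ignore")`, silently drops non-ASCII bytes from the path before the file is opened. A minimal sketch reproducing that behavior (the path is hypothetical, modeled on the traceback):

```python
# Sketch of the bug: encoding a path as ASCII with errors="ignore"
# silently removes multi-byte characters such as "あ".
path = "/home/vagrant/Downloads/あ.txt"
stripped = path.encode("ascii", "ignore")  # non-ASCII bytes are dropped

# This is exactly the nonexistent file the traceback shows being opened.
assert stripped == b"/home/vagrant/Downloads/.txt"

# Passing the str path through unchanged (or encoding as UTF-8)
# would preserve the filename.
assert path.encode("utf-8").decode("utf-8") == path
```

This suggests why only files with multi-byte names fail: ASCII-only names survive the `ignore` error handler untouched.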
CorentinJ/Real-Time-Voice-Cloning
deep-learning
1,020
ValueError: operands could not be broadcast together with shapes (2400,) (4000,) (2400,)
ValueError Traceback (most recent call last) in () 14 print("first record a voice or upload a voice file!") 15 else: ---> 16 synthesize(embedding, text) 2 frames in synthesize(embed, text) 6 #with io.capture_output() as captured: 7 specs = synthesizer.synthesize_spectrograms([text], [embed]) ----> 8 generated_wav = vocoder.infer_waveform(specs[0]) 9 generated_wav = np.pad(generated_wav, (0, synthesizer.sample_rate), mode="constant") 10 clear_output() /content/Real-Time-Voice-Cloning/vocoder/inference.py in infer_waveform(mel, normalize, batched, target, overlap, progress_callback) 61 mel = mel / hp.mel_max_abs_value 62 mel = torch.from_numpy(mel[None, ...]) ---> 63 wav = _model.generate(mel, batched, target, overlap, hp.mu_law, progress_callback) 64 return wav /content/Real-Time-Voice-Cloning/vocoder/models/fatchord_version.py in generate(self, mels, batched, target, overlap, mu_law, progress_callback) 251 fade_out = np.linspace(1, 0, 20 * self.hop_length) 252 output = output[:wave_len] --> 253 output[-20 * self.hop_length:] *= fade_out 254 255 self.train() ValueError: operands could not be broadcast together with shapes (2400,) (4000,) (2400,) Can anyone help with this error?
open
2022-02-22T15:53:40Z
2023-05-16T06:38:35Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1020
[]
LinkleZe
2
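The error in the record above is a plain NumPy broadcasting failure: the fade-out window is longer than the tail slice it is multiplied into. A self-contained reproduction, with array sizes taken from the traceback (the shapes imply `hop_length` is 200, which is an inference, not stated in the report):

```python
import numpy as np

# fade_out = np.linspace(1, 0, 20 * hop_length) -> 4000 samples,
# but the clipped output is only 2400 samples long, so the slice
# output[-4000:] is still just 2400 elements.
output = np.zeros(2400)
fade_out = np.linspace(1, 0, 4000)

try:
    output[-4000:] *= fade_out
    message = ""
except ValueError as err:
    message = str(err)  # "operands could not be broadcast together ..."
```

In other words, the generated waveform is shorter than the fade window, which typically happens with very short mel inputs.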
mlfoundations/open_clip
computer-vision
91
AttributeError: 'XClipAdapter' object has no attribute 'max_text_len'
When I run the code in README.md, my terminal shows "AttributeError: 'XClipAdapter' object has no attribute 'max_text_len'". I need help.
closed
2022-05-16T16:52:33Z
2022-05-16T16:53:50Z
https://github.com/mlfoundations/open_clip/issues/91
[]
etali
1
plotly/plotly.py
plotly
4,401
Warn about `write_image` hanging with latest Kaleido
https://github.com/plotly/Kaleido/issues/134 Unfixed problem, had to spend time debugging. A warning should be thrown, and it'd help to suggest the commonly-working fix of downgrading to `kaleido==0.1.0.post1` (`0.1.0` via `-conda-forge` also worked for me).
closed
2023-10-26T13:57:14Z
2024-10-25T14:10:39Z
https://github.com/plotly/plotly.py/issues/4401
[ "bug", "sev-3" ]
OverLordGoldDragon
6
plotly/dash-table
dash
759
N Largest & N Smallest filter support for rows
Like #756 but for rows instead of columns
closed
2020-04-20T22:11:58Z
2020-04-27T17:47:58Z
https://github.com/plotly/dash-table/issues/759
[]
chriddyp
1
jupyter/nbviewer
jupyter
752
Notebook validation failed
It always gives me the error message below when there is a Jupyter widget in the code. Also, the progress bar never works, with only 'A Jupyter Widget' showing below the code. Notebook Validation failed: {'version_major': 2, 'version_minor': 0, 'model_id': 'cb0d5407d8bf4818ab290a3d27430195'} is not valid under any of the given schemas: { "version_major": 2, "version_minor": 0, "model_id": "cb0d5407d8bf4818ab290a3d27430195" }
closed
2018-01-05T10:42:14Z
2018-07-08T01:39:59Z
https://github.com/jupyter/nbviewer/issues/752
[ "status:Need Info", "status:Inactive" ]
FENGSHAN95
2
koaning/scikit-lego
scikit-learn
462
[FEATURE] sample_weights for ZeroInflatedRegressor
Hi! Last time, I tried to use the `ZeroInflatedRegressor` together with your `DecayEstimator`. This resulted in an error because I forgot to implement the `sample_weights` keyword :D But this should be easy to do, just pass it completely to the classifier as well as parts of it (where the classifier says that the output is non-zero) to the regressor. Sounds good?
closed
2021-04-29T07:43:27Z
2021-05-02T13:45:53Z
https://github.com/koaning/scikit-lego/issues/462
[ "enhancement" ]
Garve
1
CatchTheTornado/text-extract-api
api
94
[bug] Problem with installation in docker
I encountered a problem where not all of the containers start, and as a result nothing works. > fastapi_app-1 | exec /app/scripts/entrypoint.sh: no such file or directory > fastapi_app-1 exited with code 1 > celery_worker-1 | exec /app/scripts/entrypoint.sh: no such file or directory > celery_worker-1 exited with code 1 It doesn't find the script file, even though it is present in this folder. I don't understand how to fix it.
open
2025-01-18T21:40:57Z
2025-02-06T16:31:59Z
https://github.com/CatchTheTornado/text-extract-api/issues/94
[]
mesheni
6
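One common cause of `exec ...: no such file or directory` for a script that clearly exists — an assumption here, not confirmed by the report — is Windows (CRLF) line endings: the kernel reads the shebang up to the newline and then looks for an interpreter literally named `/bin/sh\r`. A sketch of how to detect and fix that:

```python
# Hypothetical entrypoint script saved with Windows (CRLF) line endings.
data = b"#!/bin/sh\r\nexec python app.py\r\n"

shebang = data.split(b"\n", 1)[0]
# The trailing carriage return becomes part of the interpreter path,
# so the kernel reports "no such file or directory" for the script.
assert shebang == b"#!/bin/sh\r"

# Normalizing to LF (what `dos2unix` does) restores a valid shebang.
fixed = data.replace(b"\r\n", b"\n")
assert fixed.split(b"\n", 1)[0] == b"#!/bin/sh"
```

If that is the cause, converting the script's line endings (or adding a `.gitattributes` rule forcing LF for `*.sh`) usually resolves it.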
tqdm/tqdm
pandas
726
New Line
Even though `disable=True` is passed, there is still a new line character, that is produced. Is there a workaround? (I am using the hyperopt package, and even though I am disabling the progressbar, there is an annoying newline character. https://github.com/hyperopt/hyperopt/blob/master/hyperopt/fmin.py#L197)
open
2019-05-06T15:02:04Z
2019-07-20T03:53:23Z
https://github.com/tqdm/tqdm/issues/726
[ "question/docs ‽", "to-review 🔍", "p2-bug-warning ⚠" ]
r0f1
1
fastapi/sqlmodel
sqlalchemy
508
Generate SQLModel table class from dictionary
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python `class CRFForm(CRFFormBase, table=True): patient_id: Optional[uuid.UUID] = Field(default_factory=uuid.uuid4, primary_key=True, nullable=False, title="Generate a random UUID4 - PK") patient_name: str = Field(nullable=False, title='patient_name', description='patient_name', minlength=10, maxlength=50) age: int = Field(nullable=False, title='age', description='age', le=120) is_married: Optional[bool] = Field(default=True, title="is_married")` ``` ### Description Given the following input (for say). 
`input_dict = { "table_name": "CRFForm", "columns": [ { "name": "patient_id", "title": "", "description": "", "type": uuid.UUID, "default": uuid.uuid4(), "nullable": False, "primary": True }, { "name": "patient_name", "title": "", "description": "", "type": str, "minlength": 10, "maxlength": 50, "nullable": False, "unique": True }, { "name": "age", "title": "", "description": "", "type": int, "nullable": False, "le": 120 }, { "name": "is_married", "title": "", "description": "", "type": bool, "nullable": False } ] }` ### Wanted Solution output --->` Generate SQLModel class from the above dictionary by satisfying the table relations and constraints` ### Wanted Code ```python `class CRFForm(CRFFormBase, table=True): patient_id: Optional[uuid.UUID] = Field(default_factory=uuid.uuid4, primary_key=True, nullable=False, title="Generate a random UUID4 - PK") patient_name: str = Field(nullable=False, title='patient_name', description='patient_name', minlength=10, maxlength=50) age: int = Field(nullable=False, title='age', description='age', le=120) is_married: Optional[bool] = Field(default=True, title="is_married")` ``` ### Alternatives _No response_ ### Operating System Linux, Windows, macOS ### Operating System Details Windows 10 Laptop and tianglo 3.9 slimbust image ### SQLModel Version 0.8 ### Python Version 3.9 ### Additional Context _No response_
closed
2022-11-29T15:31:09Z
2022-12-16T15:43:05Z
https://github.com/fastapi/sqlmodel/issues/508
[ "feature" ]
Udayaprasad
5
activeloopai/deeplake
computer-vision
2,596
[BUG] Recursion Error
### Severity P0 - Critical breaking issue or missing functionality ### Current Behavior ``` /lib/python3.10/site-packages/deeplake/core/version_control/commit_node.py", line 28, in copy node = CommitNode(self.branch, self.commit_id) RecursionError: maximum recursion depth exceeded ``` I used 4 GPUs to pre-compute the data tensor and save it into a deeplake dataset. The workflow is like this: ``` a = deeplake('xxx', overwrite=False) for i in range(100000): a.extend(xxx) if i % 100 == 0: a.commit() ``` ### Steps to Reproduce ``` a = deeplake('xxx', overwrite=False) for i in range(100000): a.extend(xxx) if i % 100 == 0: a.commit() ``` ### Expected/Desired Behavior /lib/python3.10/site-packages/deeplake/core/version_control/commit_node.py", line 28, in copy node = CommitNode(self.branch, self.commit_id) RecursionError: maximum recursion depth exceeded ### Python Version _No response_ ### OS _No response_ ### IDE _No response_ ### Packages _No response_ ### Additional Context _No response_ ### Possible Solution _No response_ ### Are you willing to submit a PR? - [ ] I'm willing to submit a PR (Thank you!)
closed
2023-09-18T00:50:14Z
2023-09-21T20:25:42Z
https://github.com/activeloopai/deeplake/issues/2596
[ "bug" ]
ChawDoe
1
huggingface/transformers
pytorch
35,981
Docs: return type of `get_default_model_and_revision` might be incorrectly documented?
The return type here is documented as `Union[str, Tuple[str, str]]` https://github.com/huggingface/transformers/blob/d7188ba600e36d3fd191b12e19f1b3bb81a8404f/src/transformers/pipelines/base.py#L385-L387 The docstring just says `str` https://github.com/huggingface/transformers/blob/d7188ba600e36d3fd191b12e19f1b3bb81a8404f/src/transformers/pipelines/base.py#L404 But I think that only `Tuple[str, str]` might be correct? For example, if I run ```python from transformers import Pipeline # from pair_classification import PairClassificationPipeline from transformers.pipelines import PIPELINE_REGISTRY from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification from transformers.pipelines import PIPELINE_REGISTRY from transformers import pipeline from transformers.utils import direct_transformers_import, is_tf_available, is_torch_available import numpy as np def softmax(outputs): maxes = np.max(outputs, axis=-1, keepdims=True) shifted_exp = np.exp(outputs - maxes) return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) class PairClassificationPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "second_text" in kwargs: preprocess_kwargs["second_text"] = kwargs["second_text"] return preprocess_kwargs, {}, {} def preprocess(self, text, second_text=None): return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework) def _forward(self, model_inputs): return self.model(**model_inputs) def postprocess(self, model_outputs): logits = model_outputs.logits[0].numpy() probabilities = softmax(logits) best_class = np.argmax(probabilities) label = self.model.config.id2label[best_class] score = probabilities[best_class].item() logits = logits.tolist() return {"label": label, "score": score, "logits": logits} PIPELINE_REGISTRY.register_pipeline( "custom-text-classification", pipeline_class=PairClassificationPipeline, pt_model=AutoModelForSequenceClassification if is_torch_available() else 
None, tf_model=TFAutoModelForSequenceClassification if is_tf_available() else None, default={"pt": ("hf-internal-testing/tiny-random-distilbert", "2ef615d")}, type="text", ) assert "custom-text-classification" in PIPELINE_REGISTRY.get_supported_tasks() _, task_def, _ = PIPELINE_REGISTRY.check_task("custom-text-classification") classifier = pipeline('custom-text-classification') ``` then I get ```python ValueError Traceback (most recent call last) <ipython-input-6-0cc5199a8521> in <cell line: 53>() 51 _, task_def, _ = PIPELINE_REGISTRY.check_task("custom-text-classification") 52 ---> 53 classifier = pipeline('custom-text-classification') /usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs) 898 if model is None: 899 # At that point framework might still be undetermined --> 900 model, default_revision = get_default_model_and_revision(targeted_task, framework, task_options) 901 revision = revision if revision is not None else default_revision 902 logger.warning( ValueError: too many values to unpack (expected 2) ``` It looks like `pipeline` expects a tuple, not a string --- Looks like this may have just been forgotten during #17667?
closed
2025-01-31T10:34:48Z
2025-02-13T10:59:16Z
https://github.com/huggingface/transformers/issues/35981
[]
MarcoGorelli
1
sammchardy/python-binance
api
1,396
Authentication using Ed25519 API keys
**Describe the improvement** Hi there, Is there a chance to use the Ed25519 keys to authenticate? Binance has implemented them July 2023. There's a description here but I didn't manage to implement it in python: https://binance-docs.github.io/apidocs/spot/en/#signed-trade-user_data-and-margin-endpoint-security Thanks a lot for your work everyone ( @sammchardy ) **To implement** Adding Ed25519 keys for binance-python project **Challenging part** Ed25519 is supported by newer Python version in Crypto module, but importKey function doesn't deal with that well... This is the announcement post: https://www.binance.com/en/support/announcement/binance-now-supports-ed25519-api-keys-2023-07-19-30372026b6af4fbbb9b38ab5c3f91755 > Users are advised to switch to Ed25519 API keys as they offer optimized API performance and enhanced security
closed
2024-01-31T19:24:30Z
2024-12-15T17:07:34Z
https://github.com/sammchardy/python-binance/issues/1396
[]
iden83
19
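For reference on the request above, signing a payload with an Ed25519 key is straightforward with the `cryptography` package; per Binance's API docs, the signature sent is the base64 encoding of the Ed25519 signature over the query string. The payload below is a made-up example, and the key is generated on the fly rather than loaded from the PEM registered with the exchange:

```python
import base64

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Throwaway key pair for illustration; in practice, load the private key
# whose public half is registered as the API key.
private_key = Ed25519PrivateKey.generate()

payload = b"symbol=BTCUSDT&side=BUY&timestamp=1706700000000"  # example only
signature = base64.b64encode(private_key.sign(payload)).decode("ascii")

# Round-trip check: verify() raises InvalidSignature on tampered input.
private_key.public_key().verify(base64.b64decode(signature), payload)
```

A library-level implementation would mainly need to branch on key type (HMAC vs RSA vs Ed25519) when building the `signature` query parameter.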
ydataai/ydata-profiling
pandas
1,130
Feature Request: pre-commit hook workflow to mimic github PR actions
### Missing functionality At the moment, there is no easy way to check that a PR will pass basics actions such as the linter, commit message format, etc. Therefore, when someone open a PR, it most likely fail, then the contributor has to either add more commits just to solve these issues or worse, reword/amend/rebase with a force-push (e.g. when a commit message is not well formatted). See for instance this PR: https://github.com/ydataai/pandas-profiling/pull/1127 It failed on the length of the commit message. The contributor has no other way than rewriting the history that already has been pushed to the remote branch. Not a good practice. This can be demotivating for contributors. They should be able to check that before pushing/open their PR. ### Proposed feature Use precommit hooks to mimic the behavior of the GitHub Action workflow triggered when a PR is opened. https://pre-commit.com/ The contributor would then see his commits checked before pushing and can adjust accordingly. ### Alternatives considered _No response_ ### Additional context _No response_
closed
2022-10-27T08:13:25Z
2022-11-15T16:20:46Z
https://github.com/ydataai/ydata-profiling/issues/1130
[ "code quality 📈" ]
aquemy
0
gee-community/geemap
jupyter
1,165
cloud filtering option in the dynamic_world_timeseries()
User feedback: dynamic_world_timeseries method doesn't have options for cloud filtering. Compare with " .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 35)); " used in https://developers.google.com/earth-engine/tutorials/community/introduction-to-dynamic-world-pt-1
closed
2022-07-29T23:01:10Z
2022-07-29T23:57:53Z
https://github.com/gee-community/geemap/issues/1165
[ "Feature Request" ]
simonff
3
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
726
How to upload an input image in the webpage demo?
The output image can be saved; however, an input image can't be uploaded. My model requires complex input, and using the line tool is really difficult. So is it possible to upload an image file as input? I understand a little Python, but no JavaScript. Maybe I shouldn't put this issue here, but I couldn't find a better place. Thanks! ![afdasf](https://user-images.githubusercontent.com/20597187/62614698-84e19d00-b93e-11e9-8f75-e84cc417eb8f.png)
closed
2019-08-07T10:13:09Z
2022-02-10T10:04:58Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/726
[]
RuikangSun
2
jumpserver/jumpserver
django
14,300
[Question] When managing a k8s cluster, during connection initialization (fetching namespace, pod, and container information), is the data fetched directly via an apiserver list, or is there a caching layer?
### Product version v4.20 ### Version type - [X] Community Edition - [ ] Enterprise Edition - [ ] Enterprise trial edition ### Installation method - [ ] Online installation (one-command install) - [ ] Offline package installation - [ ] All-in-One - [ ] 1Panel - [X] Kubernetes - [ ] Source installation ### Environment information linux aws ec2, deployed with helm ### 🤔 Problem description I would like to manage my cloud resources through JumpServer, and I ran into a question while evaluating the solution; I hope someone can help clarify. Our cluster is fairly large, so frequent access to the apiserver would put significant pressure on the cluster. My question: when JumpServer connects to a k8s resource, connection initialization fetches pod, namespace, and container information — does every connection fetch this directly from the apiserver, or does JumpServer maintain an internal caching layer? ### Expected result _No response_ ### Additional information _No response_
closed
2024-10-14T09:38:09Z
2024-11-28T03:25:14Z
https://github.com/jumpserver/jumpserver/issues/14300
[ "🤔 Question" ]
yxxchange
3
aminalaee/sqladmin
fastapi
93
Support for registering custom converters
### Checklist - [X] There are no similar issues or pull requests for this yet. ### Is your feature related to a problem? Please describe. There doesn't seem to be an obvious way to register converter functions with `@converts` or subclass `ModelConverter`. This might also be a bug where `ModelConverterBase.get_converter` is unable to recognize `TypeDecorator` types that extend a type that already has a converter. ### Describe the solution you would like. Possibly utilizing a global registry for `@converts`. ### Describe alternatives you considered _No response_ ### Additional context Encountered while trying to create a `ModelAdmin` for a `SQLModel` (related to #57) `Exception: Could not find field converter for column name (<class 'sqlmodel.sql.sqltypes.AutoString'>).` where `AutoString` extends `String` EDIT: Got it to work by setting the `sa_column=` on the SQLModel field: ```python class MyModel(SQLModel): # name: str = Field(..., index=True) # broken name: str = Field(..., sa_column=Column(String(length=512))) # works ``` I believe the feature request still has value
closed
2022-03-16T17:18:05Z
2022-06-15T07:56:20Z
https://github.com/aminalaee/sqladmin/issues/93
[ "enhancement" ]
lovetoburnswhen
6
dmlc/gluon-nlp
numpy
815
have `make test` clean up its files
## Description When I type `make test` in the gluon-nlp directory, it creates several files and doesn't clean them up: ``` imdb_lstm_200_0000.params logs net.params net.states test-682b5d15.bpe test_glob_00 test_glob_01 test_glob_11 test_numpy_dataset.npy test_numpy_dataset.npz test_tsv.tsv ``` I think it would be ideal to configure things so that these files go into a a temporary subdirectory, which would be deleted upon the completion of `make test`. Unless there are any issues with relative paths, I think it would be possible to do something like this: ``` #Makefile test: mkdir tmp cd tmp py.test -v --capture=no --durations=0 ../tests/unittest scripts cd .. rm tmp ``` Or, if that's not your style, there are other ways this could be done, too. ## References n/a
open
2019-07-05T03:35:23Z
2019-07-05T05:27:38Z
https://github.com/dmlc/gluon-nlp/issues/815
[ "enhancement" ]
forresti
3
plotly/dash
data-science
3,154
dcc.send_data_frame polars support
Currently, dcc.send_data_frame only supports pandas writers. I was wondering if we can add support for [polars](https://github.com/pola-rs/polars) as well. Previously, I had to do all my operations in polars and then convert to pandas at the end to utilize dcc.send_data_frame. However, I made a modified workaround which I am currently using.

```
import polars as pl
from dash import Dash, html, dcc, callback, Output, Input
import io
import base64


def polars_to_send_data_frame(df: pl.DataFrame, filename: str, **csv_kwargs):
    buffer = io.StringIO()
    df.write_csv(buffer, **csv_kwargs)
    return {
        'content': base64.b64encode(buffer.getvalue().encode('utf-8')).decode('utf-8'),
        'filename': filename,
        'type': 'text/csv',
        'base64': True
    }


app = Dash(__name__)

# Sample data
df = pl.DataFrame({
    'A': range(5),
    'B': ['foo', 'bar', 'baz', 'qux', 'quux'],
    'C': [1.1, 2.2, 3.3, 4.4, 5.5]
})

app.layout = html.Div([
    html.Button("Download CSV", id="btn"),
    dcc.Download(id="download")
])


@callback(
    Output("download", "data"),
    Input("btn", "n_clicks"),
    prevent_initial_call=True
)
def download_csv(n_clicks):
    return polars_to_send_data_frame(df, "data.csv")


if __name__ == '__main__':
    app.run(debug=True)
```

I am wondering if I can make a pr to add support for polars!
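The polars-specific workaround generalizes: any library whose writer accepts a text buffer can be adapted to the dict shape `dcc.Download` expects. A hedged, stdlib-only sketch — the helper name and signature are invented here, not part of Dash:

```python
import base64
import io

def send_text_frame(write_fn, filename, mime="text/csv"):
    """Adapt any writer(buffer) callable -- polars' df.write_csv,
    pandas' df.to_csv, or csv.writer-based code -- to the payload
    dict that dcc.Download understands."""
    buffer = io.StringIO()
    write_fn(buffer)  # the caller decides what gets written
    return {
        "content": base64.b64encode(buffer.getvalue().encode("utf-8")).decode("utf-8"),
        "filename": filename,
        "type": mime,
        "base64": True,
    }
```

With polars this would be called as `send_text_frame(df.write_csv, "data.csv")`, so only one adapter is needed regardless of the dataframe library.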
closed
2025-02-06T23:39:03Z
2025-02-06T23:39:56Z
https://github.com/plotly/dash/issues/3154
[]
omarirfa
1
OpenInterpreter/open-interpreter
python
747
🌋 LLaVA: Large Language and Vision Assistant support
### Is your feature request related to a problem? Please describe.

OpenAI is cool but it's also very expensive.

### Describe the solution you'd like

LLAVA could be a great candidate as an alternative to GPT4V.

`https://huggingface.co/mys/ggml_llava-v1.5-7b`

I was able to load it through LMStudio but unfortunately it crashes and it requires more work.

### Describe alternatives you've considered

_No response_

### Additional context

_No response_
closed
2023-11-11T01:55:11Z
2024-03-18T20:42:09Z
https://github.com/OpenInterpreter/open-interpreter/issues/747
[ "External" ]
ilteris
5
python-visualization/folium
data-visualization
1,599
map.get_bounds() fails for map with GeometryCollection
**Describe the bug**
If you add a GeoJson with a GeometryCollection to a Map, Map.get_bounds fails

**To Reproduce**

```
import folium

m = folium.Map(location=[39.949610, -75.150282], zoom_start=16)

geojson_data = {
    "geometries": [
        {
            "coordinates": [
                [
                    [-86.1570813, 39.7567006],
                    [-86.1570169, 39.7566965],
                    [-86.1570169, 39.7566429],
                    [-86.1566146, 39.7566181],
                    [-86.1566092, 39.7566676],
                    [-86.1565288, 39.7566965],
                    [-86.1567645, 39.7572846],
                    [-86.1568399, 39.7572821],
                    [-86.156904, 39.7574413],
                    [-86.1568345, 39.7574718],
                    [-86.1568131, 39.7585688],
                    [-86.1570223, 39.7585729],
                    [-86.1570227, 39.7585614],
                    [-86.1570809, 39.7567123],
                    [-86.1570813, 39.7567006],
                ]
            ],
            "type": "Polygon",
        },
    ],
    "type": "GeometryCollection",
}

folium.GeoJson(geojson_data).add_to(m)
m.get_bounds()
```

**Expected behavior**
I expected this to return a list of two points

**Environment (please complete the following information):**
- Browser [e.g. chrome, firefox]
- Jupyter Notebook or html files?
- Python version (check it with `import sys; print(sys.version_info)`) sys.version_info(major=3, minor=9, micro=11, releaselevel='final', serial=0)
- folium version (check it with `import folium; print(folium.__version__)`) 0.12.1.post1
- branca version (check it with `import branca; print(branca.__version__)`) 0.5.0

**Additional context**
Add any other context about the problem here.

**Possible solutions**
List any solutions you may have come up with. folium is maintained by volunteers. Can you help making a fix for this issue?
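One possible workaround — a sketch of mine, not an official folium fix — is to flatten the GeometryCollection into a FeatureCollection before handing it to `folium.GeoJson`, since per-feature geometries are the shape the bounds computation is normally fed:

```python
def geometry_collection_to_features(geojson):
    """Rewrite a GeoJSON GeometryCollection as a FeatureCollection.

    Each member geometry becomes its own Feature with empty properties;
    the coordinates themselves are left untouched.
    """
    if geojson.get("type") != "GeometryCollection":
        return geojson  # pass through anything that is already fine
    return {
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature", "properties": {}, "geometry": geom}
            for geom in geojson.get("geometries", [])
        ],
    }
```

`folium.GeoJson(geometry_collection_to_features(geojson_data)).add_to(m)` would then feed the map plain features instead of the collection that trips up `get_bounds`.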
closed
2022-05-26T21:28:07Z
2022-11-18T10:50:53Z
https://github.com/python-visualization/folium/issues/1599
[]
blackary
2
flasgger/flasgger
rest-api
436
Backward incompatible change in apispec 4.0.0
### References Breaking Commit: https://github.com/marshmallow-code/apispec/commit/ee8002b466aeebb753bdf93047198b3ff63f02d0#diff-d2965e63925ff25611aef4a29719e18095c4a035c9a855e65d4d8ea7908298b8 Code that is broken: https://github.com/flasgger/flasgger/blob/master/flasgger/marshmallow_apispec.py#L128 ### Logs ``` 127.0.0.1 - - [17/Oct/2020 10:08:31] "GET /apispec_1.json HTTP/1.1" 500 - Traceback (most recent call last): File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__ return self.wsgi_app(environ, start_response) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app response = self.handle_exception(e) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function return cors_after_request(app.make_response(f(*args, **kwargs))) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception reraise(exc_type, exc_value, tb) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise raise value File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function return cors_after_request(app.make_response(f(*args, **kwargs))) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise raise 
value File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/views.py", line 89, in view return self.dispatch_request(*args, **kwargs) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flask/views.py", line 163, in dispatch_request return meth(*args, **kwargs) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flasgger/base.py", line 133, in get return jsonify(self.loader()) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flasgger/base.py", line 399, in get_apispecs specs = get_specs( File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flasgger/utils.py", line 134, in get_specs convert_schemas(apispec_swag, apispec_definitions) File "/Users/kyrison/src/timeless-msg-api/venv/lib/python3.8/site-packages/flasgger/marshmallow_apispec.py", line 128, in convert_schemas new[k] = schema2parameters(v) TypeError: schema2parameters() missing 1 required keyword-only argument: 'location' ``` ``` marshmallow = "^3.8.0" apispec = "^4.0.0" ``` ### Workaround Downgrade to apispec <4.0.0 ``` $ poetry add apispec="<4.0.0" Updating dependencies Resolving dependencies... (0.7s) Writing lock file Package operations: 0 installs, 1 update, 0 removals - Updating apispec (4.0.0 -> 3.3.2) ``` ``` 127.0.0.1 - - [17/Oct/2020 10:26:56] "GET /apispec_1.json HTTP/1.1" 200 - ```
closed
2020-10-17T17:28:18Z
2020-12-23T09:50:45Z
https://github.com/flasgger/flasgger/issues/436
[]
kyrivanderpoel
0
miguelgrinberg/Flask-Migrate
flask
439
Detect if flask-migrate is running in create app
In the create_app function I have a function that adds a job to Flask-APScheduler; the interval is retrieved from the database.

```
from app.jobs import AddUpdateJob

with app.app_context():
    AddUpdateJob()
```

The problem is that when `flask db upgrade` is run it fails, because it tries to add the job to the scheduler but the database has not been initialised yet and therefore it cannot find the table that holds the value. The question I have is how I can determine if `flask db upgrade` is running and prevent the code that accesses the database from executing (like `if not app.debug`, etc.)?

Many thanks in advance and thanks for all the great work you do.
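One heuristic — my own sketch, not an official Flask-Migrate API — is to look at the command line the Flask CLI was invoked with: `flask db upgrade` leaves `db` as the first subcommand in `sys.argv`, so the scheduler setup can be skipped in that case:

```python
import sys

def running_flask_db_command(argv=None):
    """Return True when the process was started as `flask db <something>`.

    Checking argv is a heuristic: it works for the plain CLI invocation
    but would miss programmatic upgrades, so treat it as best-effort.
    """
    argv = sys.argv if argv is None else argv
    return len(argv) > 1 and argv[1] == "db"
```

In `create_app` this would guard the job registration: `if not running_flask_db_command(): AddUpdateJob()`.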
closed
2021-09-28T15:36:21Z
2021-09-28T18:12:15Z
https://github.com/miguelgrinberg/Flask-Migrate/issues/439
[ "question" ]
coolhva
2
babysor/MockingBird
deep-learning
70
Merge preprocess audio into one step
closed
2021-08-31T14:13:25Z
2021-08-31T14:13:40Z
https://github.com/babysor/MockingBird/issues/70
[]
babysor
0
Significant-Gravitas/AutoGPT
python
9,678
Make tests clean up their created DB objects
Migration `20250318043016_update_store_submissions_format` fails locally for me because some of the integration tests leave behind their test data in the DB and it violates one of the added uniqueness constraints: <img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/352c8f65-88bf-41c1-b4e7-a301f23a9fe1/0d21f9a9-07f9-488e-9f60-325efd836602?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi8zNTJjOGY2NS04OGJmLTQxYzEtYjRlNy1hMzAxZjIzYTlmZTEvMGQyMWY5YTktMDdmOS00ODhlLTlmNjAtMzI1ZWZkODM2NjAyIiwiaWF0IjoxNzQyODI0NzIzLCJleHAiOjMzMzEzMzg0NzIzfQ.dtPcQfZCVaiFMF3hRByKsaLM2_oWQqFL2ogx_7iv9pU " alt="image.png" width="2187" data-linear-height="354" /> The solution is to make sure all integration tests that interact with the DB also clean up their created objects afterwards.
open
2025-03-24T13:58:43Z
2025-03-24T13:59:14Z
https://github.com/Significant-Gravitas/AutoGPT/issues/9678
[ "DX", "tech debt" ]
Pwuts
0
jumpserver/jumpserver
django
14,736
[Question] dfdf
### Product Version df bdfbd ### Product Edition - [X] Community Edition - [ ] Enterprise Edition - [ ] Enterprise Trial Edition ### Installation Method - [X] Online Installation (One-click command installation) - [ ] Offline Package Installation - [ ] All-in-One - [ ] 1Panel - [ ] Kubernetes - [ ] Source Code ### Environment Information dfbdfbdfbdfbfd df df df df df df df ### 🤔 Question Description df df df ddf df df df df ### Expected Behavior _No response_ ### Additional Information _No response_
closed
2024-12-28T19:13:49Z
2024-12-30T06:13:42Z
https://github.com/jumpserver/jumpserver/issues/14736
[ "🤔 Question" ]
antonewaccount
1
langmanus/langmanus
automation
93
o-series model does not support temperature=0
Error log:

```bash
litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: O-series models don't support temperature=0.0. Only temperature=1 is supported. To drop unsupported openai params from the call, set `litellm.drop_params = True`
```

I tried o1 and o1-preview on Azure. Both of them reported this error. I also tried DeepSeek-R1 on Azure, but it reported a different issue... It does not support the `n` parameter? I tried editing the llm.py file to change the default temperature to 1.0, but it did not work either. Any clue?
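A small pre-flight filter illustrates the idea behind `litellm.drop_params = True`; this is my own sketch, and the model-name prefixes are assumptions rather than an exhaustive list of o-series models:

```python
def drop_unsupported_params(model, params):
    """Strip parameters an o-series model rejects before the API call.

    o1/o3-style models only accept temperature=1, so any other value is
    removed (falling back to the server default) instead of erroring.
    """
    o_series = model.startswith(("o1", "o3"))
    cleaned = dict(params)
    if o_series and cleaned.get("temperature") not in (None, 1, 1.0):
        cleaned.pop("temperature")
    return cleaned
```

The cleaned dict would then be passed to the completion call, leaving other models' parameters untouched.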
open
2025-03-21T06:25:05Z
2025-03-24T05:15:29Z
https://github.com/langmanus/langmanus/issues/93
[ "bug" ]
AspadaX
2
miguelgrinberg/Flask-Migrate
flask
352
error when using "flask db migrate"
I am coming back to a flask project that I was building earlier in the year and trying to reestablish the database connection to a new clean db. Previously, I could do `flask db migrate` and `flask db upgrade` with no problem; however, I am now receiving an error:

```
AssertionError: The sqlalchemy extension was not registered to the current application. Please make sure to call init_app() first.
```

It isn't until I move the `init_app` calls out of the main execution block that it actually works. I've never had this issue — what could be the problem?

```
if __name__ == "__main__":
    db.init_app(app)
    ma.init_app(app)
    app.run(port=5000, debug=True)
```
closed
2020-06-23T17:12:47Z
2020-06-27T21:28:37Z
https://github.com/miguelgrinberg/Flask-Migrate/issues/352
[ "question" ]
francis-chang
1
iterative/dvc
data-science
10,437
dvc exp apply - fails
Hi, I just created a new dvc repo. I destroy my previous repo and init a new one. Then I simply create a new pipeline and run an experiment then when trying to apply the experiment I hit this issue. ```bash dvc exp apply -vv test ``` ```python 2024-05-21 17:22:22,313 DEBUG: v3.50.0 (pip), CPython 3.10.12 on Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35 2024-05-21 17:22:22,313 DEBUG: command: /home/eamrerp/.local/bin/dvc exp apply -vv test 2024-05-21 17:22:22,314 TRACE: Namespace(quiet=0, verbose=2, cprofile=False, cprofile_dump=None, yappi=False, yappi_separate_threads=False, viztracer=False, viztracer_depth=None, viztracer_async=False, pdb=False, instrument=False, instrument_open=False, show_stack=False, cd='.', cmd='apply', force=True, experiment='test', func=<class 'dvc.commands.experiments.apply.CmdExperimentsApply'>, parser=DvcParser(prog='dvc', usage=None, description='Data Version Control', formatter_class=<class 'dvc.cli.formatter.RawTextHelpFormatter'>, conflict_handler='error', add_help=False)) 2024-05-21 17:22:22,612 ERROR: unexpected error - [Errno 5] Input/output error Traceback (most recent call last): File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/cli/__init__.py", line 211, in main ret = cmd.do_run() File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/cli/command.py", line 27, in do_run return self.run() File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/commands/experiments/apply.py", line 19, in run self.repo.experiments.apply(self.args.experiment) File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/repo/experiments/__init__.py", line 334, in apply return apply(self.repo, *args, **kwargs) File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/repo/__init__.py", line 57, in wrapper with lock_repo(repo): File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__ return next(self.gen) File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/repo/__init__.py", line 45, in lock_repo 
with repo.lock: File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/lock.py", line 137, in __enter__ self.lock() File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/lock.py", line 119, in lock lock_retry() File "/home/eamrerp/.local/lib/python3.10/site-packages/funcy/decorators.py", line 47, in wrapper return deco(call, *dargs, **dkwargs) File "/home/eamrerp/.local/lib/python3.10/site-packages/funcy/flow.py", line 99, in retry return call() File "/home/eamrerp/.local/lib/python3.10/site-packages/funcy/decorators.py", line 68, in __call__ return self._func(*self._args, **self._kwargs) File "/home/eamrerp/.local/lib/python3.10/site-packages/dvc/lock.py", line 110, in _do_lock self._lock = zc.lockfile.LockFile(self._lockfile) File "/home/eamrerp/.local/lib/python3.10/site-packages/zc/lockfile/__init__.py", line 120, in __init__ super().__init__(path) File "/home/eamrerp/.local/lib/python3.10/site-packages/zc/lockfile/__init__.py", line 100, in __init__ self._on_lock() File "/home/eamrerp/.local/lib/python3.10/site-packages/zc/lockfile/__init__.py", line 128, in _on_lock self._fp.truncate() OSError: [Errno 5] Input/output error 2024-05-21 17:22:22,690 DEBUG: link type reflink is not available ([Errno 95] no more link types left to try out) 2024-05-21 17:22:22,690 DEBUG: Removing '/mnt/d/repo/.pIjNm0rEPNQ8WUL3XAUPsA.tmp' 2024-05-21 17:22:22,697 DEBUG: Removing '/mnt/d/repo/.pIjNm0rEPNQ8WUL3XAUPsA.tmp' 2024-05-21 17:22:22,700 DEBUG: Removing '/mnt/d/repo/.pIjNm0rEPNQ8WUL3XAUPsA.tmp' 2024-05-21 17:22:22,703 DEBUG: Removing '/mnt/d/repo/common-repo/.dvc/cache/files/md5/.Q0W3MKEM_1tixXZIxgT5Gg.tmp' 2024-05-21 17:22:22,719 DEBUG: Version info for developers: DVC version: 3.50.0 (pip) ------------------------- Platform: Python 3.10.12 on Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35 Subprojects: dvc_data = 3.15.1 dvc_objects = 5.1.0 dvc_render = 1.0.2 dvc_task = 0.4.0 scmrepo = 3.3.1 Supports: http (aiohttp = 3.9.3, aiohttp-retry = 2.8.3), 
https (aiohttp = 3.9.3, aiohttp-retry = 2.8.3) Config: Global: /home/eamrerp/.config/dvc System: /etc/xdg/dvc Cache types: hardlink, symlink Cache directory: 9p on drvfs Caches: local Remotes: None Workspace directory: 9p on drvfs Repo: dvc, git Repo.site_cache_dir: /var/tmp/dvc/repo/80dc6aaf08a6c608dfe4ff9c3c907a02 ```
closed
2024-05-21T17:31:45Z
2024-07-24T15:09:50Z
https://github.com/iterative/dvc/issues/10437
[ "awaiting response" ]
marioperezj
4
litestar-org/litestar
pydantic
3,913
Bug: Unexpected logger behaviour. Unable to use build-in default logger in Litestar
### Description

Hi, I am currently using Litestar for my new services, and after FastAPI I can't get the logger to work correctly.

First: during development we use human-readable custom logs, and on the server we use json logs. I didn't find this fix in the documentation or in the source, so I guess it's not there. Additional mention of this [issue](https://github.com/litestar-org/litestar/issues/3827).

Second: after several tests with Litestar, I could not silence your default logger. Using a custom logger based on the default logger, where clear formatters, logging levels, handlers, and interceptors are defined, it doesn't work in Litestar the way I expect, even though it does in other parts of the service(s). For example, during local development I expect an exception to produce an expanded log in my pattern, but instead I get the default Litestar log.

P.S. I'm not sure if it's a bug or a feature, but in that case I'd suggest considering stopping your default logger initialization, or accepting a pre-created and prepared user logger, as is done in the [Faststream](https://faststream.airt.ai/latest/getting-started/logging/?h=logger#using-your-own-loggers) framework.
### URL to code causing the issue _No response_ ### MCVE ```python # log.py import json import logging import sys from dataclasses import dataclass from datetime import datetime from logging import Formatter, Handler, StreamHandler from logging.config import dictConfig from typing import TYPE_CHECKING, Annotated, Any import stackprinter from loguru import logger as loguru_logger from pydantic import Field @dataclass class SerializeJsonLog: timestamp: Annotated[str, Field(alias="timestamp")] timestamp: str thread: int | str | None env: str level: str source: str message: str exceptions: list[str] | str | None = None trace_id: str | None = None span_id: str | None = None parent_id: str | None = None props: str | None = None if TYPE_CHECKING: # pragma: no cover from types import FrameType LEVEL_TO_NAME: dict[int, str] = { logging.CRITICAL: "Critical", logging.ERROR: "Error", logging.WARNING: "Warning", logging.INFO: "Info", logging.DEBUG: "Debug", logging.NOTSET: "Trace", } class LoguruInterceptHandler(Handler): def emit(self, record: logging.LogRecord) -> None: # Get corresponding Loguru level if it exists. frame: FrameType | None level: str | int try: level = loguru_logger.level(record.levelname).name except ValueError: level = record.levelno # Find caller from where originated the logged message. 
frame = logging.currentframe() depth = 0 while frame and (depth == 0 or frame.f_code.co_filename == logging.__file__): frame = frame.f_back depth += 1 loguru_logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage()) class JSONLogFormatter(Formatter): """Custom class-formatter for writing logs to json.""" def __init__(self, app_env: str, *args: Any, **kwargs: Any) -> None: self.app_env: str = app_env super().__init__(*args, **kwargs) def format(self, record: logging.LogRecord, *args: list, **kwargs: dict) -> str: """Write LogRecord to json.""" log_object: dict = self._format_log_object(record) return json.dumps(log_object, ensure_ascii=False) def _format_log_object(self, record: logging.LogRecord) -> dict: now = datetime.fromtimestamp(record.created).astimezone().isoformat() if "/site-packages/" in record.pathname: parts = record.pathname.split("/site-packages/")[-1].split("/") module_path = ".".join(parts).replace(".py", "") else: module_path = record.name log_obj = SerializeJsonLog( timestamp=now, thread=record.process, env=self.app_env, level=LEVEL_TO_NAME[record.levelno], source=f"{module_path}:{record.funcName}:{record.lineno}", message=record.getMessage(), ) if hasattr(record, "props"): log_obj.props = record.props if record.exc_info: # Stackprinter gets all debug information # https://github.com/cknd/stackprinter/blob/master/stackprinter/__init__.py#L28-L137 log_obj.exceptions = str( stackprinter.format( record.exc_info, suppressed_paths=[ r"lib/python.*/site-packages/starlette.*", ], add_summary=False, ).split("\n"), ) elif record.exc_text: log_obj.exceptions = record.exc_text # # Pydantic to dict json_log = log_obj.model_dump( exclude_unset=True, by_alias=True, ) # getting additional fields if hasattr(record, "request_json_fields"): json_log.update(record.request_json_fields) return json_log def configure_logging(app_env: str, log_level: str) -> None: base_logging_handlers: dict[str, Any] = {} loguru_logger.remove() if app_env in 
["dev", "test"]: log_format = ( "<green>{time:YYYY-MM-DD HH:mm:ss.SSS}</green> | " "<level>{level: <8}</level> | " "<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> | " "<level>{message}</level>" ) loguru_logger.add(sys.stderr, level=log_level, format=log_format, colorize=True, serialize=False) base_logging_handlers.update({"intercept": {"()": LoguruInterceptHandler}}) else: base_logging_handlers.update( { "stream": { "class": StreamHandler, "stream": sys.stdout, "level": log_level, "formatter": "json", } } ) cfg: dict[str, Any] = { "version": 1, "disable_existing_loggers": False, "handlers": base_logging_handlers, "formatters": { "json": { "()": JSONLogFormatter, "app_env": app_env, }, }, "loggers": { # * Our own services loggers "base": { "handlers": base_logging_handlers, "level": log_level, "propagate": False, }, "base.faststream": { "handlers": base_logging_handlers, "level": "WARNING", "propagate": False, }, # * Root Python logger (Libs) "root": { "handlers": base_logging_handlers, "level": "WARNING", # ? Set this level in `DEBUG` or in `INFO` will show ALL logs "propagate": False, }, # * Uvicorn loggers (Server launch) "uvicorn": { "handlers": base_logging_handlers, "level": "DEBUG", "propagate": False, }, "uvicorn.access": { "handlers": base_logging_handlers, "level": "DEBUG", "propagate": False, }, "uvicorn.error": { "handlers": base_logging_handlers, "level": "DEBUG", "propagate": False, }, }, } dictConfig(cfg) # main.py from fastapi import APIRouter, FastAPI from litestar import Litestar, get configure_logging("dev", "DEBUG") # !My custom logger dict config definition @get("/") async def failed_endpoint_litestar() -> dict[str, str]: _ = 1 / 0 # ! Nothing in stdout :'( return {"message": "Hello World"} router = APIRouter() @router.get("/") async def failed_endpoint_fastapi() -> dict[str, str]: _ = 1 / 0 # ! 
I've got my excepted log in stdout here return {"message": "Hello World"} litestar_app = Litestar(route_handlers=[failed_endpoint_litestar]) fastapi_app = FastAPI() fastapi_app.include_router(router) ``` ### Steps to reproduce ```bash Litestar - `uv run python -m uvicorn src.report:litestar_app --host 0.0.0.0 --port 80 --reload` FastAPI - `uv run python -m uvicorn src.report:fastapi_app --host 0.0.0.0 --port 80 --reload` ``` ### Screenshots ```bash Below ``` ### Logs ```bash Below ``` ### Litestar Version Version: "litestar==2.13.0", ### Platform - [X] Linux - [X] Mac - [ ] Windows - [ ] Other (Please specify in the description above)
open
2024-12-25T00:32:47Z
2024-12-25T14:23:15Z
https://github.com/litestar-org/litestar/issues/3913
[ "Bug :bug:" ]
RoTorEx
4
waditu/tushare
pandas
1,050
复权因子和复权数据似乎有问题?
sz00001 = ts.pro_bar(ts_code='000001.SZ', adj='hfq', start_date='20190501')

| close | ts_code | trade_date |
| -- | -- | -- |
| 1334.18 | 000001.SZ | 20190524 |
| 1327.70 | 000001.SZ | 20190523 |
| 1339.58 | 000001.SZ | 20190522 |
| 1356.87 | 000001.SZ | |

Logging into Xueqiu to check, the back-adjusted (hfq) close for 20190524 is: 1529.60

Fetching data with pro.daily and pro.adj_factor and computing according to the formula below gives the same result as pro_bar, but it still does not match Xueqiu's data:

back-adjusted (hfq) | close of the day × adjustment factor of the day | hfq

My ID: 18601369567
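The back-adjustment formula can be checked offline by joining the two frames; this sketch uses made-up close/factor numbers purely to illustrate the arithmetic (they are not the real 000001.SZ values):

```python
import pandas as pd

# Illustrative numbers only -- the real values come from
# pro.daily() and pro.adj_factor()
daily = pd.DataFrame({"trade_date": ["20190524"], "close": [12.64]})
factors = pd.DataFrame({"trade_date": ["20190524"], "adj_factor": [105.552]})

merged = daily.merge(factors, on="trade_date")
# hfq close = raw close of the day x adjustment factor of the day
merged["hfq_close"] = (merged["close"] * merged["adj_factor"]).round(2)
```

If this product matches pro_bar's hfq close but not Xueqiu's figure, the discrepancy would lie in the factor series itself (e.g. a different rebasing date), rather than in the multiplication.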
closed
2019-05-26T03:09:26Z
2019-05-26T13:30:29Z
https://github.com/waditu/tushare/issues/1050
[]
ztp1978
1
globaleaks/globaleaks-whistleblowing-software
sqlalchemy
3,557
problem with proxy
Hello, I already realized that it is not recommended to use proxies but based on my infrastructure I really needed to activate globaleaks after a proxy (Nginx Proxy Manager). Is there any way to configure? Globaleaks Version: v4.12.5
closed
2023-07-25T08:37:09Z
2023-07-25T17:51:52Z
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3557
[]
carlosribeiro
2
deeppavlov/DeepPavlov
tensorflow
767
Logs for Tensorboard when tuning pre-trained model
Hello! It would be nice to save logs for tensorboard when training pre-trained model as a continuation of already saved logs. Thank you!
closed
2019-03-18T12:49:46Z
2023-07-07T06:03:15Z
https://github.com/deeppavlov/DeepPavlov/issues/767
[ "enhancement" ]
dilyararimovna
3
cobrateam/splinter
automation
388
CookieManager.add does not support path etc (only name/value)
`selenium/webdriver/remote/webdriver.py` allows for additional keys, like `path`, `domain`, `secure` and `expiry`:

```
def add_cookie(self, cookie_dict):
    """
    Adds a cookie to your current session.

    :Args:
     - cookie_dict: A dictionary object, with required keys - "name" and "value";
       optional keys - "path", "domain", "secure", "expiry"

    Usage:
        driver.add_cookie({'name' : 'foo', 'value' : 'bar'})
        driver.add_cookie({'name' : 'foo', 'value' : 'bar', 'path' : '/'})
        driver.add_cookie({'name' : 'foo', 'value' : 'bar', 'path' : '/', 'secure':True})
    """
    self.execute(Command.ADD_COOKIE, {'cookie': cookie_dict})
```

But this is not supported through splinter's CookieManager:

```
class CookieManager(CookieManagerAPI):
    def __init__(self, driver):
        self.driver = driver

    def add(self, cookies):
        if isinstance(cookies, list):
            for cookie in cookies:
                for key, value in cookie.items():
                    self.driver.add_cookie({'name': key, 'value': value})
            return
        for key, value in cookies.items():
            self.driver.add_cookie({'name': key, 'value': value})
```

The workaround appears to be calling `driver.add_cookie` directly.

(There appears to be another issue with this workaround though, because the cookie does not seem to "stick" when `visit()` has not been called before - using the cookie from Django's `client` for admin login; I'll have to debug this. Looks like #244)
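A minimal sketch of what a fix could look like — extra keyword arguments forwarded to selenium's `add_cookie` — shown against a fake driver so it runs standalone (the real class also has the list branch and a base class, omitted here):

```python
class FakeDriver:
    """Records cookies the way selenium's add_cookie would receive them."""
    def __init__(self):
        self.cookies = []

    def add_cookie(self, cookie_dict):
        self.cookies.append(cookie_dict)

class CookieManager:
    def __init__(self, driver):
        self.driver = driver

    def add(self, cookies, **kwargs):
        # kwargs carries the optional selenium keys:
        # path, domain, secure, expiry
        for name, value in cookies.items():
            cookie = {"name": name, "value": value}
            cookie.update(kwargs)
            self.driver.add_cookie(cookie)
```

`manager.add({'session': 'abc'}, path='/', secure=True)` would then reach the driver with all keys intact.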
closed
2015-04-15T04:13:25Z
2021-07-19T18:46:31Z
https://github.com/cobrateam/splinter/issues/388
[]
blueyed
0
jmcnamara/XlsxWriter
pandas
833
feature request: insert_image() with SVG format
### Feature Request: insert_image() with format *.svg

Dear developers,

1. Are there any plans to support the "*.svg" format in the `worksheet.insert_image()` function?
2. What is the format of the icon in the sample code? Is it svg?

`worksheet.conditional_format('A1:C1', {'type': 'icon_set', 'icon_style': '3_traffic_lights'})`

thank u!!
closed
2021-10-14T02:21:49Z
2021-10-26T04:27:08Z
https://github.com/jmcnamara/XlsxWriter/issues/833
[ "feature request" ]
ShangChien
2
mwaskom/seaborn
data-science
3,701
Feature Request: Continuous axes heat map
Feature Request: Continuous axes heat map. This would function similarly to the existing heatmap feature but allow for continuous axes rather than purely categorical. On the backend, it would behave more similarly to a 2d histplot, but instead of performing a count of data the function would accept an array_like containing values or perhaps keywords corresponding to aggregators (e.g. 'min', 'max', etc.). A special case would be 'count' which would behave like a regular histogram. Many thanks for your excellent work maintaining an excellent library.
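Until something like this lands, the behaviour can be approximated outside seaborn; a rough numpy sketch of the proposed semantics (bin two continuous axes, then reduce a value column per cell — the function name and signature are invented here):

```python
import numpy as np

def binned_heatmap(x, y, z, bins=10, reducer=np.mean):
    """Aggregate z over a bins-by-bins grid of (x, y) cells.

    reducer can be np.mean, np.min, np.max, len (for a count), etc.;
    empty cells stay NaN, like a masked heatmap cell.
    """
    x_edges = np.linspace(min(x), max(x), bins + 1)
    y_edges = np.linspace(min(y), max(y), bins + 1)
    xi = np.clip(np.digitize(x, x_edges) - 1, 0, bins - 1)
    yi = np.clip(np.digitize(y, y_edges) - 1, 0, bins - 1)
    grid = np.full((bins, bins), np.nan)
    for i in range(bins):
        for j in range(bins):
            mask = (xi == i) & (yi == j)
            if mask.any():
                grid[i, j] = reducer(np.asarray(z)[mask])
    return grid, x_edges, y_edges
```

The resulting grid could then be passed to an ordinary `sns.heatmap` call, with the bin edges as tick labels.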
closed
2024-05-31T03:55:38Z
2025-01-26T15:39:56Z
https://github.com/mwaskom/seaborn/issues/3701
[]
HThawley
1
blacklanternsecurity/bbot
automation
1,627
Occasional CPU Spikes in 2.0
![image](https://github.com/user-attachments/assets/44557e76-a9f1-40a2-8517-725e13ff39e0)
closed
2024-08-04T00:01:35Z
2024-11-14T05:11:11Z
https://github.com/blacklanternsecurity/bbot/issues/1627
[ "bug" ]
TheTechromancer
2
WeblateOrg/weblate
django
13,545
Something wrong spotted in #11373
### Describe the issue

Today I revisited #11373 and tried to manually remove the ending `</b>` in https://hosted.weblate.org/translate/f-droid/website/zh_Hans/?checksum=440af90d7078d5eb#history. It then displayed a toast which read "Following fixups were applied to translation: Unsafe HTML". So it seems that the reason why I could not revert the translation in #11373 is that the Unsafe HTML quality check or the Unsafe HTML cleanup automatic fixup had taken effect — but no toast was displayed to tell the user (me), and the `ignore-safe-html` flag did not take effect correctly. (TBH, I know little about Weblate's checks and automatic fixups, and I do not know how F-Droid's Weblate project is configured.) I think there must be something wrong here. Some bugs? At least something needs to be improved.

### I already tried

- [x] I've read and searched [the documentation](https://docs.weblate.org/).
- [x] I've searched for similar filed issues in this repository.

### Steps to reproduce the behavior

0. Get permission to review the strings of the [F-Droid](https://hosted.weblate.org/projects/f-droid/) project
1. Go to https://hosted.weblate.org/translate/f-droid/website/zh_Hans/?checksum=440af90d7078d5eb#history
2. Revert the current translation

### Expected behavior

That translation is reverted.

### Screenshots

![Image](https://github.com/user-attachments/assets/5fb1027d-3835-4106-a27c-6e3b53d24884)

### Exception traceback

```pytb
```

### How do you run Weblate?

weblate.org service

### Weblate versions

Weblate 5.10-dev

### Weblate deploy checks

```shell
```

### Additional context

At least a "Following fixups were applied to translation: Unsafe HTML" toast should be displayed in the scenario mentioned above.
closed
2025-01-16T14:54:43Z
2025-01-16T15:29:43Z
https://github.com/WeblateOrg/weblate/issues/13545
[]
Geeyun-JY3
1
hankcs/HanLP
nlp
622
The sentence "仍有很长的路要走" is segmented incorrectly
<!-- This is HanLP's issue template, used to standardize the format of questions. We did not originally intend to constrain everyone with a rigid format, but the issue area is a bit chaotic. Sometimes it takes a long discussion to find out that the other party was using an old version or had modified the code, wasting both sides' valuable time. So we use this template to unify the format; sorry for any inconvenience. Apart from the notes, the other parts can be adjusted as appropriate. -->

## Notes
Please confirm the following:
* I have carefully read the following documents and found no answer in any of them:
  - [Home page documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer either.
* I understand that the open-source community is a voluntary community gathered out of interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I type x in the brackets to confirm the above.

## Version
<!-- For release versions, please state the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is: 1.3.4
The version I am using is: 1.3.4

## The sentence "仍有很长的路要走" is segmented incorrectly
<!-- Please describe the problem in detail; the more detail, the more likely it gets solved -->

## Reproducing the problem
No modifications were made.

### Steps
1. First...
2. Then...
3. Next...

### Triggering code
```
public void testIssue1234() throws Exception
{
    System.out.println(HanLP.segment("仍有很长的路要走"));
}
```

### Expected output
```
[仍/d, 有/vyou, 很长/d, 的/ude1, 路/n, 要/v, 走/v]
```

### Actual output
<!-- What did HanLP actually output? What effect did it have? Where is it wrong? -->
```
[仍/d, 有/vyou, 很长/d, 的/ude1, 路要/nr, 走/v]
```

## Other information
<!-- Any information that might be useful, including screenshots, logs, config files, related issues, etc. -->

## Analysis
Judging from the result, it seems person-name recognition interfered. But this sentence should not contain any person name; "路要" was mistakenly segmented as a person name here.
closed
2017-09-08T07:58:17Z
2020-01-01T11:08:07Z
https://github.com/hankcs/HanLP/issues/622
[ "ignored" ]
iwaller
3
akfamily/akshare
data-science
4,985
AKShare interface problem report
**Detailed problem description**
1. Please read the usage of the corresponding interface in the documentation first: https://akshare.akfamily.xyz — done
2. Operating system version (only 64-bit systems are supported): macOS
3. Python version (only 3.8+ is supported): 3.11
4. AKShare version (please upgrade to the latest): 1.14.13
5. Interface names and the corresponding calling code: the two SWS index interfaces `index_realtime_sw` and `index_hist_sw`
6. Screenshot or description of the error: on a normal request: `Connection to www.swhyresearch.com timed out.` — I don't know whether my IP was banned. After switching to another IP, the error became: `Expecting value: line 1 column 1 (char 0)`
7. Expected correct result: a normal request
closed
2024-06-22T03:22:33Z
2024-06-22T09:21:18Z
https://github.com/akfamily/akshare/issues/4985
[ "bug" ]
callcter
2
kennethreitz/responder
graphql
15
test error: No module named 'graphene'
My last two tests raised a `ModuleNotFoundError: No module named 'graphene'`, [here](https://travis-ci.org/kennethreitz/responder/builds/440734486?utm_source=github_status&utm_medium=notification) and [here](https://travis-ci.org/kennethreitz/responder/builds/440723390)
closed
2018-10-12T17:23:03Z
2018-10-12T17:59:35Z
https://github.com/kennethreitz/responder/issues/15
[]
taoufik07
1
mirumee/ariadne-codegen
graphql
190
how to use `remote_schema_headers` ?
I don't understand how to configure `remote_schema_headers` in `pyproject.toml`. Something like ``` [tool.ariadne-codegen] schema_path = "schema.graphql" queries_path = "queries.graphql" remote_schema_headers = {"Authorization" = "Bearer: token"} ``` isn't even valid TOML. Could anyone kindly share an example?
closed
2023-08-05T19:07:09Z
2023-08-05T19:32:59Z
https://github.com/mirumee/ariadne-codegen/issues/190
[]
dd-ssc
1
d2l-ai/d2l-en
tensorflow
2,253
Error in the GoogLeNet section
When running the notebook using TensorFlow in Google Colab, I encountered the following error ![image](https://user-images.githubusercontent.com/50707331/185252203-aa066d3b-e82d-46a8-97c3-e87ee027f1cf.png) It is in the GoogLeNet section.
closed
2022-08-17T22:09:40Z
2023-05-15T14:28:17Z
https://github.com/d2l-ai/d2l-en/issues/2253
[]
COD1995
3
cookiecutter/cookiecutter-django
django
4,970
Why not add RunServerPlus options for reload on Windows when using Docker?
## Description What are you proposing? How should it be implemented? Add this code when windows = y when generating the project, to enable reloading of Django code on Windows. ``` end of base.py # RunServerPlus # ------------------------------------------------------------------------------ {% if cookiecutter.windows == 'y' %} # After how many seconds auto-reload should scan for updates in poller-mode RUNSERVERPLUS_POLLER_RELOADER_INTERVAL = 5 # Werkzeug reloader type [auto, watchdog, or stat] RUNSERVERPLUS_POLLER_RELOADER_TYPE = 'stat' {% endif %} ``` ## Rationale Why should this feature be implemented? On Windows with Docker, somehow (it might be a filesystem problem) reload doesn't work with the default RELOADER_TYPE. So we should add the code above, but users running cookiecutter for the first time don't realize what is happening, like me. So it should be set by default when the windows option is y in the project-generation step.
closed
2024-04-05T08:34:59Z
2024-04-16T18:23:06Z
https://github.com/cookiecutter/cookiecutter-django/issues/4970
[ "enhancement" ]
quroom
3
exaloop/codon
numpy
405
Custom exceptions failed to compile
This code works as expected in Python but fails to compile. ``` class CustomException(Exception): pass def raise_exception(): raise CustomException("custom exception raised") raise_exception() ``` This is the command used to compile: `codon build -release -exe custom_exception.py` And the output: ``` custom_exception.py:6:5-53: error: exceptions must derive from BaseException ╰─ custom_exception.py:9:1-16: error: during the realization of raise_exception() ```
closed
2023-06-09T21:48:34Z
2024-11-10T06:12:19Z
https://github.com/exaloop/codon/issues/405
[ "bug", "stdlib" ]
likecodingloveproblems
6
pandas-dev/pandas
python
61,141
BUG: astype transforms NA to "NA"
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import pandas a = pandas.Series([pandas.NA], dtype = "str") # This is right print(type(a[0])) <class 'pandas._libs.missing.NAType'> print(type(a.astype("str")[0])) <class 'str'> ``` ### Issue Description When we work with missing data and cast with astype("str"), it does not keep the NA value; instead it returns the string "NA". ### Expected Behavior Return NA instead of "NA" ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.12.9 python-bits : 64 OS : Linux OS-release : 6.12.16-gentoo-x86_64 Version : #1 SMP PREEMPT_DYNAMIC Tue Feb 25 08:36:23 -03 2025 machine : x86_64 processor : AMD Ryzen 7 5800H with Radeon Graphics byteorder : little LC_ALL : None LANG : es_CL.utf8 LOCALE : es_CL.UTF-8 pandas : 2.2.3 numpy : 2.2.3 pytz : 2025.1 dateutil : 2.9.0.post0 pip : 25.0.1 Cython : None sphinx : None IPython : None adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : None lxml.etree : None matplotlib : None numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.1 qtpy : None pyqt5 : None </details>
closed
2025-03-17T20:26:18Z
2025-03-19T18:58:21Z
https://github.com/pandas-dev/pandas/issues/61141
[ "Bug", "Missing-data", "Strings" ]
latot
6
amidaware/tacticalrmm
django
1,935
Automated Patch Policies for Workstation get applied to Servers too
**Server Info (please complete the following information):** - OS: Debian 12 - RMM Version 0.19.2 **Installation Method:** - [ ] Standard **Agent Info (please complete the following information):** - Agent version 2.8 - Agent OS: Windows Server 2022 **Describe the bug** We have created 2 Automation Policies for automated Windows updates, one for Servers and one for Workstations. Both are configured the same, but the Workstation Policy has an additional task that executes "shutdown /s" 3 hours after patching. These policies are applied in the client menu. After running once, the Servers got the shutdown command from the Workstation Policy, and it was added to the Agents. **To Reproduce** I have not been able to reproduce this problem with test VMs **Expected behavior** Servers should not inherit the Workstation policy **Screenshots** Server Policy ![image](https://github.com/user-attachments/assets/275b4681-7157-4fa4-97ee-c3524afc825a) Workstation Policy ![image](https://github.com/user-attachments/assets/627ff8b0-e95a-467f-855b-dc9af68eaf06) Workstation Task ![image](https://github.com/user-attachments/assets/9dfcef64-70de-4b10-bbd4-8321bec54304) ![image](https://github.com/user-attachments/assets/7f40e680-1489-476c-98c3-e261cc2ea876) ![image](https://github.com/user-attachments/assets/8b94ee62-4626-4447-9a29-f6e81416a6c7) Client Configuration ![image](https://github.com/user-attachments/assets/694f65b2-a340-4947-bdf4-ee8df65eb83e) Server-Agent configuration it got after running once ![image](https://github.com/user-attachments/assets/f728024a-ae29-4bd7-be8c-22cd377e4b8b)
open
2024-07-26T08:19:13Z
2024-07-26T08:21:00Z
https://github.com/amidaware/tacticalrmm/issues/1935
[]
PIT-IT
0
vvbbnn00/WARP-Clash-API
flask
205
[Bug] WARNING:app_background:429 Client Error: Too Many Requests for url: https://api.cloudflareclient.com/v0i2308311933/reg
**To locate the problem faster and reduce unnecessary trouble, please check whether a related issue already exists before opening one. Thanks!** **📜 Checklist** - [ ] I understand that issues currently handle `bug`s and `feature request`s; other kinds of questions (such as usage or configuration questions, client problems, etc.) should be posted in [discussions](https://github.com/vvbbnn00/WARP-Clash-API/discussions), otherwise the issue may be closed. - [ ] I confirm this is a bug caused by this project, not by Cloudflare WARP's own service limits, unsupported clients, or other causes (if unsure, please also post in [discussions](https://github.com/vvbbnn00/WARP-Clash-API/discussions)). - [ ] I have verified this bug has not been mentioned in other issues. **Bug description** Please describe the bug clearly and specifically, including the observed abnormal behavior and the expected behavior. **Steps to reproduce** Please provide the specific steps needed to reproduce the bug, for example: 1. Run `docker-compose up -d` 2. Click `Button1` 3. Visit `http://xxxx` **Expected behavior** Briefly describe the expected normal behavior after the steps above. **Screenshots** If any, attach relevant screenshots. **Environment** Please fill in the information accurately; for server-side problems, provide the server environment: - Device model: [e.g. iPhone13 / cloud server] - OS version: [e.g. iOS15.1/Windows 10] - Software name: [e.g. ShadowRocket] - Software version: [e.g. v2.2.45 (2171)] - Subscription type: [e.g. auto-detect/Clash/ShadowRocket/Surge/...] - The project's `Commit Hash` when the problem occurred: [e.g. dfdb97f1a61ac5deb0db8f012c59d8050c9587e6] **Additional information** If additional information (such as logs or response data) would help troubleshooting, add it here. Protect personal privacy and do not disclose sensitive information (including but not limited to: IP addresses, LicenseKey, PrivateKey, SECRET_KEY). I have already deployed the proxy-pool docker separately, but this error still occurs: WARNING:app_background:429 Client Error: Too Many Requests for url: https://api.cloudflareclient.com/v0i2308311933/reg The startup parameters include PROXY_POOL_URL: PROXY_POOL_URL = http://192.168.31.89:5010/get
closed
2024-05-15T02:03:01Z
2024-06-12T07:03:29Z
https://github.com/vvbbnn00/WARP-Clash-API/issues/205
[]
907739769
2
yinkaisheng/Python-UIAutomation-for-Windows
automation
235
Problem searching in reverse order with foundIndex
The object I need to search is a message queue composed of TextControl elements. I need to get the latest message, which is the last item in index order, and I cannot determine how many messages there are in total. foundIndex apparently does not support negative numbers. How can I precisely locate the control at the last index position?
open
2023-02-13T01:42:41Z
2023-06-23T09:36:43Z
https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/235
[]
Kaguya233qwq
1
Miserlou/Zappa
flask
1,431
Endpoint request timed out -- sometimes occurs in Django application when calling APIs from an iOS device
## Context Django application deployed using Zappa with a production environment. APIs are delivered through Django REST framework. Python 3.6 ## Expected Behavior When the endpoint is called from any device, it should work ## Actual Behavior Sometimes when the endpoint is called from an iOS device, it returns an endpoint request timeout. ## Possible Fix Not sure what to do for this. ## Your Environment "production": { "aws_region": "us-west-2", "django_settings": "xxxx.settings", "profile_name": "default", "s3_bucket": "zappa-xxxxxxxx", "runtime": "python3.6", "debug": true, "delete_local_zip": true, "delete_s3_zip": true, "exclude": ["*.rar"], "keep_warm": true, "keep_warm_expression": "rate(4 minutes)", "timeout_seconds": 300, "manage_roles": false, "role_name": "xxxxx-ZappaLambdaExecutionRole", "events": [{ "function": "xxxxxxxx", "expression": "rate(7 days)" },{ "function": "xxxxxxxx", "expression": "rate(7 days)" },{ "function": "xxxxxxx", "expression": "rate(12 hours)" }], "slim_handler": true }
open
2018-03-05T13:40:35Z
2019-09-24T04:42:21Z
https://github.com/Miserlou/Zappa/issues/1431
[]
codalprashant
3
allure-framework/allure-python
pytest
828
Pyright check is failing for function calls decorated with allure step
[//]: # ( . Note: for support questions, please use Stackoverflow or Gitter**. . This repository's issues are reserved for feature requests and bug reports. . . In case of any problems with Allure Jenkins plugin** please use the following repository . to create an issue: https://github.com/jenkinsci/allure-plugin/issues . . Make sure you have a clear name for your issue. The name should start with a capital . letter and no dot is required in the end of the sentence. An example of good issue names: . . - The report is broken in IE11 . - Add an ability to disable default plugins . - Support emoji in test descriptions ) #### I'm submitting a ... - [x] bug report - [ ] feature request - [ ] support request => Please do not submit support request here, see note at the top of this template. #### What is the current behavior? Pyright (1.1.380) validation fails for function calls if they have an allure step decorator. #### If the current behaviour is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem Here is an abstract code example that shows the issue ```python import allure @allure.step def get_number_from_str(num: str) -> float: return float(num) def test_add_numbers(): assert get_number_from_str("2.0") + get_number_from_str("2.5") == 4.5 # noqa: PLR2004 ``` which leads to the following output: ```bash # pyright . test_test.py test_test.py:10:12 - error: Operator "+" not supported for types "Unknown | object" and "Unknown | object" Operator "+" not supported for types "object" and "object" (reportOperatorIssue) test_test.py:10:32 - error: Argument of type "Literal['2.0']" cannot be assigned to parameter "func" of type "_TFunc@__call__" in function "__call__" Type "Literal['2.0']" is not assignable to type "(...) -> Any" Type "Literal['2.0']" is not assignable to type "(...) -> Any" (reportArgumentType) test_test.py:10:61 - error: Argument of type "Literal['2.5']" cannot be assigned to parameter "func" of type "_TFunc@__call__" in function "__call__" Type "Literal['2.5']" is not assignable to type "(...) -> Any" Type "Literal['2.5']" is not assignable to type "(...) -> Any" (reportArgumentType) 3 errors, 0 warnings, 0 informations ``` If I delete the `@allure.step` decorator, everything starts working fine. #### What is the expected behavior? The expected behaviour is that type checking by `pyright` would pass here #### What is the motivation / use case for changing the behavior? #### Please tell us about your environment: Python 3.12.3 pyright: 1.1.380 - Allure version: 2.13.5 - Test framework: pytest@8.3.3 - Allure adaptor: allure-pytest@2.13.5 #### Other information [//]: # ( . e.g. detailed explanation, stacktraces, related issues, suggestions . how to fix, links for us to have more context, eg. Stackoverflow, Gitter etc )
open
2024-09-17T15:00:15Z
2025-03-20T08:57:23Z
https://github.com/allure-framework/allure-python/issues/828
[]
vbotay
1