Dataset schema (one record per GitHub issue):

repo_name: string (9–75 chars)
topic: string (30 classes)
issue_number: int64 (1–203k)
title: string (1–976 chars)
body: string (0–254k chars)
state: string (2 classes)
created_at: string (20 chars)
updated_at: string (20 chars)
url: string (38–105 chars)
labels: list (0–9 items)
user_login: string (1–39 chars)
comments_count: int64 (0–452)
horovod/horovod
deep-learning
3,228
Support for ElasticRayExecutor on ray.tune
I'm looking for best practices for running the Horovod `ElasticRayExecutor` (0.23.0) with `ray.tune` (1.7.0). The Ray examples folder contains [code](https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/horovod_simple.py#L46) for the non-elastic RayExecutor with ray.tune, and the Horovod [docs](https://horovod.readthedocs.io/en/stable/ray_include.html#elastic-ray-executor) contain some helpful context for running elastic Ray+Horovod without ray.tune, but there is not yet an elastic ray.tune implementation. It appears that the [`_HorovodTrainable`](https://github.com/ray-project/ray/blob/master/python/ray/tune/integration/horovod.py#L113) trainable uses the non-elastic RayExecutor, so one would have to write a new `DistributedTrainable` class to make this compatible. How to do that is not immediately clear to me, though, given that the [`RayExecutor`](https://github.com/horovod/horovod/blob/c4306ec45ab823f71b999bdc30a5995d3f8193fe/horovod/ray/runner.py#L129) API differs from the [`ElasticRayExecutor`](https://github.com/horovod/horovod/blob/master/horovod/ray/elastic.py#L149)... Any tips would be greatly appreciated!
open
2021-10-16T23:47:06Z
2021-10-24T17:34:41Z
https://github.com/horovod/horovod/issues/3228
[ "enhancement" ]
nmatare
3
Evil0ctal/Douyin_TikTok_Download_API
api
115
[BUG] The values in `official_api` in the API response need to be corrected
***On which platform did the error occur?*** e.g. Douyin/TikTok
***On which endpoint did the error occur?*** e.g. API-V1/API-V2/Web APP
***What input value was submitted?*** e.g. a short-video link
***Did you try again?*** e.g. yes, the error still exists X amount of time after it occurred.
***Have you read this project's README and API documentation?*** e.g. yes, and I'm certain the problem is caused by the program.
closed
2022-12-02T11:25:21Z
2022-12-02T23:01:45Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/115
[ "BUG", "Fixed" ]
Evil0ctal
1
ymcui/Chinese-LLaMA-Alpaca
nlp
775
Error after entering a question in langchain retrieval QA; the machine has an A100 GPU
### The following must be checked before submitting
- [X] Make sure you are using the latest code from this repository (git pull); many issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you have followed the steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues; I did not find a similar problem or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is recommended to look for a solution in the corresponding project
- [X] Model correctness check: be sure to check the model's [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, correct behavior and output cannot be guaranteed

### Issue type
Model inference

### Base model
LLaMA-13B

### Operating system
Linux

### Detailed description of the problem
```
python3 langchain_sum.py \
  --model_path /root/7_17_alpaca \
  --file_path doc.txt \
  --chain_type refine
```

### Dependencies (must be provided for code-related issues)
accelerate 0.20.3 aiohttp 3.8.4 aiosignal 1.3.1 anyio 3.7.0 argilla 1.12.1 async-timeout 4.0.2 attrs 21.2.0 Automat 20.2.0 backoff 2.2.1 bcrypt 3.2.0 beautifulsoup4 4.12.2 blinker 1.4 certifi 2023.5.7 cffi 1.15.1 chardet 4.0.0 charset-normalizer 3.1.0 chromadb 0.3.23 click 8.0.3 clickhouse-connect 0.6.8 cloud-init 19.1.21 cmake 3.27.0 colorama 0.4.4 colorclass 2.2.2 command-not-found 0.3 commonmark 0.9.1 compressed-rtf 1.0.6 configobj 5.0.8 constantly 15.1.0 cryptography 41.0.2 dataclasses-json 0.5.12 datasets 2.13.1 dbus-python 1.2.18 decorator 4.4.2 deepspeed 0.9.5 Deprecated 1.2.14 dill 0.3.6 diskcache 5.6.1 distro 1.7.0 distro-info 1.1build1 duckdb 0.8.1 easygui 0.98.3 ebcdic 1.1.1 et-xmlfile 1.1.0 exceptiongroup 1.1.2 extract-msg 0.41.1 faiss-cpu 1.7.4 fastapi 0.99.1 filelock 3.12.2 frozenlist 1.3.3 fsspec 2023.6.0 gpt4all 0.3.4 greenlet 2.0.2 h11 0.14.0 hjson 3.1.0 hnswlib 0.7.0 httpcore 0.16.3 httplib2 0.20.2 httptools 0.6.0 httpx 0.23.3 huggingface-hub 0.15.1 hyperlink 21.0.0 idna 3.3 IMAPClient 2.3.1 importlib-metadata 4.6.4 incremental 21.3.0 iniconfig 2.0.0 jeepney 0.7.1 Jinja2 3.1.2 joblib 1.3.1 jsonpatch 1.32 jsonpointer 2.3 jsonschema 4.17.3 keyring 23.5.0 langchain 0.0.197 
langchainplus-sdk 0.0.20 langsmith 0.0.11 lark-parser 0.12.0 launchpadlib 1.10.16 lazr.restfulclient 0.14.4 lazr.uri 1.0.6 lit 16.0.6 llama-cpp-python 0.1.50 lxml 4.9.3 lz4 4.3.2 Markdown 3.4.3 MarkupSafe 2.1.2 marshmallow 3.19.0 monotonic 1.6 more-itertools 8.10.0 mpmath 1.3.0 msg-parser 1.2.0 msoffcrypto-tool 5.1.1 multidict 6.0.4 multiprocess 0.70.14 mypy-extensions 1.0.0 netifaces 0.11.0 networkx 3.1 ninja 1.11.1 nltk 3.8.1 numexpr 2.8.4 numpy 1.23.5 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.2 olefile 0.46 oletools 0.60.1 openapi-schema-pydantic 1.2.4 openpyxl 3.1.2 packaging 23.1 pandas 1.5.3 pandoc 2.3 pcodedmp 1.2.6 pdfminer.six 20221105 peft 0.3.0.dev0 pexpect 4.8.0 Pillow 10.0.0 pip 22.0.2 pluggy 1.2.0 plumbum 1.8.2 ply 3.11 posthog 3.0.1 protobuf 4.23.4 psutil 5.9.5 ptyprocess 0.7.0 py-cpuinfo 9.0.0 pyarrow 12.0.1 pyasn1 0.4.8 pyasn1-modules 0.2.1 pycparser 2.21 pydantic 1.10.10 Pygments 2.15.1 PyGObject 3.42.1 PyHamcrest 2.0.2 PyJWT 2.3.0 PyMuPDF 1.22.3 pyOpenSSL 21.0.0 pypandoc 1.11 pyparsing 2.4.7 pyre-extensions 0.0.29 pyrsistent 0.19.3 pyserial 3.5 pytest 7.4.0 python-apt 2.4.0+ubuntu1 python-dateutil 2.8.2 python-debian 0.1.43ubuntu1 python-docx 0.8.11 python-dotenv 1.0.0 python-linux-procfs 0.6.3 python-magic 0.4.24 python-pptx 0.6.21 pytz 2023.3 pytz-deprecation-shim 0.1.0.post0 pyudev 0.22.0 PyYAML 6.0 red-black-tree-mod 1.20 regex 2023.6.3 requests 2.30.0 rfc3986 1.5.0 rich 13.0.1 RTFDE 0.0.2 safetensors 0.3.1 scikit-learn 1.3.0 scipy 1.11.1 screen-resolution-extra 0.0.0 SecretStorage 3.3.1 sentence-transformers 2.2.2 sentencepiece 0.1.97 service-identity 18.1.0 setuptools 59.6.0 shortuuid 1.0.11 six 1.16.0 sniffio 1.3.0 sos 4.4 soupsieve 2.4.1 SQLAlchemy 
2.0.19 ssh-import-id 5.11 starlette 0.27.0 sympy 1.12 systemd-python 234 tabulate 0.9.0 tenacity 8.2.2 threadpoolctl 3.1.0 tokenizers 0.13.3 tomli 2.0.1 torch 2.0.1 torchvision 0.15.2 tqdm 4.65.0 transformers 4.30.0 triton 2.0.0 Twisted 22.1.0 typer 0.7.0 typing_extensions 4.7.1 typing-inspect 0.9.0 tzdata 2023.3 tzlocal 4.2 ubuntu-advantage-tools 8001 ubuntu-drivers-common 0.0.0 ufw 0.36.1 unattended-upgrades 0.1 unstructured 0.6.6 urllib3 2.0.2 uvicorn 0.22.0 uvloop 0.17.0 wadllib 1.3.6 watchfiles 0.19.0 websockets 11.0.3 wheel 0.37.1 wrapt 1.14.1 xformers 0.0.20 xkit 0.0.0 XlsxWriter 3.1.2 xxhash 3.2.0 yarl 1.9.2 zipp 1.0.0 zope.interface 5.4.0 zstandard 0.21.0

### Run logs or screenshots
Loading the embedding model...
No sentence-transformers model found with name /root/text2vec-large-chinese. Creating a new one with MEAN pooling.
[2023-07-20 16:44:17,426] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
loading LLM...
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
Loading checkpoint shards: 100%|██████████| 3/3 [00:10<00:00, 3.63s/it]
请输入问题:你好
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1353: UserWarning: Using `max_length`'s default (1000) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. 
warnings.warn( Traceback (most recent call last): File "/root/Chinese-LLaMA-Alpaca/scripts/langchain/langchain_qa.py", line 115, in <module> print(qa.run(query)) File "/root/langchain/langchain/chains/base.py", line 440, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "/root/langchain/langchain/chains/base.py", line 243, in __call__ raise e File "/root/langchain/langchain/chains/base.py", line 237, in __call__ self._call(inputs, run_manager=run_manager) File "/root/langchain/langchain/chains/retrieval_qa/base.py", line 133, in _call answer = self.combine_documents_chain.run( File "/root/langchain/langchain/chains/base.py", line 445, in run return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ File "/root/langchain/langchain/chains/base.py", line 243, in __call__ raise e File "/root/langchain/langchain/chains/base.py", line 237, in __call__ self._call(inputs, run_manager=run_manager) File "/root/langchain/langchain/chains/combine_documents/base.py", line 106, in _call output, extra_return_dict = self.combine_docs( File "/root/langchain/langchain/chains/combine_documents/refine.py", line 152, in combine_docs res = self.initial_llm_chain.predict(callbacks=callbacks, **inputs) File "/root/langchain/langchain/chains/llm.py", line 252, in predict return self(kwargs, callbacks=callbacks)[self.output_key] File "/root/langchain/langchain/chains/base.py", line 243, in __call__ raise e File "/root/langchain/langchain/chains/base.py", line 237, in __call__ self._call(inputs, run_manager=run_manager) File "/root/langchain/langchain/chains/llm.py", line 92, in _call response = self.generate([inputs], run_manager=run_manager) File "/root/langchain/langchain/chains/llm.py", line 102, in generate return self.llm.generate_prompt( File "/root/langchain/langchain/llms/base.py", line 186, in generate_prompt return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs) File "/root/langchain/langchain/llms/base.py", 
line 279, in generate output = self._generate_helper( File "/root/langchain/langchain/llms/base.py", line 223, in _generate_helper raise e File "/root/langchain/langchain/llms/base.py", line 210, in _generate_helper self._generate( File "/root/langchain/langchain/llms/base.py", line 602, in _generate self._call(prompt, stop=stop, run_manager=run_manager, **kwargs) File "/root/langchain/langchain/llms/huggingface_pipeline.py", line 169, in _call response = self.pipeline(prompt) File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py", line 201, in __call__ return super().__call__(text_inputs, **kwargs) File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1120, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1127, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1026, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py", line 263, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 1522, in generate return self.greedy_search( File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2339, in greedy_search outputs = self( File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 165, in 
new_forward output = old_forward(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 688, in forward outputs = self.model( File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 578, in forward layer_outputs = decoder_layer( File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 292, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 194, in forward query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: "addmm_impl_cpu_" not implemented for 
'Half'
closed
2023-07-20T09:04:23Z
2023-07-20T12:43:38Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/775
[]
ai499
3
ufoym/deepo
jupyter
4
Can you tell me where the Caffe install folder is?
Your Docker image is amazing, and I have verified that all the deep learning tools (Caffe, TensorFlow, etc.) work. But I want to find the location of Caffe's build folder. Thanks.
closed
2017-10-30T12:03:15Z
2020-07-19T01:04:02Z
https://github.com/ufoym/deepo/issues/4
[]
supernihui
2
Yorko/mlcourse.ai
pandas
330
Yandex&MIPT, Coursera, final project: user identification
Hello! Could you please clarify the expected form of the answer for week 2, question 2: "Is the number of unique sites per session normally distributed?". The form gives no clear instructions on how to phrase the answer; the options "Нет", "No", the value of the statistic, and the p-value of the Shapiro-Wilk test are all rejected... Maybe my calculation is wrong, but there is no way to tell :)
closed
2018-04-28T16:12:15Z
2018-08-04T16:07:50Z
https://github.com/Yorko/mlcourse.ai/issues/330
[ "invalid" ]
levbed
1
chezou/tabula-py
pandas
327
Allow columns parameter to use relative area
**Is your feature request related to a problem? Please describe.**
Currently, the `columns` parameter accepts a list of floats that map to horizontal locations in points, even when `relative_area` is set to `True`. The list of floats cannot represent relative positions on the page.

**Describe the solution you'd like**
I would like the `columns` parameter to use positions relative to the page width, just as the `area` parameter does when `relative_area` is set to `True`.

**Describe alternatives you've considered**
This can already be accomplished through the `options` parameter, for example `['--columns %25,50,80.6']`.

**Additional context**
I think it would be nice to have this feature directly available instead of going through `options`.
closed
2022-11-23T21:32:40Z
2022-12-01T15:52:05Z
https://github.com/chezou/tabula-py/issues/327
[]
tdpetrou
5
ckan/ckan
api
8,605
Solution: Automated data enrichment with metadata, tagging, annotation
**Problem description**
Data scientists and ML engineers spend unnecessary time manually searching through datasets to understand their characteristics, due to insufficient or inconsistent metadata. This produces waste by repeatedly analyzing basic dataset characteristics.

**Problem discovery**
From an interview with an ML engineer (a video interview will follow next month): "...metadata helps me as an engineer to swiftly navigate by data, quickly get data statistics and pool slices of data with required properties." Extrapolating from this statement, we get into more specifics:
- ML engineers need quick access to data statistics and properties
- Dataset navigation is often cumbersome without proper tagging
- Manual annotation is time-consuming and prone to inconsistencies
- Data users require efficient ways to filter datasets
- Manual and ad-hoc approaches are common but pricey and time-consuming

**Solution hypothesis**
ML-powered automated data enrichment based on dataset content analysis; summarization of characteristics inside the dataset; extraction of names, locations, and dates. Tag consistency should be preserved.
- The amount of missing values and data consistency may be detected
- Created as a plugin
- Ability to tune the enrichment

Additional functionality:
- Integration with existing search functionality
- API endpoints for automated tagging

**Success metrics**
1. Reduction in time spent on data discovery
2. Accuracy of automated annotations

**Questions to consider:**
Is this change going to break current installations?
- No breaking changes to core CKAN functionality; implemented as optional, additive features.

Can we provide backwards compatibility?
- New features implemented as optional extensions

How easy is it going to be for current implementations to migrate to this new release?
- No downtime required for core functionality

Do current versions of CKAN have adequate resources/support to migrate to this new version?
- Optional GPU support?

Are we going to change the database schema?
- New table for generated metadata and tags

Are we going to change the API?
- New endpoints added

Are we going to deprecate interfaces?
- No, we wouldn't
open
2025-01-04T19:17:06Z
2025-01-07T13:18:05Z
https://github.com/ckan/ckan/issues/8605
[]
thegostev
0
ansible/awx
automation
15,007
Show status for host in job, not status of job in list of recent jobs for host
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.

### Feature type
Enhancement to Existing Feature

### Feature Summary
Currently the exit code from ansible-playbook is used to determine the status of the recent jobs in a host's activity log. Finding out what status the actual host had in a particular job is quite tedious. I am aware that it would be at least as tedious to do that in AWX, but it would make AWX a lot more user-friendly.

### Select the relevant components
- [X] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [X] Other

### Steps to reproduce
Finding out what status the actual host had in a particular job is quite tedious:
- check the job output
- check the status of that host in that job

### Current results
Currently the activity log for a host shows the status of the recent _jobs_ the host was in, not the status of the _host_ in those _jobs_ (which is what the user would expect when looking at the recent activity of the host).

### Suggested feature result
The activity log of a host should show the status of the host in its recent jobs.

### Additional information
_No response_
open
2024-03-18T08:30:46Z
2024-03-18T08:31:16Z
https://github.com/ansible/awx/issues/15007
[ "type:enhancement", "component:ui", "needs_triage", "community" ]
leitwerk-ag
0
huggingface/datasets
pandas
7,077
column_names ignored by load_dataset() when loading CSV file
### Describe the bug
`load_dataset()` ignores the `column_names` kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file.

### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify the `column_names` kwarg.

### Expected behavior
The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values.

### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- `huggingface_hub` version: 0.24.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
open
2024-07-26T14:18:04Z
2024-07-30T07:52:26Z
https://github.com/huggingface/datasets/issues/7077
[]
luismsgomes
1
ultralytics/yolov5
pytorch
12,798
How to load custom models in VS Code on Windows
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.

### Question
![image](https://github.com/ultralytics/yolov5/assets/99647936/d10064dc-77aa-4a31-8396-8b683abab19f)

I have already read https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/#custom-models. How do I fix this? Thank you.

### Additional
_No response_
closed
2024-03-07T16:47:26Z
2024-04-18T00:20:36Z
https://github.com/ultralytics/yolov5/issues/12798
[ "question", "Stale" ]
Waariss
2
zappa/Zappa
flask
1,361
The role defined for the function cannot be assumed by Lambda.
## Context
I was deploying a simple (admin-only) Django install to test Zappa. `zappa init` was fine, but on my first run of `zappa deploy dev` I got the error in the title.

Python 3.12

## Expected Behavior
I expected to get through to "Deployment complete".

## Actual Behavior
An error occurred (InvalidParameterValueException) when calling the CreateFunction operation: The role defined for the function cannot be assumed by Lambda.

## Possible Fix
https://stackoverflow.com/a/37438525/2434654 suggested sleeping a few seconds, so I tried again and it got to "Deployment complete". **Perhaps introduce a pause in the deployment after the upload, to wait for the role to be ready, if others have this issue.**

## Steps to Reproduce
1. Set up a super basic Django project
2. Set up a database (possibly not required)
3. Set up an Amazon account to deploy (I used SSO)
4. zappa init
5. zappa deploy dev

## Your Environment
* Zappa version used: `0.59.0`
* Operating System and Python version: macOS 15.2
* The output of `pip freeze`:
```
argcomplete==3.5.3
asgiref==3.8.1
boto3==1.35.95
botocore==1.35.95
certifi==2024.12.14
cfn-flip==1.3.0
charset-normalizer==3.4.1
click==8.1.8
django==5.1.4
durationpy==0.9
hjson==3.1.0
idna==3.10
jmespath==1.0.1
kappa==0.6.0
markupsafe==3.0.2
pip==24.3.1
placebo==0.9.0
psycopg==3.2.3
psycopg-binary==3.2.3
python-dateutil==2.9.0.post0
python-slugify==8.0.4
pyyaml==6.0.2
requests==2.32.3
s3transfer==0.10.4
setuptools==75.8.0
six==1.17.0
sqlparse==0.5.3
text-unidecode==1.3
toml==0.10.2
tqdm==4.67.1
troposphere==4.8.3
typing-extensions==4.12.2
urllib3==2.3.0
werkzeug==3.1.3
wheel==0.45.1
zappa==0.59.0
```
* Link to your project (optional):
* Your `zappa_settings.json`:
```json
{
    "dev": {
        "aws_region": "ap-southeast-2",
        "django_settings": "project.settings",
        "exclude": [
            "boto3",
            "dateutil",
            "botocore",
            "s3transfer",
            "concurrent"
        ],
        "profile_name": "default",
        "project_name": "unity",
        "runtime": "python3.12",
        "s3_bucket": "random-name"
    }
}
```
open
2025-01-08T22:04:39Z
2025-02-14T05:41:53Z
https://github.com/zappa/Zappa/issues/1361
[]
nigeljames-tess
1
Gozargah/Marzban
api
694
Ability to edit the username in the next update
It would be great if it were possible to edit the username without the subscription link changing. Could you arrange it in the next update so that the subscription link is based on its own dedicated numbering inside the database, so that if the username is changed, the link does not change?
closed
2023-12-12T15:34:59Z
2023-12-12T16:30:36Z
https://github.com/Gozargah/Marzban/issues/694
[ "Feature" ]
hayousef68
1
ansible/awx
automation
14,931
When job is running, selected EE is not visible in the UI, it appears only after it finishes
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.

### Feature type
New Feature

### Feature Summary
Right now the EE that was selected is invisible to the user, see here:
![image](https://github.com/ansible/awx/assets/1560121/a61b4a40-d6ee-421f-932c-925716b0f996)
A specific version of the EE was used for this template, but it is not visible anywhere in the job history (it was changed between runs), and therefore we can't verify whether that specific version was really used, or which version of the EE the job was started with.

### Select the relevant components
- [X] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other

### Steps to reproduce
Create a template, pick a custom EE, and start a job; then edit the template, pick another EE, and start a new job. You won't be able to tell which job was started with which EE.

### Current results
See above

### Suggested feature result
Make the chosen EE version visible just like the other fields (job template, project, etc.).

### Additional information
_No response_
open
2024-02-27T09:01:41Z
2024-03-06T16:17:47Z
https://github.com/ansible/awx/issues/14931
[ "type:enhancement", "component:ui", "needs_triage", "community" ]
benapetr
8
Lightning-AI/pytorch-lightning
data-science
20,384
Custom TQDMProgressBar changes not reflected
### Bug description
I wrote a custom TQDMProgressBar class with some changes. When I run `trainer.fit()` in JupyterLab, however, the default progress bar is still used.

### What version are you seeing the problem on?
v2.4

### How to reproduce the bug
```python
from lightning.pytorch.callbacks import TQDMProgressBar

class CustomProgBar(TQDMProgressBar):
    def __init__(self, ncols: int = 100):
        super().__init__(leave=True)
        self.ncols = ncols

    def init_sanity_tqdm(self):
        bar = super().init_sanity_tqdm()
        bar.ncols = self.ncols
        return bar

    def init_train_tqdm(self):
        bar = super().init_train_tqdm()
        bar.ncols = self.ncols
        return bar

    def init_validation_tqdm(self):
        bar = super().init_validation_tqdm()
        bar.ncols = self.ncols
        return bar

trainer = L.Trainer(accelerator="cpu", max_epochs=5, callbacks=[CustomProgBar(),], log_every_n_steps=1)

# `model` and `data` are LightningModule and LightningDataModule instances, respectively.
# I can include the code for this if you think it's needed for debugging this.
trainer.fit(model, datamodule=data)
```

### Error messages and logs
Printout without the `callbacks` argument passed:
```
  | Name  | Type     | Params | Mode
------------------------------------------
0 | model | UNet     | 3.0 M  | eval
1 | loss  | DiceLoss | 0      | eval
------------------------------------------
3.0 M     Trainable params
0         Non-trainable params
3.0 M     Total params
11.893    Total estimated model params size (MB)
0         Modules in train mode
112       Modules in eval mode
Sanity Checking: | | 0/? [00:00<…
Training: | | 0/? [00:00<…
```
Printout with just the `CustomProgBar` callback:
```
  | Name  | Type     | Params | Mode
------------------------------------------
0 | model | UNet     | 3.0 M  | eval
1 | loss  | DiceLoss | 0      | eval
------------------------------------------
3.0 M     Trainable params
0         Non-trainable params
3.0 M     Total params
11.893    Total estimated model params size (MB)
0         Modules in train mode
112       Modules in eval mode
Sanity Checking: | | 0/? [00:00<…
Training: | | 0/? [00:00<…
```

### Environment
<details>
<summary>Current environment</summary>

```
#- PyTorch Lightning Version (e.g., 2.4.0): 2.4.0
#- PyTorch Version (e.g., 2.4): 2.4.0
#- Python version (e.g., 3.12): 3.12.3
#- OS (e.g., Linux): Windows 10
#- CUDA/cuDNN version: n/a
#- GPU models and configuration: none, CPU only
#- How you installed Lightning(`conda`, `pip`, source): pip
#- TQDM version: 4.66.6
```

</details>

### More info
_No response_
open
2024-11-01T18:56:25Z
2024-11-20T20:10:38Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20384
[ "bug", "needs triage", "ver: 2.4.x" ]
oseymour
0
zappa/Zappa
django
401
[Migrated] from zappa.concurrent.futures import LambdaPoolExecutor?
Originally from: https://github.com/Miserlou/Zappa/issues/1024 by [olirice](https://github.com/olirice)

**Feature Proposal**

Implement `LambdaPoolExecutor` with an API similar to `ThreadPoolExecutor` and `ProcessPoolExecutor`, i.e.:

```python
from concurrent.futures import as_completed  # pip install futures on py2.7

from zappa.concurrent.futures import LambdaPoolExecutor

# Pushes current working directory to Lambda
executor = LambdaPoolExecutor(max_workers=15)

# Function to execute in Lambda function
def do_stuff(x: int) -> int:
    return 1

futures = []
for val in range(100):
    # Submit work to Lambda function
    # Store python future (non-blocking)
    future = executor.submit(do_stuff, val)
    futures.append(future)

# As each task completes
for output in as_completed(futures):
    # Collect result from futures object
    result = output.result()
    # do more stuff with result
```

Thoughts:
1. Initialization of the executor would package up the project and ship it to Lambda (if it doesn't already exist).
2. Submitting work to the executor pickles the variable `x`, copies it up to S3, and notifies a handler in Lambda.
3. The handler downloads, unpickles, and passes the variables to the function.
4. The client (`LambdaPoolExecutor`) repeatedly checks S3 for a response to see if the work is done.
5. When the work is done, copy the pickled response from S3, unpickle it, and set the result on the futures object.

**But why?**

Zappa for cluster computing!
- Drop-in replacement for Python concurrency primitives
- Map-reduce
- ETL
- Distributed web scraping
- Generally take advantage of AWS's CPU and internet connection resources with little effort

**Possible hang ups**
- 5 minute max execution time

All thoughts welcome. If the response is positive, I'll take a swing at implementing it.
closed
2021-02-20T08:27:58Z
2022-08-19T07:28:47Z
https://github.com/zappa/Zappa/issues/401
[]
jneves
1
aio-libs/aiomysql
asyncio
214
A problem with the cursor
Hello everyone. I use a pool to execute SQL. This example works without any problem:

```python
async with _pool.acquire() as conn:
    async with conn.cursor(aiomysql.DictCursor) as _cur:
        await _cur.execute(sql, kwargs or args)
        rs = await _cur.fetchall()
```

and the ab result is good:

```
Time taken for tests:   39.560 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      3450000 bytes
HTML transferred:       2460000 bytes
Requests per second:    252.78 [#/sec] (mean)
```

**But** I want to do it like this, in a class:

```python
async def _cursor(self):
    async with self._pool.acquire() as conn:
        _cur = await conn.cursor(aiomysql.DictCursor)
        return _cur
```

```python
async def query(self, sql, *args, **kwargs):
    _cur = await self._cursor()
    try:
        await _cur.execute(sql, kwargs or args)
        rs = await _cur.fetchall()
        return rs
    finally:
        await _cur.close()
```

and the ab result is quite bad:

```
Complete requests:      10000
Failed requests:        9919
   (Connect: 0, Receive: 0, Length: 9919, Exceptions: 0)
Non-2xx responses:      83
Total transferred:      3443379 bytes
HTML transferred:       2451814 bytes
Requests per second:    163.92 [#/sec] (mean)
```

I know there is a problem in my code, but I don't know what is different. Maybe the connection isn't closed correctly, or is closed too late?
closed
2017-10-10T12:30:25Z
2017-10-15T16:34:40Z
https://github.com/aio-libs/aiomysql/issues/214
[]
shownb
3
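A plausible explanation for the degradation above: in the class version, `_cursor()` returns from inside `async with self._pool.acquire()`, so the connection is released back to the pool before the cursor is ever used, and concurrent requests end up sharing the same connection. A stdlib-only mock (no aiomysql; `MockPool` is hypothetical) demonstrates that the connection is already released by the time the returned cursor is used:

```python
import asyncio
from contextlib import asynccontextmanager

class MockPool:
    """Tiny stand-in for an aiomysql pool that counts checked-out connections."""
    def __init__(self):
        self.checked_out = 0

    @asynccontextmanager
    async def acquire(self):
        self.checked_out += 1
        try:
            yield "connection"
        finally:
            self.checked_out -= 1  # released as soon as the block exits

async def buggy_cursor(pool):
    # Mirrors the issue's _cursor(): `return` exits the `async with`,
    # so the connection is released before the caller uses the cursor.
    async with pool.acquire() as conn:
        return f"cursor-on-{conn}"

async def main():
    pool = MockPool()
    cur = await buggy_cursor(pool)
    # The cursor object still exists, but its connection is already back
    # in the pool, free to be handed to another coroutine concurrently.
    return pool.checked_out

released = asyncio.run(main())
print(released)  # 0 -- no connection held while the "cursor" is in use
```

The fix is to keep the `async with` open for the whole query, as the first (working) example does.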
vimalloc/flask-jwt-extended
flask
478
make `current_user` available in jinja templates
Can Jinja templates get access to the `current_user` variable without passing it explicitly? `Flask-Login` does this, and it's quite convenient.
closed
2022-05-22T19:05:51Z
2022-07-23T21:34:35Z
https://github.com/vimalloc/flask-jwt-extended/issues/478
[]
ghost
2
MagicStack/asyncpg
asyncio
383
Compiling the docs leads to missing sections
## Steps to reproduce ``` git clone https://github.com/MagicStack/asyncpg cd asyncpg/docs git checkout v0.18.1 python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt make html ``` ## Expected The connection pools section, located at _build/html/api/index.html#connection-pools, appears the same as https://magicstack.github.io/asyncpg/current/api/index.html#connection-pools. ## Actual The section is empty. ## Build log ``` python -m sphinx -b html -d _build/doctrees . _build/html Running Sphinx v1.8.1 loading pickled environment... done building [mo]: targets for 0 po files that are out of date building [html]: targets for 1 source files that are out of date updating environment: 31 added, 0 changed, 0 removed reading sources... [100%] usage /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/base.rst:3: WARNING: Error in "currentmodule" directive: maximum 1 argument(s) allowed, 3 supplied. .. currentmodule:: {{ module }} /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/class.rst:3: WARNING: Error in "currentmodule" directive: maximum 1 argument(s) allowed, 3 supplied. .. 
currentmodule:: {{ module }} WARNING: invalid signature for autoclass ('{{ objname }}') WARNING: don't know which module to import for autodocumenting '{{ objname }}' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name) WARNING: invalid signature for automodule ('{{ fullname }}') WARNING: don't know which module to import for autodocumenting '{{ fullname }}' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name) /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/base.rst:3: WARNING: Error in "currentmodule" directive: maximum 1 argument(s) allowed, 3 supplied. .. currentmodule:: {{ module }} /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/class.rst:3: WARNING: Error in "currentmodule" directive: maximum 1 argument(s) allowed, 3 supplied. .. 
currentmodule:: {{ module }} WARNING: invalid signature for autoclass ('{{ objname }}') WARNING: don't know which module to import for autodocumenting '{{ objname }}' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name) WARNING: invalid signature for automodule ('{{ fullname }}') WARNING: don't know which module to import for autodocumenting '{{ fullname }}' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name) WARNING: autodoc: failed to import function 'connection.connect' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) WARNING: autodoc: failed to import class 'connection.Connection' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) WARNING: autodoc: failed to import class 'prepared_stmt.PreparedStatement' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) WARNING: autodoc: failed to import class 'transaction.Transaction' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) WARNING: autodoc: failed to import class 'cursor.CursorFactory' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) WARNING: autodoc: failed to import class 'cursor.Cursor' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) WARNING: autodoc: failed to import function 'pool.create_pool' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) WARNING: autodoc: failed to import 
class 'pool.Pool' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) WARNING: autodoc: failed to import module 'types' from module 'asyncpg'; the following exception was raised: cannot import name 'Protocol' from 'asyncpg.protocol.protocol' (unknown location) looking for now-outdated files... none found pickling environment... done checking consistency... /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/Jinja2-2.10.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/Pygments-2.2.0.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/alabaster-0.7.12.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/certifi-2018.10.15.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/chardet-3.0.4.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/docutils-0.14.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/pytz-2018.7.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/requests-2.20.0.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree 
/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/six-1.11.0.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/snowballstemmer-1.2.1.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/base.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/class.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/module.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/Jinja2-2.10.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/Pygments-2.2.0.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/alabaster-0.7.12.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/certifi-2018.10.15.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/chardet-3.0.4.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree 
/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/docutils-0.14.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/pytz-2018.7.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/requests-2.20.0.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/six-1.11.0.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/snowballstemmer-1.2.1.dist-info/DESCRIPTION.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/base.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/class.rst: WARNING: document isn't included in any toctree /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib64/python3.7/site-packages/sphinx/ext/autosummary/templates/autosummary/module.rst: WARNING: document isn't included in any toctree done preparing documents... done writing output... [100%] usage generating indices... genindex py-modindex writing additional pages... search copying static files... done copying extra files... done dumping search index in English (code: en) ... done dumping object inventory... done build succeeded, 47 warnings. The HTML pages are in _build/html. Build finished. The HTML pages are in _build/html. 
``` ## Attempted workaround Building asyncpg from source first in the same venv also does not help: ``` cd ../ pip install -e . ``` ``` Obtaining file:///home/benjamin/code/vcs/git/com/github/%40/MagicStack/asyncpg Complete output from command python setup.py egg_info: running egg_info writing asyncpg.egg-info/PKG-INFO writing dependency_links to asyncpg.egg-info/dependency_links.txt writing requirements to asyncpg.egg-info/requires.txt writing top-level names to asyncpg.egg-info/top_level.txt Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/setup.py", line 294, in <module> setup_requires=setup_requires, File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/setuptools/__init__.py", line 140, in setup return distutils.core.setup(**attrs) File "/usr/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 296, in run self.find_sources() File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 303, in find_sources mm.run() File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 534, in run self.add_defaults() File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 570, in add_defaults sdist.add_defaults(self) File "/usr/lib/python3.7/distutils/command/sdist.py", line 228, in add_defaults self._add_defaults_ext() File 
"/usr/lib/python3.7/distutils/command/sdist.py", line 311, in _add_defaults_ext build_ext = self.get_finalized_command('build_ext') File "/usr/lib/python3.7/distutils/cmd.py", line 299, in get_finalized_command cmd_obj.ensure_finalized() File "/usr/lib/python3.7/distutils/cmd.py", line 107, in ensure_finalized self.finalize_options() File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/setup.py", line 233, in finalize_options annotate=self.cython_annotate) File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/Cython/Build/Dependencies.py", line 956, in cythonize aliases=aliases) File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/Cython/Build/Dependencies.py", line 801, in create_extension_list for file in nonempty(sorted(extended_iglob(filepattern)), "'%s' doesn't match any files" % filepattern): File "/home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/docs/.venv/lib/python3.7/site-packages/Cython/Build/Dependencies.py", line 111, in nonempty raise ValueError(error_msg) ValueError: 'asyncpg/pgproto/pgproto.pyx' doesn't match any files ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /home/benjamin/code/vcs/git/com/github/@/MagicStack/asyncpg/ ```
closed
2018-11-04T18:10:50Z
2018-11-04T18:24:32Z
https://github.com/MagicStack/asyncpg/issues/383
[]
ioistired
1
mars-project/mars
scikit-learn
2,659
Support `merge_small_files` for `md.read_parquet` etc
**Is your feature request related to a problem? Please describe.** For data-reading ops like `md.read_parquet` and `md.read_csv`, if too many small files exist, a lot of chunks would be created and the subsequent computation could be extremely slow. Thus I suggest adding a `merge_small_files` argument to these functions to enable automatic merging of small files. **Describe the solution you'd like** Sample a few input chunks, e.g. 10 chunks, and get `k = 128M / {size of the largest chunk}`; if greater than 2, try to merge small chunks every k chunks.
closed
2022-01-27T10:38:11Z
2022-01-30T02:12:27Z
https://github.com/mars-project/mars/issues/2659
[ "type: enhancement", "mod: dataframe", "task: medium" ]
qinxuye
0
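The sampling heuristic described above can be sketched in plain Python. `merge_plan` below is a hypothetical illustration, taking the 128M target and the "merge only when k is greater than 2" rule directly from the description:

```python
def merge_plan(chunk_sizes, target=128 * 2**20, sample=10):
    """Group small chunks per the issue's heuristic: sample a few chunks,
    derive k = target // max(sampled sizes), and if k > 2 merge every k
    consecutive chunks into one."""
    sampled = chunk_sizes[:sample]
    k = target // max(sampled)
    if k <= 2:
        # chunks are already big enough; keep them as-is
        return [[i] for i in range(len(chunk_sizes))]
    return [list(range(i, min(i + k, len(chunk_sizes))))
            for i in range(0, len(chunk_sizes), k)]

# 6 chunks of ~32 MiB each -> k = 4, so merge every 4 chunks
sizes = [32 * 2**20] * 6
print(merge_plan(sizes))  # [[0, 1, 2, 3], [4, 5]]
```

A real implementation would then rewrite the tileable's chunk graph so each group becomes a single read-and-concat chunk.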
marshmallow-code/flask-smorest
rest-api
413
Proper way to use schemas with alt_response?
I have a route handler that looks like this: ```py @bp.arguments(Args) @bp.response(200, GoodResponse) @bp.alt_response(400, schema=ErrorResponse) def put(self, args): ... if errors: abort(400, errors=errors) else: return results ``` I've tried a few different variations in the error case to get things returning properly and validated according to the schema but can't get it working. Is it correct that the schema specified in `alt_response` will be used to validate and serialize the object returned in a `400` case here? I've experimented with putting invalid data into the `errors` object here, and it doesn't seem to make a difference. My goal here is to return a 400 response with a JSON body that has the shape of `ErrorResponse`. Thanks in advance for any help!
closed
2022-10-25T22:52:43Z
2024-05-24T20:16:28Z
https://github.com/marshmallow-code/flask-smorest/issues/413
[ "question" ]
GSGerritsen
7
frappe/frappe
rest-api
31,527
Edit button misalignment
![Image](https://github.com/user-attachments/assets/9075e954-7030-4261-afd9-2f21930170ef)
open
2025-03-05T09:11:20Z
2025-03-05T09:11:20Z
https://github.com/frappe/frappe/issues/31527
[ "bug" ]
maasanto
0
microsoft/nni
pytorch
5,558
Need shape formula support for predefined one-shot search space
Describe the issue: When I add the profiler as tutorial instructed below: ```python dummy_input = torch.randn(1, 3, 32, 32) profiler = NumParamsProfiler(model_space, dummy_input) penalty = ExpectationProfilerPenalty(profiler, 500e3) strategy = DartsStrategy(gradient_clip_val=5.0, penalty=penalty) ``` This error appeared: ```bash [2023-05-12 22:02:18] WARNING: Shape information is not explicitly propagated when executing aten.avg_pool2d.default, and and a recent module that needs shape information has no shape inference formula. Module calling stack: - '' (type: nni.nas.hub.pytorch.nasnet.DARTS, NO shape formula) - 'stages.0' (type: nni.nas.hub.pytorch.nasnet.NDSStage, NO shape formula) - 'stages.0.blocks.0' (type: nni.nas.nn.pytorch.cell.Cell, NO shape formula) - 'stages.0.blocks.0.ops.0.0' (type: nni.nas.nn.pytorch.choice.LayerChoice, HAS shape formula) - 'stages.0.blocks.0.ops.0.0.avg_pool_3x3' (type: torch.nn.modules.pooling.AvgPool2d, NO shape formula) Traceback (most recent call last): File "/home/dzhang/Documents/ml-experimental/TinyNAS/nas_tutorial/darts.py", line 235, in <module> profiler = NumParamsProfiler(model_space, dummy_input) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/flops.py", line 197, in __init__ self.profiler = FlopsParamsProfiler(model_space, args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/flops.py", line 171, in __init__ shapes = submodule_input_output_shapes(model_space, *args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 466, in submodule_input_output_shapes model(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/usr/local/lib/python3.10/dist-packages/nni/nas/hub/pytorch/nasnet.py", line 613, in forward s0, s1 = stage([s0, s1]) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 
1212, in _call_impl result = forward_call(*input, **kwargs) File "/usr/local/lib/python3.10/dist-packages/nni/nas/nn/pytorch/repeat.py", line 158, in forward x = block(x) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1212, in _call_impl result = forward_call(*input, **kwargs) File "/usr/local/lib/python3.10/dist-packages/nni/nas/nn/pytorch/cell.py", line 385, in forward current_state.append(op(inp(states))) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1215, in _call_impl hook_result = hook(self, input, result) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 514, in module_shape_inference_hook result = _module_shape_inference_impl(module, output, *input, is_leaf=is_leaf) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 628, in _module_shape_inference_impl output_shape = formula(module, *input_args, **formula_kwargs, **input_kwargs) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape_formula.py", line 230, in layer_choice_formula expressions[val] = extract_shape_info(shape_inference(module[val], *args, is_leaf=is_leaf, **kwargs)) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 498, in shape_inference outputs = module(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1215, in _call_impl hook_result = hook(self, input, result) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 514, in module_shape_inference_hook result = _module_shape_inference_impl(module, output, *input, is_leaf=is_leaf) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 617, in _module_shape_inference_impl tree_map(_ensure_shape, outputs) File "/usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py", line 192, in tree_map return 
tree_unflatten([fn(i) for i in flat_args], spec) File "/usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py", line 192, in <listcomp> return tree_unflatten([fn(i) for i in flat_args], spec) File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 609, in _ensure_shape raise RuntimeError( RuntimeError: Shape inference failed because no shape inference formula is found for AvgPool2d(kernel_size=3, stride=1, padding=1) of type AvgPool2d. Meanwhile the nested modules and functions inside failed to propagate the shape information. Please provide a `_shape_forward` member function or register a formula using `register_shape_inference_formula`. ``` According to the previous issue in https://github.com/microsoft/nni/issues/5538, I think it is AvgPool2d does not have shape inference formula, since it is not customzied search space, so can you add the support on your side? Environment: NNI version: latest(Build from source and use Dockerfile in master branch) Training service (local|remote|pai|aml|etc): local Client OS: Unbuntu Python version: 3.8 PyTorch/TensorFlow version: 1.10.2 Is conda/virtualenv/venv used?: No Is running in Docker?: Yes Configuration: Experiment config (remember to remove secrets!): Same as Latest Version in Darts example Search space: Darts
open
2023-05-12T22:16:56Z
2023-05-25T11:28:20Z
https://github.com/microsoft/nni/issues/5558
[]
dzk9528
9
nvbn/thefuck
python
807
Not Running in Fish Shell
<!-- If you have any issue with The Fuck, sorry about that, but we will do what we can to fix that. Actually, maybe we already have, so first thing to do is to update The Fuck and see if the bug is still there. --> <!-- If it is (sorry again), check if the problem has not already been reported and if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with the following basic information: --> The output of `thefuck --version` (something like `The Fuck 3.1 using Python 3.5.0`): **The Fuck 3.26 using Python 3.6.5** Your shell and its version (`bash`, `zsh`, *Windows PowerShell*, etc.): **Fish v2.7.1 (works fine in Bash)** Your system (Debian 7, ArchLinux, Windows, etc.): **macOS 10.13.5 Beta (17F45c)** How to reproduce the bug: **Run 'fuck' command after entering any incorrect command in Fish shell.** The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck): ``` DEBUG: Run with settings: {'alter_history': True, 'debug': True, 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'}, 'exclude_rules': [], 'history_limit': None, 'instant_mode': False, 'no_colors': False, 'priority': {}, 'repeat': False, 'require_confirmation': True, 'rules': [<const: All rules enabled>], 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'], 'user_dir': PosixPath('/Users/user/.config/thefuck'), 'wait_command': 3, 'wait_slow_command': 15} DEBUG: Total took: 0:00:00.296931 Traceback (most recent call last): File "/usr/local/bin/thefuck", line 12, in <module> sys.exit(main()) File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/entrypoints/main.py", line 25, in main fix_command(known_args) File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/entrypoints/fix_command.py", line 36, in fix_command command = types.Command.from_raw_script(raw_command) File 
"/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/types.py", line 81, in from_raw_script expanded = shell.from_shell(script) File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/shells/generic.py", line 30, in from_shell return self._expand_aliases(command_script) File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/shells/fish.py", line 65, in _expand_aliases aliases = self.get_aliases() File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/shells/fish.py", line 60, in get_aliases raw_aliases = _get_aliases(overridden) File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/utils.py", line 33, in wrapper memo[key] = fn(*args, **kwargs) File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/utils.py", line 267, in wrapper return _cache.get_value(fn, depends_on, args, kwargs) File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/utils.py", line 243, in get_value value = fn(*args, **kwargs) File "/usr/local/Cellar/thefuck/3.26/libexec/lib/python3.6/site-packages/thefuck/shells/fish.py", line 25, in _get_aliases name, value = alias.replace('alias ', '', 1).split(' ', 1) ValueError: not enough values to unpack (expected 2, got 1) ``` If the bug only appears with a specific application, the output of that application and its version: N/A Anything else you think is relevant: N/A ![screenshot_20180426_17 50 13_tz8le7](https://user-images.githubusercontent.com/742476/39339042-64896c24-497b-11e8-992b-ee41abf71a0c.png) <!-- It's only with enough information that we can do something to fix the problem. -->
closed
2018-04-27T00:58:57Z
2018-05-22T17:32:16Z
https://github.com/nvbn/thefuck/issues/807
[ "bug", "fish" ]
grokdesigns
14
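The crash in the traceback above comes from `alias.replace('alias ', '', 1).split(' ', 1)` assuming every `alias` line contains both a name and a value. A sketch of a tolerant parse (this is illustrative only, not the actual fix that shipped; `parse_fish_aliases` is a hypothetical name):

```python
def parse_fish_aliases(raw):
    """Tolerant version of the parsing that raises in the traceback:
    skip any `alias` line that lacks either a name or a value."""
    aliases = {}
    for line in raw.splitlines():
        if not line.startswith('alias '):
            continue
        # partition never raises, unlike split(' ', 1) with unpacking
        name, _, value = line[len('alias '):].partition(' ')
        if name and value:
            aliases[name] = value
    return aliases

raw = "alias ll 'ls -la'\nalias broken\nfunc something"
print(parse_fish_aliases(raw))  # {'ll': "'ls -la'"}
```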
trevismd/statannotations
seaborn
152
Feature request: permutation test
Can the built-in scipy.stats permutation_test function be an option for statistical tests? Many Thanks!
open
2024-05-24T07:52:46Z
2024-11-30T08:54:21Z
https://github.com/trevismd/statannotations/issues/152
[]
naureeng
2
qubvel-org/segmentation_models.pytorch
computer-vision
849
Is this library still maintained?
It's been almost a year since the last release, and most commits since then have been limited to auto-generated dependabot PRs. The outdated version of timm required to use smp now makes it incompatible with the latest release of lightly: https://github.com/microsoft/torchgeo/issues/1824. PRs to update the version of timm have been ignored (https://github.com/qubvel/segmentation_models.pytorch/pull/839), and requests to unpin the timm dependency have been rejected (https://github.com/qubvel/segmentation_models.pytorch/issues/620). Our own contributions have been closed, and requests to reopen them have been ignored as well (https://github.com/qubvel/segmentation_models.pytorch/pull/776). Which begs the question: is this library still maintained? If yes, then it would be incredibly helpful to unpin the timm dependency. If no, then would you be willing to pass the torch (heh) on to someone else so this incredibly useful library does not become abandoned? Alternatively, does anyone know any alternatives that offer the same functionality and compatibility with modern timm releases as smp?
closed
2024-01-25T09:51:18Z
2024-09-27T10:13:45Z
https://github.com/qubvel-org/segmentation_models.pytorch/issues/849
[]
adamjstewart
18
ranaroussi/yfinance
pandas
1,727
Module 'yfinance' has no attribute 'Ticker'
### Describe bug I try to use command import yfinance as yf import pandas_datareader as pdr import pandas as pd from datetime import datetime import yfinance as yf msft = yf.Ticker('MSFT') and get a response of AttributeError: module 'yfinance' has no attribute 'Ticker' I'm using Miniconda. Tried to use this also with Anaconda but it gets the same response. I've installed all the needed packages with conda and pip and when I try to install them again I get a response of Requirement already satisfied. I have no idea what is wrong as everything should be working. ### Simple code that reproduces your problem import yfinance as yf import pandas_datareader as pdr import pandas as pd from datetime import datetime import yfinance as yf msft = yf.Ticker('MSFT') [4](file:///d%3A/Ville/Anaconda/yfinance.py?line=3) from datetime import datetime [6](file:///d%3A/Ville/Anaconda/yfinance.py?line=5) import yfinance as yf ----> [8](file:///d%3A/Ville/Anaconda/yfinance.py?line=7) msft = yf.Ticker('MSFT') [10](file:///d%3A/Ville/Anaconda/yfinance.py?line=9) msft.info AttributeError: module 'yfinance' has no attribute 'Ticker' ### Debug log Exception has occurred: AttributeError partially initialized module 'yfinance' has no attribute 'Ticker' (most likely due to a circular import) File "D:\Ville\Anaconda\yfinance.py", line 8, in <module> msft = yf.Ticker('MSFT') ^^^^^^^^^ File "D:\Ville\Anaconda\yfinance.py", line 1, in <module> import yfinance as yf AttributeError: partially initialized module 'yfinance' has no attribute 'Ticker' (most likely due to a circular import) ### Bad data proof _No response_ ### `yfinance` version 0.2.31 ### Python version _No response_ ### Operating system _No response_
closed
2023-10-18T16:48:03Z
2023-10-18T17:51:24Z
https://github.com/ranaroussi/yfinance/issues/1727
[]
Vakke
3
autogluon/autogluon
computer-vision
4,180
Accessing probabilities of bagged models
Hi, I was wondering if there's any way of accessing the probabilities of each of the bagged models which are averaged to get the output of ```predict_proba()``` for the L1 models? This would be helpful to be able to calculate uncertainties for each of the models as well as uncertainty for the entire weighted ensemble Thanks!
open
2024-05-08T00:44:01Z
2024-11-25T22:56:48Z
https://github.com/autogluon/autogluon/issues/4180
[ "enhancement", "module: tabular" ]
amanmalali
3
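Independent of whether AutoGluon exposes the per-fold predictions, once they are available the requested uncertainty is straightforward: average across bagged models (which is what `predict_proba()` effectively returns) and use the spread between folds as an uncertainty estimate. The probabilities below are made up purely for illustration:

```python
import statistics

# Hypothetical class probabilities from 4 bagged folds for one sample
per_model_proba = [
    [0.70, 0.30],
    [0.60, 0.40],
    [0.80, 0.20],
    [0.66, 0.34],
]

# The ensemble output: per-class average over the bagged models
mean_proba = [statistics.mean(col) for col in zip(*per_model_proba)]

# Disagreement between folds as a simple per-class uncertainty estimate
stdev_proba = [statistics.stdev(col) for col in zip(*per_model_proba)]

print([round(p, 2) for p in mean_proba])  # [0.69, 0.31]
```

The same pattern extends to the weighted ensemble by taking a weighted mean instead of a plain one.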
jupyter/nbviewer
jupyter
423
Test using tornado.testing
In our current testing setup, we're doing functional tests with no way to do mocks against GitHub itself. This ends up causing a lot of bad builds when things are actually fine. Since things can time out between requests and nbviewer, I think there's a mismatch. Using [`tornado.testing`](http://tornado.readthedocs.org/en/latest/testing.html) should help do tests properly for nbviewer.
open
2015-03-12T22:07:10Z
2015-09-01T01:15:48Z
https://github.com/jupyter/nbviewer/issues/423
[ "type:Maintenance" ]
rgbkrk
3
mirumee/ariadne
api
439
Could you write it more detail
I can integrate Ariadne with Django by following the instructions in the Ariadne documentation; however, I don't understand the django-channels integration section. I think the sample code is a bit short and tricky for newbies to follow. Moreover, Django today supports an ASGI server out of the box. So can I add the Subscription feature to my app without installing channels?
closed
2020-11-05T07:28:24Z
2020-11-05T10:08:20Z
https://github.com/mirumee/ariadne/issues/439
[]
iamleson98
1
apache/airflow
machine-learning
47,905
Fix mypy-boto3-appflow version
### Body We set a TODO to handle the version limitation https://github.com/apache/airflow/blob/9811f1d6d0fe557ab204b20ad5cdf7423926bd22/providers/src/airflow/providers/amazon/provider.yaml#L146-L148 I'm opening this issue for visibility as it's small in scope and a good task for new contributors. ### Committer - [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
closed
2025-03-18T11:28:58Z
2025-03-19T13:33:43Z
https://github.com/apache/airflow/issues/47905
[ "provider:amazon", "area:providers", "good first issue", "kind:task" ]
eladkal
2
tiangolo/uwsgi-nginx-flask-docker
flask
196
UWSGI with Python 3.7 in Dockerfile
Hi Tiangolo, great work, and I hope you're doing well. I have an issue when running a Dockerfile containing "Python 3.7, uWSGI, Postgres, Nginx". The build command runs via Jenkins and everything is running, but `docker logs -f app` shows this: **"2020-07-16 00:34:26,763 INFO spawned: 'uwsgi' with pid 480 [uWSGI] getting INI configuration from /home/app/payouts_portal/uwsgi.ini 2020-07-16 00:34:27,774 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)"**
closed
2020-07-16T00:42:46Z
2020-12-17T00:27:59Z
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/196
[ "answered" ]
abobakrahmed
2
waditu/tushare
pandas
1,505
Adjusted-price data API error (Ninebot is missing the 2021-02-01 adjusted data)
![image](https://user-images.githubusercontent.com/26301585/106459291-9c1a3000-64cc-11eb-9b8f-8520155a6bab.png) (Ninebot is missing the adjusted-price data for 2021-02-01)
open
2021-02-01T12:33:23Z
2021-02-01T12:33:54Z
https://github.com/waditu/tushare/issues/1505
[]
winstonzhong
1
dgtlmoon/changedetection.io
web-scraping
2,415
[feature] Compare numeric values before notification?
**Version and OS** Current on Docker **Is your feature request related to a problem? Please describe.** Hi, I'm pretty new to CD, even though I did the first steps successfully. What a nice tool! But I seem to struggle with monitoring a product's price and getting a notification only if the price is above/below a certain value. Currently I still get a notification on simply every poll, and this is pretty annoying; it's stressful to manually check the value every time by hand. (Sorry, but there seems to be no option in the WebUI, and also no tutorial seems to address this feature of suppressing notifications if they don't pass a filter.) **Describe the solution you'd like** Ability for a check or check notification to suppress the notification if a numeric filter doesn't match, e.g. x > 1000.0 **Describe the use-case and give concrete real-world examples** Notify only if prices, quantities, metrics, ... rise above a certain level.
closed
2024-06-14T18:14:36Z
2024-07-12T15:09:45Z
https://github.com/dgtlmoon/changedetection.io/issues/2415
[ "enhancement" ]
Matthias84
1
fastapi-users/fastapi-users
asyncio
5
Improve test coverage
Current coverage: [![codecov](https://codecov.io/gh/frankie567/fastapi-users/branch/master/graph/badge.svg)](https://codecov.io/gh/frankie567/fastapi-users)
closed
2019-10-13T11:48:13Z
2019-10-15T05:55:10Z
https://github.com/fastapi-users/fastapi-users/issues/5
[ "enhancement" ]
frankie567
0
hbldh/bleak
asyncio
1,065
Advertisements only seldom received/displayed
* bleak version: 0.18.1 * Python version: Python 3.9.2 * Operating System: Linux test 5.15.61-v7l+ #1579 SMP Fri Aug 26 11:13:03 BST 2022 armv7l GNU/Linux * BlueZ version (`bluetoothctl -v`) in case of Linux: 5.55 ### Description I have a device which is sending a lot advertisements. I said it to do in maximum speed. test@test:~/bleak $ sudo hcitool lescan --duplicate LE Scan ... E4:5F:01:BA:05:2D (unknown) E4:5F:01:BA:05:2D E4:5F:01:BA:05:2D (unknown) E4:5F:01:BA:05:2D E4:5F:01:BA:05:2D (unknown) E4:5F:01:BA:05:2D I can see it advertising at a high speed (around 40 advertisements per second). Then I use (from example): ``` test@test:~/bleak $ cat detection.py ``` ```python """ Detection callback w/ scanner -------------- Example showing what is returned using the callback upon detection functionality Updated on 2020-10-11 by bernstern <bernie@allthenticate.net> """ import asyncio import logging import sys from bleak import BleakScanner from bleak.backends.device import BLEDevice from bleak.backends.scanner import AdvertisementData logger = logging.getLogger(__name__) def simple_callback(device: BLEDevice, advertisement_data: AdvertisementData): logger.info(f"{device.address}: {advertisement_data}") async def main(service_uuids): scanner = BleakScanner(simple_callback, service_uuids) while True: print("(re)starting scanner") await scanner.start() await asyncio.sleep(5.0) await scanner.stop() if __name__ == "__main__": logging.basicConfig( level=logging.INFO, format="%(asctime)-15s %(name)-8s %(levelname)s: %(message)s", ) service_uuids = sys.argv[1:] asyncio.run(main(service_uuids)) test@test:~/bleak $ ``` It gives the following output: ``` $ python3 detection.py (re)starting scanner 2022-10-05 13:16:40,784 __main__ INFO: E4:5F:01:BA:05:2D: AdvertisementData(manufacturer_data={65535: b'\xbe\xac\x13\xb7\xcbV\x81ZH\xec\xa0\x1f0!\xabI\xa1U\x00\x00:\x98\xc3\x01'}) 2022-10-05 13:16:45,775 __main__ INFO: E4:5F:01:BA:05:2D: AdvertisementData(manufacturer_data={65535: 
b'\xbe\xac\x13\xb7\xcbV\x81ZH\xec\xa0\x1f0!\xabI\xa1U\x00\x00:\x98\xc3\x01'}) (re)starting scanner 2022-10-05 13:16:45,798 __main__ INFO: E4:5F:01:BA:05:2D: AdvertisementData(manufacturer_data={65535: b'\xbe\xac\x13\xb7\xcbV\x81ZH\xec\xa0\x1f0!\xabI\xa1U\x00\x00:\xb1\xc3\x01'}) 2022-10-05 13:16:50,799 __main__ INFO: E4:5F:01:BA:05:2D: AdvertisementData(manufacturer_data={65535: b'\xbe\xac\x13\xb7\xcbV\x81ZH\xec\xa0\x1f0!\xabI\xa1U\x00\x00:\xb1\xc3\x01'}) (re)starting scanner 2022-10-05 13:16:50,819 __main__ INFO: E4:5F:01:BA:05:2D: AdvertisementData(manufacturer_data={65535: b'\xbe\xac\x13\xb7\xcbV\x81ZH\xec\xa0\x1f0!\xabI\xa1U\x00\x00:\xca\xc3\x01'}) 2022-10-05 13:16:55,823 __main__ INFO: E4:5F:01:BA:05:2D: AdvertisementData(manufacturer_data={65535: b'\xbe\xac\x13\xb7\xcbV\x81ZH\xec\xa0\x1f0!\xabI\xa1U\x00\x00:\xca\xc3\x01'}) (re)starting scanner 2022-10-05 13:16:55,838 __main__ INFO: E4:5F:01:BA:05:2D: AdvertisementData(manufacturer_data={65535: b'\xbe\xac\x13\xb7\xcbV\x81ZH\xec\xa0\x1f0!\xabI\xa1U\x00\x00:\xe3\xc3\x01'}) ``` It gives only seldom data, far away from 40 per second. I assume the checked fields remain equal, but there shall be at least a counter inside, which increases every 200ms. So I assume at least I should see an output every 200ms? Could I tell Bleak to display every received advertisement? Why is stop/restart needed? Just that duplicates are reported again? Or do I somehow need to tell BlueZ under Bleak to use something like lescan with --duplicate? Or could it be that problem is on sender side? Advertisement data not correct? Or only sometimes correct?
open
2022-10-05T13:22:57Z
2022-10-06T17:34:51Z
https://github.com/hbldh/bleak/issues/1065
[ "3rd party issue", "Backend: BlueZ" ]
capiman
21
ultralytics/ultralytics
deep-learning
19,292
Colab default setting could not covert to tflite
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component _No response_ ### Bug Few weeks ago I can convert the yolo11n.pt to tflite, but now the colab default python setting will be python 3.11.11. At this version, the tflite could not be converted by onnx2tf. How can I fix it? ![Image](https://github.com/user-attachments/assets/60fa0461-5413-4f7d-92c6-4adac4c368d9) ``` from ultralytics import YOLO model = YOLO('yolo11n.pt') model.export(format='tflite', imgsz=192, int8=True) model = YOLO('yolo11n_saved_model/yolo11n_full_integer_quant.tflite') res = model.predict(imgsz=192) res[0].plot(show=True) ``` ``` Downloading https://ultralytics.com/assets/Arial.ttf to '/root/.config/Ultralytics/Arial.ttf'... 100%|██████████| 755k/755k [00:00<00:00, 116MB/s] Scanning /content/datasets/coco8/labels/val... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4 [00:00<00:00, 117.40it/s]New cache created: /content/datasets/coco8/labels/val.cache TensorFlow SavedModel: WARNING ⚠️ >300 images recommended for INT8 calibration, found 4 images. TensorFlow SavedModel: starting TFLite export with onnx2tf 1.26.3... ERROR:root:Internal Python error in the inspect module. Below is the traceback from this internal error. ERROR: The trace log is below. 
Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 312, in print_wrapper_func result = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 385, in inverted_operation_enable_disable_wrapper_func result = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 55, in get_replacement_parameter_wrapper_func func(*args, **kwargs) File "/usr/local/lib/python3.11/dist-packages/onnx2tf/ops/Mul.py", line 245, in make_node correction_process_for_accuracy_errors( File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 5894, in correction_process_for_accuracy_errors min_abs_err_perm_1: int = [idx for idx in range(len(validation_data_1.shape))] ^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'shape' ERROR: input_onnx_file_path: yolo11n.onnx ERROR: onnx_op_name: wa/model.10/m/m.0/attn/Mul ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again. ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option. ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option. 
Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 312, in print_wrapper_func result = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 385, in inverted_operation_enable_disable_wrapper_func result = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 55, in get_replacement_parameter_wrapper_func func(*args, **kwargs) File "/usr/local/lib/python3.11/dist-packages/onnx2tf/ops/Mul.py", line 245, in make_node correction_process_for_accuracy_errors( File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 5894, in correction_process_for_accuracy_errors min_abs_err_perm_1: int = [idx for idx in range(len(validation_data_1.shape))] ^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'shape' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-6-da2eaec26985>", line 3, in <cell line: 0> model.export(format='tflite', imgsz=192, int8=True) File "/usr/local/lib/python3.11/dist-packages/ultralytics/engine/model.py", line 741, in export return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/ultralytics/engine/exporter.py", line 418, in __call__ f[5], keras_model = self.export_saved_model() ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/ultralytics/engine/exporter.py", line 175, in outer_func f, model = inner_func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/usr/local/lib/python3.11/dist-packages/ultralytics/engine/exporter.py", line 1036, in export_saved_model keras_model = onnx2tf.convert( ^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/onnx2tf/onnx2tf.py", line 1141, in convert op.make_node( File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 378, in print_wrapper_func sys.exit(1) SystemExit: 1 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/IPython/core/ultratb.py", line 1101, in get_records return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/IPython/core/ultratb.py", line 248, in wrapped return f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/IPython/core/ultratb.py", line 281, in _fixed_getinnerframes records = fix_frame_records_filenames(inspect.getinnerframes(etb, context)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/inspect.py", line 1739, in getinnerframes traceback_info = getframeinfo(tb, context) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/inspect.py", line 1671, in getframeinfo lineno = frame.f_lineno ^^^^^^^^^^^^^^ AttributeError: 'tuple' object has no attribute 'f_lineno' --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py](https://localhost:8080/#) in print_wrapper_func(*args, **kwargs) 311 try: --> 312 result = func(*args, **kwargs) 313 18 frames AttributeError: 'NoneType' object has no attribute 'shape' During handling of the above exception, another exception occurred: SystemExit Traceback (most recent call last) [... 
skipping hidden 1 frame] SystemExit: 1 During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) [... skipping hidden 1 frame] [/usr/local/lib/python3.11/dist-packages/IPython/core/ultratb.py](https://localhost:8080/#) in find_recursion(etype, value, records) 380 # first frame (from in to out) that looks different. 381 if not is_recursion_error(etype, value, records): --> 382 return len(records), 0 383 384 # Select filename, lineno, func_name to track frames with TypeError: object of type 'NoneType' has no len() ``` Thanks, Kris ### Environment colab default Environment ### Minimal Reproducible Example https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
open
2025-02-18T09:09:13Z
2025-02-21T01:35:52Z
https://github.com/ultralytics/ultralytics/issues/19292
[ "bug", "non-reproducible", "exports" ]
kris-himax
5
keras-team/keras
data-science
20,760
common api for getting gradient from all backend?
In this example, https://keras.io/examples/vision/grad_cam/, the gradient is obtained as follows using TensorFlow. ```python with tf.GradientTape() as tape: last_conv_layer_output, preds = grad_model(img_array) if pred_index is None: pred_index = tf.argmax(preds[0]) class_channel = preds[:, pred_index] ``` Changing the backend won't work without changing the way the gradient is obtained, so I was wondering whether Keras cares about providing a general API interface to get the gradient, etc.
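The question above is about obtaining gradients independently of the backend. As a rough, framework-free illustration of the underlying operation that `tf.GradientTape` implements efficiently, here is a central finite-difference sketch in plain Python (the `numerical_grad` helper is hypothetical, not a Keras API):

```python
def numerical_grad(f, x, eps=1e-6):
    """Central finite-difference gradient of a scalar function f at point x.

    Backend-free, since it only calls f; the trade-off is cost
    (two evaluations per dimension) and limited numerical precision.
    """
    grad = []
    for i in range(len(x)):
        x_plus = list(x)
        x_minus = list(x)
        x_plus[i] += eps
        x_minus[i] -= eps
        grad.append((f(x_plus) - f(x_minus)) / (2 * eps))
    return grad

# f(x) = x0**2 + 3*x1 has analytic gradient (2*x0, 3)
g = numerical_grad(lambda v: v[0] ** 2 + 3 * v[1], [2.0, 1.0])
```

For real models one would still dispatch per backend (`tf.GradientTape`, `torch.autograd`, `jax.grad`); the sketch only shows the backend-neutral definition those APIs compute.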
closed
2025-01-14T18:01:42Z
2025-01-17T17:54:27Z
https://github.com/keras-team/keras/issues/20760
[ "type:support", "stat:awaiting response from contributor" ]
pure-rgb
4
LibreTranslate/LibreTranslate
api
185
en as frontend source language is not supported
[root@DESKTOP-B0B9UFO code]# docker run -ti --rm -p 5000:5000 libretranslate/libretranslate Updating language models ERROR:root:(RemoteDisconnected('Remote end closed connection without response'),) Cannot update models (normal if you're offline): Local package index not found, use package.update_package_index() to load it INFO:apscheduler.scheduler:Adding job tentatively -- it will be properly scheduled when the scheduler starts INFO:apscheduler.scheduler:Added job "remove_translated_files" to job store "default" INFO:apscheduler.scheduler:Scheduler started Traceback (most recent call last): File "/usr/local/bin/libretranslate", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.8/site-packages/app/main.py", line 113, in main app = create_app(args) File "/usr/local/lib/python3.8/site-packages/app/app.py", line 135, in create_app raise AttributeError( AttributeError: en as frontend source language is not supported. INFO:apscheduler.scheduler:Scheduler has been shut down
closed
2021-12-17T09:04:23Z
2022-03-06T17:03:48Z
https://github.com/LibreTranslate/LibreTranslate/issues/185
[ "possible bug" ]
lizongshen
16
cobrateam/splinter
automation
759
Firefox does not open in fullscreen mode when passing the according keyword argument
The `fullscreen=True` argument to the Firefox browser constructor does not open the browser in full screen but in a regular non-maximized window instead: ``` from splinter import Browser browser = Browser('firefox', fullscreen=True) browser.visit('https://www.github.com') ``` Apart from that there is no error message. For Chrome, however, it works just fine: ``` from splinter import Browser browser = Browser('chrome', fullscreen=True) browser.visit('https://www.github.com') ``` Versions: * Ubuntu 18.04 * splinter 0.13.0 * selenium 3.141.0 * Firefox 73.0
closed
2020-02-19T16:18:06Z
2020-03-04T04:24:59Z
https://github.com/cobrateam/splinter/issues/759
[ "bug" ]
dirkschneemann
1
modoboa/modoboa
django
2,626
Enable CardDAV on user webmail
Hi Installed the latest build from the install script as of (5/10/2022) on debian 11 LXC in proxmox. When I go to the webmail URL and login as a normal user account I attempt to enable Synchronization Address book using CardDAV but it errors out. Error : Taken from Dev mode in chrome ----Start of error-- (index):336 Uncaught TypeError: Cannot read properties of undefined (reading 'destroy') at HTMLInputElement.toggleSignatureEditor ((index):336:18) at HTMLDocument.dispatch (jquery.min.js:3:28337) at v.handle (jquery.min.js:3:25042) at Object.trigger (jquery.min.js:3:27423) at HTMLInputElement.<anonymous> (jquery.min.js:4:3107) at Function.each (jquery.min.js:3:5257) at init.each (jquery.min.js:3:2013) at init.trigger (jquery.min.js:4:3083) at b.fn.<computed> [as change] (jquery.min.js:5:8648) at HTMLDocument.<anonymous> ((index):343:53) --End of error--- When I click the ((index):336:18) to takes me to the following. ![image](https://user-images.githubusercontent.com/79732984/193970488-e96a164a-7d5c-42bf-b863-a9001564b5e5.png) If I leave the selection on NO and click save it works fine... the other options "Message Filters" "Quarantine" "webmail" options all update fine. Sorry if that's not enough information..
closed
2022-10-05T02:51:42Z
2023-02-23T15:01:01Z
https://github.com/modoboa/modoboa/issues/2626
[ "feedback-needed", "stale" ]
Tradeforlife
6
docarray/docarray
pydantic
924
BUG: array bulk access is broken
How to reproduce: ```python import numpy as np from docarray import DocumentArray, Image da = DocumentArray[Image](Image(embedding=np.random.random((128,))) for _ in range(10)) da.embedding ``` raises: ```text Traceback (most recent call last): File "/home/johannes/.cache/pypoetry/virtualenvs/docarray-EljsZLuq-py3.8/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3433, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-3-28b49c4af740>", line 1, in <module> da = DocumentArray[Image](Image(embedding=np.random.random((128,))) for _ in range(10)) File "/home/johannes/Documents/jina/docarrayv2/docarray/array/array.py", line 137, in __init__ super().__init__(doc_ for doc_ in docs) File "/home/johannes/Documents/jina/docarrayv2/docarray/array/array.py", line 137, in <genexpr> super().__init__(doc_ for doc_ in docs) ``` **EDIT:** This is a more general problem, caused by `Embedding` being defined as a Union type. It also causes problems with `.stack()`: ```python import numpy as np from docarray import DocumentArray, Image da = DocumentArray[Image](Image(embedding=np.random.random((128,))) for _ in range(10)) da.stack() # breaks ``` The solution is to define `Embedding` and `Tensor` as "proper" classes.
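The root cause noted in the **EDIT** (a type alias defined as a `Union` rather than a class) can be illustrated without DocArray at all; `Embedding` below is a stand-in alias, not DocArray's actual definition:

```python
from typing import Union, get_args, get_origin

# Stand-in for a Union-based alias such as the old `Embedding`
Embedding = Union[list, tuple]

# A Union alias is not a class: it cannot be instantiated or subclassed,
# so code that wants one concrete type to dispatch on has nothing to use.
alias_is_union = get_origin(Embedding) is Union

# get_args() recovers the concrete classes the alias is composed of
value_matches = isinstance([1.0, 2.0], get_args(Embedding))
```

Defining `Embedding`/`Tensor` as real classes gives bulk attribute access and `.stack()` a concrete type to work with, which is the fix described above.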
closed
2022-12-09T13:16:02Z
2023-01-03T12:54:32Z
https://github.com/docarray/docarray/issues/924
[ "DocArray v2" ]
JohannesMessner
0
ultralytics/yolov5
deep-learning
13,171
The GPU is not used when running detection with YOLOv5
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report. ### YOLOv5 Component Multi-GPU ### Bug ![WhatsApp Image 2024-07-04 at 09 58 08](https://github.com/ultralytics/yolov5/assets/167752920/0737bb00-8c51-4ec8-8efd-66503ca78695) When I run the YOLOv5 detection code, it still uses CPU. And it causes the detection process to be slow, I get fps = 0.4. For installation, CUDA has been activated but the CUDA on the Jetson nano is still not used. Please give me an explanation why it happened and what is the solution? The following are the versions of CUDA 10.2.300 and pytorch 2.3.1 that I have installed. I use the virtual environment Python 3.8.0. Please tell which version of Pytorch and CUDA suits my python virtual environment. Please help me ![WhatsApp Image 2024-07-05 at 21 29 04](https://github.com/ultralytics/yolov5/assets/167752920/8714a8d8-9658-4e30-8ce2-fcabf72fbdcb) ![versi cuda](https://github.com/ultralytics/yolov5/assets/167752920/dabdbddd-c993-4a3c-a701-a4ed4e371f07) ### Environment - YOLO : YOLO v5 CUDA 10.2.300 and pytorch 2.3.1 Python 3.8.0 ### Minimal Reproducible Example _No response_ ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
open
2024-07-06T14:15:08Z
2024-10-20T19:49:38Z
https://github.com/ultralytics/yolov5/issues/13171
[ "bug", "Stale" ]
Angelinnp
6
SYSTRAN/faster-whisper
deep-learning
292
Creating SRT/TXT Files
Is there a way to have the code automatically create a .srt and .txt file like the original whisper?
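faster-whisper itself only yields segments; writing the .srt/.txt files is left to the caller. A minimal sketch of the SRT formatting, assuming segments arrive as `(start, end, text)` tuples in seconds (the `to_srt` helper is hypothetical, not part of the library):

```python
def to_srt(segments):
    """Format (start, end, text) tuples as an SRT string."""
    def ts(seconds):
        # SRT timestamps look like HH:MM:SS,mmm
        ms = int(round(seconds * 1000))
        h, ms = divmod(ms, 3_600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1_000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    lines = []
    for i, (start, end, text) in enumerate(segments, 1):
        lines.append(f"{i}\n{ts(start)} --> {ts(end)}\n{text.strip()}\n")
    return "\n".join(lines)

srt = to_srt([(0.0, 2.5, "Hello"), (2.5, 4.0, "world")])
```

With real faster-whisper output one would pass `(seg.start, seg.end, seg.text)` for each segment; a plain .txt file is just the concatenated segment texts.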
closed
2023-06-12T01:26:59Z
2023-06-16T20:29:09Z
https://github.com/SYSTRAN/faster-whisper/issues/292
[]
joseph2mi
2
Urinx/WeixinBot
api
262
Has anyone succeeded in sending a PDF?
open
2018-10-10T05:08:43Z
2018-10-10T05:08:43Z
https://github.com/Urinx/WeixinBot/issues/262
[]
xwfsdjk
0
hpcaitech/ColossalAI
deep-learning
5,880
[DOC]: Can it run on macOS?
### 📚 The doc issue As the title says, I want to use ColossalAI on a Mac. Is that possible?
closed
2024-07-02T07:53:37Z
2024-07-02T08:05:58Z
https://github.com/hpcaitech/ColossalAI/issues/5880
[ "documentation" ]
helloworkcupid
2
deepfakes/faceswap
deep-learning
867
extracting not working
**Describe the bug** It keeps stopping at some point while extracting frames from videos or even photos; it stops at 19%. I'm also using all the CPU modes because it can't extract using my GPU. I'm using a Mac Pro with Python 3.6.
closed
2019-09-10T00:26:25Z
2019-09-25T10:09:29Z
https://github.com/deepfakes/faceswap/issues/867
[]
ghost
3
ets-labs/python-dependency-injector
asyncio
411
Inject dependency into a class attr
Hello, sorry if this question has already been answered somewhere, but I went through the docs extensively and couldn't find anything. I would like to inject a dependency as a class attribute, but couldn't find any way of doing it. Something similar to python-inject: ``` class User(object): cache = inject.attr(Cache) ``` Thanks for your help
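The `inject.attr`-style behaviour asked for here can be sketched with a plain Python descriptor; this illustrates the pattern only and is not dependency-injector's API:

```python
class Inject:
    """Descriptor that resolves a dependency when the attribute is read."""

    def __init__(self, provider):
        self.provider = provider  # any zero-argument callable

    def __get__(self, instance, owner):
        return self.provider()  # resolved lazily on each access


class Cache:
    def get(self, key):
        return f"cached:{key}"


class User:
    cache = Inject(Cache)  # class attribute, resolved at access time


value = User().cache.get("x")
```

A real container would hand the descriptor a provider from its registry instead of a bare class, so singleton/factory semantics would come from the provider, not the descriptor.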
closed
2021-02-27T12:50:07Z
2021-03-02T17:20:54Z
https://github.com/ets-labs/python-dependency-injector/issues/411
[ "feature" ]
brunopereira27
6
ufoym/deepo
jupyter
123
Python dumped when trying to import tensorflow
I am a beginner with Docker. When I follow the quick start instructions: `docker run --runtime=nvidia --rm ufoym/deepo nvidia-smi` I got: ``` docker: Error response from daemon: Unknown runtime specified nvidia. See 'docker run --help'. ``` Is `--runtime=nvidia` deprecated? Following the nvidia-docker instructions, I use `--gpus` instead of `--runtime=nvidia`: `docker run --gpus all -it --ipc=host ufoym/deepo bash` But: ``` root@ea810a8cf7b7:/# python Python 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow Illegal instruction (core dumped) root@ea810a8cf7b7:/# ``` Was there anything wrong with what I did? Thx
closed
2019-09-22T04:03:54Z
2019-11-19T11:24:13Z
https://github.com/ufoym/deepo/issues/123
[]
Frankr0
1
schenkd/nginx-ui
flask
37
Can code be merged to implement nginx operation management?
The effect is as follows. Functions: check version, start, stop, check configuration, reload ![image](https://user-images.githubusercontent.com/12950929/99874622-7ba83c80-2c24-11eb-9d24-5b1360083b34.png) ![image](https://user-images.githubusercontent.com/12950929/99874651-b1e5bc00-2c24-11eb-8e15-2aed2034ef04.png) ![image](https://user-images.githubusercontent.com/12950929/99874662-bb6f2400-2c24-11eb-9573-60a855fb5aae.png) ![image](https://user-images.githubusercontent.com/12950929/99874668-c32ec880-2c24-11eb-86c9-0792ac14ea49.png)
open
2020-11-21T10:11:00Z
2021-08-02T07:29:49Z
https://github.com/schenkd/nginx-ui/issues/37
[]
sjkcdpc
2
coqui-ai/TTS
python
2,867
[Feature request] Any model pretrained on Russian lang
As we know, there used to be such a model: it was trained on the Ruslan dataset, but then it was removed due to licensing violations. Maybe it's time to train the model on other data? I think the Russian language will be in demand.
closed
2023-08-12T15:52:21Z
2023-08-13T10:42:34Z
https://github.com/coqui-ai/TTS/issues/2867
[ "feature request" ]
BrasD99
1
pydantic/pydantic-core
pydantic
1,339
Making `ObType` a separate crate
See https://github.com/samuelcolvin/rtoml/pull/59. We should move `ObType` to a separate crate so it can be reused by other projects; there's also a chance that it allows one of us, or someone else, to come along and make it faster. https://github.com/pydantic/pydantic-core/blob/a65f3272f002c7663c368aa4708ca706547e3bdb/src/serializers/ob_type.rs#L383-L428
open
2024-06-24T09:49:52Z
2025-02-05T13:37:31Z
https://github.com/pydantic/pydantic-core/issues/1339
[ "refactor" ]
samuelcolvin
3
Textualize/rich
python
2,737
[BUG] Triple quotations highlighted inconsistently
- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions. - [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md). **Describe the bug** ![image](https://user-images.githubusercontent.com/49597791/211109136-0a9a416e-84fe-4669-83c0-c3929d44f898.png) Triple-quotes (""") get rendered as open-and-closed quotes with one extra quote after if there is no text on the same line. To reproduce: ```py from rich import print print('my_function(argument="""\ntest\n""")') ``` Provide a minimal code example that demonstrates the issue if you can. If the issue is visual in nature, consider posting a screenshot. **Platform** <details> <summary>Click to expand</summary> What platform (Win/Linux/Mac) are you running on? What terminal software are you using? Win11, happens in both vscode terminal and Windows Terminal I may ask you to copy and paste the output of the following commands. It may save some time if you do it now. If you're using Rich in a terminal: ``` python -m rich.diagnose pip freeze | grep rich ``` ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── <class 'rich.console.Console'> ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ A high level console interface. 
│ │ │ │ ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ <console width=369 ColorSystem.TRUECOLOR> │ │ │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ color_system = 'truecolor' │ │ encoding = 'utf-8' │ │ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │ │ height = 31 │ │ is_alt_screen = False │ │ is_dumb_terminal = False │ │ is_interactive = True │ │ is_jupyter = False │ │ is_terminal = True │ │ legacy_windows = False │ │ no_color = False │ │ options = ConsoleOptions(size=ConsoleDimensions(width=369, height=31), legacy_windows=False, min_width=1, max_width=369, is_terminal=True, encoding='utf-8', max_height=31, justify=None, overflow=None, no_wrap=False, highlight=None, markup=None, height=None) │ │ quiet = False │ │ record = False │ │ safe_box = True │ │ size = ConsoleDimensions(width=369, height=31) │ │ soft_wrap = False │ │ stderr = False │ │ style = None │ │ tab_size = 8 │ │ width = 369 │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ╭── <class 'rich._windows.WindowsConsoleFeatures'> ───╮ │ Windows features available. 
│ │ │ │ ╭─────────────────────────────────────────────────╮ │ │ │ WindowsConsoleFeatures(vt=True, truecolor=True) │ │ │ ╰─────────────────────────────────────────────────╯ │ │ │ │ truecolor = True │ │ vt = True │ ╰─────────────────────────────────────────────────────╯ ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────── Environment Variables ─────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ {'TERM': None, 'COLORTERM': 'truecolor', 'CLICOLOR': None, 'NO_COLOR': None, 'TERM_PROGRAM': 'vscode', 'COLUMNS': None, 'LINES': None, 'JUPYTER_COLUMNS': None, 'JUPYTER_LINES': None, 'JPY_PARENT_PID': None, 'VSCODE_VERBOSE_LOGGING': None} │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ platform="Windows" If you're using Rich in a Jupyter Notebook, run the following snippet in a cell and paste the output in your bug report. ```python from rich.diagnose import report report() ``` </details>
closed
2023-01-06T22:18:51Z
2023-01-14T09:51:50Z
https://github.com/Textualize/rich/issues/2737
[ "Needs triage" ]
torshepherd
4
microsoft/MMdnn
tensorflow
494
Non-square maxpooling for TensorFlow-->IR-->Caffe
Platform (like ubuntu 16.04/win10): ubuntu 16.04 Python version: 2.7 Source framework with version (like Tensorflow 1.4.1 with GPU): 1.12 GPU Destination framework with version (like CNTK 2.3 with GPU): Caffe Pre-trained model path (webpath or webdisk path): NA Running scripts: NA In the TensorFlow network I have a layer of max_pool(x, ksize=[1, 2, 1, 1], strides=[1, 2, 1, 1], padding='SAME'). After converting to IR and then Caffe, it becomes layer { ... type: "Pooling" pooling_param{kernel_size: 2...}} Just wondering: does the current release support non-square max_pool? Thanks
open
2018-11-09T06:37:41Z
2018-12-26T14:10:16Z
https://github.com/microsoft/MMdnn/issues/494
[]
eerobert
1
strawberry-graphql/strawberry
graphql
3,572
TypeError: MyCustomType fields cannot be resolved. unsupported operand type(s) for |: 'LazyType' and 'NoneType'
## Describe the Bug I had an error generating types with the following versions, and the error thrown was > TypeError: MyCustomType fields cannot be resolved. unsupported operand type(s) for |: 'LazyType' and 'NoneType' This was the status of the packages; the error happened on the change from: ``` Django= 4.2.11 strawberry-graphql = "^0.235.2" strawberry-graphql-django = "^0.44.2" python= "3.11.9" ``` to ``` Django= 4.2.11 strawberry-graphql = "^0.235.2" strawberry-graphql-django = "^0.45.0" python= "3.11.9" ``` The code was like this when the error was raised: ``` class MyCustomType: my_custom_field: Annotated[AType, strawberry.lazy("xx.graphql.types")] | None class AnotherMyCustomType: my_another_custom_field: AType | None = None ``` I changed it to this to fix it: ``` class MyCustomType: my_custom_field: AType | None = None class AnotherMyCustomType: my_another_custom_field: AType | None = None ``` I did not change both to be lazy, which would be the other solution. In the end the error came from how the types were generated, because in the same file I had a type declared both as lazy and as not lazy. I did not have the error before; it only came up when I upgraded to those versions. I fixed the error by changing this to not lazy, but I'm mentioning @bellini666 here, who suggested I file a bug in case it's related to how the types are generated. I hope this helps. Thanks! ## System Information - Operating system: Apple M1 Pro Sonoma 14.4.1 -> But the system is on Docker python:3.11.9-bullseye (debian) - Strawberry version (if applicable): "0.235.2" ## Additional Context [error_traceback.txt](https://github.com/user-attachments/files/16231460/error_traceback.txt) [discord thread](https://discord.com/channels/689806334337482765/689861980776955948/1261325861597089802)
closed
2024-07-15T08:06:25Z
2025-03-20T15:56:48Z
https://github.com/strawberry-graphql/strawberry/issues/3572
[ "bug" ]
Ronjea
3
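The `|` failure described in the record above can be reproduced with any plain object that does not implement `__or__`; wrapping the annotation in `typing.Optional` avoids calling `|` on the object at all. A minimal sketch — the `LazyPlaceholder` class here is a hypothetical stand-in, not strawberry's actual `LazyType`:

```python
from typing import Annotated, Optional

class LazyPlaceholder:
    """Stand-in for an object (like strawberry's LazyType) without __or__."""
    pass

lazy = LazyPlaceholder()

# `lazy | None` raises TypeError, mirroring the reported
# "unsupported operand type(s) for |" error.
try:
    _ = lazy | None
    failed = False
except TypeError:
    failed = True

# Optional[...] builds the union through the typing machinery instead of
# the `|` operator, so it never calls __or__ on the annotated object.
ok = Optional[Annotated[int, lazy]]
```

This matches the reporter's other workaround as well: dropping the lazy annotation entirely so the plain type supports `| None`.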
twopirllc/pandas-ta
pandas
687
Not able to extract data from Yahoo Finance anymore
Seems Yahoo has blocked it. It would be great if this could be resolved, because it's the easiest way to get historical data. 401 Client Error: Unauthorized for url: [https://query1.finance.yahoo.com/v7/finance/quote?formatted=true&lang=en-US&symbols=](https://query1.finance.yahoo.com/v7/finance/quote?formatted=true&lang=en-US&symbols=GBPUSD%3DX)
closed
2023-05-08T00:51:05Z
2023-05-13T18:37:15Z
https://github.com/twopirllc/pandas-ta/issues/687
[ "question", "wontfix", "info" ]
jq419
3
Asabeneh/30-Days-Of-Python
numpy
392
day 4_result is not correct
https://github.com/Asabeneh/30-Days-Of-Python/blame/c8656171d69e79b5dfc743f425991f46b7d1423e/04_Day_Strings/04_strings.md#L331 For this program the result should be 5 for ('y') and 0 for ('th') challenge = 'thirty days of python' print(challenge.find('y')) # 16 print(challenge.find('th')) # 17
closed
2023-05-09T18:09:54Z
2023-07-08T21:47:18Z
https://github.com/Asabeneh/30-Days-Of-Python/issues/392
[]
Galio54
1
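The corrected values in the report above follow directly from Python's 0-based indexing; a quick check:

```python
challenge = 'thirty days of python'

# 't'=0, 'h'=1, 'i'=2, 'r'=3, 't'=4, 'y'=5 -> first 'y' is at index 5
print(challenge.find('y'))    # 5

# 'th' matches at the very start of the string
print(challenge.find('th'))   # 0

# find() returns -1 when the substring is absent
print(challenge.find('xyz'))  # -1
```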
scrapy/scrapy
python
6,254
Fix and re-enable `unnecessary-comprehension` and `use-dict-literal` pylint tags
Both are valid simplification hints.
closed
2024-02-28T09:36:51Z
2024-02-29T07:36:35Z
https://github.com/scrapy/scrapy/issues/6254
[ "good first issue", "cleanup" ]
wRAR
0
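For context, the two pylint checks named above flag patterns like the following; the "before" and "after" forms are behaviorally equivalent:

```python
items = (1, 2, 3)

# unnecessary-comprehension: a comprehension that only copies the iterable
copied_slow = [x for x in items]   # flagged
copied_fast = list(items)          # preferred

# use-dict-literal: a dict(...) call where a literal is clearer and faster
d_slow = dict(a=1, b=2)            # flagged
d_fast = {"a": 1, "b": 2}          # preferred

assert copied_slow == copied_fast
assert d_slow == d_fast
```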
ymcui/Chinese-LLaMA-Alpaca
nlp
385
建议开放中文数据集
The README at https://github.com/ymcui/Chinese-LLaMA-Alpaca mentions: "The Chinese LLaMA model extends the original with an expanded Chinese vocabulary and was further pre-trained on general-purpose Chinese plain-text data." Could you please release the general-purpose Chinese plain-text data described there? ------------------------------------------------------------- - [x] **Base model**: LLaMA / Alpaca / LLaMA-Plus / Alpaca-Plus - [x] **Operating system**: Linux - [x] **Problem category**: model training and fine-tuning - [x] (Required) Since the related dependencies are updated frequently, make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki) - [x] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues; no similar problem or solution was found - [x] (Required) Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat) — it is also recommended to look for solutions in the corresponding projects
closed
2023-05-19T02:34:39Z
2023-05-30T22:02:22Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/385
[ "stale" ]
mikeda100
3
ultralytics/ultralytics
deep-learning
19,300
The train is getting slower and slower.
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component _No response_ ### Bug hello, i have some trouble when train yolov8 models. why the train are getting slower and slower..? i want to train the model using big batch but, i can`t. ![Image](https://github.com/user-attachments/assets/272f309d-79bd-4939-a60f-4e8f3a775795) ![Image](https://github.com/user-attachments/assets/d04a52de-d94c-416e-a895-249444acb9ac) ![Image](https://github.com/user-attachments/assets/6086df56-ce66-4507-9c1e-8080c10eb3fa) ### Environment Ultralytics 8.3.53 🚀 Python-3.9.19 torch-2.4.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4090, 23995MiB) Setup complete ✅ (36 CPUs, 125.5 GB RAM, 440.1/915.3 GB disk) OS Linux-5.15.0-131-generic-x86_64-with-glibc2.31 Environment Linux Python 3.9.19 Install git RAM 125.47 GB Disk 440.1/915.3 GB CPU Intel Core(TM) i9-10980XE 3.00GHz CPU count 36 GPU NVIDIA GeForce RTX 4090, 23995MiB GPU count 4 CUDA 12.1 numpy ✅ 1.26.4>=1.23.0 numpy ✅ 1.26.4<2.0.0; sys_platform == "darwin" matplotlib ✅ 3.9.2>=3.3.0 opencv-python ✅ 4.10.0.84>=4.6.0 pillow ✅ 10.4.0>=7.1.2 pyyaml ✅ 6.0.2>=5.3.1 requests ✅ 2.32.3>=2.23.0 scipy ✅ 1.13.1>=1.4.1 torch ✅ 2.4.1>=1.8.0 torch ✅ 2.4.1!=2.4.0,>=1.8.0; sys_platform == "win32" torchvision ✅ 0.19.1>=0.9.0 tqdm ✅ 4.66.5>=4.64.0 psutil ✅ 6.0.0 py-cpuinfo ✅ 9.0.0 pandas ✅ 2.2.3>=1.1.4 seaborn ✅ 0.13.2>=0.11.0 ultralytics-thop ✅ 2.0.8>=2.0.0
open
2025-02-18T23:22:01Z
2025-02-19T09:52:38Z
https://github.com/ultralytics/ultralytics/issues/19300
[ "bug", "detect" ]
yeonhyochoi
7
cleanlab/cleanlab
data-science
375
Add support for string labels
Instead of requiring labels must be converted to integer indices.
closed
2022-08-24T00:14:30Z
2023-05-15T20:48:05Z
https://github.com/cleanlab/cleanlab/issues/375
[ "enhancement" ]
jwmueller
2
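A common way to bridge string labels to the integer indices currently required is a simple class mapping; a minimal sketch (this is not cleanlab's internal implementation, just the usual pre-processing step):

```python
def encode_labels(labels):
    """Map arbitrary hashable labels to contiguous integer indices."""
    classes = sorted(set(labels))                  # deterministic class order
    to_index = {c: i for i, c in enumerate(classes)}
    encoded = [to_index[label] for label in labels]
    return encoded, classes

encoded, classes = encode_labels(["dog", "cat", "dog", "bird"])
# classes == ['bird', 'cat', 'dog']; encoded == [2, 1, 2, 0]
```

Keeping `classes` around lets you map predicted indices back to the original string labels afterwards.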
zappa/Zappa
flask
1,121
Add support for Lambda Function URLs
AWS recently announced support for [Lambda Function URLs](https://aws.amazon.com/blogs/aws/announcing-aws-lambda-function-urls-built-in-https-endpoints-for-single-function-microservices/). There's no additional pricing when using Function URLs. It seems reasonable for Zappa to support that, perhaps even default to when creating new functions as there's less resources to configure and likely cheaper than API Gateway to use in the end.
closed
2022-04-09T15:55:42Z
2024-04-13T20:12:33Z
https://github.com/zappa/Zappa/issues/1121
[ "has-pr", "next-release-candidate", "no-activity", "auto-closed" ]
jdahlin
4
DistrictDataLabs/yellowbrick
scikit-learn
1,324
Discrimination Threshold plot explanation
**Describe the issue** Hi, I have been trying to use the discrimination threshold plot as per below ``` from yellowbrick.classifier import PrecisionRecallCurve, DiscriminationThreshold # Precision-Recall Plot pr_curve = PrecisionRecallCurve(model) pr_curve.fit(X_train, y_train) pr_curve.score(X_test, y_test) pr_curve.show() # Threshold Plot threshold_plot = DiscriminationThreshold(model) threshold_plot.fit(X_train, y_train) threshold_plot.score(X_test, y_test) threshold_plot.show() ``` However, I'm confused by the results. Perhaps there's a gap in my understanding. My precision recall plot shows values of precision ranging from 0.2 to 0.6, with a "jump" to 1 at zero recall.. My discrimination threshold plot shows "scores" for precision ranging from 0.5 up to 1 (talking about the line, not the band). The bands are very narrow and only widen from threshold 0.9+ On the precision recall chart a recall of 0.2 relates to a precision of ~0.4. However on the discrimination threshold chart a recall of 0.2 relates to a precision score of ~0.9? Why does the recall vs. precision figures I see on the PR chart not match the precision vs recall scores on the discrimination threshold chart? Apologies for my ignorance. Could you help me understand? Also, I don't understand how the "score" for precision can start at 0.5 (for a discrimination threshold value of 0) given that in the precision-recall curve we saw values for precision as low as 0.2 (presumably for some low probability threshold). Is the varying discrimination threshold not the same as the varying probabilities used to generate the precision recall curve? Why is there nowhere on the discrimination threshold plot showing a precision of 0.2? <!-- This line alerts the Yellowbrick maintainers, feel free to use this @ address to alert us directly in follow up comments --> @DistrictDataLabs/team-oz-maintainers
open
2025-02-27T04:18:03Z
2025-02-27T04:19:38Z
https://github.com/DistrictDataLabs/yellowbrick/issues/1324
[]
robmcd
0
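The question above can be made concrete by computing precision and recall at a few thresholds by hand. A minimal sketch with toy scores — this is not yellowbrick's implementation, which additionally averages over shuffle splits (a likely source of the differing curves), but it shows why the precision line can start near 0.5 at threshold 0:

```python
def precision_recall_at(threshold, y_true, y_score):
    """Precision/recall treating scores >= threshold as positive predictions."""
    tp = sum(1 for t, s in zip(y_true, y_score) if s >= threshold and t == 1)
    fp = sum(1 for t, s in zip(y_true, y_score) if s >= threshold and t == 0)
    fn = sum(1 for t, s in zip(y_true, y_score) if s < threshold and t == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]

# At threshold 0 everything is predicted positive, so precision equals the
# positive-class prevalence (here 3/6 = 0.5) and recall is 1.0 -- which is
# why a discrimination-threshold line can start at 0.5 rather than lower.
p0, r0 = precision_recall_at(0.0, y_true, y_score)
```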
graphdeco-inria/gaussian-splatting
computer-vision
467
Resolution r=1 make the scene blur.
Hi! I trained the bicycle scene with different parameters. When I set `r=1`, the scene blur than the default setting. Here is the comparison figure: right image set `r=1`, left image keeps default setting. ![comparison r=1 and default](https://github.com/graphdeco-inria/gaussian-splatting/assets/40193711/ef8fa446-1c4e-4872-b355-3a28318eeab0)
open
2023-11-14T05:09:37Z
2023-11-27T07:15:54Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/467
[]
aruiplex
4
PaddlePaddle/PaddleNLP
nlp
9,545
[Question]: Has Taskflow("feature_extraction") been taken offline? Running the code from the Paddle docs raises RuntimeError
### Please describe your question Has the Taskflow("feature_extraction") feature documented in the Paddle docs stopped working? Running it on AI Studio raises an error. **Code** ``` >>> from paddlenlp import Taskflow >>> import paddle.nn.functional as F >>> feature_extractor = Taskflow("feature_extraction") ``` The following error is raised: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[17], line 3 1 from paddlenlp import Taskflow 2 import paddle.nn.functional as F ----> 3 feature_extractor = Taskflow("feature_extraction") File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddlenlp/taskflow/taskflow.py:809, in Taskflow.__init__(self, task, model, mode, device_id, from_hf_hub, **kwargs) 807 self.kwargs = kwargs 808 task_class = TASKS[self.task][tag][self.model]["task_class"] --> 809 self.task_instance = task_class( 810 model=self.model, task=self.task, priority_path=self.priority_path, from_hf_hub=from_hf_hub, **self.kwargs 811 ) 812 task_list = TASKS.keys() 813 Taskflow.task_list = task_list File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddlenlp/taskflow/multimodal_feature_extraction.py:229, in MultimodalFeatureExtractionTask.__init__(self, task, model, batch_size, is_static_model, max_length, return_tensors, **kwargs) 227 self._check_predictor_type() 228 if self.is_static_model: --> 229 self._get_inference_model() 230 else: 231 self._construct_model(model) File /opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddlenlp/taskflow/multimodal_feature_extraction.py:427, in MultimodalFeatureExtractionTask._get_inference_model(self) 425 self._static_model_file = self.inference_model_path + ".pdmodel" 426 self._static_params_file = self.inference_model_path + ".pdiparams" --> 427 self._config = paddle.inference.Config(self._static_model_file, self._static_params_file) 428 self._prepare_static_mode() 430 self.predictor_map["text"] = self.predictor **RuntimeError: (NotFound) Cannot open file /home/aistudio/.paddlenlp/taskflow/feature_extraction/PaddlePaddle/ernie_vil-2.0-base-zh/static/get_text_features.pdmodel, please confirm whether the file is normal. [Hint: Expected paddle::inference::IsFileExists(prog_file_) == true, but received paddle::inference::IsFileExists(prog_file_):0 != true:1.] (at /paddle/paddle/fluid/inference/api/analysis_config.cc:111)**
closed
2024-12-02T12:53:32Z
2025-03-17T00:23:10Z
https://github.com/PaddlePaddle/PaddleNLP/issues/9545
[ "question", "stale" ]
Alonghui
5
proplot-dev/proplot
matplotlib
324
cartopy cartopy._crs import Globe moved in version 0.20.2
<!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). --> ### Description Cartopy._crs.Globe moved to Cartopy.crs.Globe in version 0.20.2 Can be solved by editing proplot/crs.py in lines 69, 113, 158 and 203 [Description of the bug or feature.] ### Steps to reproduce ```python from cartopy._crs import Globe from cartopy.crs import Globe ``` **Expected behavior**: [What you expected to happen] No output if import succeeds **Actual behavior**: [What actually happened] ModuleNotFoundError: No module named 'cartopy._crs' ### Equivalent steps in matplotlib ### Proplot version Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)`here. 3.4.3 0.9.5
closed
2022-01-20T11:41:56Z
2022-01-21T20:48:31Z
https://github.com/proplot-dev/proplot/issues/324
[ "bug", "dependencies" ]
ingomichaelis
1
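A version-tolerant import along the lines of the fix suggested in the report above could look like this (guarded so the snippet also runs where cartopy is absent; the fallback chain is an assumption, not proplot's actual patch):

```python
# Try the public location first (cartopy >= 0.20.2), fall back to the old
# private module, and degrade gracefully if cartopy is not installed.
try:
    from cartopy.crs import Globe
except ImportError:
    try:
        from cartopy._crs import Globe
    except ImportError:
        Globe = None  # cartopy unavailable in this environment

cartopy_available = Globe is not None
```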
biolab/orange3
scikit-learn
6,861
Edit Domain: no way to get rid of warning "categories mapping for [variable] does not apply to current input" after change in upstream Formula
<!-- Thanks for taking the time to report a bug! If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3 To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability. --> **What's wrong?** When changing the mapping of categories of a categorical variable in Edit Domain, Edit Domain displays a warning "categories mapping for [variable] does not apply to current input" once changes in an upstream Formula produce different or additional categories. Even when adapting the category mappings to the new inputs, the warning persists. **How can we reproduce the problem?** [edit domain category mappings.ows.zip](https://github.com/user-attachments/files/16359530/edit.domain.category.mappings.ows.zip) In the Formula in the attached workflow, change 'America' in the if statement to 'North America'. Edit Domain will show the warning, although in its dialog box the category name has already been updated. Even when explicitly defining an new mapping based on the new category names, the warning doesn't go away. **What's your environment?** <!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code --> - Operating system: Mac OS 14.5 - Orange version: 3.37 - How you installed Orange: from DMG followed by updates using the internal installer within Orange
closed
2024-07-24T08:55:41Z
2024-10-03T17:15:31Z
https://github.com/biolab/orange3/issues/6861
[ "bug" ]
wvdvegte
1
MaartenGr/BERTopic
nlp
1,175
get <ufunc 'invert'> error when trying to load BERTopic
I get the following message when I try to import BERTopic from bertopic: KeyError: <ufunc 'invert'> Would really appreciate any help!!
closed
2023-04-10T07:34:33Z
2023-05-23T08:36:29Z
https://github.com/MaartenGr/BERTopic/issues/1175
[]
petra-lo
2
great-expectations/great_expectations
data-science
10,427
row ids not displayed in data docs for Spark, SQLAlchemy if specifying unexpected_index_column_names in result_format
**Describe the bug** If running a validation using Spark or SQLAlchemy and specifying values for unexpected_index_column_names, the row ids are not displayed in the data docs. **To Reproduce** Run any expectation suite containing a rule, such as expect_column_value_lengths_to_be, and define a column list for unexpected_index_column_names within the result format object when invoking a checkpoint. This bug is present on all versions of GX. **Expected behavior** The column name/values of the failed rows should be rendered **Environment (please complete the following information):** - Operating System: [e.g. Linux, MacOS, Windows] All - Great Expectations Version: [e.g. 0.13.2] 0.18.x and 1.0.x, probably others - Data Source: [e.g. Pandas, Snowflake] SparkDF - Cloud environment: [e.g. Airflow, AWS, Azure, Databricks, GCP] **Additional context** the root of the issue appears to be great_expectations/render/util.py: _convert_unexpected_indices_to_df(), line 422: set(first_unexpected_index.keys()).difference(set(unexpected_index_column_names)) this ALWAYS evaluates to the empty list, [], causing an exception to be raised later in that same function on line 442 when that empty list is passed to the .groupby() method, aborting the rendering of the failed row indices.
closed
2024-09-19T20:32:16Z
2025-01-29T13:44:20Z
https://github.com/great-expectations/great_expectations/issues/10427
[]
NathanJM
2
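The root cause noted above — the set difference evaluating to empty — is easy to see in isolation when the unexpected-index keys and the configured column names coincide:

```python
first_unexpected_index = {"pk_1": 3, "pk_2": "three"}
unexpected_index_column_names = ["pk_1", "pk_2"]

# When every key is also a configured index column, the difference is empty,
# and an empty column list later breaks the DataFrame.groupby() call.
remaining = set(first_unexpected_index.keys()).difference(
    set(unexpected_index_column_names)
)
assert remaining == set()
```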
aiortc/aiortc
asyncio
927
Capturing an image from the camera with MediaPlayer, processing it with OpenCV, and then transferring it
I'm new to the concepts of aiortc, webrtc and I haven't used ffmpeg before. I receive and transfer images from the camera via MediaPlayer with aiortc. How can we process the image to be transferred with OpenCV and then transfer it?
closed
2023-09-10T01:48:14Z
2023-09-10T01:58:51Z
https://github.com/aiortc/aiortc/issues/927
[]
EmirEvcil
1
sinaptik-ai/pandas-ai
data-visualization
806
Error with Custom prompt
### System Info Python 3.11.3 Pandasai 1.5.5 ### 🐛 Describe the bug Hi @gventuri I am trying to use custom prompt for python code generation. I am using agents and while looking at the log file, i can see that the prompt that was uses is the default prompt. Here is the code to replicate the issue and attached is the log file ``` import pandas as pd import random from pandasai import SmartDataframe from pandasai.llm import AzureOpenAI import os from dotenv import load_dotenv load_dotenv() from pandasai.prompts import AbstractPrompt from pandasai.helpers.logger import Logger from pandasai import Agent logger_obj = Logger(save_logs=True) model = AzureOpenAI( api_token=os.getenv('OPENAI_API_KEY'), azure_endpoint= os.getenv('OPENAI_API_BASE'), api_version=os.getenv('OPENAI_API_VERSION'), deployment_name="chatgpt4" ) months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] countries = ["USA", "Canada", "Mexico", "Brazil", "Germany", "France", "China", "India", "Japan", "Australia"] carriers = ["FedEx", "UPS", "DHL", "USPS"] modes_of_transport = ["Air", "Sea", "Road", "Rail"] data = [] for _ in range(100): month = random.choice(months) country = random.choice(countries) carrier = random.choice(carriers) mode_of_transport = random.choice(modes_of_transport) units = random.randint(1, 100) amount = random.randint(1000, 10000) data.append([month, country, carrier, mode_of_transport, units, amount]) orig_df = pd.DataFrame(data, columns=["Month", "Country", "Carrier", "mot", "Units", "Amount"]) class MyCustomPrompt(AbstractPrompt): def template(self): return """ You are given a dataframe with distinct value in each of the dimension columns of the dataframe Country {Country} Carrier {Carrier} mot {mot} {conversation} """ def setup(self, **kwargs): self.set_vars(kwargs) df = SmartDataframe(df = orig_df, config = { "custom_prompts": { "generate_python_code": MyCustomPrompt( Country = 
orig_df['Country'].unique(), Carrier = orig_df['Carrier'].unique(), mot = orig_df['mot'].unique() ) }, "enable_cache" : False }) agent = Agent([df], config={"llm": model}, memory_size=20, logger = logger_obj) # Chat with the agent response = agent.chat("Please provide insights on which carrier should be preferred to ship to Germany") print(response) ``` **Below is the log from the log file generated** 2023-12-08 11:27:23 [INFO] Question: Please provide insights on which carrier should be preferred to ship to Germany 2023-12-08 11:27:24 [INFO] Running PandasAI with azure-openai LLM... 2023-12-08 11:27:24 [INFO] Prompt ID: 84a0e3fa-7099-4342-b37a-bc7bf495aad4 2023-12-08 11:27:24 [INFO] Executing Step 0: CacheLookup 2023-12-08 11:27:24 [INFO] Executing Step 1: PromptGeneration 2023-12-08 11:27:24 [INFO] Using prompt: <dataframe> dfs[0]:100x6 Month,Country,Carrier,mot,Units,Amount April,Brazil,USPS,Road,19,4461 February,Mexico,DHL,Rail,9,5098 April,India,DHL,Rail,59,3040 </dataframe> Update this initial code: ```python # TODO: import the required dependencies import pandas as pd # Write code here # Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" } ``` Q: Please provide insights on which carrier should be preferred to ship to Germany Variable `dfs: list[pd.DataFrame]` is already declared. At the end, declare "result" var dict: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." 
} or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" } Generate python code and return full updated code: 2023-12-08 11:27:24 [INFO] Executing Step 2: CodeGenerator 2023-12-08 11:27:34 [INFO] HTTP Request: POST https://openaiservice-dev.openai.azure.com//openai/deployments/chatgpt4/chat/completions?api-version=2023-07-01-preview "HTTP/1.1 200 OK" 2023-12-08 11:27:34 [INFO] Code generated: ``` # TODO: import the required dependencies import pandas as pd # Write code here df = dfs[0] germany_df = df[df['Country'] == 'Germany'] carrier_counts = germany_df['Carrier'].value_counts() preferred_carrier = carrier_counts.idxmax() # Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" } result = { "type": "string", "value": f"The preferred carrier to ship to Germany is {preferred_carrier}." 
} ``` 2023-12-08 11:27:34 [INFO] Executing Step 3: CachePopulation 2023-12-08 11:27:34 [INFO] Executing Step 4: CodeExecution 2023-12-08 11:27:34 [INFO] Saving charts to C:\Users\navneetkumar\OneDrive - Microsoft\MDOCopilot\AutoGen Test\exports\charts\temp_chart.png 2023-12-08 11:27:34 [INFO] Code running: ``` df = dfs[0] germany_df = df[df['Country'] == 'Germany'] carrier_counts = germany_df['Carrier'].value_counts() preferred_carrier = carrier_counts.idxmax() result = {'type': 'string', 'value': f'The preferred carrier to ship to Germany is {preferred_carrier}.'} ``` 2023-12-08 11:27:34 [INFO] Executing Step 5: ResultValidation 2023-12-08 11:27:34 [INFO] Answer: {'type': 'string', 'value': 'The preferred carrier to ship to Germany is USPS.'} 2023-12-08 11:27:34 [INFO] Executed in: 11.175710678100586s 2023-12-08 11:27:34 [INFO] Executing Step 6: ResultParsing
closed
2023-12-08T06:10:06Z
2024-06-01T00:20:53Z
https://github.com/sinaptik-ai/pandas-ai/issues/806
[]
kumarnavn
0
explosion/spaCy
nlp
13,158
spaCy recognizes similar words as different entities
Hello everyone, I used the following code to do entity recognition in the MIMIC discharge_summary dataset. `nlp = spacy.load("en_core_sci_sm") ` `nlp.add_pipe("scispacy_linker", config={"resolve_abbreviations": True, "linker_name": "umls"}) linker = nlp.get_pipe("scispacy_linker")` `similar_list = ["spinal", "spinals", "Some SPINALS", "one SPINAL", "bulbar", "bulbars", "BULBAR", "BULBARS"]` `for sent in similar_list: ` doc = nlp(sent) entity = doc.ents[0] print("Name: ", entity) entity = doc.ents[0] print("Name: ", entity) for umls_ent in entity._.kb_ents: print(linker.kb.cui_to_entity[umls_ent[0]]) print("-----"*15) But when I tried to use some similar words to test. The spacy returned quite different entities. Like 'spinals' and 'spinal' gives me different entities. Any way to solve this?
closed
2023-11-28T02:30:40Z
2023-11-28T08:43:09Z
https://github.com/explosion/spaCy/issues/13158
[]
LeiGong0125Carrot
0
marimo-team/marimo
data-science
3,956
Invoking shell commands using ! syntax
### Description Jupyter notebooks support invoking shell commands directly in the context of Python code using the ! syntax. Support for the same or alternative would be highly useful. ### Suggested solution Support invoking shell commands like `! ls -alh` ### Alternative _No response_ ### Additional context _No response_
closed
2025-03-02T08:16:44Z
2025-03-03T00:58:23Z
https://github.com/marimo-team/marimo/issues/3956
[ "enhancement" ]
jnoortheen
2
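Until a `!` syntax exists, the stdlib `subprocess` module covers the same ground as the `! ls -alh` example above; a minimal sketch (using `echo` here so it is portable):

```python
import subprocess

# Equivalent of Jupyter's `!echo hello`: run the command and capture stdout.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(result.stdout.strip())
assert result.returncode == 0
```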
plotly/dash
data-visualization
2,878
[BUG] `id` passed through `dcc.Loading` not visible in DOM
**Describe your context** Hello guys 👋 I am currently trying to pass an `id` to the dcc.Loading component or its parent container and I would like the `id` to be visible in the DOM such that I can target the CSS of the components inside the `dcc.Loading` via ID. Please provide us your environment, so we can easily reproduce the issue. - replace the result of `pip list | grep dash` below ``` dash 2.17.0 dash-bootstrap-components 1.5.0 dash-core-components 2.0.0 dash-html-components 2.0.0 ``` - if frontend related, tell us your Browser, Version and OS - OS: [e.g. iOS] - Browser: Chrome - Version [e.g. 22] **Describe the bug** Let's take the example app below - what I would have expected is that there would be an html div visible with a className="loading" and an id="loading-id". However, if I provide the `className="loading"` I see a div but it does not have the className="loading" in the DOM nor does it have the id="loading-id" in the DOM. When I switch this to `parent_className="loading"`, now I see a div with the className="loading", but I cannot attach an id to this parent container. I am not a react expert, but from the source I can see that the `id` doesn't seem to be passed on in the return of the react component and is therefore not visible in the DOM? Is there any reason for that? 
https://github.com/plotly/dash/blob/09252f8d2f690480cc468b2e015f9e2417dc90ad/components/dash-core-components/src/components/Loading.react.js#L128-L133 ``` from dash import Dash, html, dcc, callback, Output, Input import plotly.express as px import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder_unfiltered.csv') app = Dash() app.layout = [ html.H1(children='Title of Dash App', style={'textAlign':'center'}), dcc.Dropdown(df.country.unique(), 'Canada', id='dropdown-selection'), dcc.Loading(dcc.Graph(id='graph-content'), color='grey', id="loading-id", parent_className="loading") ] @callback( Output('graph-content', 'figure'), Input('dropdown-selection', 'value') ) def update_graph(value): dff = df[df.country==value] return px.line(dff, x='year', y='pop') if __name__ == "__main__": app.run(debug=True) ``` **Expected behavior** I would expect the `id` being passed on to the react component and visible in the DOM, so having a <div class="loading" id="loading-id" </div> visible in the DOM. **Screenshots** ![Screenshot 2024-06-07 at 12 29 36](https://github.com/plotly/dash/assets/90609403/b2f921bd-2a46-4073-8615-61d348f485b3)
closed
2024-06-07T10:41:21Z
2024-06-18T13:22:13Z
https://github.com/plotly/dash/issues/2878
[ "good first issue" ]
huong-li-nguyen
4
browser-use/browser-use
python
1,070
How to get clean screenshots without marks?
### Problem Description I want to get the clean screenshots without highlight marks, like this: ![Image](https://github.com/user-attachments/assets/588a98a2-3313-4e2f-8456-7302d5634dac) but using these codes: ```python agent = Agent( task="xxx", llm=model, browser=browser, ) history = await agent.run() history.screenshots() ``` I got highlight one, like this: ![Image](https://github.com/user-attachments/assets/412b8514-24c9-4477-a46f-b3ca46123f3f) ### Proposed Solution It would be great if I can get clean screenshots without marks, with similar usage as xx.screenshots() ### Alternative Solutions _No response_ ### Additional Context _No response_
open
2025-03-19T10:05:06Z
2025-03-19T10:06:10Z
https://github.com/browser-use/browser-use/issues/1070
[ "enhancement" ]
Shi33
0
CTFd/CTFd
flask
2,625
Missing attribution in Challenge API (GET)
**Environment**: - CTFd Version/Commit: 3.7.4 - Operating System: NA - Web Browser and Version: NA **What happened?** The API call (GET) to `/api/v1/challenges/<id>` does not return the `attribution` field, as stated in #2595. **What did you expect to happen?** Return the `attribution` field. **How to reproduce your issue** ``` curl "http://my.ctf/api/v1/challenges/1" ``` **Any associated stack traces or error logs** NA
closed
2024-10-09T10:24:06Z
2024-10-11T06:19:55Z
https://github.com/CTFd/CTFd/issues/2625
[]
pandatix
2
microsoft/MMdnn
tensorflow
261
a little question
when I see the MMdnn: Pytorch ReadMe Extract PyTorch pre-trained models command line: $ mmdownload -f pytorch -h Support frameworks: ['alexnet', 'densenet121', 'densenet161', 'densenet169', 'densenet201', 'inception_v3', 'resnet101', 'resnet152', 'resnet18', 'resnet34', 'resnet50', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn', 'vgg19', 'vgg19_bn'] $ mmdownload -f pytorch -n resnet50 -o ./ Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /home/ruzhang/.torch/models/resnet50-19c8e357.pth 100%|████████████████████████████████████████████████████████████████████████| 102502400/102502400 [00:06<00:00, 15858546.50it/s] PyTorch pretrained model is saved as [./imagenet_resnet50.pth]. Can you tell me the difference between resnet50-19c8e357.pth with imagenet_resnet50.pth? because I see .torch/models/resnet50-19c8e357.pth is 100 000, imagenet_resnet50.pth is 100 113. When I try to use the resnet50-19c8e357.pth to convert is false. I do not know why ?
closed
2018-06-21T00:51:50Z
2018-07-04T07:14:19Z
https://github.com/microsoft/MMdnn/issues/261
[]
SmallMunich
2
explosion/spaCy
machine-learning
13,275
Spacy french NER transformer based model fr_dep_news_trf not working
<!-- NOTE: For questions or install related issues, please open a Discussion instead. --> Hello, we want to use spacy to do NER extraction for french texts. The transformer based model fr_dep_news_trf seems to be broken. The list of entities is always empty. ## How to reproduce the behaviour <!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. --> We create a minimum example to reproduce the issue with google colab https://colab.research.google.com/drive/1mngC0EBDOP3SAngeTeNRdK2d3EP2Mc-v?authuser=0#scrollTo=eXeJRQvflErl ``` import spacy doc = nlp("Bonjour, Emmanuel. Bonjour, monsieur. Donc voilà, je fais plein de choses. Biologie, c'est du pire veau, museau, lentilles, c'est voilà. Donc la pièce est bouchée au sep, c'est pareil. Je fais une sauce au sep avec la crème. Ah, ça doit être pas mal aussi. C'est pas mal aussi. Alors on va prendre un petit pot de quoi ? On a le Beaujolais, on a le Saint-Joseph, le Trois-Hermitages. Ah non, je suis une fille du Beaujolais, moi. Merci. Alors attends, je pousse.") for w in doc.ents: print(w.text,w.label_) ``` the model doesn't detect anything. ## Your Environment <!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.--> It's the default colab environment ## Info about spaCy - **spaCy version:** 3.6.1 - **Platform:** Linux-6.1.58+-x86_64-with-glibc2.35 - **Python version:** 3.10.12 - **Pipelines:** fr_dep_news_trf (3.6.1), fr_core_news_lg (3.6.0), en_core_web_sm (3.6.0)
closed
2024-01-25T22:51:42Z
2024-01-26T08:54:07Z
https://github.com/explosion/spaCy/issues/13275
[ "lang / fr", "feat / transformer" ]
zmy1116
1
dynaconf/dynaconf
fastapi
651
[RFC] Support for typing
**Describe the solution you'd like** I've started using typing a fair bit in my application, and have found it quite useful. When I try to access DYNACONF variables, currently the type is essentially "any type". Would be nice to have better types based on what I have defined in my settings, or allow me to specify the typings in a better way so mypy can handle this better ![image](https://user-images.githubusercontent.com/2200743/131949494-40d5ef18-39f2-4154-91fd-eec77a15d42b.png) **Describe alternatives you've considered** I tried: - settings.get('KEY') - settings['KEY'] - settings.KEY I think the 3rd approach is the only one that can support typing well - as the others are runtime dependent. **Additional context** Maybe in the longer run, for some cases - it would be nice to also be able to specify validators using typehints - like dataclasses.
open
2021-09-03T04:16:28Z
2022-06-29T13:57:05Z
https://github.com/dynaconf/dynaconf/issues/651
[ "wontfix", "Not a Bug", "RFC" ]
AbdealiLoKo
4
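One workaround pattern until native typing support lands is a thin typed facade over the settings object, which makes the third access style (`settings.KEY`) statically checkable. A sketch — `RawSettings` here is a hypothetical stand-in for dynaconf's untyped runtime object:

```python
from dataclasses import dataclass

# Stand-in for a dynaconf-like settings object with untyped attribute access.
class RawSettings:
    KEY = "value"
    RETRIES = 3

@dataclass(frozen=True)
class TypedSettings:
    """Typed snapshot: mypy now knows KEY is str and RETRIES is int."""
    KEY: str
    RETRIES: int

    @classmethod
    def from_raw(cls, raw) -> "TypedSettings":
        return cls(KEY=raw.KEY, RETRIES=raw.RETRIES)

settings = TypedSettings.from_raw(RawSettings())
# settings.KEY is statically typed as str; settings.RETRIES as int.
```

The trade-off is a one-time snapshot: runtime reloads of the raw settings are not reflected unless `from_raw` is called again.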
deepfakes/faceswap
machine-learning
670
train failed
03/15/2019 22:08:06 MainProcess training_0 multithreading run DEBUG Error in thread (training_0): OOM when allocating tensor with shape[16384,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\n [[{{node training_1/Adam/mul_43}} = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Adam/beta_2/read, training_1/Adam/Variable_30/read)]]\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\n\n [[{{node loss_1/mul/_401}} = _Recv[[[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1638_loss_1/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\n 03/15/2019 22:08:06 MainProcess MainThread train monitor_console DEBUG Thread error detected 03/15/2019 22:08:06 MainProcess MainThread train monitor_console DEBUG Closed Console Monitor 03/15/2019 22:08:06 MainProcess MainThread train end_thread DEBUG Ending Training thread 03/15/2019 22:08:06 MainProcess MainThread train end_thread CRITICAL Error caught! Exiting... 
03/15/2019 22:08:06 MainProcess MainThread multithreading join DEBUG Joining Threads: 'training' 03/15/2019 22:08:06 MainProcess MainThread multithreading join DEBUG Joining Thread: 'training_0' 03/15/2019 22:08:06 MainProcess MainThread multithreading join ERROR Caught exception in thread: 'training_0' Traceback (most recent call last): File "C:\Users\jinyi\faceswap\lib\cli.py", line 107, in execute_script process.process() File "C:\Users\jinyi\faceswap\scripts\train.py", line 101, in process self.end_thread(thread, err) File "C:\Users\jinyi\faceswap\scripts\train.py", line 126, in end_thread thread.join() File "C:\Users\jinyi\faceswap\lib\multithreading.py", line 443, in join raise thread.err[1].with_traceback(thread.err[2]) File "C:\Users\jinyi\faceswap\lib\multithreading.py", line 381, in run self._target(*self._args, **self._kwargs) File "C:\Users\jinyi\faceswap\scripts\train.py", line 152, in training raise err File "C:\Users\jinyi\faceswap\scripts\train.py", line 142, in training self.run_training_cycle(model, trainer) File "C:\Users\jinyi\faceswap\scripts\train.py", line 214, in run_training_cycle trainer.train_one_step(viewer, timelapse) File "C:\Users\jinyi\faceswap\plugins\train\trainer\_base.py", line 139, in train_one_step loss[side] = batcher.train_one_batch(do_preview) File "C:\Users\jinyi\faceswap\plugins\train\trainer\_base.py", line 214, in train_one_batch loss = self.model.predictors[self.side].train_on_batch(*batch) File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch outputs = self.train_function(ins) File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__ return self._call(inputs) File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call fetched = self._callable_fn(*array_vals) File 
"D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__ run_metadata_ptr) File "D:\PC_apps\Anaconda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__ c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[16384,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node training_1/Adam/mul_43}} = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Adam/beta_2/read, training_1/Adam/Variable_30/read)]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[{{node loss_1/mul/_401}} = _Recv[[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1638_loss_1/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
closed
2019-03-15T14:34:28Z
2019-03-18T17:44:52Z
https://github.com/deepfakes/faceswap/issues/670
[]
Nostalgia1990
11
tox-dev/tox
automation
2,838
tox 4 breaks generative env def with -pyXXX fragments if basepython is defined
## Issue ```ini [tox] [testenv] base_python = python3 [testenv:functional{,-py38,-py39,-py310}] [testenv:other] ``` This tox.ini is worked before in tox 3.28.0 to use the default python3 binary for `other` env while use the specific python binary for the `functional` envs. With tox 4.6.2. this result in conflict. ## Environment Provide at least: - OS: ```console root@3974fdf51f12:/tmp/repro# cat /etc/*rele* DISTRIB_ID=Ubuntu DISTRIB_RELEASE=22.04 DISTRIB_CODENAME=jammy DISTRIB_DESCRIPTION="Ubuntu 22.04.1 LTS" PRETTY_NAME="Ubuntu 22.04.1 LTS" NAME="Ubuntu" VERSION_ID="22.04" VERSION="22.04.1 LTS (Jammy Jellyfish)" VERSION_CODENAME=jammy ID=ubuntu ID_LIKE=debian HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" UBUNTU_CODENAME=jammy ``` - `pip list` of the host Python where `tox` is installed: ```console root@3974fdf51f12:/tmp/repro# pip list Package Version ------------- ------- cachetools 5.2.1 chardet 5.1.0 colorama 0.4.6 distlib 0.3.6 filelock 3.9.0 packaging 23.0 pip 22.0.2 platformdirs 2.6.2 pluggy 1.0.0 py 1.11.0 pyproject_api 1.4.0 setuptools 59.6.0 six 1.16.0 tomli 2.0.1 tox 4.2.6 virtualenv 20.17.1 wheel 0.37.1 ``` ## Output of running tox Provide the output of `tox -rvv`: ```console root@3974fdf51f12:/tmp/repro# tox -rvv -e functional-py310 .pkg: 58 W remove tox env folder /tmp/repro/.tox/.pkg [tox/tox_env/api.py:321] functional-py310: 58 E failed with env name functional-py310 conflicting with base python python3 [tox/session/cmd/run/single.py:55] functional-py310: FAIL code 1 (0.00 seconds) evaluation failed :( (0.02 seconds) ``` ## Minimal example If possible, provide a minimal reproducer for the issue: ```ini [tox] [testenv] base_python = python3 [testenv:functional{,-py38,-py39,-py310}] [testenv:other] ```
closed
2023-01-09T10:20:23Z
2023-01-10T17:34:02Z
https://github.com/tox-dev/tox/issues/2838
[]
gibizer
12
matplotlib/matplotlib
data-science
29,615
[Bug]: pcolormesh's default x/y range might break `set_scale('log')`
### Bug summary While using `pcolormesh`, setting the x- or y-scale to logarithmic sometimes breaks. It turned out this happens when the default `x/ylim` contain negative values. A fix could be to ensure that limits are all positive before calling the `set_x/yscale('log')`. This likely happens with similar commands as well. ### Code for reproduction ```Python import numpy as np import matplotlib.pyplot as plt x = np.arange(4, dtype=float) y = np.linspace(1e1, 1e5, 10) # all positive z = np.arange(len(x)*len(y)).reshape(len(x), len(y)) fig, axs = plt.subplots(1,3, figsize=(9,3)) # works fine axs[0].pcolormesh(x,y, z.T) # doesn't work fine axs[1].pcolormesh(x,y, z.T) axs[1].set_yscale('log') # because the limits set automatically include negative values # works fine axs[2].pcolormesh(x,y, z.T) axs[2].set_ylim(1e-3, y.max()) # Fix: we can explicitly set limits to the positive range axs[2].set_yscale('log') ``` ### Actual outcome ![Image](https://github.com/user-attachments/assets/2ead51f2-67c6-4f09-9e5e-5caa12708e74) ### Expected outcome The rightmost plot is the expected outcome, whereas the middle panel is what will be output by default. ### Additional information _No response_ ### Operating system Ubuntu 22.04.5 LTS ### Matplotlib Version 3.8.4 ### Matplotlib Backend module://matplotlib_inline.backend_inline ### Python version 3.12.3 ### Jupyter version 4.2.5 ### Installation conda
open
2025-02-13T14:00:19Z
2025-02-20T13:36:41Z
https://github.com/matplotlib/matplotlib/issues/29615
[ "status: confirmed bug", "status: has patch" ]
arashgmn
8
ultralytics/ultralytics
pytorch
18,864
How to further optimize model for single image faster inference
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question I am working on detecting faces on a Raspberry Pi Zero. For this I'm using a YOLOv6 modified to reduce the number of parameters (removed heads, reduced image size), and exporting a quantized version ("best_integer_quant.tflite"), but it is surprisingly slow in the Pi (~800ms best case), are there any strategies I'm missing to make it run faster? Training: ``` model = YOLO("./yolov6-face.yaml") r = model.train(data="face-detection-dataset.yaml", epochs=50, imgsz='192,320', single_cls=True, plots=True, batch=0.9) ``` model yaml: ``` %%writefile yolov6-face.yaml # Ultralytics YOLO 🚀, AGPL-3.0 license # YOLOv6 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/models/yolov6 # Parameters nc: 1 # number of classes activation: nn.ReLU() # (optional) model default activation function scales: # model compound scaling constants, i.e. 
'model=yolov6n.yaml' will call yolov8.yaml with scale 'n' # [depth, width, max_channels] p: [0.33, 0.25, 8] # nano is [0.33, 0.25, 1024] # YOLOv6-3.0s backbone backbone: # [from, repeats, module, args] - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2 - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4 - [-1, 6, Conv, [128, 3, 1]] - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8 - [-1, 12, Conv, [256, 3, 1]] - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16 - [-1, 18, Conv, [512, 3, 1]] - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32 - [-1, 6, Conv, [1024, 3, 1]] - [-1, 1, SPPF, [1024, 5]] # 9 # YOLOv6-3.0s head head: - [-1, 1, Conv, [256, 1, 1]] - [-1, 1, nn.ConvTranspose2d, [256, 2, 2, 0]] - [[-1, 6], 1, Concat, [1]] # cat backbone P4 - [-1, 1, Conv, [256, 3, 1]] - [-1, 9, Conv, [256, 3, 1]] # 14 - [-1, 1, Conv, [128, 1, 1]] - [-1, 1, nn.ConvTranspose2d, [128, 2, 2, 0]] - [[-1, 4], 1, Concat, [1]] # cat backbone P3 - [-1, 1, Conv, [128, 3, 1]] - [-1, 9, Conv, [128, 3, 1]] # 19 - [[14, 19], 1, Detect, [nc]] # Detect(P3, P4, P5) ``` Model printout: ``` from n params module arguments 0 -1 1 232 ultralytics.nn.modules.conv.Conv [3, 8, 3, 2] 1 -1 1 592 ultralytics.nn.modules.conv.Conv [8, 8, 3, 2] 2 -1 2 1184 ultralytics.nn.modules.conv.Conv [8, 8, 3, 1] 3 -1 1 592 ultralytics.nn.modules.conv.Conv [8, 8, 3, 2] 4 -1 4 2368 ultralytics.nn.modules.conv.Conv [8, 8, 3, 1] 5 -1 1 592 ultralytics.nn.modules.conv.Conv [8, 8, 3, 2] 6 -1 6 3552 ultralytics.nn.modules.conv.Conv [8, 8, 3, 1] 7 -1 1 592 ultralytics.nn.modules.conv.Conv [8, 8, 3, 2] 8 -1 2 1184 ultralytics.nn.modules.conv.Conv [8, 8, 3, 1] 9 -1 1 184 ultralytics.nn.modules.block.SPPF [8, 8, 5] 10 -1 1 80 ultralytics.nn.modules.conv.Conv [8, 8, 1, 1] 11 -1 1 264 torch.nn.modules.conv.ConvTranspose2d [8, 8, 2, 2, 0] 12 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1] 13 -1 1 1168 ultralytics.nn.modules.conv.Conv [16, 8, 3, 1] 14 -1 3 1776 ultralytics.nn.modules.conv.Conv [8, 8, 3, 1] 15 -1 1 80 ultralytics.nn.modules.conv.Conv [8, 8, 1, 1] 16 -1 1 264 
torch.nn.modules.conv.ConvTranspose2d [8, 8, 2, 2, 0] 17 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1] 18 -1 1 1168 ultralytics.nn.modules.conv.Conv [16, 8, 3, 1] 19 -1 3 1776 ultralytics.nn.modules.conv.Conv [8, 8, 3, 1] 20 [14, 19] 1 94178 ultralytics.nn.modules.head.Detect [1, [8, 8]] YOLOv6-face summary: 145 layers, 111,826 parameters, 111,810 gradients, 1.0 GFLOPs ``` ### Additional Lately even when I reduce the number of layers or operations, the inference on the training hw (Kaggle's P100) always takes ~0.3ms, why is this? Am I missing something? ``` Speed: 0.0ms preprocess, 0.3ms inference, 0.0ms loss, 1.1ms postprocess per image ```
closed
2025-01-24T14:18:36Z
2025-03-11T20:30:27Z
https://github.com/ultralytics/ultralytics/issues/18864
[ "question", "detect", "embedded", "exports" ]
EmmanuelMess
13
openapi-generators/openapi-python-client
rest-api
902
Error No package metadata was found for openapi_python_client when running app.exe built from pyinstaller
**Describe the bug** **Error `importlib.metadata.PackageNotFoundError: No package metadata was found for openapi_python_client` when running `app.exe` built from `pyinstaller`.** I have an application that uses the `openapi_python_client` library, and I've built the application into an exe file using PyInstaller to include the entire library. However, when running the resulting exe file after building, I encounter the following error: ![image](https://github.com/openapi-generators/openapi-python-client/assets/44437492/cb33ac01-2ac3-4d8f-9b18-0ea144b52243) I have tried adjusting the `__version__` in the `__init__.py` file of the `openapi_python_client` library from `__version__ = version(__package__)` to `__version__ = "1.0"`, and the error is resolved. ![image](https://github.com/openapi-generators/openapi-python-client/assets/44437492/9f4a50ad-f10f-499d-887c-c4c9c2900b42) **Is there a way to adjust this part so that the application doesn't encounter this error when built with PyInstaller?** **Desktop (please complete the following information):** - OS: [windows 10] - Python Version: [3.10.11] - openapi-python-client version [v0.16.0]
closed
2023-12-13T12:23:14Z
2023-12-18T22:44:04Z
https://github.com/openapi-generators/openapi-python-client/issues/902
[]
dinhthang1987
1
ray-project/ray
data-science
51,510
[Core] Cover cpplint for `ray/core_worker` (excluding transport)
## Description As part of the initiative to introduce cpplint into the pre-commit hook, we are gradually cleaning up C++ folders to ensure compliance with code style requirements. This issue focuses on cleaning up `/src/ray/core_worker` (excluding transport, as it's being covered through #51457). ## Goal - Ensure all .h and .cc files in `/src/ray/core_worker` comply with cpplint rules. - Address or suppress all cpplint warnings. ## Steps to Complete - Checkout the latest main branch and install the pre-commit hook. - Manually modify all C++ files in `/src/ray/core_worker` to trigger cpplint (e.g., by adding a newline). - Run git commit to trigger cpplint and identify issues. - Fix the reported issues or suppress them using clang-tidy if necessary. This is a sub issue from https://github.com/ray-project/ray/issues/50583
closed
2025-03-19T02:31:35Z
2025-03-22T04:10:59Z
https://github.com/ray-project/ray/issues/51510
[ "enhancement", "core" ]
nishi-t
2
gevent/gevent
asyncio
1,543
gevent Native Module Errors On Heroku While Running Embedded
* gevent version: 1.4.0 * Python version: Python 3.8.X provided from Heroku's buildpack * Operating System: Heroku Web Dyno ### Description: While running `gevent.monkey.patch_all()` in my web application, my Heroku app quit with this error: ``` 2020-03-10T20:49:00.873710+00:00 app[web.1]: Traceback (most recent call last): 2020-03-10T20:49:00.873721+00:00 app[web.1]: File "/app/out/server_impl/__init__.py", line 1, in <module> 2020-03-10T20:49:00.873855+00:00 app[web.1]: from gevent import monkey; monkey.patch_all() 2020-03-10T20:49:00.873875+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.8/site-packages/gevent/__init__.py", line 87, in <module> 2020-03-10T20:49:00.874029+00:00 app[web.1]: from gevent._hub_local import get_hub 2020-03-10T20:49:00.874033+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.8/site-packages/gevent/_hub_local.py", line 101, in <module> 2020-03-10T20:49:00.874173+00:00 app[web.1]: import_c_accel(globals(), 'gevent.__hub_local') 2020-03-10T20:49:00.874189+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.8/site-packages/gevent/_util.py", line 105, in import_c_accel 2020-03-10T20:49:00.874339+00:00 app[web.1]: mod = importlib.import_module(cname) 2020-03-10T20:49:00.874343+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.8/importlib/__init__.py", line 127, in import_module 2020-03-10T20:49:00.874518+00:00 app[web.1]: return _bootstrap._gcd_import(name[level:], package, level) 2020-03-10T20:49:00.874548+00:00 app[web.1]: ImportError: /app/.heroku/python/lib/python3.8/site-packages/gevent/__hub_local.cpython-38-x86_64-linux-gnu.so: undefined symbol: PyExc_SystemError ``` I don't know what caused this, the `monkey.patch_all` function runs perfectly everywhere except on Heroku, my assumption is that some asset typically present on a normal operating system is missing on Heroku's web dynos.
closed
2020-03-10T21:26:46Z
2020-03-28T23:42:57Z
https://github.com/gevent/gevent/issues/1543
[]
danii
9
modAL-python/modAL
scikit-learn
91
Can this package be applied to one's own classifier?
open
2020-07-16T08:10:41Z
2020-07-17T01:55:36Z
https://github.com/modAL-python/modAL/issues/91
[]
liumuyan666
1
ageitgey/face_recognition
machine-learning
1,274
face_recognition pipeline for multiple sources
* face_recognition version: - * Python version:3.7 * Operating System: ubuntu 18.04 ### Description Hi, I have implemented a face recognition pipeline with the face_recognition library on a Jetson Nano which fetched 7-10 fps from a single 1080p source, which is decent for a single source. But, to increase the number of sources I am planning to use this library with DeepStream to process multiple streams. Please do let me know if you've implemented something similar.
open
2021-02-02T08:21:37Z
2021-04-08T13:27:36Z
https://github.com/ageitgey/face_recognition/issues/1274
[]
shubham-shahh
6
matplotlib/mplfinance
matplotlib
352
Usage Question
Hi, I'm trying to use your library but having a bit of difficulty. I am getting minute candles through a REST API, placing them into a pandas DataFrame, parsing the date/time into a datetime object, and then using set_index on my datetime column, but I'm getting stuck getting the plot to show up. The error is: raise TypeError('Expect data.index as DatetimeIndex') TypeError: Expect data.index as DatetimeIndex I can provide more data if needed; any help is appreciated. I also want to mention I appreciate your work, so I want to thank you in advance.
closed
2021-03-14T01:42:29Z
2021-03-14T02:54:46Z
https://github.com/matplotlib/mplfinance/issues/352
[ "question" ]
RT-Tap
2
ydataai/ydata-profiling
jupyter
938
assign report object raise AttributeError: can't set attribute
**Describe the bug** when I run `ieee_fraud_report.report = original_report_structure` in modify_report_structure.ipynb, I come across `raise AttributeError: can't set attribute ` **To Reproduce** ```python """ Test for issue XXX: https://github.com/pandas-profiling/pandas-profiling/issues/XXX """ import pandas as pd from pandas_profiling import ProfileReport from copy import deepcopy df = pd.read_csv("./taitanic/train.csv") ieee_fraud_report = ProfileReport(df, minimal=True) original_report_structure = deepcopy(ieee_fraud_report.report) for section in original_report_structure.content["body"].content["items"]: # Only consider sections that contain items # if len(section.content['items']) > 0: # Set the report structure ieee_fraud_report.report = original_report_structure --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_19264/2377341498.py in <module> 3 # if len(section.content['items']) > 0: 4 # Set the report structure ----> 5 ieee_fraud_report.report = original_report_structure AttributeError: can't set attribute ```
open
2022-03-09T16:20:08Z
2022-05-07T20:01:12Z
https://github.com/ydataai/ydata-profiling/issues/938
[ "help wanted 🙋", "documentation 📖" ]
searchlink
1
ultralytics/yolov5
machine-learning
13,064
Class scores from TFlite model's output data don't add up to 1
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hi, I have successfully trained a custom model based on YOLOv5s and converted the model to TFlite. I get this as output: name: StatefulPartitionedCall:0 tensor: float32[1,10647,15] In this output array, I expect the column names to be [xywh, conf, class0, class1, class2, class3, class4, class5, class6, class7, class8, class9]. Here is a sample of the output array: `[0.0099678915, 0.02021235, 0.048227567, 0.11275095, 0.0020225942, 0.10732424, 0.048576027, 0.18665865, 0.07772142, 0.020257145, 0.13898787, 0.039612412, 0.074305505, 0.05975789, 0.008609295]` If you look at just the class scores, they don't add up to 1, so there is some issue here. Here is the relevant code: ```` /** * Writes Image data into a {@code ByteBuffer}. */ protected ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) { ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * BATCH_SIZE * INPUT_SIZE * INPUT_SIZE * PIXEL_SIZE); byteBuffer.order(ByteOrder.nativeOrder()); int[] intValues = new int[INPUT_SIZE * INPUT_SIZE]; bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight()); int pixel = 0; if(imgData != null){ imgData.rewind(); } for (int i = 0; i < INPUT_SIZE; ++i) { for (int j = 0; j < INPUT_SIZE; ++j) { int pixelValue = intValues[i * INPUT_SIZE + j]; if (isModelQuantized) { // Quantized model imgData.putFloat(((pixelValue >> 16) & 0xFF) / 255.0f); imgData.putFloat(((pixelValue >> 8) & 0xFF) / 255.0f); imgData.putFloat((pixelValue & 0xFF) / 255.0f); } else { // Float model imgData.putFloat((((pixelValue >> 16) & 0xFF)- IMAGE_MEAN) / IMAGE_STD); //image_mean = 0f and image_std = 255f imgData.putFloat((((pixelValue >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD); imgData.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD); } } } return 
imgData; } ``` public ArrayList<Recognition> recognizeImage(Bitmap bitmap) { Bitmap resizedBitmap = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, true); ByteBuffer byteBuffer_ = convertBitmapToByteBuffer(resizedBitmap); float[][][] out = new float[1][output_box][15]; if(outData != null){ outData.rewind(); } Object[] inputArray = {imgData}; this.tfLite.run(byteBuffer_, out); ArrayList<Recognition> detections = new ArrayList<Recognition>(); for (int i = 0; i < output_box; ++i) { // Denormalize xywh for (int j = 0; j < 4; ++j) { out[0][i][j] *= getInputSize(); } } float[] probs = new float[output_box]; for (int i = 0; i < output_box; ++i){ probs[i]= out[0][i][4]; } System.out.println("Softmax"); probs = softmax(probs); for (int i = 0; i < output_box; ++i){ out[0][i][4] = probs[i]; System.out.println(probs[i]); } for (int i = 0; i < output_box; ++i){ final int offset = 0; final float confidence = out[0][i][4]; int detectedClass = -1; float maxClass = -1; final float[] classes = new float[10]; for (int c = 0; c < 10; ++c) { classes[c] = out[0][i][5 + c]; } System.out.println("Step 2"); for (int c = 0; c < 10; ++c) { if (Float.compare(classes[c],maxClass) > 0) { detectedClass = c; maxClass = classes[c]; } } final float confidenceInClass = maxClass * confidence; System.out.println("Confidence in Class:" + confidenceInClass); System.out.println("Confidence in Label:" + confidence); if (confidenceInClass > 0.3f) { final float xPos = out[0][i][0]; final float yPos = out[0][i][1]; final float w = out[0][i][2]; final float h = out[0][i][3]; final RectF rect = new RectF( Math.max(0, xPos - w / 2), Math.max(0, yPos - h / 2), Math.min(bitmap.getWidth() - 1, xPos + w / 2), Math.min(bitmap.getHeight() - 1, yPos + h / 2)); detections.add(new Recognition("" + offset, this.labels.get(detectedClass), confidenceInClass, rect, detectedClass)); } } final ArrayList<Recognition> recognitions = nms(detections); return recognitions; }` ### Additional I have updated java, yolo, and 
tensorflow as of 6/3/2024. I am using android studio 3.6.3.
closed
2024-06-03T11:42:46Z
2024-10-20T19:47:10Z
https://github.com/ultralytics/yolov5/issues/13064
[ "question", "Stale" ]
Rishivarshil
5
akfamily/akshare
data-science
5,305
Please help add a batch share-capital dataset, thanks!
Specific URL: https://webapi.cninfo.com.cn/#/thematicStatistics Data page: ![Uploading image.png…]() Many thanks!
closed
2024-11-04T07:53:34Z
2024-11-05T10:21:28Z
https://github.com/akfamily/akshare/issues/5305
[ "bug" ]
jasonudu
2
STVIR/pysot
computer-vision
327
Training problem with siammask
When I train siammask, I always run into the following problem. Does anyone have a solution? Thanks! [2020-03-18 17:21:45,190-rk0-model_load.py# 48] load pretrained model from /media/misstian/tnq/pysot-master/tools/../pretrained_models/resnet50.model [2020-03-18 17:21:45,349-rk0-model_load.py# 42] remove prefix 'module.' [2020-03-18 17:21:45,350-rk0-model_load.py# 33] used keys:265 [2020-03-18 17:21:45,354-rk0-train.py# 58] build train dataset Traceback (most recent call last): File "../../tools/train.py", line 317, in <module> main() File "../../tools/train.py", line 290, in main train_loader = build_data_loader() File "../../tools/train.py", line 60, in build_data_loader train_dataset = TrkDataset() File "/media/misstian/tnq/pysot-master/tools/pysot/datasets/dataset.py", line 158, in __init__ subdata_cfg = getattr(cfg.DATASET, name) File "/home/misstian/anaconda3/envs/pysot/lib/python3.7/site-packages/yacs/config.py", line 141, in __getattr__ raise AttributeError(name) AttributeError: C
closed
2020-03-18T09:29:06Z
2020-03-19T04:12:14Z
https://github.com/STVIR/pysot/issues/327
[]
Dtappledoghuati
0
taverntesting/tavern
pytest
632
Use external functions to generate query string parameters doesn't work after tavern-1.12.2
```yaml name: Request trade analysis for a date range request: url: "{tavern.env_vars.APP_URL}/v2/price" method: GET verify: false params: $ext: function: api.test.tavern.utils:generate_multi_date_request headers: content-type: application/json response: status_code: 200 verify_response_with: function: api.test.tavern.utils:validate_ta_data ``` We have a test case that looks like this and use an external Python function to generate the query string parameters. This used to work before `tavern-1.12.2`. We didn't change any code but only bumped the tavern version so the issue should be replicable using a dummy example as well. Not sure if this is related to the `pykwalify` version.
closed
2021-01-04T02:19:03Z
2021-01-30T16:09:11Z
https://github.com/taverntesting/tavern/issues/632
[ "Type: Bug" ]
DonghanYang
2
miguelgrinberg/python-socketio
asyncio
1,081
Error when emitting a message to multiple recipients at the same time
**Describe the bug** Hello, big fan of this library, thank you for maintaining it! I wanted to report what looks like a bug in the `AsyncManager`'s emit method. Based on [the docs for emit](https://python-socketio.readthedocs.io/en/latest/api.html#socketio.AsyncServer.emit), it seems like I can pass a `list` of recipients to the `to`/`room` argument to emit an event to multiple rooms but I get the error shown below: ```python-traceback Traceback (most recent call last): File "/home/taha/.cache/pypoetry/virtualenvs/socketio-issue-CHLv_kh8-py3.10/lib/python3.10/site-packages/socketio/asyncio_server.py", line 524, in _handle_event_internal r = await server._trigger_event(data[0], namespace, sid, *data[1:]) File "/home/taha/.cache/pypoetry/virtualenvs/socketio-issue-CHLv_kh8-py3.10/lib/python3.10/site-packages/socketio/asyncio_server.py", line 558, in _trigger_event ret = await handler(*args) File "/tmp/test/server.py", line 12, in ping await sio_server.emit("pong", to=[sid]) File "/home/taha/.cache/pypoetry/virtualenvs/socketio-issue-CHLv_kh8-py3.10/lib/python3.10/site-packages/socketio/asyncio_server.py", line 168, in emit await self.manager.emit(event, data, namespace, room=room, File "/home/taha/.cache/pypoetry/virtualenvs/socketio-issue-CHLv_kh8-py3.10/lib/python3.10/site-packages/socketio/asyncio_manager.py", line 18, in emit if namespace not in self.rooms or room not in self.rooms[namespace]: TypeError: unhashable type: 'list' ``` ^ `room` here is a `list` which is not hashable and is throwing this `TypeError` https://github.com/miguelgrinberg/python-socketio/blob/81f872c17051b0d1d0cea7ca49a3bdca6f6bae1d/src/socketio/asyncio_manager.py#L18 **To Reproduce** I made a simple client/server setup to verify this and confirmed that it only affects the async version of the server (the synchronous server seems to work as expected). 
`server.py` ```python from aiohttp import web from socketio import AsyncServer sio_server = AsyncServer(logger=True, engineio_logger=True) app = web.Application() sio_server.attach(app) @sio_server.event async def test_event_from_client(sid): await sio_server.emit("test_event_from_server", to=sid) # this works as expected await sio_server.emit("test_event_from_server", to=[sid]) # this does not if __name__ == "__main__": web.run_app(app, port=8080) ``` `client.py` ```python import asyncio from socketio import AsyncClient sio_client = AsyncClient(logger=True, engineio_logger=True) async def test_client(): await sio_client.connect("http://localhost:8080") await sio_client.emit("test_event_from_client") await sio_client.wait() if __name__ == "__main__": asyncio.run(test_client()) ``` **Expected behavior** I'd expect no error to be raised and that a message would be emitted to the unique list of participants for the list of ids passed to `emit()` (ex. if I emit to a list of room ids, each client in at least one of the rooms would receive one and only one message). 
**Logs** Server logs: ``` received event "test_event_from_client" from sNDx2ya64AYLtpHYAAAB [/] emitting event "test_event_from_server" to sNDx2ya64AYLtpHYAAAB [/] w_QsRzjHdk0VbslPAAAA: Sending packet MESSAGE data 2["test_event_from_server"] emitting event "test_event_from_server" to ['sNDx2ya64AYLtpHYAAAB'] [/] Traceback (most recent call last): File "/home/taha/.cache/pypoetry/virtualenvs/socketio-issue-CHLv_kh8-py3.10/lib/python3.10/site-packages/socketio/asyncio_server.py", line 524, in _handle_event_internal r = await server._trigger_event(data[0], namespace, sid, *data[1:]) File "/home/taha/.cache/pypoetry/virtualenvs/socketio-issue-CHLv_kh8-py3.10/lib/python3.10/site-packages/socketio/asyncio_server.py", line 558, in _trigger_event ret = await handler(*args) File "/tmp/test/server.py", line 12, in test_event_from_client await sio_server.emit("test_event_from_server", to=[sid]) File "/home/taha/.cache/pypoetry/virtualenvs/socketio-issue-CHLv_kh8-py3.10/lib/python3.10/site-packages/socketio/asyncio_server.py", line 168, in emit await self.manager.emit(event, data, namespace, room=room, File "/home/taha/.cache/pypoetry/virtualenvs/socketio-issue-CHLv_kh8-py3.10/lib/python3.10/site-packages/socketio/asyncio_manager.py", line 18, in emit if namespace not in self.rooms or room not in self.rooms[namespace]: TypeError: unhashable type: 'list' ```
closed
2022-11-06T23:03:36Z
2022-11-06T23:43:39Z
https://github.com/miguelgrinberg/python-socketio/issues/1081
[ "bug" ]
DarkAce65
2
microsoft/nni
pytorch
5,324
ValueError: RetiariiExeConfig: type of experiment_name ('mnist_search') is not typing.Optional[str]
**Describe the issue**: I ran the NAS tutorial on my M1 laptop. I built the NNI library following these [instructions](https://nni.readthedocs.io/zh/stable/notes/build_from_source.html). But an error happens: ``` ValueError: RetiariiExeConfig: type of experiment_name ('mnist_search') is not typing.Optional[str] [2023-01-26 13:27:25] Stopping experiment, please wait... [2023-01-26 13:27:25] Experiment stopped ``` **Environment**: - NNI version: 999.devo - Training service (local|remote|pai|aml|etc): local - Client OS: M1 MacOS - Server OS (for remote mode only): - Python version: 3.10.5 - PyTorch/TensorFlow version: 1.12.0 - Is conda/virtualenv/venv used?: miniforge3 (conda in Mac) - Is running in Docker?: no **Configuration**: - Experiment config (remember to remove secrets!): - Search space: **Log message**: - nnimanager.log: - dispatcher.log: - nnictl stdout and stderr: <!-- Where can you find the log files: LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout --> **How to reproduce it?**:
closed
2023-01-26T12:21:27Z
2023-02-15T16:47:50Z
https://github.com/microsoft/nni/issues/5324
[]
whubaichuan
6
twopirllc/pandas-ta
pandas
435
Can I make session volume with pandas_ta?
closed
2021-11-19T12:00:04Z
2021-11-19T16:39:57Z
https://github.com/twopirllc/pandas-ta/issues/435
[]
Ylelmon
0
elliotgao2/gain
asyncio
44
Add hooks before download and after download.
We would like to add some things before or after a download, such as saving the whole url. Or handle different status codes differently, e.g. following 3xx redirects, retrying on 5xx, and so on.
open
2018-07-14T17:51:51Z
2018-09-25T09:59:25Z
https://github.com/elliotgao2/gain/issues/44
[]
songww
3
davidteather/TikTok-Api
api
373
[BUG] - KeyError: 'challenge'
**Describe the bug** Getting videos by hashtag throwing exception **The buggy code** https://github.com/davidteather/TikTokBot/blob/master/tiktokbot.py **Expected behavior** Code works **Error Trace (if any)** Put the error trace below if there's any error thrown. ``` Traceback (most recent call last): File "/Users/*****/PycharmProjects/TestProjects/main.py", line 123, in <module> res = api.byHashtag(x, count=count, custom_did=did) File "/Users/*****k/PycharmProjects/TestProjects/venv/lib/python3.7/site-packages/TikTokApi/tiktok.py", line 744, in byHashtag id = self.getHashtagObject(hashtag)["challengeInfo"]["challenge"]["id"] KeyError: 'challenge' ``` **Desktop (please complete the following information):** - OS: MacOS - TikTokApi Version [3.7.9]
closed
2020-11-18T20:39:55Z
2020-11-18T22:18:19Z
https://github.com/davidteather/TikTok-Api/issues/373
[ "bug" ]
kami4ka
2
pywinauto/pywinauto
automation
490
How to click item in context menu
Hello, I am new to pywinauto and I need to automate one application that has a ToolbarWindow32 control with several buttons that make one "Context" menu to appear. I am able to click on all those buttons, but not to navigate inside the menu. I guess I am missing something basic. Here is what the Inspect.exe shows for that "Context" menu. ![context_menu](https://user-images.githubusercontent.com/24893177/39129512-54d6c2f0-4713-11e8-96e1-e27c505b25d4.PNG) I've read other posts where it was written that the search should start from the "Desktop" part (not from the actual control: in this case the toolbar), but I could not find any example posted (the wireshark , notepad and explorer examples are slightly different situations) Thanks in advance for your help.
closed
2018-04-23T13:37:15Z
2020-09-11T13:30:33Z
https://github.com/pywinauto/pywinauto/issues/490
[ "question" ]
bulyhome
4
christabor/flask_jsondash
flask
143
Consider removing d3 specific layouts in favor of vega equivalents
For example, the tree layout is far superior in https://vega.github.io/vega/examples/tree-layout/ than the one currently implemented, and would allow completely removing that category since vega (lite) is already implemented.
open
2017-08-04T06:32:30Z
2017-08-04T06:32:30Z
https://github.com/christabor/flask_jsondash/issues/143
[ "API change", "new chart" ]
christabor
0