repo_name: string (lengths 9 to 75)
topic: string (30 classes)
issue_number: int64 (1 to 203k)
title: string (lengths 1 to 976)
body: string (lengths 0 to 254k)
state: string (2 classes)
created_at: string (length 20)
updated_at: string (length 20)
url: string (lengths 38 to 105)
labels: list (lengths 0 to 9)
user_login: string (lengths 1 to 39)
comments_count: int64 (0 to 452)
cleanlab/cleanlab
data-science
1,065
lab.find_issues(features=features) outputs error for underperforming issue
`lab.find_issues(features=features)` output:

```
/Users/sanjana/cleanlab_home/fork_cleanlab/cleanlab/datalab/internal/issue_finder.py:457: UserWarning: No labels were provided. The 'label' issue type will not be run.
  warnings.warn("No labels were provided. " "The 'label' issue type will not be run.")
Finding null issues ...
Finding outlier issues ...
Fitting OOD estimator based on provided features ...
Finding near_duplicate issues ...
Finding non_iid issues ...
Finding underperforming_group issues ...
Error in underperforming_group: UnderperformingGroupIssueManager.find_issues() missing 1 required positional argument: 'pred_probs'
Failed to check for these issue types: [UnderperformingGroupIssueManager]

Audit complete. 984 issues found in the dataset.
```

Dataset: https://www.kaggle.com/datasets/laotse/credit-risk-dataset/data

Code:

```python
import pandas as pd
from cleanlab import Datalab
from sklearn.preprocessing import StandardScaler
import numpy as np

df = pd.read_csv("./credit_risk_dataset.csv")
df = df[~df.isnull().any(axis=1)].copy()

feature_columns = df.columns.to_list()
feature_columns.remove("loan_status")
X_raw = df[feature_columns]
labels = df["loan_status"]

cat_features = [
    "person_home_ownership",
    "loan_intent",
    "loan_grade",
    "cb_person_default_on_file",
]
numeric_features = [
    "person_age",
    "person_income",
    "person_emp_length",
    "loan_amnt",
    "loan_int_rate",
    "loan_percent_income",
    "cb_person_cred_hist_length",
]

X_encoded = pd.get_dummies(X_raw, columns=cat_features, drop_first=True, dtype='float')
scaler = StandardScaler()
X_processed = X_encoded.copy()
X_processed[numeric_features] = scaler.fit_transform(X_encoded[numeric_features])

lab = Datalab({"X": X_processed.to_numpy(), "y": labels})
lab.find_issues(features=X_processed.to_numpy())
```
closed
2024-03-26T19:16:45Z
2024-05-24T13:21:53Z
https://github.com/cleanlab/cleanlab/issues/1065
[ "bug", "help-wanted" ]
sanjanag
1
iperov/DeepFaceLab
machine-learning
5,229
Feature requests
In my opinion it would be interesting to have some useful additions:

1. Better management of the MANUAL selection during face extraction. The automatic detection often fails to pick up dark faces and the faces of Black people, and manual positioning is quite difficult.
2. The frame number during manual extraction. It would be very useful when, for example, you realize that the automatic extraction has missed some frames; with the number it would be easier to go directly to the desired frames and extract them manually.
3. Manual color management during manual merge, not only through the preset profiles but also the possibility to decrease or increase the color tints and brightness.
4. In the XSeg editor it would be very useful to have two different colors for the two masks, intended as "face" and "exclusions".

One of my deepfakes: https://youtu.be/SlYcX_kngrE

Thanks @iperov for everything you've done for the code so far!
open
2021-01-02T14:16:54Z
2021-01-08T21:10:30Z
https://github.com/iperov/DeepFaceLab/issues/5229
[]
robustini
4
SYSTRAN/faster-whisper
deep-learning
601
German?
I tried German, using the large model, but it wrote every word in lowercase (in German, many words are capitalized) and it did not set any punctuation. My video is 1.5 hours long and very clearly spoken. I cancelled after two minutes of transcription; it said it would take 10 hours. What went wrong? I installed directly via SE.
open
2023-12-02T13:58:35Z
2023-12-04T16:32:56Z
https://github.com/SYSTRAN/faster-whisper/issues/601
[]
HeikeMuc
2
plotly/dash-core-components
dash
480
DatePickerRange breaks when select first date
Good day. In the current version of dash-core-components, DatePickerRange throws an error in the console when it has been initialized without dates in Python: when you try to open the calendar, the component falls apart. The error is "currentMonth is undefined". This feels strange because I have a component built around airbnb/react-dates that can be initialized without pre-selected dates and, when opened, displays the current month; dash-core-components takes the same component from there, and I'm not saying that my component provides the date somehow. I also tried copy-pasting the entire component from airbnb/react-dates and rendering it, and it has no problem with missing dates. With DatePickerSingle it is not a problem to initialize it without any props or dates; the issue is only with the Range component.
closed
2019-03-07T12:46:11Z
2019-08-05T14:58:35Z
https://github.com/plotly/dash-core-components/issues/480
[ "dash-type-bug", "size: 1" ]
Ajaesuria
3
Johnserf-Seed/TikTokDownload
api
450
[BUG]
**Describe the bug**
A clear and concise description of the bug.

**To reproduce**
Steps to reproduce the behavior:
1. Installed and ran the tool in a Windows virtual machine on a Mac. When asked whether to upgrade, chose "no". Entered a short-video URL; the download reported that the network was poor and asked to retry. On retrying the download, the program crashed immediately.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: [e.g. Windows 10 64-bit]
- VPN/proxy: [e.g. on or off]
- Version: [e.g. 1.2.3]

**Additional context**
Add any other context about the problem here.
open
2023-06-12T02:53:13Z
2023-06-12T02:53:14Z
https://github.com/Johnserf-Seed/TikTokDownload/issues/450
[ "故障(bug)", "额外求助(help wanted)", "无效(invalid)" ]
xuedie12369
0
pytest-dev/pytest-qt
pytest
223
How to troubleshoot post-test hang?
What are possible causes for pytest-qt to hang *after* the test function returns but *before* py.test prints the "dot of success"? Aborting with ^C always yields this message:

```
ICE default IO error handler doing an exit(), pid = 27288, errno = 4
```
closed
2018-08-19T10:44:38Z
2018-08-19T10:55:42Z
https://github.com/pytest-dev/pytest-qt/issues/223
[]
enkore
2
paperless-ngx/paperless-ngx
machine-learning
8,017
[BUG] Can't upgrade to 2.13.0, migration fails
### Description

After updating from the previous version, I get this in the console when running migrations:

```
root@paperless-ngx:/opt/paperless/src# /usr/bin/python3 manage.py migrate
Traceback (most recent call last):
  File "/opt/paperless/src/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.9/dist-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.9/dist-packages/django/core/management/__init__.py", line 382, in execute
    settings.INSTALLED_APPS
  File "/usr/local/lib/python3.9/dist-packages/django/conf/__init__.py", line 102, in __getattr__
    self._setup(name)
  File "/usr/local/lib/python3.9/dist-packages/django/conf/__init__.py", line 89, in _setup
    self._wrapped = Settings(settings_module)
  File "/usr/local/lib/python3.9/dist-packages/django/conf/__init__.py", line 217, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/opt/paperless/src/paperless/settings.py", line 58, in <module>
    def __get_optional_int(key: str) -> int | None:
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
```

### Steps to reproduce

```bash
cd ~
wget -q https://github.com/paperless-ngx/paperless-ngx/releases/download/$RELEASE/paperless-ngx-$RELEASE.tar.xz
tar -xf paperless-ngx-$RELEASE.tar.xz
cp -r /opt/paperless/paperless.conf paperless-ngx/
cp -r paperless-ngx/* /opt/paperless/
cd /opt/paperless
pip install -r requirements.txt
cd /opt/paperless/src
/usr/bin/python3 manage.py migrate
```

### Webserver logs

```bash
-
```

### Browser logs

_No response_

### Paperless-ngx version

2.13.0

### Host OS

Ubuntu 20.04.6 LTS

### Installation method

Bare metal

### System status

_No response_

### Browser

_No response_

### Configuration changes

_No response_

### Please confirm the following

- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
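For context on the error itself: `int | None` is PEP 604 union syntax, which is only evaluated successfully on Python 3.10+, so it raises `TypeError: unsupported operand type(s) for |` on the Python 3.9 host shown in the traceback. A minimal, generic illustration of 3.9-compatible spellings follows (a sketch, not the project's actual patch; upgrading the host Python is the other obvious route):

```python
from typing import Optional


def get_optional_int_classic(key: str) -> Optional[int]:
    """typing.Optional works unchanged on Python 3.9."""
    return None


def get_optional_int_pep604(key: str) -> "int | None":
    """PEP 604 spelling, quoted so the annotation is not evaluated at
    definition time on 3.9 (a module-wide alternative is
    `from __future__ import annotations`)."""
    return None


print(get_optional_int_classic("x"), get_optional_int_pep604("x"))  # None None
```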
closed
2024-10-26T06:16:55Z
2024-11-26T03:14:58Z
https://github.com/paperless-ngx/paperless-ngx/issues/8017
[ "not a bug" ]
hellofaduck
2
litestar-org/litestar
asyncio
3,681
bug: inconsistent behavior between `signature_namespace` and `signature_types`
It's currently difficult to use the simpler `signature_types` syntax in plugins due to the potential of there being an exception raised during startup. We should consider removing this entire section to keep the behavior consistent between `signature_namespace` and `signature_types`. https://github.com/litestar-org/litestar/blob/238d26bab30afa1f852ef0d88830077aab77ba35/tests/unit/test_utils/test_signature.py#L159-L168
closed
2024-08-21T00:16:29Z
2025-03-20T15:54:52Z
https://github.com/litestar-org/litestar/issues/3681
[ "Bug :bug:", "Good First Issue", "Refactor", "area/signature" ]
cofin
2
huggingface/transformers
nlp
36,744
num_items_in_batch unexpected in vision encoder decoder
### System Info

![Image](https://github.com/user-attachments/assets/31ce5f30-1a51-4782-8739-823b8a6e98c9)

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Run a forward pass on a vision encoder-decoder model such as Donut.

### Expected behavior

The model trains.

![Image](https://github.com/user-attachments/assets/19b67b65-8093-40da-909f-b26f46e50c47)
closed
2025-03-15T19:32:32Z
2025-03-20T10:35:36Z
https://github.com/huggingface/transformers/issues/36744
[ "bug" ]
eljandoubi
0
quantumlib/Cirq
api
6,612
`cirq.PauliStringPhasor` returns wrong unitary
**Description of the issue**

I am trying to check the unitary of a `cirq.PauliStringPhasor`. I don't get what I expect.

**How to reproduce the issue**

```python
import cirq

qubits = cirq.LineQubit.range(2)
phasor = cirq.PauliStringPhasor(cirq.Z(qubits[0]), qubits, exponent_neg=1)
circuit = cirq.Circuit(phasor)
print(circuit)
print(cirq.unitary(circuit))
```

![image](https://github.com/quantumlib/Cirq/assets/85507449/c5eb1de0-f87e-4f92-bea3-14660574782d)

The unitary this generates represents Z⊗Z, not Z⊗I.

**Cirq version**
1.4.0.dev20240403011731
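To make the expectation concrete, here is a dependency-free check of the two candidate diagonals (an illustration of what the issue expects, not a Cirq reproduction): a phasor acting on `Z(qubits[0])` alone should match Z⊗I, while the reported unitary matches Z⊗Z.

```python
def kron(a, b):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

Z = [[1, 0], [0, -1]]
I = [[1, 0], [0, 1]]

z_i = kron(Z, I)  # expected: phase depends on qubit 0 only
z_z = kron(Z, Z)  # what the printed unitary corresponds to

print([z_i[k][k] for k in range(4)])  # [1, 1, -1, -1]
print([z_z[k][k] for k in range(4)])  # [1, -1, -1, 1]
```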
open
2024-05-22T17:10:46Z
2024-07-03T16:23:05Z
https://github.com/quantumlib/Cirq/issues/6612
[ "kind/bug-report", "triage/accepted" ]
pgoiporia
10
keras-team/keras
tensorflow
20,373
Loss calculation fails with multiple outputs due to the LossWrapper
When trying to execute the forward pass of a multi-output model with a loss, the loss calculation fails because of a call to `squeeze_or_expand_to_same_rank` inside the `LossWrapper`: the parameters passed are of type tuple and list instead of a single tensor. The function fails on the call to `shape` on the passed parameters: https://github.com/keras-team/keras/blob/bce176f7a239b32ad321e8d7a019588ee4217baa/keras/src/losses/loss.py#L109 `Loss`'s implementation initially calls `tf.convert_to_tensor` before the squeeze call, while `LossWrapper` omits this step. As the wrapper is executed first, losses no longer handle multiple outputs. https://github.com/keras-team/keras/blob/bce176f7a239b32ad321e8d7a019588ee4217baa/keras/src/losses/losses.py#L1291 There are two ways to fix this problem that shouldn't pose too big a problem: either add `convert_to_tensor` first in `LossWrapper`, or remove the `squeeze_or_expand_to_same_rank` call altogether and rely on proper loss function implementations.
closed
2024-10-17T17:00:57Z
2024-11-23T01:42:55Z
https://github.com/keras-team/keras/issues/20373
[ "type:Bug" ]
markomitos
10
litestar-org/litestar
api
3,340
Bug: cannot start server with `PydanticPlugin` if using `pydantic == 1.x.x`
### Description

#3296 introduced a dictionary key, as seen [here](https://github.com/litestar-org/litestar/blob/63c2510f36a5aa8c843a385b4e725013ff95fb3a/litestar/contrib/pydantic/pydantic_dto_factory.py#L50C5-L50C16), that accesses an attribute that will not exist if `pydantic_v2` is `None`. This makes server startup impossible.

### URL to code causing the issue

_No response_

### MCVE

_No response_

### Steps to reproduce

_No response_

### Screenshots

_No response_

### Logs

```bash
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 78, in subprocess_started
    target(sockets=sockets)
  File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/server.py", line 65, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/server.py", line 69, in serve
    await self._serve(sockets)
  File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/server.py", line 76, in _serve
    config.load()
  File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/config.py", line 433, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/developer/.local/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/app/backend/src/main.py", line 4, in <module>
    from litestar.contrib.pydantic import PydanticPlugin
  File "/home/developer/.local/lib/python3.11/site-packages/litestar/contrib/pydantic/__init__.py", line 8, in <module>
    from .pydantic_dto_factory import PydanticDTO
  File "/home/developer/.local/lib/python3.11/site-packages/litestar/contrib/pydantic/pydantic_dto_factory.py", line 50, in <module>
    pydantic_v2.JsonValue: Any,
    ^^^^^^^^^^^^^^^^^^^^^
AttributeError: '_EmptyEnum' object has no attribute 'JsonValue'
```

### Litestar Version

2.8.0

### Platform

- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
closed
2024-04-08T10:25:29Z
2025-03-20T15:54:33Z
https://github.com/litestar-org/litestar/issues/3340
[ "Bug :bug:" ]
LonelyVikingMichael
2
OthersideAI/self-operating-computer
automation
221
[BUG] Brief Description of the Issue
This is my first time trying it. However, when I use the command `operate -m gemini-pro-vision`, add my key, and start the process, I get an error asking me to provide an OpenAI API key. I'm using Gemini, not OpenAI.
open
2024-12-24T09:16:35Z
2024-12-24T10:22:00Z
https://github.com/OthersideAI/self-operating-computer/issues/221
[ "bug" ]
ahmedabdallah0
2
autokey/autokey
automation
676
Custom location for new subfolders, phrases, or scripts
### Has this issue already been reported? - [X] I have searched through the existing issues. ### Is this a question rather than an issue? - [X] This is not a question. ### What type of issue is this? Enhancement ### Which Linux distribution did you use? Any. ### Which AutoKey GUI did you use? Both ### Which AutoKey version did you use? Any. ### How did you install AutoKey? Standard versions were installed from the Linux distribution's repository and the beta was installed using pip. ### Can you briefly describe the issue? In any version of AutoKey using either front end, you're not able to choose the location for new items when clicking the "New..." button. ### Can the issue be reproduced? Always ### What are the steps to reproduce the issue? 1. Click the "New..." button in the toolbar. 2. Choose to create a subfolder, phrase, or script. 3. Click the "OK" button. 4. Look for your new subfolder, phrase, or script. ### What should have happened? We should be given a choice of location for the new item since it may not belong in the current location. ### What actually happened? _The behavior described below is different in each front end and this has been reported in the #675 issue:_ In the Qt front end, the new subfolder, phrase, or script will be placed in the currently-selected folder. In the GTK front end, the new subfolder, phrase, or script will be placed in the currently-selected folder or in the folder that the currently-selected phrase or script is in. ### Do you have screenshots? _No response_ ### Can you provide the output of the AutoKey command? _No response_ ### Anything else? This is not an issue if you want to create the new item in your current location, but if, for example, you have the phrase folder selected when you create a new script, it will be placed into the phrase folder since that's where you are. 
Although being aware of this is helpful for finding newly-added items that may not be where we expect them to be, it might be a good idea for AutoKey to make the location a choice.
open
2022-02-24T20:22:30Z
2022-03-04T01:50:20Z
https://github.com/autokey/autokey/issues/676
[ "bug", "enhancement", "autokey-qt", "autokey-gtk", "user interface" ]
Elliria
8
PrefectHQ/prefect
data-science
17,398
UI Variable updates don't work correctly when adding prefixed text
### Bug summary

Hello! I would like to report a bug with updating Variables defined in the Cloud UI. Whenever I update a variable and prepend a word to a plain string `""`, any change with a space character is not saved correctly. I'll go through an example:

1. Created a new variable with the text `"hello there"`.
2. Edit the variable to change the text to `"hi hello there"`.
3. Save the changes.
4. Observe the new state of the variable is `"hihello there"`.

I'll share screenshots below. Thank you!

### Version info

```Text
n/a (Cloud Only)
Google Chrome Version 133.0.6943.142 (Official Build) (arm64)
```

### Additional context

![Image](https://github.com/user-attachments/assets/ccf6c333-78bb-465c-aac4-e69a7ddea939)
![Image](https://github.com/user-attachments/assets/0b3045e2-8bb8-4555-8da9-50d16c6a4f47)
![Image](https://github.com/user-attachments/assets/5913ef7a-e905-49e2-ba04-347826e060b0)
![Image](https://github.com/user-attachments/assets/887b0c62-e7ed-4922-9150-237f808bb40d)
open
2025-03-06T15:36:06Z
2025-03-06T18:32:59Z
https://github.com/PrefectHQ/prefect/issues/17398
[ "bug", "ui" ]
joshcaskie
1
ymcui/Chinese-LLaMA-Alpaca
nlp
92
Model repeatedly outputs "下面是一道例题" ("Below is an example problem")
我做错了什么???? ```sh Below is an instruction that describes a task. Write a response that appropriately completes the request. > 手中中文怎样? I can speak Chinese very well。 下面是一道例题,请写出一个恰当的答案: 请问我怎么了啊?我的邮箱地址是什么问题呀!谢谢啦~! 我怎么了呢??我在新浪上了一个邮件地址,但是现在它都不发信了.请问是怎么回事呀???谢谢啦~~!!!!!! 尊敬的邮件地址: (如果您在新浪网注册并开通了自己的电子邮件服务) 你们可以使用以下邮箱地址发送您的消息:sinab@v ip6.com(请注意后面有“.”) 如果您想查看自己的发信记录,请您点击下面链接进入个人信息中心。 尊敬的邮件地址:sinab@vip6.com(请注意后面有“.”) 如果你要查看你的发信纪录,请单击下边的网址打开您的个人信息中心。 我怎么了呢??我的邮箱地址是什么问题呀???谢谢啦~~!!!!!! 你们可以使用以下邮箱地址发送您的消息:sinab@vip6.com(请注意后面有“.”)如果您想查看自己的发信记录,请您单击下边的网址打开您的个人信息中心 > 。 尊敬的邮件地址:sinab@vip6.com(请注意后面有“.”)如果您要查看你的发信纪录,请单击下面链接^C > 进入 个人 信息中心 。 我怎么了呢??我的邮箱^C > 地址 是什么问题呀???谢谢^C > 如何做秘方? 我可以做很多事情! 下面是一道例题,请写出一个恰当的答案: 请问我怎么了?我的邮件地址是什么问题是问题的呀??谢谢大家啦!~谢谢啦!!! 你们可以在下使用以下邮箱地址发送您的消息sinab@vip6.com(请注意后面有“.”)如果您想查看你的发信记录,请您单击下面链接打开个人信息中心。 尊敬的电子邮件地址:sinab@vip6.com(请注意后面有“.”)如果您要查看自己的发信纪录,请单击下边的网址打开您的个人信息中心。 我怎么了呢??我的邮箱地址是什么问题是问题的呀???谢谢大家啦!~谢谢啦!!!~~ 你们可以在下使用以下邮箱地址发送您的消息:sinab@vip6.com^C > 如何煮米饭 我可以做很多事情! 下面是一道例题,请写出 > 如何做披萨 我也能做的很好。 请问我怎么了?我的邮箱地址是什么问题是问题的呀??谢谢大家啦!~谢谢啦^C > 介绍北京 我可以介绍很多事情! 下面是一道例题,请写出一个恰当的答案: 你们可以在下使用以下邮箱地址发送^C > ho whow 我可以做一些事^C > hello, who are u? 我是一个学生,我的名字是张彦。 下面是一个例题:请写出一个恰当的答案.^C > speak english 我可以讲英语很好! 请问我怎么了?我的邮箱地址是什么问题是问题的呀??谢谢大家^C > ```
closed
2023-04-09T11:59:29Z
2023-05-17T22:02:39Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/92
[ "stale" ]
lingfengchencn
5
allure-framework/allure-python
pytest
620
allure-framework cant support with pytest-check
Hi, I've run into a problem, shown in the screenshot below: allure cannot work together with pytest-check.

![image](https://user-images.githubusercontent.com/16397200/117454332-9d01a880-af78-11eb-843b-c753ebf21cb6.png)
closed
2021-05-07T12:53:32Z
2023-01-10T08:08:06Z
https://github.com/allure-framework/allure-python/issues/620
[ "theme:pytest" ]
jamesz2011
1
globaleaks/globaleaks-whistleblowing-software
sqlalchemy
3,554
Onionsites Have Disconnected after upgrade
### What version of GlobaLeaks are you using?

From 4.11.5 to 4.12.3

### What browser(s) are you seeing the problem on?

Tor Browser

### What operating system(s) are you seeing the problem on?

Windows

### Describe the issue

After upgrading to 4.12.3, Tor said:

> Onionsite Has Disconnected
> The most likely cause is that the onionsite is offline. Contact the onionsite administrator.
> Details: 0xF2 — Introduction failed, which means that the descriptor was found but the service is no longer connected to the introduction point. It is likely that the service has changed its descriptor or that it is not running.

But the HTTPS sites are working. I tried upgrading and updating again and the result is the same.

### Proposed solution

_No response_
closed
2023-07-24T13:01:38Z
2023-07-25T19:12:15Z
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3554
[ "T: Bug", "C: Backend" ]
j3pc0m
5
pydata/bottleneck
numpy
377
[BUG] get wrong result in move_mean
I've found a strange thing. `bd.move_mean([1.9272201201869577, 0.0, 0.0, 0.0], 3)` prints:

```
[ nan, nan, 6.36527364e-01, 6.42406707e-01, 7.40148683e-17]
```

The last result should actually be `0` instead of `7.40148683e-17`. With the slow version here, https://github.com/pydata/bottleneck/blob/master/bottleneck/slow/move.py#L26, it results in:

```
[ nan nan 0.63652736 0.64240671 0. ]
```
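A guess at the mechanism (not a statement about bottleneck's internals): moving-window implementations typically maintain an O(1) running sum, and the subtraction step can leave tiny floating-point residue where the true mean is exactly 0. A dependency-free sketch of the technique:

```python
def move_mean_sliding(a, window):
    """Moving mean via a running sum: add the entering element, subtract
    the leaving one. O(n), but the update can leave tiny floating-point
    residue where the true mean is 0."""
    out = [float("nan")] * (window - 1)
    s = sum(a[:window])
    out.append(s / window)
    for i in range(window, len(a)):
        s += a[i] - a[i - window]  # the error-prone O(1) update
        out.append(s / window)
    return out


def move_mean_direct(a, window):
    """Reference version: recompute each window sum from scratch."""
    return [float("nan")] * (window - 1) + [
        sum(a[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(a))
    ]


data = [0.1, 0.2, 0.3, 0.0, 0.0, 0.0]
print(move_mean_sliding(data, 3))  # the last entry may be a tiny nonzero residue
print(move_mean_direct(data, 3))   # the last entry is exactly 0.0
```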
open
2021-03-19T06:18:56Z
2023-02-23T03:02:05Z
https://github.com/pydata/bottleneck/issues/377
[ "bug" ]
rogerdehe
2
aiogram/aiogram
asyncio
1,055
Auto-update LICENSE file every year automatically
### aiogram version

both

### Problem

We always forget to update the LICENSE file in the repository.

### Possible solution

We can use a GitHub Action that auto-updates the LICENSE file on a CRON schedule. Example: https://github.com/marketplace/actions/update-license-copyright-year-s

### Alternatives

_No response_

### Code example

_No response_

### Additional information

_No response_
closed
2022-11-04T02:24:24Z
2023-09-03T00:18:21Z
https://github.com/aiogram/aiogram/issues/1055
[ "enhancement", "good first issue", "3.x", "2.x" ]
JrooTJunior
2
roboflow/supervision
machine-learning
749
Extend `Detections.from_ultralytics` to support OBB
> [!IMPORTANT]
> Required for https://github.com/roboflow/supervision/issues/757.

### Description

TODO

### Use case

TODO

### Additional

TODO
closed
2024-01-18T20:28:27Z
2024-03-22T01:29:05Z
https://github.com/roboflow/supervision/issues/749
[ "enhancement", "Q1.2024" ]
SkalskiP
0
jupyter/nbgrader
jupyter
1,886
AutoTest hits an exception if the kernel has no `text/plain` response
When generating a notebook with autotest directives, cells that don't produce a `text/plain` output will cause AutoTest to crash with output:

```
  File "/opt/conda/lib/python3.11/site-packages/jupyter_client/client.py", line 551, in _async_execute_interactive
    output_hook(msg)
  File "/opt/conda/lib/python3.11/site-packages/nbgrader/preprocessors/instantiatetests.py", line 430, in _execute_code_snippet_output_hook
    self.execute_result = self.sanitizer(content["data"]["text/plain"])
                          ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'text/plain'
```

The offending cell that was the source of the key error in my case just rendered an `IRdisplay`:

```r
IRdisplay::display_html('<iframe width="560" height="315" src="https://www.youtube.com/[youtube video link]" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
```

The fix for this should be fairly easy: when the kernel returns a response, check that the content dict has a `text/plain` entry before trying to use it. If it doesn't, there's nothing to handle and AutoTest should move on, unless it's specifically expecting a response, in which case throw an exception with a more informative error message.
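A minimal sketch of the guard described above (a hypothetical helper, not the actual patch): look for the `text/plain` key before sanitizing, and return nothing when the cell produced only non-text output.

```python
def execute_result_from_message(content, sanitizer=str.strip):
    """Return the sanitized text/plain execute result, or None when the
    cell produced no text/plain output (e.g. HTML-only display data)."""
    data = content.get("data", {})
    if "text/plain" not in data:
        return None  # nothing to handle; the caller can move on
    return sanitizer(data["text/plain"])


# A cell that rendered only HTML has no text/plain entry:
html_only = {"data": {"text/html": "<iframe></iframe>"}}
print(execute_result_from_message(html_only))  # None

plain = {"data": {"text/plain": " 42 "}}
print(execute_result_from_message(plain))      # 42
```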
open
2024-05-24T16:38:52Z
2024-05-24T16:38:52Z
https://github.com/jupyter/nbgrader/issues/1886
[]
trevorcampbell
0
mckinsey/vizro
pydantic
834
404 page appears unstyled
Our 404 page has lost its styling and doesn't look good... ![image](https://github.com/user-attachments/assets/573ca603-9ccb-47d6-a87f-aa2de1b399f1) Almost certainly caused by https://github.com/mckinsey/vizro/pull/804. Note that the theme selector callback doesn't run on this page. Just adding `data-bs-theme="dark"` in devtools fixes it so I'm sure this is easy to fix, but I'm curious why it doesn't work already given that we set `:root` to be the same as that?
closed
2024-10-28T18:24:49Z
2024-11-13T14:05:31Z
https://github.com/mckinsey/vizro/issues/834
[ "Bug Report :bug:" ]
antonymilne
0
graphql-python/graphene-sqlalchemy
graphql
302
Latest Graphene Library, Assertion Error on same Enum in Multiple SQL Alchemy Columns
Hi everyone, I know there are quite a few issues documented regarding the assertion error that occurs as a result of the same enum being utilized in two different SQLAlchemy mappings with the same column name. I've read through https://github.com/graphql-python/graphene-sqlalchemy/issues/208, as well as this PR: https://github.com/graphql-python/graphene-sqlalchemy/pull/210, which in theory should resolve the issue I'm seeing. The other approaches around changing the naming with connection fields seem unnecessary in 2.3 based on the PR and other documentation that such cases are accounted for, but please let me know if I missed a potential solution. The resulting error is:

```
AssertionError: Found different types with the same name in the schema: JobStatusEnum, JobStatusEnum.
```

For context here's my enum, mappings, and graphene setup.

Enum:

```python
import enum

class JobStatusEnum(enum.Enum):
    matched = 1
    interested = 2
    closed = 3
    match_rejected = 4
```

Mapping (left out extra columns; CinexBase is just a `declarative_base(metadata=MetaData("schema_name"))`):

```python
from sqlalchemy import Enum  # and some other things

class SavedJobs(CinexBase):
    __tablename__ = 'saved_jobs'
    job_status = Column('job_status', Enum(JobStatusEnum))

class MatchedJobs(CinexBase):
    __tablename__ = 'matched_jobs'
    job_status = Column('job_status', Enum(JobStatusEnum))
```

The graphene setup:

```python
class SavedJobsType(SQLAlchemyObjectType):
    class Meta:
        model = SavedJobs
        interfaces = (graphene.relay.Node,)

class MatchedJobsType(SQLAlchemyObjectType):
    class Meta:
        model = MatchedJobs
        interfaces = (graphene.relay.Node,)
```

Query class (I have a custom object that adds authentication logic and filtering on top of the graphene-sqlalchemy-filter library, though this library simply extends the usage of core graphene-sqlalchemy objects by adding filters. Let me know if y'all think this may be the culprit: https://github.com/art1415926535/graphene-sqlalchemy-filter):

```python
class Query(graphene.ObjectType):
    matched_jobs = AuthModifiedFilterableConnectionField(MatchedJobsType.connection, filters=MatchedJobsFilter())
    saved_jobs = AuthModifiedFilterableConnectionField(SavedJobsType.connection, filters=SavedJobsFilter())
```

Would appreciate any help. The way I've gotten around this is creating an extra enum class with a slightly different name, as such:

```python
class SavedJobs(CinexBase):
    __tablename__ = 'saved_jobs'
    job_status = Column('job_status', Enum(JobStatusEnum))

class MatchedJobs(CinexBase):
    __tablename__ = 'matched_jobs'
    job_status = Column('job_status', Enum(JobStatusEnum2))
```

Though this solution seems impractical, as I would be expected to create multiple copies of the same enum for potentially 5+ tables and attempt to maintain a universal set of options. It defeats the purpose of the enum, IMO. Thanks in advance; would love any direction here!
closed
2021-03-24T19:46:39Z
2023-02-25T00:49:10Z
https://github.com/graphql-python/graphene-sqlalchemy/issues/302
[]
ShantanuJoshi
3
marshmallow-code/flask-marshmallow
rest-api
276
The parameter type passed to a `post_load`-decorated function changes with the function's name
```python
class AdGroupUpsertSchemaV3(ma.SQLAlchemySchema):
    name = fields.String(required=True)

    class Meta:
        model = models.AdGroup
        fields = 'name'
        load_instance = True

    @post_load
    def a(self, data, *args, **kwargs):
        print(type(data))
        return data
```

flask-marshmallow==0.14.0
marshmallow==3.17.0
marshmallow-sqlalchemy==0.28.1

In the code above, when the `post_load`-decorated function's name starts with a letter from a to m, `data` is a dict; when the name starts with a letter from n to z, `data` is `<class 'modules.advertising.group.models.AdGroup'>`. Why?
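One plausible mechanism (an assumption, not confirmed in this thread): marshmallow discovers processor hooks by iterating over class attributes in `dir()` order, which is alphabetical, and marshmallow-sqlalchemy's `load_instance` support registers its own `post_load` hook named `make_instance`. A hook sorting before `make_instance` would then run first and receive the raw dict; one sorting after would receive the already-built model instance. The ordering itself is easy to demonstrate:

```python
class Schema:
    def a(self): ...              # sorts before "make_instance"
    def make_instance(self): ...  # hypothetical stand-in for the library's hook
    def z(self): ...              # sorts after "make_instance"


# dir() returns attribute names in alphabetical order
names = [n for n in dir(Schema) if not n.startswith("_")]
print(names)  # ['a', 'make_instance', 'z']
```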
open
2023-12-06T02:51:09Z
2023-12-06T02:57:47Z
https://github.com/marshmallow-code/flask-marshmallow/issues/276
[]
LionSniper
0
hankcs/HanLP
nlp
1,728
TupleTrieDict的len不一致
**Describe the bug**
Merging data into a TupleTrieDict before initialization vs. updating it after initialization gives inconsistent `len(TupleTrieDict)` results.

**Code to reproduce the issue**
Adding data in batches yields 643142 keys (the dict1 approach), while merging before initialization yields 632400 keys (the dict2 approach). At runtime, the dict1 approach saves time. Is this behavior of TupleTrieDict a feature or a bug?

```python
from hanlp_trie.dictionary import TupleTrieDict

dict1 = TupleTrieDict({"aa": 1})
print(len(dict1), dict1["aa"])  # prints: 1 1
dict1["aa"] += 2
print(len(dict1), dict1["aa"])  # prints: 2 3

dict2 = TupleTrieDict({"aa": 4})
print(len(dict2), dict2["aa"])  # prints: 1 4
```

**Describe the current behavior**
A clear and concise description of what happened.

**Expected behavior**
A clear and concise description of what you expected to happen.

**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version: 3.9.7
- HanLP version: 2.1.0b27

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

* [x] I've completed this form and searched the web for solutions.
closed
2022-04-30T12:58:44Z
2022-04-30T19:48:30Z
https://github.com/hankcs/HanLP/issues/1728
[ "bug" ]
weblearngit
1
jupyter-incubator/sparkmagic
jupyter
478
Question: Using visualization widget in a Python environment
When using `sparkmagic` in Jupyter, it generates this nice interactive visualization of a dataframe (full example at https://github.com/jupyter-incubator/sparkmagic/blob/master/examples/Pyspark%20Kernel.ipynb). How can I achieve the same visualization controls in a normal Python notebook with pandas dataframe objects, without any Spark code?

![jupyter widget visualization](https://i.stack.imgur.com/8VzJX.png)

No matter what I install, the dataframe, when viewed in a cell, shows up only as a table. I asked the same at https://stackoverflow.com/questions/52234432/interactive-multi-chart-widget-in-jupyter-notebook-for-pandas-dataframe but no one answered. Thank you.
closed
2018-09-08T16:58:59Z
2018-09-08T17:53:00Z
https://github.com/jupyter-incubator/sparkmagic/issues/478
[]
Nithanaroy
1
unionai-oss/pandera
pandas
1,018
Dependency error with typing-extensions
**Describe the bug** As of pandera 0.13.4 the module /pandera/schemas.py imports the class Self from typing_extensions. The Self class was introduced in typing-extensions 4.0.0; however, the pandera dependency is specified as "typing_extensions >= 3.7.4.3 ; python_version<'3.8'" in setup.py. Having a version of typing-extensions lower than 4.0.0 results in an error when importing pandera. - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandera. - [x] (optional) I have confirmed this bug exists on the master branch of pandera. #### Code Sample, a copy-pastable example ```python # import pandera ``` #### Result ```/usr/local/lib/python3.9/site-packages/pandera/pandas_accessor.py:7: in <module> from . import schemas /usr/local/lib/python3.9/site-packages/pandera/schemas.py:51: in <module> from typing_extensions import Self E ImportError: cannot import name 'Self' from 'typing_extensions' (/usr/local/lib/python3.9/site-packages/typing_extensions.py) ``` #### Expected behavior Import should work with no error
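Until the pinned dependency is fixed, a version-tolerant import is a common workaround; this is a generic sketch of the pattern (not pandera's actual code):

```python
# Fallback import pattern: use typing_extensions.Self when available
# (typing_extensions >= 4.0.0), otherwise degrade to a TypeVar stand-in.
try:
    from typing_extensions import Self  # added in typing_extensions 4.0.0
except ImportError:
    from typing import TypeVar

    Self = TypeVar("Self")  # rough substitute so annotations still import

print(Self is not None)  # True on either import path
```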
open
2022-11-14T09:28:02Z
2022-11-17T16:12:26Z
https://github.com/unionai-oss/pandera/issues/1018
[ "bug" ]
noahhansson
3
mage-ai/mage-ai
data-science
4,765
Integration of web scrapers with mage-ai
I need some direction/pointers on how to integrate streaming scrapers/scrapy spiders with mage-ai
open
2024-03-15T13:53:28Z
2024-03-18T18:27:27Z
https://github.com/mage-ai/mage-ai/issues/4765
[ "enhancement" ]
dreamspag
0
electricitymaps/electricitymaps-contrib
data-visualization
7,684
Change `snapshot.assert_match(value)` to `assert snapshot == value`
## Description With #7672 we changed which snapshot testing library we use, which means we no longer have to serialize the data manually. We also changed to a pure PyTest style for testing, so we should do both for the last remaining tests. Turn all instances of this: ```python prices = fetch_price(target_datetime=historical_datetime, session=session) snapshot.assert_match( [ { "datetime": price["datetime"].isoformat(), "zoneKey": price["zoneKey"], "currency": price["currency"], "price": price["price"], "source": price["source"], "sourceType": price["sourceType"].value, } for price in prices ] ) ``` Into this: ```python assert snapshot == fetch_price(target_datetime=historical_datetime, session=session) ```
closed
2025-01-05T18:27:02Z
2025-01-12T12:39:26Z
https://github.com/electricitymaps/electricitymaps-contrib/issues/7684
[ "help wanted", "good first issue", "python" ]
VIKTORVAV99
2
stanfordnlp/stanza
nlp
1,265
Indonesian model: wrong md5
Hello, I cannot currently use the Indonesian model due to an md5 mismatch on the /stanza_resources/id/pretrain/conll17.pt file. The file available on Hugging Face has a wrong md5. The error is: `ValueError: md5 for /stanza_resources/id/pretrain/conll17.pt is 9d50fc3cf8267befb1fc25c0350f4589, expected 644c82e3c1f229e4b1a2133b5e55cb0d`
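To double-check which side is wrong, you can hash the downloaded file yourself; a small stdlib-only helper (the path in the comment is just the one from the error message):

```python
import hashlib


def file_md5(path, chunk_size=1 << 20):
    """Compute the md5 of a file in chunks so large model files fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()


# e.g. file_md5("/stanza_resources/id/pretrain/conll17.pt")
```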
closed
2023-07-25T13:38:58Z
2023-09-09T20:52:36Z
https://github.com/stanfordnlp/stanza/issues/1265
[ "bug" ]
rahonalab
1
zama-ai/concrete-ml
scikit-learn
1,003
GPU is not working when running the ResNet18 example
## Summary I am trying to run the ResNet18 inference with a GPU. First, I followed [the GPU acceleration guide](https://docs.zama.ai/concrete-ml/guides/using_gpu) to set up the running environment, and the checking process passed successfully. <img width="580" alt="Image" src="https://github.com/user-attachments/assets/378ccb6c-b0d7-49b6-8852-bc535f3e70f9" /> However, when I actually run the ResNet18 example, I do not see the program utilize any GPU resources for FHE computations. My running command is ```python3 run_resnet18_fhe.py --use_gpu --run_fhe```. ## Description - versions affected: 1.8.0 - python version: 3.8.20 - config (optional: HW, OS): Ubuntu 22.04 - workaround (optional): if you’ve a way to workaround the issue - proposed fix (optional): if you’ve a way to fix the issue Step by step procedure someone should follow to trigger the bug: <details><summary>minimal POC to trigger the bug</summary> <p> ```python print("Minimal POC to reproduce the bug") ``` </p> </details>
open
2025-01-21T09:12:30Z
2025-03-04T15:01:05Z
https://github.com/zama-ai/concrete-ml/issues/1003
[ "bug" ]
whcjimmy
5
hbldh/bleak
asyncio
817
bleak giving incorrect values in android
* bleak version:0.14.2 * Python version:3.9 * Operating System:android * BlueZ version (`bluetoothctl -v`) in case of Linux: ### Description Trying to run the same code on windows and android both works perfectly in windows but in android it seems to be giving incorrect values more like only 0x01 only. Does anyone know what to do? ### What I Did ```python def scan(self): loop = asyncio.get_event_loop() loop.run_until_complete(self.begin_scan()) async def begin_scan(self): a = [] scanned_devices = await bleak.BleakScanner.discover(1) if len(scanned_devices) == 0: print("NO DEVICES") scanned_devices.sort(key=lambda device: -device.rssi) for device in scanned_devices: device1 = f"{device.name}" print(device1) if device1 == "Bluno": try: async with bleak.BleakClient(device) as client: paired = client.pair(protection_level=2) print(f"Paired: {paired}") COLOR_CHARACTERISTIC = "0000dfb1-0000-1000-8000-00805f9b34fb" uu = '1' write_value = bytearray([int(uu)]) await client.write_gatt_char(COLOR_CHARACTERISTIC, write_value) await asyncio.sleep(6.0) await asyncio.sleep(1.0) val1 = await client.read_gatt_char(COLOR_CHARACTERISTIC, use_cached=True) print(val1, 'val1') except: print("error1") asyncio.sleep(10) else: print('Device Not Found') ```
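One thing worth ruling out before blaming the Android backend (an observation about the snippet above, not necessarily the root cause): `client.pair(...)` and the final `asyncio.sleep(10)` are never awaited, so they create coroutine objects without ever running. A minimal stdlib demonstration of the pattern:

```python
import asyncio


async def pair(protection_level=2):
    """Stand-in for an async API call such as BleakClient.pair."""
    return True


async def main():
    not_awaited = pair()     # bug pattern: a coroutine object; pairing never runs
    awaited = await pair()   # correct: actually runs and returns True
    is_coro = asyncio.iscoroutine(not_awaited)
    not_awaited.close()      # silence the "coroutine was never awaited" warning
    return is_coro, awaited


print(asyncio.run(main()))  # (True, True)
```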
open
2022-04-29T18:17:49Z
2022-05-04T06:50:14Z
https://github.com/hbldh/bleak/issues/817
[ "Backend: Android" ]
ps2229
5
mars-project/mars
scikit-learn
2,865
[BUG] Adding datetime series and `datetime.timedelta` returns object dtype
**Describe the bug** Adding datetime series and `datetime.timedelta` returns datetime dtype in pandas. But in Mars, an object series is returned instead. **To Reproduce** ```python >>> import mars.dataframe as md >>> import datetime >>> dt_ser = md.to_datetime(md.Series(['2022-03-18', '2022-03-19', '2022-03-20'])) + datetime.timedelta(days=1) >>> print(dt_ser.dtype) object ```
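For reference, the equivalent pandas operation keeps the datetime dtype (assuming a default pandas install):

```python
import datetime

import pandas as pd

# Same operation as the Mars example, but in plain pandas.
ser = pd.to_datetime(pd.Series(["2022-03-18", "2022-03-19", "2022-03-20"]))
shifted = ser + datetime.timedelta(days=1)
print(shifted.dtype)  # datetime64[ns]
```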
closed
2022-03-24T11:22:30Z
2022-03-27T23:53:05Z
https://github.com/mars-project/mars/issues/2865
[ "type: bug", "mod: dataframe" ]
wjsi
0
ivy-llc/ivy
pytorch
28,217
Fix Ivy Failing Test: paddle - shape.shape__bool__
closed
2024-02-07T18:30:23Z
2024-02-09T09:27:04Z
https://github.com/ivy-llc/ivy/issues/28217
[ "Sub Task" ]
fnhirwa
0
Avaiga/taipy
data-visualization
1,531
[DEPENDENCY] update react-jsx-parser to 2.0
### What would you like to share or ask? react-jsx-parser has reached 2.0.0 We need to update our dependencies ### Code of Conduct - [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [X] I am willing to work on this issue (optional)
closed
2024-07-16T19:44:54Z
2024-07-17T14:44:44Z
https://github.com/Avaiga/taipy/issues/1531
[ "🟥 Priority: Critical", "📈 Improvement", "🔒 Staff only", "GUI: Front-End" ]
FredLL-Avaiga
0
zappa/Zappa
django
641
[Migrated] [Feature] Lambda env as prefix instead of suffix
Originally from: https://github.com/Miserlou/Zappa/issues/1622 by [claygorman](https://github.com/claygorman) I am wondering if there is any way to make the deployment process prepend the env name instead of adding as suffix?
closed
2021-02-20T12:27:02Z
2024-04-13T17:36:14Z
https://github.com/zappa/Zappa/issues/641
[ "no-activity", "auto-closed" ]
jneves
2
randyzwitch/streamlit-folium
streamlit
123
No radius data for circles in output
related to this: https://github.com/Leaflet/Leaflet.draw/issues/390 when drawing a circle, the data returned is as a point. ``` "all_drawings": [ 0: { "type": "Feature" "properties": {} "geometry": { "type": "Point" "coordinates": [ 0: -104.853516 1: 45.261355 ] } } ] ``` Need to add a workaround to the frontend code to capture circle radius as a property.
closed
2023-05-24T08:17:29Z
2023-06-05T12:47:42Z
https://github.com/randyzwitch/streamlit-folium/issues/123
[]
xy73
11
Miserlou/Zappa
django
1,945
Cannot use Custom Domain Names with ACM in us-east-2
## Expected Behavior Was trying to zappa certify with ACM for a wildcard cert in us-east-2 but noticed instructions mention us-east-1 and I get an error. Not sure why boto3 for creating the domain can't respect the aws_region in the zappa_settings.json like the rest of the commands do. I was able to do the DNS juggling myself to get it running through custom domain names myself so it's not an AWS limitation ## Actual Behavior On zappa certify get this error `botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the CreateDomainName operation: Invalid certificate ARN: arn:aws:acm:us-east-2:...:certificate/.... Certificate must be in 'us-east-1'.` ## Steps to Reproduce 1. Setup lambdas and API gateways in us-east-2 2. Use a wildcard ACM cert from us-east-2 3. Get error on zappa certify ## Your Environment Python 3.6.9 ``` # # This file is autogenerated by pip-compile # To update, run: # # pip-compile dev-requirements.in # appdirs==1.4.3 # via black argcomplete==1.9.3 # via zappa aspy.yaml==1.3.0 # via pre-commit atomicwrites==1.3.0 # via pytest attrs==19.2.0 # via black, pytest black==19.3b0 # via flake8-black boto3==1.9.253 # via kappa, zappa botocore==1.12.253 # via boto3, s3transfer, zappa certifi==2019.9.11 # via requests cfgv==2.0.1 # via pre-commit cfn-flip==1.2.1 # via troposphere chardet==3.0.4 # via requests click==7.0 # via black, cfn-flip, kappa, pip-tools, pur docopt==0.6.2 # via pipreqs docutils==0.15.2 # via botocore, zappa durationpy==0.5 # via zappa entrypoints==0.3 # via flake8 flake8-black==0.1.1 flake8==3.7.8 # via flake8-black future==0.16.0 # via zappa hjson==3.0.1 # via zappa identify==1.4.7 # via pre-commit idna==2.8 # via requests importlib-metadata==0.23 # via pluggy, pre-commit, pytest importlib-resources==1.0.2 # via pre-commit jmespath==0.9.3 # via boto3, botocore, zappa kappa==0.6.0 # via zappa lambda-packages==0.20.0 # via zappa mccabe==0.6.1 # via flake8 more-itertools==7.2.0 # via pytest, zipp 
nodeenv==1.3.3 # via pre-commit packaging==19.2 # via pytest, pytest-reqs pip-api==0.0.12 # via pytest-reqs pip-tools==4.2.0 pipdeptree==0.13.2 pipreqs==0.4.9 placebo==0.9.0 # via kappa pluggy==0.13.0 # via pytest pre-commit==1.18.3 pur==5.2.2 py==1.8.0 # via pytest pycodestyle==2.5.0 # via flake8 pyflakes==2.1.1 # via flake8 pyparsing==2.4.2 # via packaging pytest-reqs==0.2.1 pytest==5.2.1 # via pytest-reqs python-dateutil==2.6.1 # via botocore, zappa python-slugify==1.2.4 # via zappa pyyaml==5.1.2 # via aspy.yaml, cfn-flip, kappa, pre-commit, zappa requests==2.22.0 # via yarg, zappa s3transfer==0.2.1 # via boto3 six==1.12.0 # via cfgv, cfn-flip, packaging, pip-tools, pre-commit, python-dateutil, zappa toml==0.10.0 # via black, pre-commit, zappa tqdm==4.19.1 # via zappa troposphere==2.5.2 # via zappa unidecode==1.1.1 # via python-slugify urllib3==1.25.6 # via botocore, requests virtualenv==16.7.5 # via pre-commit wcwidth==0.1.7 # via pytest werkzeug==0.16.0 # via zappa wheel==0.33.6 # via zappa wsgi-request-logger==0.4.6 # via zappa yarg==0.1.9 # via pipreqs zappa==0.48.2 zipp==0.6.0 # via importlib-metadata # The following packages are considered to be unsafe in a requirements file: # pip==19.3.1 # via pip-api, pipdeptree, zappa ``` ``` { "dev": { "app_function": "sample.app.app", "aws_region": "us-east-2", "apigateway_description": "sample api gateway", "profile_name": "default", "project_name": "sample", "lamdba_description": "sample lambda", "runtime": "python3.6", "s3_bucket": "zappa-...", "keep_warm": false, "certificate_arn": "arn:aws:acm:us-east-2:...:certificate/...", "domain": "sample-dev.mydomain.com", "endpoint_configuration": ["REGIONAL"], "environment_variables": { "DEBUG":"True", "TESTING":"True", "FLASK_ENV":"development", "FLASK_DEBUG":"1", "PYTHONUNBUFFERED":"1" } } } ``` ## Ideas Not sure if related to regional domain names #1465 not being implemented yet (PR with fix was over a year ago) though unclear how that differs from 
endpoint_configuration: ["REGIONAL"]
closed
2019-10-21T04:19:23Z
2021-08-23T23:28:04Z
https://github.com/Miserlou/Zappa/issues/1945
[]
alecl
2
SYSTRAN/faster-whisper
deep-learning
870
Local or remote?
Hi, I thought faster-whisper ran locally, but there seems to be a connection to huggingface.co on every request. Is this to convert to text, or is it to check whether the model has been downloaded? Is there a way to remain totally local?
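The transcription itself runs locally; the traffic is most likely huggingface_hub checking for model updates. A sketch of forcing fully offline operation (the environment variables are huggingface_hub/transformers conventions; the `local_files_only` argument in the comment is an assumption about the model-loading call, not verified here):

```python
import os

# Tell the Hugging Face tooling not to touch the network at all.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# With the model already downloaded, point at the local directory instead of a
# model id, e.g.:
# model = WhisperModel("/path/to/downloaded/model", local_files_only=True)
print(os.environ["HF_HUB_OFFLINE"])  # 1
```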
closed
2024-06-05T00:36:31Z
2024-06-17T07:02:45Z
https://github.com/SYSTRAN/faster-whisper/issues/870
[]
mike99mac
1
scikit-image/scikit-image
computer-vision
7,222
Applying iRadon then Radon does not return array to original shape
### Description: Would expect applying the functions `iradon` and then `radon` would return an array to its original shape. However, with the particular shape of (400,168), an array of shape (399, 168) is returned. This bug does not occur for arrays of shape (n, 168) for n != 400 (as far as observed so far) ### Way to reproduce: ``` from skimage.transform import radon, iradon import numpy as np sino = np.zeros((400, 168)) azi_angles = np.linspace(0.0, 180.0, 168) bp_image = iradon(sino, filter_name=None, circle=False) new_sino = radon(bp_image, azi_angles, circle=False) assert new_sino.shape == sino.shape ``` ### Version information: ```Shell 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)] Windows-10-10.0.19045-SP0 scikit-image version: 0.20.0 numpy version: 1.26.0 ```
closed
2023-10-25T12:36:46Z
2023-11-21T20:27:21Z
https://github.com/scikit-image/scikit-image/issues/7222
[ ":bug: Bug" ]
GeorgeWebber
6
SciTools/cartopy
matplotlib
1,552
pip install of cartopy 0.18.0 fails when installing numpy at the same time
### Description I am provisioning a docker image using pip. In a single pip command, I installed a number of packages, including cartopy and numpy. This worked in versions prior to 0.18.0, and no longer works with 0.18.0. #### Code to reproduce In a docker image with vanilla python3 install and no pip packages installed, run ``` pip3 install --upgrade pip && pip3 install --no-cache-dir cartopy==0.18.0 numpy ``` #### Traceback ``` ERROR: Command errored out with exit status 1: command: /usr/local/pyenv/versions/3.7.6/bin/python3.7 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-cdx7ek5c/cartopy/setup.py'"'"'; __file__='"'" '/tmp/pip-install-cdx7ek5c/cartopy/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(cod e, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-377te2k_ cwd: /tmp/pip-install-cdx7ek5c/cartopy/ Complete output (12 lines): Traceback (most recent call last): File "/tmp/pip-install-cdx7ek5c/cartopy/setup.py", line 43, in <module> import numpy as np ModuleNotFoundError: No module named 'numpy' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-cdx7ek5c/cartopy/setup.py", line 45, in <module> raise ImportError('NumPy 1.10+ is required to install cartopy.') ImportError: NumPy 1.10+ is required to install cartopy. ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ``` <details> <summary>Full environment definition</summary> <!-- fill in the following information as appropriate --> ### Operating system Ubuntu 18.04 Python 3.7.6 installed via pyenv ### Cartopy version 0.18.0 </details>
closed
2020-05-06T14:42:57Z
2020-12-18T15:57:17Z
https://github.com/SciTools/cartopy/issues/1552
[]
mc-allen
15
jina-ai/serve
deep-learning
6,095
Release Note
# Release Note This release contains 1 bug fix. ## 🐞 Bug Fixes ### Add and use POST endpoint for streaming (#6093) HTTP streaming when using a `GET` endpoint only allows Documents whose schemas contain only `str`, `int`, and `float` fields. By adding a `POST` endpoint and modifying the Jina Client to use it, it is now possible to send Documents with complex nested document schemas over streaming HTTP. ## 🤟 Contributors We would like to thank all contributors to this release: * Peter Willemsen (@peterwilli ) * Narek Amirbekian (@NarekA )
closed
2023-10-25T08:29:49Z
2023-10-25T12:06:18Z
https://github.com/jina-ai/serve/issues/6095
[]
JoanFM
1
dmlc/gluon-cv
computer-vision
1,450
Horovod stalled ranks with Faster RCNN
Hi, I'm attempting to run GluonCV with Horovod + Faster-RCNN. I'm using the latest AWS DLAMI (Ubuntu) with the `mxnet_latest_p37` conda kernel on an `ml.p3.8xlarge` instance. I installed gluoncv via `git clone --recursive -b v0.8.0 https://github.com/dmlc/gluon-cv.git; cd gluon-cv; python setup.py install --with-cython` I also prepared coco data with gluon's provided coco script. I tried running the following: `horovodrun -np 4 -H localhost:4 python ./gluon-cv/scripts/detection/faster_rcnn/train_faster_rcnn.py --dataset coco -j 4 --log-interval 10 --horovod --batch-size 8 --use-fpn --lr 0.02 --lr-warmup 100` But after the 100 iterations of warmup, Horovod stalls. If I leave out this parameter it instead stalls at iteration 1000. Here's the output: > [1,0]<stderr>:INFO:root:[Epoch 0 Iteration 100] Set learning rate to 0.02 > [1,0]<stderr>:[2020-09-18 16:17:16.259193: W horovod/common/stall_inspector.cc:105] One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock. > [1,0]<stderr>:Stalled ranks: > [1,0]<stderr>:2: [horovod_allreduce.P2_conv1_bias, horovod_allreduce.P2_conv1_weight, horovod_allreduce.P2_conv_lat_bias, horovod_allreduce.P2_conv_lat_weight, horovod_allreduce.P3_conv1_bias, horovod_allreduce.P3_conv1_weight ...] > [1,0]<stderr>:3: [horovod_allreduce.P2_conv1_bias, horovod_allreduce.P2_conv1_weight, horovod_allreduce.P2_conv_lat_bias, horovod_allreduce.P2_conv_lat_weight, horovod_allreduce.P3_conv1_bias, horovod_allreduce.P3_conv1_weight ...] This error then continues to repeat.
closed
2020-09-18T16:21:32Z
2020-09-18T18:13:21Z
https://github.com/dmlc/gluon-cv/issues/1450
[]
austinmw
0
microsoft/unilm
nlp
1,658
BEATS reproducibility
**Describe the bug** I am using BEATs as a feature extractor. and using BEATs_iter3.pt .. however passing the same input, I got a different output every time.. I wonder what could be the cause . Since the model is pretrained and frozen, for the same input, the output should be the same. It would be great if you could help me understand it better. ``` tensor([[ 1.9269, 1.4873, 0.9007, ..., 0.8227, -0.8546, 0.7941]]) tensor([[[ 0.4125, 0.1860, 0.2770, ..., 0.0746, 0.0759, -0.1040], [-0.1389, -0.0395, -0.1223, ..., -0.1694, -0.2940, 0.0465], [ 0.0531, -0.0998, 0.4792, ..., -0.5941, -0.4111, 0.0983], ..., [ 0.7151, 0.0939, 0.0118, ..., 0.6868, -0.3090, 0.2154], [ 0.2418, 0.2485, -0.0050, ..., 0.5315, -0.0149, -0.1740], [ 0.3563, 0.4537, 0.0398, ..., 0.5003, 0.0146, -0.0805]]], grad_fn=<TransposeBackward0>) ``` ``` tensor([[ 1.9269, 1.4873, 0.9007, ..., 0.8227, -0.8546, 0.7941]]) tensor([[[ 0.1959, 0.0528, 0.4861, ..., 0.0678, -0.1611, -0.0386], [-0.1310, -0.3422, 0.3333, ..., -0.0915, -0.2356, -0.0009], [ 0.2121, 0.0532, 0.4134, ..., -0.6164, -0.5548, -0.0760], ..., [ 0.7319, -0.0581, 0.1427, ..., 0.5444, -0.5717, 0.0932], [ 0.3550, 0.1968, 0.1670, ..., 0.4279, -0.4536, -0.1726], [ 0.3206, 0.2454, 0.2433, ..., 0.6233, -0.3299, -0.2126]]], ``` The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. 2. 3. **Expected behavior** A clear and concise description of what you expected to happen. - Platform: - Python version: - PyTorch version (GPU?):
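A likely cause (an assumption, not confirmed against the BEATs code) is that the model is still in training mode, so dropout randomizes activations on every forward pass; calling `model.eval()` before extraction usually makes inference deterministic. A toy stdlib illustration of the train/eval difference:

```python
import random


class TinyDropout:
    """Toy stand-in for a dropout layer: stochastic in training mode,
    the identity in eval mode."""

    def __init__(self, p=0.5):
        self.p = p
        self.training = True

    def eval(self):
        self.training = False

    def __call__(self, xs):
        if not self.training:
            return list(xs)  # deterministic at inference time
        return [0.0 if random.random() < self.p else x / (1 - self.p) for x in xs]


layer = TinyDropout()
layer.eval()
print(layer([1.0, 2.0]))  # [1.0, 2.0] -- identical on every call
```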
open
2024-11-20T21:25:56Z
2024-11-20T21:28:12Z
https://github.com/microsoft/unilm/issues/1658
[]
poonehmousavi
0
noirbizarre/flask-restplus
flask
371
How do i show documentation to only authenticated users?
Looking at the API methods, there are instances where we don't want to expose parts of the internal API to the public. Is there a way to hide the documentation and make it available only to specific users/groups?
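One generic approach is to wrap the documentation view so only authenticated users can reach it. A framework-agnostic sketch (the `is_authenticated` callable and the 403 tuple are placeholders for whatever your app actually uses):

```python
from functools import wraps


def docs_require_auth(is_authenticated):
    """Gate a docs view behind an authentication check you supply
    (e.g. a callable inspecting the session or request headers)."""
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            if not is_authenticated():
                return ("Forbidden", 403)
            return view(*args, **kwargs)
        return wrapped
    return decorator


# Usage sketch: wrap whatever function renders the swagger docs.
protected_docs = docs_require_auth(lambda: False)(lambda: ("docs", 200))
print(protected_docs())  # ('Forbidden', 403)
```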
open
2017-12-27T15:47:47Z
2019-03-08T15:00:53Z
https://github.com/noirbizarre/flask-restplus/issues/371
[]
afidegnum
1
jupyterlab/jupyter-ai
jupyter
716
Chat memory is reset when selecting new LLM
<!-- Welcome! Thank you for contributing. These HTML comments will not render in the issue. Before creating a new issue: * Search for relevant issues * Follow the issue reporting guidelines: https://jupyterlab.readthedocs.io/en/latest/getting_started/issue.html --> ## Description Chat memory is reset when selecting new LLM provider. https://github.com/jupyterlab/jupyter-ai/blob/3bfce328e3b6f730d05faa68ca9b6d6434b5fdec/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py#L34-L36 This is due to `self.memory` being reassigned in `DefaultChatHandler.create_llm_chain` which is triggered when LLM provider is changed. ## Expected behavior <!--Describe what you expected to happen--> I want to be able to change LLM provider while continuing a conversation. Separate reset command can be provided to allow user to reset history if desired (https://github.com/jupyterlab/jupyter-ai/issues/616). ## Suggested solution Fix would be to remove L34-L36 above which would reuse the existing `self.memory` object instantiated in L21 in `DefaultChatHandler.__init__`. Also this current approach makes that instantiation in `DefaultChatHandler.__init__` useless as it will always be overriden in `DefaultChatHandler.create_llm_chain` anyway.
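The suggested fix can be sketched like this (a simplified stand-in, not the actual jupyter-ai classes):

```python
class DefaultChatHandler:
    """Simplified model of the fix: memory is created once in __init__ and
    reused when the LLM provider changes."""

    def __init__(self):
        self.memory = []  # stands in for the conversation memory object

    def create_llm_chain(self, provider):
        # Reuse self.memory instead of reassigning it, so switching providers
        # keeps the conversation history.
        self.llm_chain = (provider, self.memory)


handler = DefaultChatHandler()
first = handler.memory
handler.memory.append("hello")
handler.create_llm_chain("provider-b")
print(handler.memory is first, handler.memory)  # True ['hello']
```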
open
2024-04-05T09:25:10Z
2024-04-05T09:26:01Z
https://github.com/jupyterlab/jupyter-ai/issues/716
[ "bug" ]
michaelchia
0
nvbn/thefuck
python
502
-bash: usage:: command not found
I'm sorry if this is a repeat issue. Couldn't find anything previously mentioned about this. `thefuck` seems to be working when I manually provide an argument, but when I try `fuck`, I get this error: ``` -bash: usage:: command not found ``` I added `eval "$(thefuck --alias)"` to my `.bash_profile` When I run `alias`: ``` alias fuck='TF_CMD=$(TF_ALIAS=fuck PYTHONIOENCODING=utf-8 TF_SHELL_ALIASES=$(alias) thefuck $(fc -ln -1)) && eval $TF_CMD && history -s $TF_CMD' ``` More info: `The Fuck 3.9 using Python 2.7.10` `pip 8.1.1 from /usr/local/lib/python2.7/site-packages (python 2.7)` Thoughts?
open
2016-05-03T17:18:39Z
2019-11-08T19:14:28Z
https://github.com/nvbn/thefuck/issues/502
[]
MichaelArestad
2
mljar/mljar-supervised
scikit-learn
148
Make AutoML scikit-learn compatible
I think it would be interesting to add a feature to export the best model with a scikit-learn wrapper. This would allow integrating the best AutoML model into a scikit-learn workflow. I think most of the models that AutoML uses are already from scikit-learn, and those that aren't do provide scikit-learn wrappers, so it should be easy to implement. Is there anything that makes this feature 'impossible' to implement?
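A minimal duck-typed sketch of such a wrapper (stdlib only here; a real version would subclass sklearn's BaseEstimator, and the fit/predict method names assume AutoML's interface):

```python
class SklearnStyleWrapper:
    """Wrap an AutoML-like object behind the fit/predict/get_params contract
    that scikit-learn tooling expects."""

    def __init__(self, automl):
        self.automl = automl

    def fit(self, X, y):
        self.automl.fit(X, y)
        return self  # sklearn convention: fit returns self

    def predict(self, X):
        return self.automl.predict(X)

    def get_params(self, deep=True):
        return {"automl": self.automl}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self
```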
open
2020-08-26T03:11:19Z
2021-11-24T12:57:23Z
https://github.com/mljar/mljar-supervised/issues/148
[ "enhancement", "help wanted", "good first issue" ]
diogosilva30
18
geopandas/geopandas
pandas
3,398
BUG: Program exits without any error message at gdf_inside = gdf_polygons[contains_query_point] when checking whether points fall inside polygons read from a shp file
After reading the shp file with geopandas, I check whether the point is in the polygon. When execution reaches gdf_inside = gdf_polygons[contains_query_point], the program exits directly without any error message. What's going on? geopandas Version: 1.0.1 ![87919c6a701be2ba016d8d8aac04380](https://github.com/user-attachments/assets/ea026aeb-8ba1-4425-9d56-ecd5da1a6eee) At a debug breakpoint inspecting contains_query_point, index 109 is True, but gdf_polygons[contains_query_point] exits ![2](https://github.com/user-attachments/assets/5037269d-2ca4-4f58-9735-949bc07da7ec)
open
2024-08-06T03:47:31Z
2024-09-11T11:37:02Z
https://github.com/geopandas/geopandas/issues/3398
[ "bug", "needs info" ]
daviddengxx
1
ymcui/Chinese-BERT-wwm
tensorflow
129
Release of MacBERT pretrained model
As title. Is there any plan to release the [MacBERT](https://arxiv.org/abs/2004.13922) pretrained model?
closed
2020-07-17T10:42:40Z
2020-07-20T01:06:18Z
https://github.com/ymcui/Chinese-BERT-wwm/issues/129
[]
stvhuang
2
polakowo/vectorbt
data-visualization
604
vbt.heatmap plots wrong x labels
In vbt.heatmap, I found that ``` fig = gz_corr.stack().vbt.heatmap( x_level = "z", y_level = "index_code", slider_level = "basic" ) fig.update_layout( height=800, width=1000, ) fig.show() ``` ![image](https://github.com/polakowo/vectorbt/assets/3938751/bc564a0a-b684-45ff-9c08-a3c69615ce27) This looked wrong at first sight. Picking the data out and printing it confirms the bug: ![image](https://github.com/polakowo/vectorbt/assets/3938751/fb7f8758-065d-4875-9d4f-bbd518db18fd) test data: [data.csv](https://github.com/polakowo/vectorbt/files/11702585/data.csv)
open
2023-06-09T05:25:52Z
2023-06-09T05:30:00Z
https://github.com/polakowo/vectorbt/issues/604
[]
eromoe
0
deepfakes/faceswap
machine-learning
781
Loss info not showing in the cloud server
Hi, the loss info below is shown on my local server but not on the cloud server: [06:52:22] [#02277] loss_A: 0.06049, loss_B: 0.08593 The cloud server only displays "saved models" repeatedly, without loss data: 06/30/2019 06:49:54 INFO saved models 06/30/2019 06:52:54 INFO saved models Can you please point me to the file and the line so I can add log info? I tried in the print_loss function but it is not working.
closed
2019-06-30T06:55:04Z
2019-06-30T08:46:05Z
https://github.com/deepfakes/faceswap/issues/781
[]
SasiKiranK
2
erdewit/ib_insync
asyncio
280
.placeOrder(contract, order): multiple stocks at one time?
contract=Stock(symbol='1', exchange='SMART', currency='HKD', primaryExchange='SEHK') .placeOrder(contract, order) Is there any method to place an order for multiple stocks at one time? Say symbol=['1', '2', '3'], 3 stocks at once.
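As far as I know, ib_insync has no batch placeOrder; the usual approach is one call per contract in a loop. A self-contained sketch with the IB client stubbed out so it runs anywhere (real code would use ib_insync's IB, Stock, and MarketOrder):

```python
class StubIB:
    """Stand-in for ib_insync.IB, recording placeOrder calls."""

    def __init__(self):
        self.placed = []

    def placeOrder(self, contract, order):
        trade = (contract, order)
        self.placed.append(trade)
        return trade


def place_for_symbols(ib, symbols, make_contract, order):
    # One placeOrder call per symbol -- the loop is the "batch".
    return [ib.placeOrder(make_contract(sym), order) for sym in symbols]


ib = StubIB()
trades = place_for_symbols(ib, ["1", "2", "3"], lambda s: f"Stock({s})", "order")
print(len(trades))  # 3
```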
closed
2020-07-09T13:47:10Z
2020-07-30T17:49:44Z
https://github.com/erdewit/ib_insync/issues/280
[]
lamhoson
1
GibbsConsulting/django-plotly-dash
plotly
88
Double trailing slashes in urls (2nd sample/demo-two) impede deployment
closed
2018-12-26T06:13:17Z
2018-12-26T21:37:08Z
https://github.com/GibbsConsulting/django-plotly-dash/issues/88
[]
yurman-cc
6
JaidedAI/EasyOCR
machine-learning
1,061
UserWarning: The parameter ‘pretrained’ is deprecated since 0.13 and may be removed in the future, please use ‘weights’ instead.
```python from fastai.vision.all import * path = untar_data(URLs.PETS)/'images' dls = ImageDataLoaders.from_name_func('.', get_image_files(path), valid_pct=0.2, seed=42, label_func=RegexLabeller(pat = r'^([^/]+)_\d+'), item_tfms=Resize(224)) learn = vision_learner(dls, resnet34, metrics=error_rate) if __name__ == '__main__': learn.fine_tune(3) ``` UserWarning: The parameter ‘pretrained’ is deprecated since 0.13 and may be removed in the future, please use ‘weights’ instead. Why does this warning appear?
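The warning comes from torchvision (fastai's vision_learner passes `pretrained=` through to the model constructor), not from your code, and is harmless. A toy sketch of the deprecation-shim pattern torchvision 0.13 introduced (simplified, not torchvision's actual code):

```python
import warnings


def resnet34(pretrained=None, weights=None):
    """Toy shim: translate the legacy `pretrained` flag to `weights`."""
    if pretrained is not None:
        warnings.warn(
            "The parameter 'pretrained' is deprecated since 0.13 and may be "
            "removed in the future, please use 'weights' instead.",
            UserWarning,
        )
        weights = "DEFAULT" if pretrained else None
    return weights


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = resnet34(pretrained=True)
print(result, len(caught))  # DEFAULT 1
```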
open
2023-06-22T15:01:52Z
2023-06-22T15:04:28Z
https://github.com/JaidedAI/EasyOCR/issues/1061
[]
ZeroHzzzz
0
microsoft/MMdnn
tensorflow
402
Pytorch to IR: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'
Platform (like ubuntu 16.04/win10): Win 8.1 Python version: 3.5 Source framework with version (like Tensorflow 1.4.1 with GPU): pytorch 0.4.0 Destination framework with version (like CNTK 2.3 with GPU): caffe Pre-trained model path (webpath or webdisk path): https://mega.nz/#!yR5HGTAK!hHh3gYgShR-knRq-RYEktE5SbljFhHwLTI_U289hIpA Running scripts: mmtoir -f pytorch -d resnet32 -n resnet32_pytorch --inputShape 3,224,224 I get the following error Traceback (most recent call last): File "d:\c_progs\anaconda3\envs\python35\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "d:\c_progs\anaconda3\envs\python35\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\c_progs\Anaconda3\envs\python35\Scripts\mmtoir.exe\__main__.py", line 9, in <module> File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\mmdnn\conversion\_script\convertToIR.py", line 192, in _main ret = _convert(args) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\mmdnn\conversion\_script\convertToIR.py", line 92, in _convert parser = PytorchParser(args.network, inputshape[0]) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\mmdnn\conversion\pytorch\pytorch_parser.py", line 83, in __init__ self.pytorch_graph.build(self.input_shape) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\mmdnn\conversion\pytorch\pytorch_graph.py", line 122, in build trace, output = torch.jit.get_trace_graph(self.model, (dummy_input, )) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\jit\__init__.py", line 255, in get_trace_graph return LegacyTracedModule(f, nderivs=nderivs)(*args, **kwargs) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\nn\modules\module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\jit\__init__.py", line 288, in forward out = self.inner(*trace_inputs) File 
"d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__ result = self._slow_forward(*input, **kwargs) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\nn\modules\module.py", line 479, in _slow_forward result = self.forward(*input, **kwargs) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\nn\modules\container.py", line 91, in forward input = module(input) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__ result = self._slow_forward(*input, **kwargs) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\nn\modules\module.py", line 479, in _slow_forward result = self.forward(*input, **kwargs) File "d:\c_progs\anaconda3\envs\python35\lib\site-packages\torch\nn\modules\conv.py", line 301, in forward self.padding, self.dilation, self.groups) RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight' I have successfully exported the vgg19 pytorch model (from https://github.com/Microsoft/MMdnn/blob/master/mmdnn/models/README.md) to IR Structure using the below command mmtoir -f pytorch -d vgg19 -n imagenet_vgg19.pth --inputShape 3,224,224 so the error is caused by the model I am trying to import. Because I am not that familiar with pytorch I can't figure out how to solve this problem?
closed
2018-09-05T17:30:21Z
2022-09-15T15:59:33Z
https://github.com/microsoft/MMdnn/issues/402
[]
cudawarped
4
CorentinJ/Real-Time-Voice-Cloning
tensorflow
854
make the model work in french
Hello everyone, I have the toolbox, but I don't know how to make it work so that it clones voices in French. Would someone have the solution, please? I'm not a computer expert. Thank you very much.
closed
2021-09-30T06:25:44Z
2021-10-06T18:08:52Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/854
[]
upgrade1m
2
dmlc/gluon-nlp
numpy
1,016
Bug in Bert finetuning on glue
## Description In finetune_classifier.py, the metric is updated each batch and the result is read after each epoch, both in train() and evaluate(). However, there are some wrong settings: 1. We should set average = 'micro' for all the metrics, because we do not need an averaged per-batch score but a single metric over all batches, especially during evaluation. 2. Currently, PearsonCorrelation() doesn't support micro, so we may need to add this feature. ### Error Message (Paste the complete error message, including stack trace.) ## To Reproduce (If you developed your own code, please provide a short script that reproduces the error. For existing examples, please provide link.) ### Steps to reproduce (Paste the commands you ran that produced the error.) 1. 2. ## What have you tried to solve it? 1. 2. ## Environment We recommend using our script for collecting the diagnositc information. Run the following command and paste the outputs below: ``` curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python # paste outputs here ```
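The micro/macro distinction the report is after can be sketched with a simple accuracy metric (stdlib only; real code would configure mxnet's metric classes):

```python
class MicroAccuracy:
    """'Micro' averaging: accumulate raw counts across batches and compute
    one global score -- rather than averaging per-batch scores ('macro')."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, labels):
        self.correct += sum(p == l for p, l in zip(preds, labels))
        self.total += len(labels)

    def get(self):
        return self.correct / self.total


metric = MicroAccuracy()
metric.update([1, 0], [1, 1])   # batch 1: 1/2 correct
metric.update([1], [1])         # batch 2: 1/1 correct
print(metric.get())             # global 2/3, not mean(0.5, 1.0) = 0.75
```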
open
2019-11-20T04:09:30Z
2019-12-13T05:05:21Z
https://github.com/dmlc/gluon-nlp/issues/1016
[ "bug", "release focus" ]
zburning
2
BMW-InnovationLab/BMW-TensorFlow-Training-GUI
rest-api
4
Can I use this project without nvidia gpu?
I have a MacBook with Ubuntu in VirtualBox but my machine does not have a Nvidia GPU is there a way to run this project?
closed
2020-01-14T16:35:27Z
2020-01-16T08:22:35Z
https://github.com/BMW-InnovationLab/BMW-TensorFlow-Training-GUI/issues/4
[]
hemaAI
1
PaddlePaddle/models
computer-vision
5346
PointNet++ 测试运行失败
Base Docker image: paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 1. Modify ext_op/src/make.sh to add -D_GLIBCXX_USE_CXX11_ABI=0 to the g++ compile command: ```bash # make.sh include_dir=$( python -c 'import paddle; print(paddle.sysconfig.get_include())' ) lib_dir=$( python -c 'import paddle; print(paddle.sysconfig.get_lib())' ) echo $include_dir echo $lib_dir OPS='farthest_point_sampling_op gather_point_op group_points_op query_ball_op three_interp_op three_nn_op' for op in ${OPS} do nvcc ${op}.cu -c -o ${op}.cu.o -ccbin cc -DPADDLE_WITH_CUDA -DEIGEN_USE_GPU -DPADDLE_USE_DSO -DPADDLE_WITH_MKLDNN -Xcompiler -fPIC -std=c++11 -Xcompiler -fPIC -w --expt-relaxed-constexpr -O0 -g -DNVCC \ -I ${include_dir}/third_party/ \ -I ${include_dir} done g++ farthest_point_sampling_op.cc farthest_point_sampling_op.cu.o gather_point_op.cc gather_point_op.cu.o group_points_op.cc group_points_op.cu.o query_ball_op.cu.o query_ball_op.cc three_interp_op.cu.o three_interp_op.cc three_nn_op.cu.o three_nn_op.cc -o pointnet_lib.so -DPADDLE_WITH_MKLDNN -shared -fPIC -std=c++11 -O0 -g \ -I ${include_dir}/third_party/ \ -I ${include_dir} \ -L ${lib_dir} \ -L /usr/local/cuda/lib64 -lpaddle_framework -lcudart\ -D_GLIBCXX_USE_CXX11_ABI=0 rm *.cu.o ``` 2. Run make.sh; it compiles successfully. 3. 
Run the test: ```bash export CUDA_VISIBLE_DEVICES=0 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`python -c 'import paddle; print(paddle.sysconfig.get_lib())'` export PYTHONPATH=$PYTHONPATH:`pwd` python tests/test_three_nn_op.py ``` The error messages are as follows: ``` W0916 10:55:57.921620 376 init.cc:216] Warning: PaddlePaddle catches a failure signal, it may not work properly W0916 10:55:57.921661 376 init.cc:218] You could check whether you killed PaddlePaddle thread/process accidentally or report the case to PaddlePaddle W0916 10:55:57.921666 376 init.cc:221] The detail failure signal is: W0916 10:55:57.921671 376 init.cc:224] *** Aborted at 1631789757 (unix time) try "date -d @1631789757" if you are using GNU date *** W0916 10:55:57.923035 376 init.cc:224] PC: @ 0x0 (unknown) W0916 10:55:57.923224 376 init.cc:224] *** SIGFPE (@0x7f1e088ed83b) received by PID 376 (TID 0x7f1e0e650700) from PID 143579195; stack trace: *** W0916 10:55:57.924304 376 init.cc:224] @ 0x7f1e0e22e390 (unknown) W0916 10:55:57.925643 376 init.cc:224] @ 0x7f1e088ed83b std::__detail::_Mod_range_hashing::operator()() W0916 10:55:57.926782 376 init.cc:224] @ 0x7f1e089104b2 std::__detail::_Hash_code_base<>::_M_bucket_index() W0916 10:55:57.927825 376 init.cc:224] @ 0x7f1e0890f5d0 std::_Hashtable<>::_M_bucket_index() W0916 10:55:57.928773 376 init.cc:224] @ 0x7f1e089119bb std::__detail::_Map_base<>::operator[]() W0916 10:55:57.929697 376 init.cc:224] @ 0x7f1e08910992 std::unordered_map<>::operator[]() W0916 10:55:57.930344 376 init.cc:224] @ 0x7f1e0890fe46 _ZN6paddle9framework19RegisterKernelClassINS_8platform9CUDAPlaceEfZNKS0_24OpKernelRegistrarFunctorIS3_Lb0ELm0EINS_9operators33FarthestPointSamplingOpCUDAKernelIfEENS6_IdEEEEclEPKcSB_iEUlRKNS0_16ExecutionContextEE_EEvSB_SB_iT1_ W0916 10:55:57.931105 376 init.cc:224] @ 0x7f1e0890f2e4 paddle::framework::OpKernelRegistrarFunctor<>::operator()() W0916 10:55:57.931761 376 init.cc:224] @ 0x7f1e0890e799 
_ZN6paddle9framework17OpKernelRegistrarINS_8platform9CUDAPlaceEJNS_9operators33FarthestPointSamplingOpCUDAKernelIfEENS5_IdEEEEC2EPKcSA_i W0916 10:55:57.932322 376 init.cc:224] @ 0x7f1e0890af49 __static_initialization_and_destruction_0() W0916 10:55:57.932822 376 init.cc:224] @ 0x7f1e0890af77 _GLOBAL__sub_I_tmpxft_000000df_00000000_5_farthest_point_sampling_op.cudafe1.cpp W0916 10:55:57.933336 376 init.cc:224] @ 0x7f1e0e44a6ca (unknown) W0916 10:55:57.933843 376 init.cc:224] @ 0x7f1e0e44a7db (unknown) W0916 10:55:57.934345 376 init.cc:224] @ 0x7f1e0e44f8f2 (unknown) W0916 10:55:57.934846 376 init.cc:224] @ 0x7f1e0e44a574 (unknown) W0916 10:55:57.935348 376 init.cc:224] @ 0x7f1e0e44edb9 (unknown) W0916 10:55:57.935863 376 init.cc:224] @ 0x7f1e0dc4ff09 (unknown) W0916 10:55:57.936401 376 init.cc:224] @ 0x7f1e0e44a574 (unknown) W0916 10:55:57.936937 376 init.cc:224] @ 0x7f1e0dc50571 (unknown) W0916 10:55:57.937688 376 init.cc:224] @ 0x7f1e0dc4ffa1 dlopen W0916 10:55:57.945061 376 init.cc:224] @ 0x7f1da08da6d3 paddle::platform::dynload::GetOpDsoHandle() W0916 10:55:57.950942 376 init.cc:224] @ 0x7f1d9cfbe71d paddle::framework::LoadOpLib() W0916 10:55:57.953336 376 init.cc:224] @ 0x7f1d9d0239ed _ZZN8pybind1112cpp_function10initializeIRPFvRKSsEvIS3_EINS_4nameENS_5scopeENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNESN_ W0916 10:55:57.955534 376 init.cc:224] @ 0x7f1d9d048b39 pybind11::cpp_function::dispatcher() W0916 10:55:57.955688 376 init.cc:224] @ 0x4bc9ba PyEval_EvalFrameEx W0916 10:55:57.955794 376 init.cc:224] @ 0x4ba036 PyEval_EvalCodeEx W0916 10:55:57.955926 376 init.cc:224] @ 0x4c237b PyEval_EvalFrameEx W0916 10:55:57.956028 376 init.cc:224] @ 0x4ba036 PyEval_EvalCodeEx W0916 10:55:57.956147 376 init.cc:224] @ 0x4b9d26 PyEval_EvalCode W0916 10:55:57.956218 376 init.cc:224] @ 0x4b9c5f PyImport_ExecCodeModuleEx W0916 10:55:57.956341 376 init.cc:224] @ 0x4b2f86 (unknown) W0916 10:55:57.956454 376 init.cc:224] @ 0x4a4d21 (unknown) 
Floating point exception (core dumped) ```
open
2021-09-16T11:35:20Z
2024-02-26T05:08:43Z
https://github.com/PaddlePaddle/models/issues/5346
[]
zjuncd
1
yunjey/pytorch-tutorial
pytorch
35
[Image Captioning] out of memory immediately after training starts?
What size cards are these networks tested and trained on? I just tried running "09 - Image Captioning" and I immediately get errors. I am testing this on a Titan X with 12 GB of memory: ``` Namespace(batch_size=128, caption_path='./data/annotations/captions_train2014.json', crop_size=224, embed_size=256, hidden_size=512, image_dir='./data/resized2014', learning_rate=0.001, log_step=10, model_path='./models/', num_epochs=5, num_layers=1, num_workers=2, save_step=1000, vocab_path='./data/vocab.pkl') loading annotations into memory... Done (t=0.89s) creating index... index created! THCudaCheck FAIL file=/b/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory Traceback (most recent call last): File "train.py", line 119, in <module> main(args) File "train.py", line 66, in main features = encoder(images) File "/root/.venv/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__ result = self.forward(*input, **kwargs) File "/root/pytorch/pytorch-tutorial/tutorials/09 - Image Captioning/model.py", line 26, in forward features = self.resnet(images) File "/root/.venv/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__ result = self.forward(*input, **kwargs) File "/root/.venv/local/lib/python2.7/site-packages/torchvision/models/resnet.py", line 146, in forward x = self.layer3(x) File "/root/.venv/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__ result = self.forward(*input, **kwargs) File "/root/.venv/local/lib/python2.7/site-packages/torch/nn/modules/container.py", line 64, in forward input = module(input) File "/root/.venv/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__ result = self.forward(*input, **kwargs) File "/root/.venv/local/lib/python2.7/site-packages/torchvision/models/resnet.py", line 85, in forward out = self.bn3(out) File "/root/.venv/local/lib/python2.7/site-packages/torch/nn/modules/module.py", 
line 206, in __call__ result = self.forward(*input, **kwargs) File "/root/.venv/local/lib/python2.7/site-packages/torch/nn/modules/batchnorm.py", line 43, in forward self.training, self.momentum, self.eps) File "/root/.venv/local/lib/python2.7/site-packages/torch/nn/functional.py", line 463, in batch_norm return f(input, weight, bias) RuntimeError: cuda runtime error (2) : out of memory at /b/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu:66 Command exited with non-zero status 1 ```
closed
2017-05-23T02:53:19Z
2017-05-28T11:16:03Z
https://github.com/yunjey/pytorch-tutorial/issues/35
[]
jtoy
16
horovod/horovod
machine-learning
2939
how to kill the task that reports missing rank
I use the elastic code to train ImageNet, but tasks on my server often get stuck and repeatedly report "missing rank", without restarting on other healthy hosts. How can I kill the task on the node with the missing rank?
open
2021-05-26T07:08:49Z
2021-05-26T09:01:48Z
https://github.com/horovod/horovod/issues/2939
[ "enhancement" ]
TXacs
1
deeppavlov/DeepPavlov
tensorflow
1476
Performance issue in the definition of CudnnGRU, deeppavlov/models/squad/utils.py(P1)
Hello, I found a performance issue in the definition of `CudnnGRU` in deeppavlov/models/squad/utils.py: [tf.expand_dims](https://github.com/deepmipt/DeepPavlov/blob/d60649aaf26e0fc5f5c9850a76c41701410ec77c/deeppavlov/models/squad/utils.py#L33) is called repeatedly during program execution, reducing efficiency. So I think `init_bw` should be created before the loop. The same issue exists [here](https://github.com/deepmipt/DeepPavlov/blob/d60649aaf26e0fc5f5c9850a76c41701410ec77c/deeppavlov/models/squad/utils.py#L84). Looking forward to your reply. Btw, I would be glad to create a PR to fix it if you are too busy.
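The fix the reporter suggests is the general "hoist loop-invariant work out of the loop" pattern. A minimal stdlib analogue (not TensorFlow code — in the actual report the invariant operation is `tf.expand_dims`, which adds a new graph node on every call):

```python
# Sketch: hoisting a loop-invariant computation out of a loop.
# Stdlib stand-in for the reported issue, where tf.expand_dims kept
# re-creating the same value on every iteration.

def build_expensive(value):
    # Stand-in for an invariant setup step (e.g. creating init_bw once).
    return [value] * 1000

def slow(inputs):
    out = []
    for x in inputs:
        init_bw = build_expensive(0)   # rebuilt every iteration: wasteful
        out.append(x + init_bw[0])
    return out

def fast(inputs):
    init_bw = build_expensive(0)       # built once, before the loop
    return [x + init_bw[0] for x in inputs]

assert slow(range(5)) == fast(range(5))  # same result, much less work
```

The result is identical; only the redundant per-iteration construction disappears — which in graph-mode TensorFlow also keeps the graph from growing on every call.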
closed
2021-08-20T12:38:26Z
2022-06-24T11:11:53Z
https://github.com/deeppavlov/DeepPavlov/issues/1476
[ "bug" ]
DLPerf
2
geopandas/geopandas
pandas
2798
BUG: "CPLE_AppDefinedError: buildEllipsoid: non double value" when trying to read a shapefile using .read_file()
- [x] I have checked that this issue has not already been reported. - [X] I have confirmed this bug exists on the latest version of geopandas. - [ ] (optional) I have confirmed this bug exists on the main branch of geopandas. --- **Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug. #### Code Sample, a copy-pastable example ```python # Your code here import geopandas as gpd df = gpd.read_file("/Users/roy/Downloads/2014_nei_isrm_emissions/onroad.shp") ``` #### Problem description I'm running this code cell in Jupyter Notebook. It resolves with the following error: <img width="1044" alt="image" src="https://user-images.githubusercontent.com/68200979/220196898-bb423a20-8c2d-410e-ad16-0dd8828ae911.png"> I tried looking for this CPLE_AppDefinedError pretty extensively, but it seems the ones that I've come across aren't necessarily related to my issue. I tried passing in different shapefiles, which yields the same error message. I've also been able to verify that the shapefiles look good. Any help would be greatly appreciated! 
#### Expected Output DataFrame should be returned/created #### Output of ``geopandas.show_versions()`` <details> SYSTEM INFO ----------- python : 3.11.0 | packaged by conda-forge | (main, Jan 15 2023, 05:44:48) [Clang 14.0.6 ] executable : /Users/roy/opt/anaconda3/envs/InMAP/bin/python3.11 machine : macOS-12.6-x86_64-i386-64bit GEOS, GDAL, PROJ INFO --------------------- GEOS : 3.11.1 GEOS lib : None GDAL : 3.6.2 GDAL data dir: /Users/roy/opt/anaconda3/envs/InMAP/lib/python3.11/site-packages/fiona/gdal_data PROJ : 9.1.1 PROJ data dir: /Users/roy/opt/anaconda3/envs/InMAP/lib/python3.11/site-packages/pyproj/proj_dir/share/proj PYTHON DEPENDENCIES ------------------- geopandas : 0.12.2 numpy : 1.24.2 pandas : 1.5.3 pyproj : 3.4.1 shapely : 2.0.1 fiona : 1.9.1 geoalchemy2: None geopy : None matplotlib : 3.7.0 mapclassify: 2.5.0 pygeos : None pyogrio : 0.5.1 psycopg2 : None pyarrow : None rtree : 1.0.1 </details>
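One commonly suggested cause of "buildEllipsoid: non double value" — an assumption on my part, not something confirmed in this report — is a process locale whose decimal separator is "," rather than ".", which can make PROJ/GDAL misparse ellipsoid parameters. A stdlib-only workaround sketch:

```python
import locale

# Workaround sketch (assumption: the error stems from a non-"C" numeric
# locale, e.g. one using "," as the decimal separator).
# Force the "C" numeric locale before importing geopandas/fiona.
locale.setlocale(locale.LC_NUMERIC, "C")
assert locale.localeconv()["decimal_point"] == "."

# import geopandas as gpd           # then read the file as usual
# df = gpd.read_file("onroad.shp")
```

If the decimal point was already ".", this hypothesis is ruled out and the cause lies elsewhere.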
closed
2023-02-20T21:00:14Z
2023-02-26T09:19:10Z
https://github.com/geopandas/geopandas/issues/2798
[ "bug", "upstream issue" ]
wang-roy
4
litestar-org/litestar
asyncio
3,654
Enhancement: Provide test client that can handle infinite SSE generators
### Summary Follow-up from https://github.com/orgs/litestar-org/discussions/3547 The Litestar test clients directly call the ASGI app instance. When calling streaming endpoints such as SSE, they will always try to consume the whole stream. If the endpoint serves an infinite generator, the test client gets stuck. This does not happen for the test client's websocket implementation because it "cheats": instead of implementing the actual websocket protocol, it uses an internal queue to transport the messages. The solution for this edge case is to run the app in a subprocess (e.g. via uvicorn) and then use an httpx client to connect to the web app. ### Basic Example _No response_ ### Drawbacks and Impact _No response_ ### Unresolved questions _No response_
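Until such a client exists, one test-side workaround is to consume only a bounded prefix of the stream instead of draining it. Sketched here with a plain asyncio generator — this is the underlying idea, not Litestar's test client API:

```python
import asyncio

# Sketch: read a fixed number of events from an infinite SSE-style
# async generator, then close it explicitly so it doesn't hang.

async def infinite_sse():
    i = 0
    while True:
        yield f"data: {i}\n\n"   # SSE wire format for a data-only event
        i += 1
        await asyncio.sleep(0)   # yield control, as a real stream would

async def take(agen, n):
    # Collect the first n items, then close the generator so its
    # cleanup runs instead of the consumer blocking forever.
    out = []
    async for item in agen:
        out.append(item)
        if len(out) == n:
            break
    await agen.aclose()
    return out

events = asyncio.run(take(infinite_sse(), 3))
print(events)  # ['data: 0\n\n', 'data: 1\n\n', 'data: 2\n\n']
```

The proposed subprocess-based client would make this unnecessary, since a real HTTP client can simply drop the connection.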
closed
2024-08-03T12:15:06Z
2025-03-20T15:54:51Z
https://github.com/litestar-org/litestar/issues/3654
[ "Enhancement" ]
aranvir
1
Nemo2011/bilibili-api
api
19
Bug in `Get article content and convert to Markdown`
**Python version:** 3.x **Environment:** Windows ```python from bilibili_api import article, sync async def main(): ar = article.Article(14693531) await ar.fetch_content() with open('article.md', 'w', encoding='utf8') as f: f.write(ar.markdown()) sync(main()) ``` --- For example, when fetching article `14693531` (URL: https://www.bilibili.com/read/cv14693531/?from=readlist ), the quoted content and the text between quotes are lost. When fetching other articles, part of the body text that follows a quote is also converted into a quote.
closed
2022-07-06T09:46:32Z
2022-07-20T10:17:39Z
https://github.com/Nemo2011/bilibili-api/issues/19
[]
gyhyfj
1
jofpin/trape
flask
174
Goo.gl no longer in service
Perhaps swap it for bitly.com or tinyurl.com service? Or make the URL shortener an option?
open
2019-07-23T06:01:38Z
2019-07-23T06:01:38Z
https://github.com/jofpin/trape/issues/174
[]
hwac121
0
newpanjing/simpleui
django
13
Content loading issue
If I open a tab and then open another tab before the first one has finished loading, the original tab's request is interrupted; only the newly opened tab loads completely.
closed
2019-03-14T09:06:43Z
2019-04-16T09:30:43Z
https://github.com/newpanjing/simpleui/issues/13
[]
duxiaoyang06
0
jina-ai/clip-as-service
pytorch
566
Tensorflow 2.0 AttributeError: module 'tensorflow' has no attribute 'logging'
**Prerequisites** > Please fill in by replacing `[ ]` with `[x]`. * [x ] Are you running the latest `bert-as-service`? * [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`? * [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)? * [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)? **System information** > Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh). - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): AWS Sagemaker conda_amazonei_tensorflow2_p36 - TensorFlow installed from (source or binary): preinstalled in Sagemaker - TensorFlow version: 2.0.2 - Python version: 3.6 - `bert-as-service` version: 1.10.1 - GPU model and memory: none - CPU model and memory: ml.t2.medium AWS instance --- ### Description when starting the server I get some errors, it seems like tensorflow 2.0 has done away with tf.logging. I:VENTILATOR:[__i:__i: 67]:freeze, optimize and export graph, could take a while... E:GRAPHOPT:[gra:opt:154]:fail to optimize the graph! 
Traceback (most recent call last): File "/home/ec2-user/anaconda3/envs/amazonei_tensorflow2_p36/lib/python3.6/site-packages/bert_serving/server/graph.py", line 42, in optimize_graph tf = import_tf(verbose=args.verbose) File "/home/ec2-user/anaconda3/envs/amazonei_tensorflow2_p36/lib/python3.6/site-packages/bert_serving/server/helper.py", line 186, in import_tf tf.logging.set_verbosity(tf.logging.DEBUG if verbose else tf.logging.ERROR) AttributeError: module 'tensorflow' has no attribute 'logging' Traceback (most recent call last): File "/home/ec2-user/anaconda3/envs/amazonei_tensorflow2_p36/bin/bert-serving-start", line 8, in <module> sys.exit(main()) File "/home/ec2-user/anaconda3/envs/amazonei_tensorflow2_p36/lib/python3.6/site-packages/bert_serving/server/cli/__init__.py", line 4, in main with BertServer(get_run_args()) as server: File "/home/ec2-user/anaconda3/envs/amazonei_tensorflow2_p36/lib/python3.6/site-packages/bert_serving/server/__init__.py", line 71, in __init__ self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,)) TypeError: 'NoneType' object is not iterable I'm using this command to start the server: !bert-serving-start -model_dir /data/wwm_uncased_L-24_H-1024_A-16/ -num_worker=1 and calling the server via: N/A Then this issue shows up: ![image](https://user-images.githubusercontent.com/11083975/84536700-68c82e00-acbc-11ea-943e-61c87eb1a810.png)
open
2020-06-12T19:02:11Z
2020-06-15T23:51:21Z
https://github.com/jina-ai/clip-as-service/issues/566
[]
MaxNiebergall
2
SCIR-HI/Huatuo-Llama-Med-Chinese
nlp
37
What should I do with the downloaded materials
I'm a programming beginner. I've downloaded all the materials from the Baidu Netdisk. Which folder should they go into, and what is the next step? Please help!
closed
2023-05-14T05:27:48Z
2023-07-04T00:56:28Z
https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/37
[]
ccccler
0
fastapiutils/fastapi-utils
fastapi
306
[BUG] README installation steps are pointing to `fastapi-restful`, not `fastapi-utils`
![image](https://github.com/dmontagu/fastapi-utils/assets/16413205/9e378686-4718-4990-a39f-74a38cb714d9)
closed
2024-05-17T07:29:08Z
2024-06-07T15:16:47Z
https://github.com/fastapiutils/fastapi-utils/issues/306
[ "bug" ]
IgnacioHeredia
0
microsoft/hummingbird
scikit-learn
571
Use hummingbird with synapseML
Can we use Hummingbird with a SynapseML model?
closed
2022-03-12T03:26:23Z
2022-03-15T01:02:30Z
https://github.com/microsoft/hummingbird/issues/571
[]
congnt-dsv
3
guohongze/adminset
django
31
The client install script should add support for RHEL
Modify install/client/client_install.sh: if (echo $os|grep centos) || (echo $os|grep 'Red Hat')
closed
2017-11-25T05:39:16Z
2018-02-16T11:09:36Z
https://github.com/guohongze/adminset/issues/31
[]
fjibj
1
xorbitsai/xorbits
numpy
222
[BUG] Got stuck when IPython exit
### Describe the bug When run on master, it got stuck when IPython exit until pressed Ctrl+C. ### To Reproduce ```IPython In [1]: import xorbits.pandas as pd In [2]: len(pd.read_csv('/Users/hekaisheng/Downloads/user.csv')) /Users/hekaisheng/Documents/projects/xorbits/python/xorbits/_mars/deploy/oscar/session.py:2054: UserWarning: No existing session found, creating a new local session now. warnings.warn(warning_msg) Web service started at http://0.0.0.0:19090 100%|█████████████████████████████████████████████████████████████████████████████| 100.00/100 [00:00<00:00, 250.85it/s] Out[2]: 6613 In [3]: exit ^CError in atexit._run_exitfuncs: Traceback (most recent call last): File "/Users/hekaisheng/miniconda3/lib/python3.9/multiprocessing/connection.py", line 936, in wait ready = selector.select(timeout) File "/Users/hekaisheng/miniconda3/lib/python3.9/selectors.py", line 416, in select fd_event_list = self._selector.poll(timeout) KeyboardInterrupt ```
closed
2023-02-17T10:30:28Z
2023-02-22T08:02:02Z
https://github.com/xorbitsai/xorbits/issues/222
[ "bug" ]
aresnow1
0
pallets/quart
asyncio
266
asyncio.exceptions.CancelledError
I am using the `Quart` framework, and after bumping up the dependencies version (Quart-schema), we are getting some weird issues. Not sure if it is related to Quart or not. Can someone give me some suggestions please? Thanks ``` File "/app/.venv/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request conn = await self._connector.connect( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/app/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect proto = await self._create_connection(req, traces, timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/app/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection _, proto = await self._create_direct_connection(req, traces, timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/app/.venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1152, in _create_direct_connection hosts = await asyncio.shield(host_resolved) ``` ``` File "/app/lere/model/personalisation/hot_leads/sticky_hot_leads.py", line 39, in get_hot_leads_offers hot_leads = await get_cache_for_hot_leads(user_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/app/lere/logs/logger.py", line 262, in async_wrapper value = await logging_logic.execute_async() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/app/lere/logs/logger.py", line 248, in execute_async self.result = await async_function(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/app/lere/model/personalisation/hot_leads/cache.py", line 22, in get_cache_for_hot_leads await asyncio.to_thread( File "/usr/local/lib/python3.11/asyncio/threads.py", line 25, in to_thread return await loop.run_in_executor(None, func_call) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ asyncio.exceptions.CancelledError ``` ``` File "/app/lere/model/personalisation/hot_leads/cache.py", line 22, in get_cache_for_hot_leads await asyncio.to_thread( File "/usr/local/lib/python3.11/asyncio/threads.py", line 25, in 
to_thread return await loop.run_in_executor(None, func_call) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ asyncio.exceptions.CancelledError ``` Sorry, but I have no idea how to reproduce them, as they happen randomly. Sometimes it happens after the service has been running for two hours, sometimes the errors appear within half an hour. :-( Environment: - Python version: 3.11.3 - Quart version: 0.18.4
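The tracebacks show `CancelledError` surfacing at an `await asyncio.to_thread(...)` call, which is what happens when the surrounding task is cancelled (e.g. a client disconnect or shutdown) while the thread is still working. A minimal stdlib sketch with hypothetical names — not the reporter's application code:

```python
import asyncio
import time

# Sketch of the failure mode: cancelling a task that is awaiting
# asyncio.to_thread raises CancelledError at that await, while the
# worker thread itself keeps running to completion.

def blocking_work():
    time.sleep(0.5)  # stand-in for a slow cache lookup

async def get_cache():
    try:
        await asyncio.to_thread(blocking_work)
    except asyncio.CancelledError:
        # Clean up here if needed, then re-raise so the caller
        # (e.g. the framework tearing down a request) sees the cancel.
        raise

async def main():
    task = asyncio.create_task(get_cache())
    await asyncio.sleep(0.05)
    task.cancel()  # what a client disconnect or shutdown does
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"

print(asyncio.run(main()))
```

If these errors correlate with clients dropping connections or with deploy-time restarts, they may be expected cancellations rather than a bug in Quart.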
closed
2023-09-05T06:52:23Z
2023-10-01T00:20:18Z
https://github.com/pallets/quart/issues/266
[]
JerryHuang-LE
2
opengeos/leafmap
jupyter
217
[Suggestion] Pin requirement versions (specifically python-box)
Hello, I am the developer of python-box and see that it is a requirement in this repo and has not been version pinned. I suggest that you pin it to the max known [compatible version](https://peps.python.org/pep-0440/#compatible-release) in your requirements.txt and/or setup.py file(s): ``` python-box[all]~=5.4 ``` Or without external dependencies ``` python-box~=5.4 ``` Using `~=5.0` (or any minor version) will lock it to the major version of `5` and minimum of minor version specified. If you add a bugfix space for `5.4.0` it would lock it to the minor version `5.4.*`. The [next major release](https://github.com/cdgriffith/Box/pull/216) of Box is right around the corner, and while it has many improvements, I want to ensure you have a smooth transition by being able to test at your own leisure to ensure your standard user cases do not run into any issues. I am keeping track of [major changes](https://github.com/cdgriffith/Box/wiki/Major-Version-Breaking-Changes), so please check there as a quick overview of any differences. To test new changes, try out the release candidate: ``` pip install python-box[all]~=6.0.0rc4 ```
closed
2022-03-12T04:24:05Z
2022-03-12T05:01:05Z
https://github.com/opengeos/leafmap/issues/217
[ "Feature Request" ]
cdgriffith
1
microsoft/qlib
deep-learning
1780
ValueError: unknown metric `mse`
## ❓ Questions and Help We sincerely suggest you to carefully read the [documentation](http://qlib.readthedocs.io/) of our library as well as the official [paper](https://arxiv.org/abs/2009.11189). After that, if you still feel puzzled, please describe the question clearly under this issue. ![image](https://github.com/microsoft/qlib/assets/135490367/b23bd0c8-a580-4265-831d-4453df461a20) ![image](https://github.com/microsoft/qlib/assets/135490367/53ea3f48-f722-42f3-91fd-d228188c7c06)
closed
2024-04-23T03:20:02Z
2024-06-19T09:32:59Z
https://github.com/microsoft/qlib/issues/1780
[ "question" ]
Hosen760
2
Miserlou/Zappa
django
2195
How to get RequestId from view
I would like to get the `RequestId` in the view logs to correlate it with the `START/END/REPORT` log entries. I have tried `request.META['lambda.event']['requestContext']`, but it has a different `RequestId`.
open
2021-01-06T14:02:29Z
2021-01-06T14:02:55Z
https://github.com/Miserlou/Zappa/issues/2195
[]
vladimirnani
0
huggingface/transformers
python
36267
ci/v4.49-release: tests collection fail with "No module named 'transformers.models.marian.convert_marian_to_pytorch'" on v4.48/v4.49 release branches
Tests collection fails with the following error in https://github.com/huggingface/transformers/tree/v4.49-release branch (and for v4.48-release branch too). ``` python3 -m pytest tests/ ===================================================== test session starts ===================================================== platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 rootdir: /home/dvrogozh/git/huggingface/transformers configfile: pyproject.toml plugins: anyio-4.8.0, rich-0.2.0, subtests-0.14.1, xdist-3.6.1, asyncio-0.23.8, timeout-2.3.1, hypothesis-6.122.3, reportlog-0.4.0, dash-2.18.2, cov-6.0.0, typeguard-4.3.0 asyncio: mode=strict collected 84774 items / 1 error =========================================================== ERRORS ============================================================ ________________________________ ERROR collecting tests/models/marian/test_modeling_marian.py _________________________________ ImportError while importing test module '/home/dvrogozh/git/huggingface/transformers/tests/models/marian/test_modeling_marian.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/lib/python3.10/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/models/marian/test_modeling_marian.py:50: in <module> from transformers.models.marian.convert_marian_to_pytorch import ( E ModuleNotFoundError: No module named 'transformers.models.marian.convert_marian_to_pytorch' ====================================================== warnings summary ======================================================= src/transformers/optimization.py:640 /home/dvrogozh/git/huggingface/transformers/src/transformers/optimization.py:640: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. 
Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html =================================================== short test summary info =================================================== ERROR tests/models/marian/test_modeling_marian.py !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ================================================ 1 warning, 1 error in 17.54s ================================================= ``` This happens due to some files are being pruned on release creation commit, see https://github.com/huggingface/transformers/commit/a22a4378d97d06b7a1d9abad6e0086d30fdea199. **Can we resolve this issue to be able to trigger all the tests on release branches?** CC: @ArthurZucker @ydshieh
closed
2025-02-18T23:09:36Z
2025-02-20T12:22:11Z
https://github.com/huggingface/transformers/issues/36267
[]
dvrogozh
1
pytest-dev/pytest-django
pytest
443
django-rest-framework api_client fixture?
DRF adds an APIClient class extending django's Client class. Since I'm using DRF and pytest to test it an `api_client` fixture seems appropriate. Is this something you're willing to consider?
closed
2016-12-22T11:26:05Z
2016-12-22T18:25:44Z
https://github.com/pytest-dev/pytest-django/issues/443
[]
nirizr
2
django-import-export/django-import-export
django
1594
Can it export fields the same as admin setting
Can it export the same fields as the admin settings? The demo is as follows; I want to export the fields corresponding to the admin. ```python @admin.register(models.MeasuringPoint) class MeasuringPointAdmin(admin.ModelAdmin): list_display = ( "mpoint_id", "project_id", "mitem_name", "mpoint_name", "element_id", "huasi_id", "threshold_names", "status", "longitude", "latitude", "create_time") @admin.display(description='BimFace Element Id') def element_id(self, obj: models.MeasuringPoint): return obj.epoint_id.epoint_name if obj.epoint_id else '' @admin.display(description="measuring item name") def mitem_name(self, obj: models.MeasuringPoint): return obj.mitem_id.mitem_name if obj.mitem_id else '' @admin.display(description="project name") def project_name(self, obj: models.MeasuringPoint): return obj.project_id.project_name if obj.project_id else '' ```
closed
2023-06-07T10:11:39Z
2023-06-07T11:31:22Z
https://github.com/django-import-export/django-import-export/issues/1594
[ "question" ]
Undertone0809
1
comfyanonymous/ComfyUI
pytorch
6482
Each time the ComfyUI page is refreshed, it outputs a list of LoRAs.
### Your question Every time the ComfyUI page is refreshed, it outputs a list of LoRAs. The sheer number of LoRAs I have locally causes the command line to fill up completely and lose some information every time it outputs the list. What modifications should I make to solve this problem? ### Logs _No response_ ### Other _No response_
open
2025-01-16T02:52:07Z
2025-01-16T08:58:40Z
https://github.com/comfyanonymous/ComfyUI/issues/6482
[ "User Support" ]
muxuezzz
1
chaos-genius/chaos_genius
data-visualization
340
Make population calculation & statistical filtering parameters globally configurable
Make population calculation & statistical filtering parameters globally configurable in env & docker compose. A thorough KPI level configuration will come in a later release but this will lay the foundation for it.
closed
2021-10-27T04:21:47Z
2021-10-29T15:16:39Z
https://github.com/chaos-genius/chaos_genius/issues/340
[ "🛠️ backend", "🧮 algorithms" ]
suranah
0
davidteather/TikTok-Api
api
289
[INSTALLATION] - How to get a custom running SQUID proxy to work.
**Describe the error** Hello, recently I posted an issue about asking for Spanish TikToks and getting Russian ones. The answer was to use a proxy, so I installed a proxy server on my server using Squid. The thing is, I can't get TikTok requests to work now — am I doing it wrong? Maybe it is a server configuration error, so sorry if that is the cause. Thanks! Great job! Bruno **The buggy code** @app.route('/get_trending') def get_trending(): results = 10 if request.args.get('limit') is None else int(request.args.get('limit')) lang = 'en' if request.args.get('lang') is None else request.args.get('lang') print("Getting {0} results in {1} language".format(results, lang)) try: global proxy_url print(proxy_url) trending = api.trending(count=results,language=lang,proxy=proxy_url) except BadStatusLine as e: print(str(e)) time.sleep(30) return jsonify({"tiktoks": trending}) Where proxy_url is a string that contains my proxy URL without the https:// scheme, only the ip:port **Error Trace (if any)** Put the error trace below if there's any error thrown. ``` At start this popped up at first request, then nothing. 
raise ProxyError(e, request=request) requests.exceptions.ProxyError: HTTPSConnectionPool(host='m.tiktok.com', port=443): Max retries exceeded with url: /api/item_list/?aid=1988&app_name=tiktok_web&device_platform=web&referer=https%3A%2F%2Fwww.tiktok.com%2F&user_agent=Mozilla%252F5.0%2B%28Windows%2BNT%2B10.0%253B%2BWin64%253B%2Bx64%29%2BAppleWebKit%252F537.36%2B%28KHTML%2C%2Blike%2BGecko%29%2BChrome%252F84.0.4147.125%2BSafari%252F537.36&cookie_enabled=true&screen_width=800&screen_height=600&browser_language=&browser_platform=&browser_name=&browser_version=&browser_online=true&ac=4g&timezone_name=&appId=1233&appType=m&isAndroid=False&isMobile=False&isIOS=False&OS=windows&count=10&id=1&type=5&secUid=&maxCursor=0&minCursor=0&sourceType=12&appId=1233&region=US&priority_region=US&language=es&verifyFp=XSWO0PKTiZ9SdfnS&did=18007046&_signature=_02B4Z6wo00f01BuaJlQAAIBDJ0UT2rX1NcAbiiLAAFln7a (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))) ``` **Desktop (please complete the following information):** - OS: [e.g. Windows 10] Debian 9 - TikTokApi Version [e.g. 3.3.1] - if out of date upgrade before posting an issue 3.5.0 **Additional context** Put what you have already tried. Your problem is probably in the closed issues tab already.
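For what it's worth, "Tunnel connection failed: 403 Forbidden" is Squid refusing the HTTPS CONNECT request, which points at the proxy's ACL configuration rather than the Python code. A minimal squid.conf sketch, assuming the stock ACL names that ship with Squid's default config (adjust the client CIDR to your own network):

```
# Allow HTTPS tunnelling (CONNECT) to port 443 only
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports

# Allow your client's source network (192.0.2.0/24 is a placeholder)
acl localnet src 192.0.2.0/24
http_access allow localnet
```

If the client's IP is not matched by an allow rule, Squid returns 403 for every request, including CONNECT, which produces exactly this ProxyError.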
closed
2020-10-14T19:02:36Z
2020-11-11T07:45:27Z
https://github.com/davidteather/TikTok-Api/issues/289
[ "installation_help" ]
elblogbruno
5
yuka-friends/Windrecorder
streamlit
18
Chinese OCR accuracy is very low
I called Windows.Media.Ocr.Cli.exe on its own to run an OCR test, and the results were very poor. This also degrades word segmentation and search quality; in my view, this is a clear weak point of the whole application.

In the source code there is also a second OCR option: OCR text via chineseOCRlite. I would like to know why that configuration is no longer used.

I would like to be able to switch OCR engines freely, and to have an OCR test page added, so as to get better OCR results. With inaccurate OCR, my search results are already poor, and the later ideas such as LLM integration become meaningless.
closed
2023-11-04T14:46:12Z
2024-08-03T07:58:04Z
https://github.com/yuka-friends/Windrecorder/issues/18
[ "enhancement", "P2" ]
linchuanXu
10
HIT-SCIR/ltp
nlp
431
Can named entity recognition be improved with a user-defined dictionary?
For example, I have the following Weibo post:

```
8:30的快速路实时路况,快速路上现在拥堵较长的路段有:南北高架西侧广中西路至永兴路,东侧徐家汇路至延东立交;内环高架内侧鲁班立交至天钥桥路。请筒子们减速慢行,谨慎驾驶
```

LTP's recognition result is:

```
Ns : 永兴路
Ns : 徐家汇路
Ns : 延东
```

I tried adding the locations I want recognized, such as 南北高架, to a user-defined dictionary, and then loaded it with:

```python
# user_dict.txt is the dictionary file; max_window is the maximum forward matching window
ltp.init_dict(path="roads_for_entity.txt", max_window=10)
```

but found they still were not recognized:

```
Ns : 广中西路
Ns : 永兴路
Ns : 徐家汇路
Ns : 延东
```

How can such customized named entities, especially location entities, be recognized with LTP? Many thanks!
closed
2020-11-02T11:07:26Z
2020-11-10T02:08:30Z
https://github.com/HIT-SCIR/ltp/issues/431
[]
bright1993ff66
1
ageitgey/face_recognition
machine-learning
612
Fails on Chinese people
* face_recognition version:
* Python version: 3.5.2
* Operating System: Ubuntu 16.04

### The library can't distinguish between Chinese people

I tried finding https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=images&cd=&cad=rja&uact=8&ved=2ahUKEwjKwbLM6JTdAhWGMewKHekXBIYQjRx6BAgBEAU&url=https%3A%2F%2Flaotiantimes.com%2F2016%2F09%2F05%2Fchinese-prime-minister-to-visit-laos%2F&psig=AOvVaw0jNXiklPM4YaxtrqIEOjQe&ust=1535719348182900 in the https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=images&cd=&cad=rja&uact=8&ved=2ahUKEwiWwOmD55TdAhVNMewKHaGnBo0QjRx6BAgBEAU&url=https%3A%2F%2Fwww.japantimes.co.jp%2Fnews%2F2015%2F01%2F09%2Fnational%2Fhistory%2Fchina-brings-sex-slave-issue-spotlight%2F&psig=AOvVaw0jNXiklPM4YaxtrqIEOjQe&ust=1535719348182900 image, and it identified the first person as the target person. Thus it can't accurately compare faces.
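One knob worth trying before concluding the model cannot separate these faces: `face_recognition.compare_faces` defaults to a tolerance of 0.6, which is often too loose for faces from groups under-represented in the training data. A sketch of the underlying distance check with a stricter, hypothetical threshold (0.5 is illustrative, not an official recommendation):

```python
import numpy as np

def is_match(known_encoding, candidate_encoding, tolerance=0.5):
    # compare_faces is essentially a Euclidean-distance threshold on the
    # 128-dimensional face encodings; lowering the tolerance trades
    # false matches for more false negatives.
    return bool(np.linalg.norm(known_encoding - candidate_encoding) <= tolerance)

identical = is_match(np.zeros(128), np.zeros(128))   # identical encodings match
different = is_match(np.zeros(128), np.ones(128))    # distant encodings do not
```

With the real library, the equivalent is `face_recognition.compare_faces(known, candidate, tolerance=0.5)`.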
open
2018-08-30T12:54:10Z
2018-08-30T18:17:59Z
https://github.com/ageitgey/face_recognition/issues/612
[]
f3uplift
1
ipython/ipython
data-science
14,820
Cannot display a single dot character as Markdown
```python
from IPython import display
display.display(display.Markdown('.'))
```

```
---------------------------------------------------------------------------
IsADirectoryError                         Traceback (most recent call last)
<ipython-input-6-0a1e025990e8> in <cell line: 0>()
      1 from IPython import display
----> 2 display.display(display.Markdown('.'))

/usr/local/lib/python3.11/dist-packages/IPython/core/display.py in __init__(self, data, url, filename, metadata)
    635             self.metadata = {}
    636
--> 637         self.reload()
    638         self._check_data()
    639

/usr/local/lib/python3.11/dist-packages/IPython/core/display.py in reload(self)
    660         """Reload the raw data from file or URL."""
    661         if self.filename is not None:
--> 662             with open(self.filename, self._read_flags) as f:
    663                 self.data = f.read()
    664         elif self.url is not None:

IsADirectoryError: [Errno 21] Is a directory: '.'
```

This is because [this code](https://github.com/ipython/ipython/blob/1a7363fb2b20691d68c0f8ebdb4b5760d24c3840/IPython/core/display.py#L326-L329) explicitly checks whether the `data` argument to `display.Markdown` is an existing path, and if so, reads that path instead of displaying the text itself.

This also leads to weird side effects, making it possible to display files one may not have intended to display:

<img width="1027" alt="Image" src="https://github.com/user-attachments/assets/7eda18f5-d071-4192-88b1-fa3fb3da3802" />

Is it a good idea to decide whether to read a file or to display the text depending on whether the text resolves to an existing path? Perhaps this should be done with a flag instead, i.e. if the user explicitly passes `filename=`?
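The trigger is easy to reproduce without IPython at all: `'.'` is a valid filesystem path (the current directory), so a path-existence check like the one linked above always fires for it. A minimal plain-`os.path` sketch of why this particular string is pathological:

```python
import os

data = "."
# A string that resolves to an existing path gets treated as a filename
# rather than as literal Markdown. "." always exists (it is the current
# directory), and opening a directory raises IsADirectoryError.
print(os.path.exists(data), os.path.isdir(data))
```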
open
2025-03-05T12:56:32Z
2025-03-05T17:10:33Z
https://github.com/ipython/ipython/issues/14820
[]
dniku
0
microsoft/nni
machine-learning
5,321
TypeError: forward() missing 1 required positional argument for ModelSpeedup
**Describe the issue**:
`TypeError: forward() missing 1 required positional argument: 'input'` while doing `ModelSpeedup` for VGG19 (basically for every architecture out there).

**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): remote
- Server OS (for remote mode only): CentOS
- Python version: 3.9
- PyTorch/TensorFlow version: 1.13

**Log message**:

```
Completed pruner.compress()
[2023-01-20 11:24:16] WARNING: Nothing will happen when calling this function. This pruner is an iterative pruner and does not directly wrap the model.
WARNING:nni.algorithms.compression.v2.pytorch.pruning.iterative_pruner:Nothing will happen when calling this function. This pruner is an iterative pruner and does not directly wrap the model.
[2023-01-20 11:24:17] start to speedup the model
INFO:nni.compression.pytorch.speedup.compressor:start to speedup the model
WARNING:FixMaskConflict:both dim0 and dim1 masks found.
[2023-01-20 11:24:17] infer module masks...
INFO:nni.compression.pytorch.speedup.compressor:infer module masks...
[2023-01-20 11:24:17] Update mask for .prim::TupleUnpack.55
INFO:nni.compression.pytorch.speedup.compressor:Update mask for .prim::TupleUnpack.55
[2023-01-20 11:24:17] Update mask for module.features.0
INFO:nni.compression.pytorch.speedup.compressor:Update mask for module.features.0
Traceback (most recent call last):
  File "/scratch/work/khatria1/pr_1/DeeperCompression/extract_metadata.py", line 344, in <module>
    main_prepare_metadata(dataset_list, model_dict, combo,
  File "/scratch/work/khatria1/pr_1/DeeperCompression/extract_metadata.py", line 246, in main_prepare_metadata
    model_copy, complexity_dict = prune_tune_quant_tune('level', model_name, model_copy, sparsity, quant_prec, tune_set, tune_epochs,
  File "/scratch/work/khatria1/pr_1/DeeperCompression/compression_methods/compress.py", line 157, in prune_tune_quant_tune
    pruned_model, pruned_inf_time, pruner_obj = gradual_prune_model(pruning_method, model, model_name, target_sparsity, dataset,
  File "/scratch/work/khatria1/pr_1/DeeperCompression/compression_methods/compress.py", line 59, in gradual_prune_model
    pruned_model, pruner_obj = pruner.process()
  File "/scratch/work/khatria1/pr_1/DeeperCompression/compression_methods/nni_compression.py", line 135, in process
    ModelSpeedup(model, self.dummy_input, masks).speedup_model()
  File "/home/khatria1/.local/lib/python3.9/site-packages/nni/compression/pytorch/speedup/compressor.py", line 546, in speedup_model
    self.infer_modules_masks()
  File "/home/khatria1/.local/lib/python3.9/site-packages/nni/compression/pytorch/speedup/compressor.py", line 383, in infer_modules_masks
    self.update_direct_sparsity(curnode)
  File "/home/khatria1/.local/lib/python3.9/site-packages/nni/compression/pytorch/speedup/compressor.py", line 244, in update_direct_sparsity
    _auto_infer = AutoMaskInference(
  File "/home/khatria1/.local/lib/python3.9/site-packages/nni/compression/pytorch/speedup/infer_mask.py", line 80, in __init__
    self.output = self.module(*dummy_input)
  File "/share/apps/anaconda-ci/fgci-centos7-anaconda/software/anaconda/2022-04/18a3eb7e/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'input'
```

**How to reproduce it?**:
The AGP pruner code we're using:

```python
def process(self):
    # Define Evaluator
    optimizer = nni.trace(torch.optim.Adam)(self.model.parameters(), lr=5e-5)
    criterion = torch.nn.CrossEntropyLoss()
    lr_scheduler = nni.trace(StepLR)(optimizer, step_size=5, gamma=0.1)

    # TorchEvaluator initialization
    evaluator = TorchEvaluator(training_func=self.train_func, optimizers=optimizer,
                               criterion=criterion, lr_schedulers=lr_scheduler,
                               dummy_input=self.dummy_input, evaluating_func=self.eval_func)

    # Define AGPPruner
    config_list = [{'op_types': ['Conv2d', 'Linear'], 'sparsity': self.sparsity}]
    if self.pruning_method == 'level':
        pruner = AGPPruner(self.model, config_list, pruning_algorithm='level',
                           total_iteration=self.iters,
                           log_dir='meta_logs/compressed/' + self.model_name + '_' + str(self.sparsity),
                           evaluator=evaluator, speedup=False)
    elif self.pruning_method == 'taylorfo':
        # params = {'criterion': criterion,
        #           'trainer': self.train_func(self.model, optimizer, criterion, lr_scheduler, max_epochs=self.iters),
        #           'training_batches': 20,
        #           'traced_optimizer': optimizer}
        params = {'evaluator': evaluator, 'training_steps': 5, 'mode': 'normal',
                  'dummy_input': self.dummy_input}
        pruner = AGPPruner(self.model, config_list, pruning_algorithm='taylorfo',
                           total_iteration=self.iters,
                           log_dir='meta_logs/compressed/' + self.model_name + '_' + str(self.sparsity),
                           evaluator=evaluator, speedup=False, pruning_params=params)

    # Prune and evaluate the model
    pruner.compress()
    print("Completed pruner.compress()")
    _, model, masks, _, _ = pruner.get_best_result()
    # show the masks sparsity
    # for name, mask in masks.items():
    #     print(name, ' sparsity : ', '{:.2}'.format(mask['weight'].sum() / mask['weight'].numel()))
    # print("Pruned model accuracy on tuning dataset: ", self.eval_func(model))

    # Perform speedup
    pruner._unwrap_model()
    ModelSpeedup(model, self.dummy_input, masks).speedup_model()
    return model, pruner
```
closed
2023-01-20T09:27:18Z
2023-02-24T02:34:05Z
https://github.com/microsoft/nni/issues/5321
[]
ankitknitj
4
dask/dask
numpy
11,080
CI is Failing
https://github.com/dask/dask/pull/11079 Alternatively, if we don't care about these runs then we should probably turn them off.
closed
2024-04-29T18:24:40Z
2024-04-29T20:40:19Z
https://github.com/dask/dask/issues/11080
[ "needs triage" ]
mrocklin
4
pyjanitor-devs/pyjanitor
pandas
603
[DOC] Fix formatting of code block in `CONTRIBUTING.rst`
# Brief Description of Fix

New code block in `CONTRIBUTING.rst` on installing pre-commit isn't formatted correctly. I will submit a PR momentarily.

# Relevant Context

- [Link to documentation page](http://pyjanitor.readthedocs.io)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/CONTRIBUTING.rst)
closed
2019-10-30T00:02:38Z
2019-10-30T13:36:54Z
https://github.com/pyjanitor-devs/pyjanitor/issues/603
[]
hectormz
0
3b1b/manim
python
2,001
Rectangle stroke has gaps
### Describe the bug

Rectangle and its subclasses cannot display their stroke (i.e. their sides) consistently: the rendered rectangle's sides show gaps rather than a continuous line.

### Version

- Platform: Mac
- Python version: Python 3.9.12
- OpenGL version: 3.1.6
- manim version: ManimGL v1.6.1

### Codes and rendered results

```python
Square(
    side_length=1.,
    stroke_color=WHITE,
    stroke_width=2.0,
    fill_color=BLUE,
    fill_opacity=1.0,
).round_corners(0.15)
```

<img width="648" alt="manim rendered rectangles" src="https://user-images.githubusercontent.com/94862670/224512714-67883603-5755-4480-9cdd-3d215b989114.png">
open
2023-03-11T21:51:52Z
2023-03-11T21:51:52Z
https://github.com/3b1b/manim/issues/2001
[ "bug" ]
zang-langyan
0
python-gino/gino
sqlalchemy
258
Replace assert statement
`assert` is preferred not to be used in production code because it has no effect when optimization mode is on (i.e. when Python is executed with the `-O` flag).
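A minimal sketch of the usual replacement, an explicit check that raises and therefore survives `python -O` (the function name and message here are illustrative, not from the gino codebase):

```python
def set_pool_size(size):
    # `assert size > 0, "..."` would be stripped entirely when Python
    # runs with -O; an explicit raise is always executed.
    if size <= 0:
        raise ValueError(f"pool size must be positive, got {size}")
    return size
```

For checks on internal invariants (rather than caller input), raising `RuntimeError` or a library-specific exception is a common alternative.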
closed
2018-06-23T14:16:20Z
2020-04-21T02:01:08Z
https://github.com/python-gino/gino/issues/258
[ "task" ]
wwwjfy
2
ivy-llc/ivy
tensorflow
28,320
Fix Frontend Failing Test: tensorflow - manipulation.paddle.take_along_axis
To-do List: https://github.com/unifyai/ivy/issues/27499
closed
2024-02-18T17:12:38Z
2024-02-20T09:26:47Z
https://github.com/ivy-llc/ivy/issues/28320
[ "Sub Task" ]
Sai-Suraj-27
0
ray-project/ray
pytorch
51,601
[Cluster] Ray job submit/logs sporadically stops following logs
### What happened + What you expected to happen

The commands `ray job submit` and `ray job logs -f` are supposed to keep printing the output of the job until it terminates. Lately though, I find these commands sporadically stop following logs and exit. Interestingly, before exiting, they print the job status, same as `ray job status` would. Example:

```
root@a687ff44d99c:/code# ray job submit ...
...
-------------------------------------------------------
Job 'raysubmit_aBW2LEDfWwhL1kSh' submitted successfully
-------------------------------------------------------

Next steps
  Query the logs of the job:
    ray job logs raysubmit_aBW2LEDfWwhL1kSh
  Query the status of the job:
    ray job status raysubmit_aBW2LEDfWwhL1kSh
  Request the job to be stopped:
    ray job stop raysubmit_aBW2LEDfWwhL1kSh

Tailing logs until the job exits (disable with --no-wait):
2025-03-21 18:47:42,202 INFO job_manager.py:530 -- Runtime env is setting up.
2025-03-21 18:47:45,289 INFO worker.py:1514 -- Using address 10.212.79.105:6379 set in the environment variable RAY_ADDRESS
2025-03-21 18:47:45,290 INFO worker.py:1654 -- Connecting to existing Ray cluster at address: 10.212.79.105:6379...
2025-03-21 18:47:45,302 INFO worker.py:1832 -- Connected to Ray cluster. View the dashboard at 10.212.79.105:8265
Creating placement group...
(autoscaler +2s) Tip: use `ray status` to view detailed cluster status. To disable these messages, set RAY_SCHEDULER_EVENTS=0.
(autoscaler +2s) Adding 100 node(s) of type t3a.large1.

Status for job 'raysubmit_aBW2LEDfWwhL1kSh': RUNNING
Status message: Job is currently running.
```

... and here is where `ray job submit` has exited.

Note that I am running Ray using KubeRay, and setting RAY_ADDRESS to a domain name that maps to a Kubernetes ingress that routes all HTTP traffic to the head node's dashboard port (8265). I suspect it may be this networking configuration that plays a role; however, the issue manifests in the `ray job` command, so I am submitting the issue here rather than in KubeRay.

### Versions / Dependencies

Ray: 2.43.0
KubeRay: 1.3.0
Python: 3.10

### Reproduction script

Any `ray job submit` or `ray job logs -f` will do.

### Issue Severity

Low: It annoys or frustrates me.
open
2025-03-21T19:20:55Z
2025-03-21T22:51:27Z
https://github.com/ray-project/ray/issues/51601
[ "bug", "triage", "core" ]
jleben
0
seleniumbase/SeleniumBase
pytest
2,985
DeviceMetrics wrong required type
Currently, device metrics require an int for `pixelRatio`, but any number (int or float) is actually valid. If you try to pass a float to SeleniumBase, it will use the default settings instead. Many mobile devices use fractional scaling, and any high-resolution PC uses a scaling factor as well. See: https://developer.chrome.com/docs/chromedriver/mobile-emulation and https://chromedevtools.github.io/devtools-protocol/tot/Emulation/#method-setDeviceMetricsOverride
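For reference, this is the shape of the `deviceMetrics` payload ChromeDriver accepts per the linked mobile-emulation docs; the width/height/ratio values below are hypothetical, chosen to show a fractional `pixelRatio`:

```python
# Hypothetical deviceMetrics for a device with fractional scaling.
# ChromeDriver accepts any number for pixelRatio, not just ints, so a
# validation check like isinstance(ratio, int) would wrongly reject 2.625.
mobile_emulation = {
    "deviceMetrics": {"width": 412, "height": 915, "pixelRatio": 2.625},
    "userAgent": "Mozilla/5.0 (Linux; Android 13) AppleWebKit/537.36",
}

ratio = mobile_emulation["deviceMetrics"]["pixelRatio"]
# Accepting int *or* float matches what ChromeDriver allows.
print(isinstance(ratio, (int, float)))
```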
closed
2024-08-02T05:13:55Z
2024-08-03T05:04:28Z
https://github.com/seleniumbase/SeleniumBase/issues/2985
[ "bug" ]
tkozybski
4
amisadmin/fastapi-amis-admin
fastapi
145
Code Editor not follow Locale settings (and how add support Jinja syntax support?)
```python
from fastapi_amis_admin import i18n

i18n.set_language(language='en_US')

site = AdminSite(settings=Settings(
    language="en_US",
    database_url_async=database_url_async,
    site_icon="https://fastapi.tiangolo.com/img/favicon.png",
    # amis_cdn="https://www.npmjs.com/package/",
))
```

1. The Monaco code editor does not follow the locale settings; how can this be fixed?
2. How can support for Jinja syntax be added?
3. Maybe it would be better to make English the default language? I think this would help your cool package become more popular.
4. Are there other `amis_cdn` options? unpkg.com sometimes works unreliably.

![image](https://github.com/amisadmin/fastapi-amis-admin/assets/128525598/bbf3e563-a9a6-4a21-a315-ef660d68753d)
closed
2023-11-24T01:18:46Z
2024-01-26T13:10:05Z
https://github.com/amisadmin/fastapi-amis-admin/issues/145
[]
MatsiukMykola
1
tensorflow/tensor2tensor
machine-learning
1,237
How can i set gpu device like gpu:2?
### Description

I have four GPUs, but I only want to use a single one, so I need to choose which device is used. I can't find a parameter for selecting GPU devices, only one for setting the number of GPUs. `hparams='datashard_devices="gpu:2"'` does not seem to work. What should I do?
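A common workaround when a framework only exposes a GPU *count* is to restrict which physical GPUs the process can see at all, via `CUDA_VISIBLE_DEVICES`; this must be set before TensorFlow initializes. A minimal sketch:

```python
import os

# Expose only physical GPU 2 to this process. TensorFlow will then see
# it as "gpu:0", so no per-device hparams are needed.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Equivalently from the shell: `CUDA_VISIBLE_DEVICES=2 t2t-trainer ...`.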
closed
2018-11-19T08:40:54Z
2018-11-21T15:24:49Z
https://github.com/tensorflow/tensor2tensor/issues/1237
[]
zepen
5
explosion/spaCy
deep-learning
13,098
Lemmatisation of "did" is no longer "do" in the context of n't
I just updated from spacy 3.6 to 3.7 and "did" is no longer getting lemmatised as "do" when followed by n't.

```
nlp = spacy.load("en_core_web_sm")
sent = "I didn't go to the bank at the weekend."
for tok in nlp(sent):
    print(tok.text, tok.lemma_)
```

Output:

```
I I
did did  # lemma should be "do"
n't n't
go go
to to
the the
bank bank
at at
the the
weekend weekend
. .
```

## How to reproduce the behaviour

- **spaCy version:** 3.7.2
- **Platform:** Linux-6.2.0-1017-gcp-x86_64-with-glibc2.35
- **Python version:** 3.10.12
- **Pipelines:** en_core_web_sm (3.7.0)
closed
2023-11-02T06:34:59Z
2023-12-15T00:02:20Z
https://github.com/explosion/spaCy/issues/13098
[ "bug", "lang / en", "feat / lemmatizer" ]
chrisjbryant
14