| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
neuml/txtai | nlp | 456 | Check for empty queue before attempting to convert inputs to dictionaries | Related to https://github.com/neuml/paperai/issues/67
A check should be added to ensure the extractor input queue has content before attempting to convert inputs to dictionaries. | closed | 2023-03-29T19:59:45Z | 2023-03-29T20:01:23Z | https://github.com/neuml/txtai/issues/456 | [
"bug"
] | davidmezzetti | 0 |
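The guard described in the issue above can be sketched in plain Python (a hypothetical illustration — `convert_inputs` and `to_dict` are made-up names, not txtai's actual API):

```python
# Hypothetical sketch of the proposed check; not txtai's real code.
def to_dict(item):
    # Wrap bare strings into a dictionary, pass dictionaries through
    return item if isinstance(item, dict) else {"text": item}

def convert_inputs(queue):
    # Return early when the queue is empty instead of failing downstream
    if not queue:
        return []
    return [to_dict(item) for item in queue]
```

The same early-return pattern applies wherever the extractor consumes its input queue.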
jupyterlab/jupyter-ai | jupyter | 1,103 | Chat window works, but IPython magics do not |
## Description
I installed jupyter, jupyter-ai and langchain-openai in a clean environment (using pixi). I entered my OpenAI API key and the chat window seems to work. However, using the %%ai magic in the notebook doesn't work.
## Reproduce
### screenshot showing that the chat window works
<img width="1285" alt="Screenshot 2024-11-11 at 7 11 16 PM" src="https://github.com/user-attachments/assets/7d7feb1e-7ccf-4ab5-b99c-029a31b1c7f5">
### screenshot showing that it knows about my openAI key:
<img width="1392" alt="Screenshot 2024-11-11 at 7 12 01 PM" src="https://github.com/user-attachments/assets/8777cbbe-8abb-4d53-823f-4a316a7901e9">
### error:
AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-QoS0U***************************************KCxR. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
## Expected behavior
It should not throw an AuthenticationError.
## Context
<!--Complete the following for context, and add any other relevant context-->
jupyter = ">=1.1.1,<2"
jupyterlab = ">=4.2.5,<5"
jupyter-ai = ">=2.28.1,<3"
langchain-openai = ">=0.1.25,<0.2"
| open | 2024-11-12T00:13:06Z | 2025-02-09T14:16:36Z | https://github.com/jupyterlab/jupyter-ai/issues/1103 | [
"bug"
] | sg-s | 8 |
flasgger/flasgger | api | 10 | Fix regex to add missing type to rules | Currently rules only work if type is defined
bad
```
@route('/<param>/<another>')
```
good
```
@route('/<string:param>/<string:another>')
```
But we should accept the first, so the regex has to be fixed.
| closed | 2016-01-11T12:49:59Z | 2016-01-14T00:34:21Z | https://github.com/flasgger/flasgger/issues/10 | [] | rochacbruno | 7 |
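A minimal sketch of the proposed fix (illustrative only, not flasgger's actual code): rewrite untyped rule parameters to carry a default `string` converter. The character class `[^:>]+` cannot match a colon, so parameters that already declare a type are left untouched:

```python
import re

def add_default_type(rule, default="string"):
    # '<param>' -> '<string:param>'; '<string:param>' stays unchanged
    return re.sub(r"<([^:>]+)>", r"<%s:\1>" % default, rule)

add_default_type('/<param>/<another>')  # '/<string:param>/<string:another>'
```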
skforecast/skforecast | scikit-learn | 391 | Prophet | Hi, is Prophet supported?
I see they declare an sklearn-compatible API.
https://facebook.github.io/prophet/docs/quick_start.html
| closed | 2023-04-08T04:55:37Z | 2023-04-09T19:46:02Z | https://github.com/skforecast/skforecast/issues/391 | [
"help wanted",
"question"
] | AVPokrovsky | 1 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 549 | Error importing the trained model when running predict | Hello, I used your Faster R-CNN code. I organized my own dataset following the standard VOC folder layout, and running the train_res50_fpn.py script successfully produced weight files for 15 epochs (with the number of detection classes correctly specified). However, when running the predict.py prediction script, the step of loading my own model fails. I have looked up a lot of material but could not solve this problem, so I am reaching out to you this way.


| closed | 2022-05-16T15:22:26Z | 2023-10-25T00:25:06Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/549 | [] | superYeah123 | 0 |
explosion/spaCy | nlp | 13,374 | Problems converting Doc object to/from json | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
I'm having problems converting `Doc`s to and from json. (I'm also getting errors with pickling in some cases, but the json problem is more reproducible.)
The docstring for `.from_json()` says that it takes a dict, but when I send a dict generated by `to_json()`, a `doesn't apply to a 'dict' object` error is raised.
Perhaps I'm using the methods wrongly, but I'm not sure how.
TIA
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
Python 3.11.4 (v3.11.4:d2340ef257, Jun 6 2023, 19:15:51) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
>>> import spacy
>>> nlp = spacy.load('en_core_web_sm')
>>> d=nlp("Dave saw Mary.")
>>> d
Dave saw Mary.
>>> j=d.to_json()
>>> spacy.tokens.doc.Doc.from_json(j)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: descriptor 'from_json' for 'spacy.tokens.doc.Doc' objects doesn't apply to a 'dict' object
>>>
```
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
## Info about spaCy
- **spaCy version:** 3.7.4
- **Platform:** macOS-14.3.1-x86_64-i386-64bit
- **Python version:** 3.11.4
- **Pipelines:** en_core_web_sm (3.7.1) | closed | 2024-03-13T09:55:09Z | 2024-03-26T13:29:42Z | https://github.com/explosion/spaCy/issues/13374 | [
"feat / serialize",
"feat / doc"
] | undercertainty | 1 |
reiinakano/scikit-plot | scikit-learn | 119 | Bug: Failing to import Scikitplot fails due to ImportError | Hey @reiinakano,
first, your work was a great addition to the open-source ML community and is still used. 👍
Unfortunately, importing a freshly installed scikit-plot now fails with the following ImportError:
```
import scikitplot
print("Hello World")
```
>>> import scikitplot
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\test-scikitplot\skplot\Lib\site-packages\scikitplot\__init__.py", line 2, in <module>
from . import metrics, cluster, decomposition, estimators
File "D:\test-scikitplot\skplot\Lib\site-packages\scikitplot\metrics.py", line 27, in <module>
from scipy import interp
ImportError: cannot import name 'interp' from 'scipy' (D:\test-scikitplot\skplot\Lib\site-packages\scipy\__init__.py)
So what fails is https://github.com/reiinakano/scikit-plot/blob/2dd3e6a76df77edcbd724c4db25575f70abb57cb/scikitplot/metrics.py#L27
The `interp` function can be imported from numpy instead.
This bug is severe, because it affects all packages trying to import scikit-plot at the moment.
I would highly appreciate you looking into this! Thanks a lot and best wishes! | open | 2024-01-22T10:19:08Z | 2024-04-10T13:18:08Z | https://github.com/reiinakano/scikit-plot/issues/119 | [] | radlfabs | 4 |
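A workaround sketch for affected users until the package is patched (assuming numpy is installed, which scikit-plot already requires): `numpy.interp` is a drop-in replacement for the alias that SciPy removed.

```python
import numpy as np

# `from scipy import interp` fails on modern SciPy because the alias was
# removed; numpy.interp is the same piecewise-linear interpolation function
# that scikit-plot uses when averaging ROC curves.
interp = np.interp

value = float(interp(0.5, [0.0, 1.0], [0.0, 2.0]))  # 1.0
```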
SALib/SALib | numpy | 643 | Grouping of parameters yields NaN sigma values. | Hello there,
firstly thank you for setting up SALIB, it's a great and intuitive tool!
Currently, I am working on a model which includes wind and solar profiles as well as electricity market prices which change hourly. I made dynamic bounds for every hour for each parameter. In my analysis I group these parameters into a single parameter (e.g. cf_wind) over the time horizon of the model. However, their sigma comes out as NaN, while the sigma of mu_star does have a value. I attached a screenshot:

So far, in hours like midnight where there is never sun over a year, I add a very tiny range. I am wondering if maybe the inclusion of bounds for these time-steps is the problem here. Also the default-method of morris I am using allows the grouped parameters to take a step in opposite directions. They might cancel each other out as well.
I am not quite sure and hope your feedback could help me identify the next steps.
Kind regards! | closed | 2024-12-12T08:46:28Z | 2024-12-12T14:21:55Z | https://github.com/SALib/SALib/issues/643 | [] | JMH-gif | 2 |
TencentARC/GFPGAN | deep-learning | 245 | error: (-215:Assertion failed) !buf.empty() in function 'imdecode_' | First I met the error as follow:

Then I add int at ’quality‘ as [https://github.com/TencentARC/GFPGAN/issues/93](url)
But I got another error as follow:

| closed | 2022-08-29T14:45:47Z | 2022-09-16T12:14:21Z | https://github.com/TencentARC/GFPGAN/issues/245 | [] | dongfengxijian | 1 |
jmcnamara/XlsxWriter | pandas | 913 | Bug: Adding a lot of pictures to the workbook causes a picture error | ### Current behavior
I add multiple different images, but two of the inserted pictures come out identical.

### Expected behavior
Different image data shows different image content
### Sample code to reproduce
```python
import io
import qrcode
from xlsxwriter import Workbook
def get_io_qr(qr_str, version=5, box_size=2, border=1) -> io.BytesIO:
    qr = qrcode.QRCode(version=version, box_size=box_size, border=border)
    qr.add_data(qr_str)
    qr_data = io.BytesIO()
    qr.make(fit=True)
    img = qr.make_image()
    img.save(qr_data)
    return qr_data

with Workbook('test.xlsx') as workbook:
    worksheet = workbook.add_worksheet('test')

    label_str = '1137249488/6000|KMC51030079V001|KM51006110V001|1'
    qr_data = get_io_qr(label_str, version=4, box_size=2)
    # qr_data.getvalue() = b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00F\x00\x00\x00F\x01\x00\x00\x00\x00YU\xdf)\x00\x00\x013IDATx\x9cmR1J\x04A\x10\xac\xbd\xd9\xe4`d6\x15L\x0e6=\x18\x18\xc1d\x8f\xf1\x01\xf7\x18S\xc1\xdc\x07\xf8\x12\x7f02\x86\xbd\xf4\x82\xe9\x81\xa1\xe9,+\x98\xcc0\x06\x82\xd7+vXtuU\x17\xd5T\xfcL\xd9\xe0w\x02\xb2\xe1H\xd0glc\xf2\xb4\xeb5j\x10X\xe9\xf0t\xe8\xf1\x02\x81\x01j\xe0b\xb1\xc6z\x1c\xd0I\xac\xe1\xbd\xc2\xa8\x8a\x1b\xcfX;\xb7\x04\x144B\x17\xb5\xb2\xc9\x95k\x16\xf7\xa6R\xfav\xda\xcd\xb7\xd2\xb3\xa9\xc6\x18E$\xee\xb92\xdb\xf2\x88\xfe(\xb8\xa7\xf8^\xde\xb6\xf7\xa7Ip\x93\x0e\t\xc6\xd6A\xde\x8bI3a\x91^\x16\xadRr\xd1f\x89U\xa7\xfc0\x90\xd8kq\x111\x1f\x97\xd7$\xf6"{\x1b-\xb2\xe4&\xcb\xce;kH\xfef@\xcfZ\xed\xad\xd4\xc8n`\xf6Q\xcb\xfcP\xec\xf4e\xc7\x07\xb1Gy\t:\xd0\xca\x1f\x19\xa8\\y\xa5\x1b(\x9aA\xc1\xfc\xc99z\xe6\xa0\xd7=\xc8:\xea(s\xe1\\\x17_Y\xfe\xbb)\x1d.\xcb\xf5|\x93d\x06(\x1f\xa7O\xbd\xed\xd6\xdd\xb8\x9a\xb7w\xb2\x07\xe0\x9c\xac\xd6\x9eV\xfe@.\xd8\xe8\x84n\xf3OO\xbf\x01\x0c\x14\x92\x9dA\x96\x0f]\x00\x00\x00\x00IEND\xaeB`\x82'
    worksheet.insert_image(0, 0, label_str,
                           {'image_data': qr_data, 'x_offset': 5, 'y_offset': 5})

    label_str = '1137249493/6000|KMC51030079V001|KM51123515V000|1.0'
    qr_data = get_io_qr(label_str, version=4, box_size=2)
    # qr_data.getvalue() = b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00F\x00\x00\x00F\x01\x00\x00\x00\x00YU\xdf)\x00\x00\x013IDATx\x9cm\x92\xc1j\xc30\x0c\x86\xe5$0\x06\x02\xe7:\xd8\xa5\xd0\xeb\xc0#\x85^\x1c\\\xd8u\xaf\xb6\x87)\xec\x01R<\x18\x03\x15\xfb:\xd8C8\xd4\xd0\x8b\x8dv\x18,\xca\x98\x8e\x1f?\xfa\x84$\xc5\xf0S\xb5\x81\xdf\x9a\xa0\xb8\xe0\tpa\x9d.\xef\xc7\xf6\xab\xf2yaM\xed\xf1i\xdc\xc2\t\x04\x03\xc8\x14\xaa\x8153q\x84^2\x08\x85\x1d\x05b\x12\x8e\xb9\xa3\x17\xa8\xa0\x84\x17\x98I\x17\x0e\\D\xbfX!wq3\x1f\x16\xa6\xc2a\xa3#\xe4\x0f\xe9H%\xb1\xf5!Y\xc9\xf4\x84:\x18Z9\xcc\xc5\x806,r\xca\xdb8\xeb\xba\xbb\xdc\xc9\x9c%J\x83\x071\x0b\x14\xed\xc8Y\xbb\xea\xe7\xa9\xe0\x84\xecE\xae3\xb0\xf3y\xab\x8a\x9c\xcf\x97\x00n\xb0\x9a\xe4No\x1e\xf1\x88\xd7\x07#w\x05L\x81\x9d\x17\x8e&s\xdd\xd7\xb7\xd7*sDH\x98J\x12^EW\x1cS\x9bo\xa3\xbc/\x9ab[\xd0\xf2\xbe\xeer\xba\xb6\x04\xf3\xb3p\x9c\x94\x81s\x1e[\xe1ht\x81:X/\xfb5\xb5\x87\xfb\xda\xcf\xfb$g\xc1\x84ap\x88\xab?\xd0i\xfe\x8c\x7f\xfe\xa0xDGr\xa7\x13L<\x19?\xc8{\xfc\xf3\xa7\xdf\xc6\x99\x9a\x1c\x19\x0f\x96\x05\x00\x00\x00\x00IEND\xaeB`\x82'
    worksheet.insert_image(10, 0, label_str,
                           {'image_data': qr_data, 'x_offset': 5, 'y_offset': 5})
```
### Environment
```markdown
- XlsxWriter version:3.0.2
- Python version:3.8.10
- Excel version:office2019
- OS:windows 10 21H2
```
### Any other information
_No response_
### OpenOffice and LibreOffice users
- [X] I have tested the output file with Excel. | closed | 2022-10-11T05:08:11Z | 2022-10-11T14:57:23Z | https://github.com/jmcnamara/XlsxWriter/issues/913 | [
"bug",
"awaiting user feedback"
] | ye-xue | 15 |
anselal/antminer-monitor | dash | 132 | [ERROR] Invalid username or password | It gives me error on v0.5
Any idea how to solve it?
| closed | 2018-10-05T10:15:07Z | 2018-10-05T10:19:17Z | https://github.com/anselal/antminer-monitor/issues/132 | [
":octocat: help wanted"
] | papampi | 2 |
polakowo/vectorbt | data-visualization | 320 | Best practice to install vectorbt on Apple silicon (M1) | Dear Oleg,
Thank you for this excellent tool!
You mentioned in the docs that you are running vectorbt on an M1 MacBook Air.
I usually use pyenv with the venv module from the standard library and pip.
Since numba/llvmlite is not pip installable on the M1, I would be very interested in how you set up your python environment.
| closed | 2022-01-05T09:56:04Z | 2025-02-06T21:23:49Z | https://github.com/polakowo/vectorbt/issues/320 | [] | 1081 | 5 |
zappa/Zappa | flask | 549 | [Migrated] Unable to access json event data | Originally from: https://github.com/Miserlou/Zappa/issues/1458 by [joshlsullivan](https://github.com/joshlsullivan)
Hi there, when I deploy Zappa, I'm unable to access json data from the Lambda event. If I print the event data, this is what I get:
`[DEBUG] 2018-03-24T14:40:37.991Z 517bfc13-2f71-11e8-9ff3-ed7722cf9e11 Zappa Event: {'eventVersion': '1.0', 'eventName': 'edit_client_event', 'eventArgs': {'jobUUID': 'a5aa3a03-b290-4469-b7ce-711045a57dfb'}, 'auth': {'accountUUID': 'ce9fee13-3327-4bf2-9eb9-89930316690b', 'staffUUID': 'd5b495e7-e3ec-45ff-8ca6-214bfacd13cb'}}`
Here's how I was able to access the JSON data before deploying Zappa:

```python
def lambda_handler(event, context):
    print(event)
    job = event['eventArgs']['jobUUID']
```
Any ideas? | closed | 2021-02-20T12:22:36Z | 2024-04-13T16:37:17Z | https://github.com/zappa/Zappa/issues/549 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
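As a plain-Python sketch (no Zappa specifics), nested lookups like the one in the issue above can be made defensive with `dict.get`, which returns `None` instead of raising when a key is absent:

```python
# Defensive access to the nested event payload shown in the issue above.
def extract_job_uuid(event):
    # dict.get returns None instead of raising KeyError on missing keys
    return event.get("eventArgs", {}).get("jobUUID")

event = {
    "eventVersion": "1.0",
    "eventName": "edit_client_event",
    "eventArgs": {"jobUUID": "a5aa3a03-b290-4469-b7ce-711045a57dfb"},
}
job = extract_job_uuid(event)  # "a5aa3a03-b290-4469-b7ce-711045a57dfb"
```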
sergree/matchering | numpy | 8 | CAN'T SAVE TO A DIFFERENT FOLDER in DJANGO |
Request Method: GET
Request URL: http://127.0.0.1:8000/dashboard/track/8/master
Django Version: 3.0.2
Exception Type: RuntimeError
Exception Value: Error opening './media/goody/mastered/my_song_master_16bit.wav': System error.
Exception Location: /home/goodness/.local/share/virtualenvs/MeshakProj--GI6wqXg/lib/python3.6/site-packages/soundfile.py in _error_check, line 1357
Python Executable: /home/goodness/.local/share/virtualenvs/MeshakProj--GI6wqXg/bin/python
Python Version: 3.6.9
Python Path: ['/home/goodness/Documents/django_dev/MeshakProj', '/home/goodness/.local/share/virtualenvs/MeshakProj--GI6wqXg/lib/python36.zip', '/home/goodness/.local/share/virtualenvs/MeshakProj--GI6wqXg/lib/python3.6', '/home/goodness/.local/share/virtualenvs/MeshakProj--GI6wqXg/lib/python3.6/lib-dynload', '/usr/lib/python3.6', '/home/goodness/.local/share/virtualenvs/MeshakProj--GI6wqXg/lib/python3.6/site-packages']
Server time: Thu, 30 Jan 2020 10:52:20 +0100
| closed | 2020-01-30T09:55:34Z | 2020-02-03T13:59:18Z | https://github.com/sergree/matchering/issues/8 | [] | GoodnessEzeokafor | 8 |
smarie/python-pytest-cases | pytest | 29 | pytest_fixture_plus does not seem to work with pytest.param parameters | First of all let me thank you for the nice `pytest_fixture_plus`, I am really excited about it.
The problem I am having can be seen in the following example.
This works:
```
from pytest_cases import pytest_fixture_plus as fixture
import pytest
@fixture
@pytest.mark.parametrize("arg1, arg2", [
(1,2),
(3,4)
])
def myfix(arg1, arg2):
return arg1, arg2
@pytest.mark.parametrize("arg3, arg4", [
pytest.param(10,20),
pytest.param(30,40)
])
def test_one(myfix, arg3, arg4):
print(myfix)
```
```
test_one.py::test_one[1-2-10-20] PASSED [ 25%]
test_one.py::test_one[1-2-30-40] PASSED [ 50%]
test_one.py::test_one[3-4-10-20] PASSED [ 75%]
test_one.py::test_one[3-4-30-40] PASSED [100%]
```
But if we enclose the fixture parameters using `pytest.param`, it does not anymore:
```
from pytest_cases import pytest_fixture_plus as fixture
import pytest
@fixture
@pytest.mark.parametrize("arg1, arg2", [
pytest.param(1,2),
pytest.param(3,4)
])
def myfix(arg1, arg2):
return arg1, arg2
@pytest.mark.parametrize("arg3, arg4", [
pytest.param(10,20),
pytest.param(30,40)
])
def test_one(myfix, arg3, arg4):
print(myfix)
```
```
test_one.py:8: in <module>
pytest.param(3,4)
..\venv368\lib\site-packages\decopatch\main.py:349: in new_decorator
return call_in_appropriate_mode(impl_function, dk, disambiguation_result)
..\venv368\lib\site-packages\decopatch\utils_calls.py:34: in call_in_appropriate_mode
return no_parenthesis_usage(impl_function, dk.first_arg_value)
..\venv368\lib\site-packages\decopatch\utils_calls.py:71: in no_parenthesis_usage
return decorator_function()(decorated)
..\venv368\lib\site-packages\decopatch\utils_modes.py:129: in _apply_decorator
return user_provided_applier(*args, **kwargs)
..\venv368\lib\site-packages\pytest_cases\main.py:299: in pytest_fixture_plus
raise ValueError("Internal error - unsupported pytest parametrization+mark combination. Please "
E ValueError: Internal error - unsupported pytest parametrization+mark combination. Please report this issue
```
| closed | 2019-03-21T20:07:42Z | 2019-03-22T16:57:47Z | https://github.com/smarie/python-pytest-cases/issues/29 | [] | Sup3rGeo | 4 |
awtkns/fastapi-crudrouter | fastapi | 119 | Custom ORM | So. I have a ORM which I'd like to use, and to intigrate.
It's a homegrown ORM for the fun of it, but I like my work, so I wanna use it here.
I however couldn't find any section in the docs on how to implement an adapter for your own. | open | 2021-11-19T14:22:45Z | 2021-11-19T14:22:55Z | https://github.com/awtkns/fastapi-crudrouter/issues/119 | [] | luckydonald | 0 |
serpapi/google-search-results-python | web-scraping | 61 | SSLCertVerificationError [SSL: CERTIFICATE_VERIFY_FAILED] error | A user reported receiving this error:
```
SSLCertVerificationError Traceback (most recent call last)
/opt/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
698 # Make the request on the httplib connection object.
--> 699 httplib_response = self._make_request(
700 conn,
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)
```
The solution for them was to turn off the VPN. | closed | 2023-11-22T01:26:54Z | 2023-11-22T01:27:01Z | https://github.com/serpapi/google-search-results-python/issues/61 | [] | hilmanski | 1 |
shibing624/text2vec | nlp | 66 | Can it be used commercially? | As the title says: can the model be used commercially? We plan to use it in the search domain, serving our own business.
| closed | 2023-05-10T01:26:43Z | 2023-05-10T02:25:35Z | https://github.com/shibing624/text2vec/issues/66 | [
"question"
] | bh4ffu | 1 |
ploomber/ploomber | jupyter | 783 | SQLUpload constructor should validate that the client is a sqlalchemyclient | Otherwise, it will fail when trying to access the `engine` attribute, which is not a clear error | closed | 2022-05-17T17:31:49Z | 2022-09-06T01:57:19Z | https://github.com/ploomber/ploomber/issues/783 | [] | edublancas | 0 |
qubvel-org/segmentation_models.pytorch | computer-vision | 668 | ImportError: cannot import name 'get_source_inputs' from 'keras.engine' (/usr/local/lib/python3.7/dist-packages/keras/engine/__init__.py) | When I try to import the library in Google Colab, it raises an error. | closed | 2022-10-09T23:43:24Z | 2022-12-17T01:57:16Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/668 | [
"Stale"
] | Abecadarian | 2 |
plotly/plotly.py | plotly | 4,699 | Declare `kaleido` as an optional dependency | When calling `write_image` on a figure, it is required that `kaleido` is installed in the active environment. It would be great if this dependency were declared in pyproject.toml as an optional group.
For example, here is the approach taken in pyvista:
https://github.com/pyvista/pyvista/blob/e1401a34cbd281bbe74bce8dccbb78b85ab36dc4/pyproject.toml#L40-L59
This would improve ergonomics for packages that depend on plotly, since `kaleido` is (from their perspective) an implementation detail. Such a package could then depend on `plotly[io]`, for example. | open | 2024-07-29T09:26:51Z | 2024-08-13T13:26:53Z | https://github.com/plotly/plotly.py/issues/4699 | [
"feature",
"P3",
"infrastructure"
] | tpgillam | 0 |
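Until such an extra exists, a dependent package can at least detect the missing backend up front (a sketch; `save_figure` is a hypothetical wrapper, not part of plotly's API):

```python
import importlib.util

def has_kaleido():
    # Detect the optional static-image backend without importing it
    return importlib.util.find_spec("kaleido") is not None

def save_figure(fig, path):
    # Hypothetical guard a dependent package might add around write_image
    if not has_kaleido():
        raise RuntimeError("static image export requires the 'kaleido' package")
    fig.write_image(path)
```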
snarfed/granary | rest-api | 23 | instagram: implement location | [evidently i left it as a TODO](https://github.com/snarfed/activitystreams-unofficial/blob/master/instagram.py#L207) and never came back to it. :P
ideally, fixing this should make IG photo map locations show up visibly in my feed reader via https://instagram-atom.appspot.com/
| closed | 2015-01-06T18:00:34Z | 2015-01-07T04:09:37Z | https://github.com/snarfed/granary/issues/23 | [] | snarfed | 2 |
home-assistant/core | asyncio | 140,510 | SolaX Power 'measurement' but 'last_reset' is missing | ### The problem
The SolaX integration itself works fine, but the energy dashboard configuration reports the following error: the sensor has state class 'measurement' but 'last_reset' is missing. This is the same problem as #127805.
The X3.0 stick has been tested with various firmware versions. Unfortunately, the results are always the same.

### What version of Home Assistant Core has the issue?
core-2025.3.1
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
SolaX Power Firmware: 3.006.04
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/solax
### Diagnostics information
No diagnostics information available.
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
I didn´t find any solutions for that problem.... | open | 2025-03-13T12:25:05Z | 2025-03-13T12:25:16Z | https://github.com/home-assistant/core/issues/140510 | [
"integration: solax"
] | Tonestep | 1 |
cvat-ai/cvat | pytorch | 8,386 | Delay to check my assessment | Hello, it has been a month since I submitted my training assessment and I have not heard anything from you. | closed | 2024-09-02T06:09:21Z | 2024-09-02T06:26:33Z | https://github.com/cvat-ai/cvat/issues/8386 | [] | peterizcrisp | 0 |
microsoft/nni | deep-learning | 5,685 | Removing redundant string format in the final experiment log | ### This is a small but very simple request.
In the final experiment summary JSON generated through the NNI WebUI, there are some fields that were originally dictionaries that have been reformatted into strings. This is a small but annoying detail and probably easy to fix.
Most notably, this happens for values in the entry 'finalMetricData', which contains the default metric for the trial. When more than just the default metric are being tracked however, for example when a dictionary of metrics is added at each intermediate and final metric recordings, the value of the 'finalMetricData' field may look something like this:
`'"{\\"train_loss\\": 1.2782151699066162, \\"test_loss\\": 0.9486784338951111, \\"default\\": 0.5564953684806824}"'`
when it should simply be
```
{'train_loss': '1.2782151699066162',
'test_loss': '0.9486784338951111',
'default': '0.5564953684806824'}
```
I've reformatted it with these simple two lines:
```
keys_values = log['trialMessage'][0]['finalMetricData'][0]['data'].replace('"', '').replace(': ', '').replace(', ', '').strip('{}').split('\\')
reformatted = {k: v for k, v in zip(keys_values[1::2], keys_values[2::2])}
```
It would be quite nice and save unnecessary reprocessing if this could just be a regular JSON dictionary and not a stringified dictionary :)
### Reproducing this:
After downloading the experiment summary as a json, the following code would reproduce the above behavior (if the trial includes a multitude of metrics collected in a dict as opposed to just the default metric being recorded):
```
with open('path_to_experiment_json') as f:
    log = json.load(f)
print(log['trialMessage'][0]['finalMetricData'][0]['data'])
> '"{\\"train_loss\\": 1.2782151699066162, \\"test_loss\\": 0.9486784338951111, \\"default\\": 0.5564953684806824}"'
```
A similar thing goes for the field `hyperParameters` field in each trial message, which is also a stringified dictionary.
```
log['trialMessage'][0]['hyperParameters']
> ['{"parameter_id":0,"parameter_source":"algorithm","parameters":{"batch_size":64,"seed":2,"steps":5000,"n_batches":1000,"linear_out1":512,"linear_out2":128,"conv2d_ks":2,"conv2d_out_channels":1},"parameter_index":0}']
``` | open | 2023-09-26T12:19:49Z | 2023-09-26T12:28:22Z | https://github.com/microsoft/nni/issues/5685 | [] | olive004 | 0 |
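If the value is, as it appears, a JSON object that was JSON-encoded a second time, two passes of `json.loads` recover the dictionary without any string surgery (a sketch with a shortened example value):

```python
import json

# Shortened stand-in for the doubly encoded 'finalMetricData' value above
raw = '"{\\"train_loss\\": 1.278, \\"default\\": 0.556}"'

inner = json.loads(raw)      # first pass yields the inner JSON text
metrics = json.loads(inner)  # second pass yields the actual dict
```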
pydata/bottleneck | numpy | 125 | Where should we host the sphinx manual? | I'd like to move the sphinx manual. Where should we host it? Github? Readthedocs?
| closed | 2016-06-01T19:21:08Z | 2016-09-30T20:57:39Z | https://github.com/pydata/bottleneck/issues/125 | [] | kwgoodman | 5 |
skypilot-org/skypilot | data-science | 4,348 | [Serve] Fall back to latest ready version when detects unrecoverable failure | > Hi @alita-moore ! Thanks for reporting this. We do have some checks against unrecoverable error and stop scaling after detect such error. However, the case you described in the PR is a little bit complicated - it turns to READY first, and then NOT_READY, making it very similar to network transient error.
> https://github.com/skypilot-org/skypilot/blob/914328acb8269d79e304ad891f84d220e077565c/sky/serve/autoscalers.py#L409-L416
> Though, I think there are two things we should do:
> 1. Instead of "stop scaling", we should fallback to the latest ready version (or at least have a feature flag to enable this);
> 2. Implement some failure-cnt based method to detect such special case of error (e.g. if it fails on 10+ replicas, it is unlikely a netrowk transient error).
> Thanks for reporting this! We'll keep to work on this. LMK if you have other suggestions.
_Originally posted by @cblmemo in https://github.com/skypilot-org/skypilot/issues/4312#issuecomment-2474661242_
| open | 2024-11-13T20:59:01Z | 2024-12-19T23:08:53Z | https://github.com/skypilot-org/skypilot/issues/4348 | [] | cblmemo | 0 |
Netflix/metaflow | data-science | 1,919 | Silent failure to trigger Argo Workflow from CLI | If the workflow exists in the given namespace, it seems that `python flow.py argo-workflows trigger` can yield this output: `Workflow 'foo' triggered on Argo Workflows (run-id 'bar').`, even if the workflow was not triggered.
We use port forwarding to connect to our Argo Workflows server. If someone forgets to port-forward and attempts to run this command, the output is misleading because a run will not get triggered. I confirmed that I was able to create the run as expected when I used port forwarding.
widgetti/solara | jupyter | 868 | Solara Dev Documentation is Buggy | **Issue:**
When I go to [solara docs](https://solara.dev/documentation/), I cannot immediately scroll on the web page. I see the left side panel open and the content, but I can not scroll. Sometimes, when the page is loading, I noticed that I could scroll, but then a quick "flash" of a grey popup shows and disappears, and afterwards I cannot scroll again.
However, whenever I click on the content itself, the sidebar collapses (with no clear way to open again), and the page becomes scrollable.
**Ideal State:**
There are a few adjustments that need to be made:
- Whenever the doc pages first load (the issue seems to affect all doc pages), the page should be scrollable, even after the page finished loading.
- Whenever I click on the content and the left sidebar collapses, there should be a button that can open the sidebar back up again.
- Some pages seem to extend too far to the right, and there's no way to scroll horizontally, so the content is cut off. Each page should be properly contained within the page size.
| open | 2024-11-21T16:03:56Z | 2024-11-22T09:47:55Z | https://github.com/widgetti/solara/issues/868 | [
"documentation"
] | jonkimdev | 1 |
scikit-learn-contrib/metric-learn | scikit-learn | 98 | Can this library be used for similarity metric learning | I have a set of vectors and for each pair of these vectors I have a distance (which is not Euclidean). I would like to embed the vectors into Euclidean space so that they are more likely to be close in R^d if they are close under the original measure of distance. I believe this is called similarity metric learning.
Is there a way to use this metric-learn library for this setup? It seems from the examples that you need a label for each vector, rather than a distance for each pair as input. Is there some way to get round this? | closed | 2018-06-26T08:41:07Z | 2018-07-04T08:51:05Z | https://github.com/scikit-learn-contrib/metric-learn/issues/98 | [] | lesshaste | 5 |
manbearwiz/youtube-dl-server | rest-api | 109 | Update function fails due to color in pip output | Update function fails due to color in pip output that is not expected since `print(output.decode("ascii"))` is used:
```
File "/usr/src/app/./youtube-dl-server.py", line 141, in <module>
update()
File "/usr/src/app/./youtube-dl-server.py", line 75, in update
print(output.decode("ascii"))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 177: ordinal not in range(128)
```
I believe the argument `--no-color` should be added to prevent this issue.
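A sketch of both fixes (illustrative bytes, not real pip output): decode with UTF-8 and a replacement fallback rather than ASCII, in addition to passing `--no-color` so pip emits no ANSI sequences in the first place:

```python
# Stand-in for pip output containing a multi-byte UTF-8 character (0xe2 ...)
pip_output = "Successfully installed \u2192 youtube-dl".encode("utf-8")

# pip_output.decode("ascii") would raise UnicodeDecodeError on 0xe2
text = pip_output.decode("utf-8", errors="replace")
```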
lepture/authlib | django | 444 | Confusing behavior with OAuth2Session and state not being checked | **Describe the bug**
In the [documentation](https://docs.authlib.org/en/latest/client/oauth2.html#fetch-token) for how to use OAuth2Session client, it says that by supplying state when instantiating the object, then state will be checked when making the `fetch_token` request. In addition, the [docstring](https://github.com/lepture/authlib/blob/v1.0.0/authlib/oauth2/rfc6749/parameters.py#L131) for `parse_authorization_code_response` says that state is a required parameter when state is present in the client authorization request, but the [code](https://github.com/lepture/authlib/blob/v1.0.0/authlib/oauth2/rfc6749/parameters.py#L154) doesn't enforce that. Instead, it skips the check for state unless the user explicitly passes the state kwarg into the call to `fetch_token`. This leads to misleading behavior, where state is not actually checked.
**Error Stacks**
None
**To Reproduce**
We know there is a Flask OAuth client, and our example below doesn't use it, but uses Flask to create an easy, reproducible example. In our real app, we are using OAuth2Session client and not using Flask.
```python
import flask
import authlib.integrations.requests_client

app = flask.Flask(__name__)

@app.route('/')
def index():
    client = _client()
    uri, _ = client.create_authorization_url(
        'https://github.com/login/oauth/authorize',
        '<your server ip address>:8000/auth-github-authorize'
    )
    return flask.redirect(uri)

@app.route('/auth-github-authorized')
def auth_github_authorized():
    # FIXME: Supplying state here doesn't make a difference. It isn't checked.
    client = _client(state='a totally made up state')
    client.fetch_token(authorization_response=flask.request.url)
    raise AssertionError('Should not have gotten here. State is invalid.')

def _client(state=None):
    return authlib.integrations.requests_client.OAuth2Session(
        '<your-github-oauth-key>',
        '<your-github-oauth-secret>',
        scope='user:email',
        state=state,
        token_endpoint='https://github.com/login/oauth/access_token',
    )

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
**Expected behavior**
authlib.oauth2.rfc6749.errors.MismatchingStateException should be raised.
**Environment:**
- OS: Fedora 32
- Python Version: 3.7.2
- Authlib Version: 1.0.0
| closed | 2022-03-22T16:34:48Z | 2022-07-02T19:31:51Z | https://github.com/lepture/authlib/issues/444 | [
"bug"
] | rorour | 2 |
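Until the library enforces the check, an application can perform the state comparison explicitly (a standard-library sketch; the URL and state values are made up):

```python
from urllib.parse import urlparse, parse_qs

def verify_state(authorization_response, expected_state):
    # Compare the 'state' query parameter against the value we issued
    query = parse_qs(urlparse(authorization_response).query)
    returned = query.get("state", [None])[0]
    if returned != expected_state:
        raise ValueError("mismatching state")
    return True

url = "http://127.0.0.1:8000/auth-github-authorized?code=abc123&state=xyz"
verify_state(url, "xyz")      # passes
# verify_state(url, "other")  # would raise ValueError
```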
SALib/SALib | numpy | 35 | Is there an API for Method of Morris? | This isn't an issue but a question or two. Is there an API for interfacing with SALib.sample.morris and SALib.analyze.morris, without the need to read and write parameter files, sample files, and model results? Is it possible to get the analysis results directly without using the operating system to write it to a file and then read it later?
| closed | 2015-01-23T03:09:57Z | 2015-01-29T16:21:08Z | https://github.com/SALib/SALib/issues/35 | [
"enhancement"
] | kdavies4 | 3 |
vitalik/django-ninja | rest-api | 333 | Dynamic response schema based on request parameters | Hello,
Is it currently possible to implement functionality similar to [djangorestframework-queryfields](https://github.com/wimglenn/djangorestframework-queryfields) with Django Ninja? That is, given a request parameter parameter `?fields=field1,field2` or `?exclude_fields=field3`, would it be possible to dynamically change the schema of the response?
The biggest reason you might want to do this is with a heavy endpoint that produces a great deal of data, you may want to avoid as much serialization cost as possible, and potentially restrict database IO to only the fields you care about (djangorestframework-queryfields cannot do this out of the box but with medium effort and a tolerance for hackiness you can implement it). | open | 2022-01-22T00:03:38Z | 2025-03-08T08:16:03Z | https://github.com/vitalik/django-ninja/issues/333 | [] | dralley | 10 |
slackapi/bolt-python | fastapi | 1,092 | Can't seem to pass back errors for modal field during `.action()` callback method execution | I'm trying to build a modal with a simple input block, and pass back errors to the user after an API call. It looks something like this:
```python
# This is what the input is described in the view
## Note: this is an excerpt, the modal + input renders fine. These are
## built with the slack_sdk's models here: https://slack.dev/python-slack-sdk/api-docs/slack_sdk/models/blocks/index.html
blocks.InputBlock(
label="ID(s)",
block_id="ids",
dispatch_action=True,
element=blocks.PlainTextInputElement(
action_id="contents",
),
),
# ...
# Elsewhere, I have defined a callback function that gets hit when the user hits "enter" after
# typing something in the above input field. This method is registered as an "action" for this the
# above block ID. That looks something like the following (simplified for brevity):
async def check_ids(
ack: AsyncAck,
body: BodyViewDict,
action: dict,
client: AsyncWebClient,
) -> None:
try:
response = httpx.get("/some/endpoint/that/will/fail").raise_for_status()
except httpx.HTTPStatusError as err:
errors: dict = {}
match err.response.status_code:
case 404:
# This is DEFINITELY being executed, I can see it while debugging
errors[action["block_id]] = "foo"
case _:
pass
await ack(response_action="errors", errors=errors)
return
# do something with the response here if we didn't run into any problems
ack ()
# Here we make sure that it responds to the input with the above method
app.action({"block_id": "ids", "action_id": "contents"})(self.check_ids)
```
The API call itself isn't the point, I can debug the code and see everything being executed perfectly fine. However, when the `ack(response_action="errors", errors=errors)` method gets called, nothing in the UI gets updated, but the action is indeed ack'd. I even set breakpoints in the Slack Bolt SDK [here](https://github.com/slackapi/bolt-python/blob/e78543d854eeb3272bbc6c30d21ee02156a6f1a5/slack_bolt/context/ack/internals.py#L53) and can see that the response is being sent, and that no other errors are being reported. It just... silently doesn't do anything (except successfully `ack()` without updating the UI).
I even removed everything in `check_ids()` and just immediately respond with `await ack(response_action="errors", errors={"ids": "foo!"})` and that doesn't work. I turned on debug logging and can even see the response from Slack:
```
2024-06-12T18:53:30.489939Z [debug ] Responding with status: 200 body: "{"response_action": "errors", "errors": {"ids": "foo!"}}" (13 millis) filename=asyncio_runner.py func_name=_debug_log_completion lineno=187 module=asyncio_runner thread=140737472996544 thread_name=MainThread
```
...but again, the modal is not updated with any annotations to say that there is a problem with the field.
Can someone tell me what I'm doing wrong here? According to the documentation this feels like the right thing to do, so either something isn't documented and I'm doing it wrong or there's a problem in the Slack backend or something that I can't figure out. Any help would be very much appreciated, thanks!
### Reproducible in:
I'm using Poetry, so copy/pasting from my `poetry.lock` file here are the versions:
```
[[package]]
name = "slack-bolt"
version = "1.19.0"
description = "The Bolt Framework for Python"
optional = false
python-versions = ">=3.6"
files = [
{file = "slack_bolt-1.19.0-py2.py3-none-any.whl", hash = "sha256:810891cc110e0fb3948f26c044302ed90abda2a25e9ec1689e179da8bb2747cf"},
{file = "slack_bolt-1.19.0.tar.gz", hash = "sha256:45135b8a1dea40abeb20b9b9d1973953f6755b76388156ee218d6c61d96f992a"},
]
[package.dependencies]
slack-sdk = ">=3.25.0,<4"
[[package]]
name = "slack-sdk"
version = "3.27.2"
description = "The Slack API Platform SDK for Python"
optional = false
python-versions = ">=3.6"
files = [
{file = "slack_sdk-3.27.2-py2.py3-none-any.whl", hash = "sha256:af97158e6ac7f667e158e8036e63dc1f79db9bd36216a33c10fcc49be7c2f30c"},
{file = "slack_sdk-3.27.2.tar.gz", hash = "sha256:bb145bf2bd93b60a17cd55c05cb15868c9a07d845b6fb608c798b50bce21cb99"},
]
[package.extras]
optional = ["SQLAlchemy (>=1.4,<3)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=9.1,<13)"]
```
#### Python runtime version
Python 3.10.6
#### OS info
(Paste the output of `sw_vers && uname -v` on macOS/Linux or `ver` on Windows OS)
```
$ sw_vers && uname -v
ProductName: macOS
ProductVersion: 14.5
BuildVersion: 23F79
Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
```
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-06-12T18:57:17Z | 2024-06-14T00:52:02Z | https://github.com/slackapi/bolt-python/issues/1092 | [
"question"
] | macintacos | 5 |
sqlalchemy/alembic | sqlalchemy | 677 | Add --version argument to cli | I was looking for the alembic version in an env, and the cli does not seem to expose it.
argparse has a special [`version` action](https://docs.python.org/2/library/argparse.html#action) for this, which is supported even in Python 2, so it should be fairly easy to add.
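For reference, a minimal sketch of that argparse action (the version string here is made up):

```python
import argparse

parser = argparse.ArgumentParser(prog="alembic")
# The special 'version' action prints the version string and exits.
parser.add_argument("--version", action="version", version="%(prog)s 1.4.2")
```

Running `alembic --version` would then print `alembic 1.4.2` and exit.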
"easy",
"use case"
] | CaselIT | 1 |
Zeyi-Lin/HivisionIDPhotos | machine-learning | 192 | Calling the idphoto_crop API raises ValueError: could not broadcast input array from shape (835,596,3) into shape (835,596,4) | 
| open | 2024-10-14T08:28:19Z | 2024-10-16T09:10:42Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/192 | [] | snowhahaha | 1 |
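For context on this class of error, a minimal numpy reproduction and one possible fix (padding the 3-channel image with an alpha channel) might look like the following; this is only a guess at the cause, not HivisionIDPhotos' actual code:

```python
import numpy as np

rgba_canvas = np.zeros((835, 596, 4), dtype=np.uint8)  # 4-channel target
rgb_image = np.zeros((835, 596, 3), dtype=np.uint8)    # 3-channel source

try:
    rgba_canvas[:] = rgb_image  # reproduces the reported ValueError
except ValueError as err:
    print(err)

# One fix: append a fully opaque alpha channel before assigning.
alpha = np.full((835, 596, 1), 255, dtype=np.uint8)
rgba_canvas[:] = np.concatenate([rgb_image, alpha], axis=2)
assert rgba_canvas.shape == (835, 596, 4)
```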
LAION-AI/Open-Assistant | python | 2,769 | 中文回答过程 | 中文的回答过程中有乱码,回答完毕后,部分文本能正常展示中文,部分展示异常 | closed | 2023-04-20T05:45:38Z | 2023-04-20T06:02:53Z | https://github.com/LAION-AI/Open-Assistant/issues/2769 | [] | TotheAnts | 2 |
Ehco1996/django-sspanel | django | 11 | Could you take a look and tell me whether I'm doing something wrong? | I already reported the random-port issue.

Check-in button: after clicking check-in, a "check-in successful" dialog appears, and I click OK.
Then, every time I refresh the page, a "check-in failed" dialog pops up, once per refresh.

Profile editing:
After changing the protocol and clicking submit, refreshing the page keeps showing the "modified successfully" dialog, once per refresh.
The same happens when changing the obfuscation or the encryption method.
I hope these dialogs would not appear again after a refresh: the prompt was already shown once when the button was clicked, so refreshing should not trigger it again.
babysor/MockingBird | deep-learning | 269 | Error when running synthesizer_preprocess_audio.py for preprocessing | 
The structure is as shown above, and the corresponding alignment text files are present as well.

Data alignment text:

| closed | 2021-12-14T08:25:54Z | 2021-12-14T08:33:39Z | https://github.com/babysor/MockingBird/issues/269 | [] | Emiya0415 | 1 |
pytest-dev/pytest-xdist | pytest | 585 | Make psutil dependency optional | As https://github.com/pytest-dev/pytest-xdist/issues/583 shows, unfortunately `psutil` does not provide wheels for Linux (which is common), so in some systems `pytest-xdist` is harder to install than it should be.
I suggest we make the `psutil` dependency optional, falling back to `multiprocessing` to detect the number of CPUs when `psutil` is not installed.
ref: https://github.com/pytest-dev/pytest-xdist/pull/560 | closed | 2020-08-17T12:42:53Z | 2020-08-25T12:46:52Z | https://github.com/pytest-dev/pytest-xdist/issues/585 | [
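A rough sketch of what that fallback might look like (not the actual pytest-xdist code):

```python
def auto_num_workers() -> int:
    """Prefer psutil when available, otherwise fall back to the stdlib."""
    try:
        import psutil
    except ImportError:
        psutil = None
    if psutil is not None:
        count = psutil.cpu_count(logical=False)  # physical cores
        if count:
            return count
    import multiprocessing
    return multiprocessing.cpu_count()

print(auto_num_workers())
```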
"enhancement"
] | nicoddemus | 1 |
kennethreitz/records | sqlalchemy | 131 | Would it be possible to issue a new release? | The lazy connection feature added in fe0ed3199dd952d57bfa12ecdb6c69acd1c98ece is critical for many use cases (e.g. an API). Would you be so kind as to release a new version of records that contains it?
Thanks, Kenneth.
Charles | closed | 2018-03-13T09:46:49Z | 2019-03-17T15:08:18Z | https://github.com/kennethreitz/records/issues/131 | [
"question"
] | charlax | 7 |
pytest-dev/pytest-xdist | pytest | 462 | Test suite memory leak? | While attempting to run the tests with Python 3.7 there seems to be a memory leak. I did not have the issue on Python 2.7.
```sh
Installing collected packages: pytest-xdist
Successfully installed pytest-xdist-1.28.0
/build/pytest-xdist-1.28.0
post-installation fixup
shrinking RPATHs of ELF executables and libraries in /nix/store/qpma67a2xwsf489sr5agzh55zmp83q9r-python3.7-pytest-xdist-1.28.0
strip is /nix/store/kj9ynabqbdba10632p8yp13910bflzr6-binutils-2.31.1/bin/strip
stripping (with command strip and flags -S) in /nix/store/qpma67a2xwsf489sr5agzh55zmp83q9r-python3.7-pytest-xdist-1.28.0/lib
patching script interpreter paths in /nix/store/qpma67a2xwsf489sr5agzh55zmp83q9r-python3.7-pytest-xdist-1.28.0
checking for references to /build/ in /nix/store/qpma67a2xwsf489sr5agzh55zmp83q9r-python3.7-pytest-xdist-1.28.0...
running install tests
============================= test session starts ==============================
platform linux -- Python 3.7.4, pytest-5.1.0, py-1.8.0, pluggy-0.12.0
rootdir: /build/pytest-xdist-1.28.0, inifile: tox.ini, testpaths: testing
plugins: xdist-1.28.0, forked-1.0.2
collected 148 items / 5 deselected / 143 selected
```
Dependencies in Nix:
```
"inputDrvs": {
"/nix/store/2m2isdn9s0x6ybf7j7sm9h87ba8z62yc-python3.7-six-1.12.0.drv": [
"out"
],
"/nix/store/37bl2ylsycnyp4a0kmxf11zhylyxbk8d-python3.7-execnet-1.7.0.drv": [
"out"
],
"/nix/store/4i60rs5jq781b5qclcfdz81l0rg49xkq-python3-3.7.4.drv": [
"out"
],
"/nix/store/72y5wq26g3alkplr29hpamfn27g9lcmr-pytest-xdist-1.28.0.tar.gz.drv": [
"out"
],
"/nix/store/9fwyivx1w7dblc711qwg2hp34m2q3l9k-hook.drv": [
"out"
],
"/nix/store/gdp7vz51myy3l3mikkq9887ni9d5888m-python3.7-setuptools-41.0.1.drv": [
"out"
],
"/nix/store/h3vr9yv5k86glxi62k3pmqpybfrp0588-python3.7-pytest-5.1.0.drv": [
"out"
],
"/nix/store/kc20g0hvialmpkdb4zq8axwl3c7dd1xp-python3.7-pytest-forked-1.0.2.drv": [
"out"
],
"/nix/store/qglg08n0vnrk6n2abgwkx8k3qf91pjxs-hook.drv": [
"out"
],
"/nix/store/rdv9fzmp5mmnjyn5qnx8sp09ij754jix-bash-4.4-p23.drv": [
"out"
],
"/nix/store/rfc3lfprjylr77ikb195gi0gi1hs4128-python3.7-filelock-3.0.12.drv": [
"out"
],
"/nix/store/vgqg7ccdfr4mj8l0al6lr2b0b6pxlbks-python3.7-bootstrapped-pip-19.1.1.drv": [
"out"
],
"/nix/store/whw4llh6r16xsn4648g2k9xb250ksm83-python3.7-setuptools_scm-3.2.0.drv": [
"out"
],
"/nix/store/z7vng602pfq5pr4mg5pjvjr3d1y55h5g-stdenv-linux.drv": [
"out"
]
},
``` | open | 2019-08-18T07:17:53Z | 2019-08-18T10:26:16Z | https://github.com/pytest-dev/pytest-xdist/issues/462 | [] | FRidh | 6 |
JaidedAI/EasyOCR | deep-learning | 1,186 | Inconsistent color conversion, BGR or RGB | Looking at the source code, depending on what data type the input image is supplied as, it will get converted to either RGB or BGR.
If the image is a bytestring, then it is converted to RGB:
https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/utils.py#L745
If the image is a numpy array with 4 channels (RGBA) then it is converted BGR:
https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/utils.py#L760
If the image is a JPEG File, then it is converted to BGR:
https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/utils.py#L764
Why is this not consistent? Surely the image should be in a consistent colorspace before processing. So it is supposed to be converted to RGB or BGR? | open | 2023-12-15T00:16:33Z | 2024-03-29T06:33:33Z | https://github.com/JaidedAI/EasyOCR/issues/1186 | [] | dkbarn | 4 |
aleju/imgaug | machine-learning | 616 | normalize_shape causes ambiguities | I just spent a good hour tracking down this bug.
It turns out that when you pass the `shape` keyword argument to `KeypointsOnImage` or `BoundingBoxesOnImage` they use `normalize_shape` to preprocess the input. If `shape` is a tuple then there is no problem.
However, the issue happens when I pass `shape` as a numpy array. Instead of trusting that what I gave it represents the shape of the image, I guess it assumes I'm passing the image itself and returns the `arr.shape` attribute, which turns out to be (2,) when the ndarray is actually representing the shape (which happens often when you want to mathematically manipulate shapes).
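A self-contained illustration of the ambiguity (this toy function only mirrors the behavior described above; it is not imgaug's implementation):

```python
import numpy as np

def normalize_shape_toy(shape):
    # Tuples are trusted as shapes; ndarrays are assumed to be images.
    if isinstance(shape, tuple):
        return shape
    return shape.shape

assert normalize_shape_toy((480, 640, 3)) == (480, 640, 3)
# A shape held in an ndarray is silently misread as a (3,) "image":
assert normalize_shape_toy(np.array([480, 640, 3])) == (3,)
```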
I'm going to guess that this was probably put in there as a convenience, but I found the behavior unintuitive. I'm guessing others might as well. It might be a good idea to discourage the passing of ndarrays as the shape parameter. Because this feature has already been released, I don't think it would be a good idea to backtrack on it, but it may be a good idea to raise a warning whenever the user passes a ndarray into normalize_shape saying that they should be passing the shape tuple instead.
If the maintainers of the repo really like the current behavior of normalize shape and want to keep it that way, I would understand. It is somewhat convenient if you are in the know, but my intuition is that this might trip up a lot of other users and it might be a good idea to simplify the API and disallow or discourage passing the actual image as the shape parameter. | closed | 2020-02-16T23:21:28Z | 2020-02-29T15:34:26Z | https://github.com/aleju/imgaug/issues/616 | [
"TODO"
] | Erotemic | 1 |
odoo/odoo | python | 202,420 | [16.0] point_of_sale: It does not allow you to deselect the product when splitting. Ticket Odoo 4656992 | ### Odoo Version
- [x] 16.0
- [ ] 17.0
- [ ] 18.0
- [ ] Other (specify)
### Steps to Reproduce
1. Create a product of type good/stockable.
2. Add one or more variants to the product from step 1.
3. Open the point of sale (verify that the product is visible).
4. Open an order.
5. Select the products created in step 1.
6. Split the order.
7. Select the products, then deselect any that should not be included in the new order.
Current behaviour:
A product selected by mistake cannot be deselected.
Expected behaviour:
Any product can be deselected
Video/Screenshot link (optional):
https://drive.google.com/file/d/1m9vNs_vvnaB08qQypqybqPqx7V-hoikg/view?usp=sharing
### Log Output
```shell
```
### Support Ticket
_No response_ | open | 2025-03-18T20:51:48Z | 2025-03-18T21:08:52Z | https://github.com/odoo/odoo/issues/202420 | [] | luandryperez | 0 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,589 | [Bug]: OSError: Cannot find empty port in range: 7860-7860 with EC2 in Auto scaling group | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When I deploy the source on a normal EC2 instance, this error does not occur at startup. But when I deploy it on an EC2 instance in an Auto Scaling group, this error occurs.

### Steps to reproduce the problem
1. Auto scaling group scale out 1 new ec2
2. EC2 running
3. SD start => error
4. SD restart => success
### What should have happened?
SD should start successfully instead of failing with a port error and restarting.
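For diagnosing, a small standalone check (hypothetical, not part of the webui) of whether anything is already bound to port 7860 on the instance might look like:

```python
import socket

def port_is_free(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if we can bind the port right now."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False

print("7860 free:", port_is_free(7860))
```

If the port is taken on first boot in the Auto Scaling group but free after a restart, that would point at a race with another process (or a previous SD instance) during instance startup.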
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
I use an EC2 instance of type g6e.xlarge
### Console logs
```Shell
Oct 25 01:35:30 ip-20-0-2-59.ec2.internal sh[843]: Launching launch.py...
Oct 25 01:35:30 ip-20-0-2-59.ec2.internal sh[843]: ################################################################
Oct 25 01:35:30 ip-20-0-2-59.ec2.internal sh[843]: glibc version is 2.34
Oct 25 01:35:30 ip-20-0-2-59.ec2.internal sh[843]: Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Oct 25 01:35:49 ip-20-0-1-115.ec2.internal sh[949]: Python 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
Oct 25 01:35:49 ip-20-0-1-115.ec2.internal sh[949]: Version: v1.6.0-1704-gc24ff95d
Oct 25 01:35:49 ip-20-0-1-115.ec2.internal sh[949]: Commit hash: c24ff95d305bf56e4afe5fdf76a5350481661c17
Oct 25 01:37:36 ip-20-0-1-115.ec2.internal sh[949]: CUDA 12.1
Oct 25 01:37:36 ip-20-0-1-115.ec2.internal sh[949]: Launching Web UI with arguments: --api --listen --cors-allow-origins '*' --port=7860
Oct 25 01:39:44 ip-20-0-1-115.ec2.internal sh[949]: no module 'xformers'. Processing without...
Oct 25 01:39:44 ip-20-0-1-115.ec2.internal sh[949]: no module 'xformers'. Processing without...
Oct 25 01:39:46 ip-20-0-1-115.ec2.internal sh[949]: No module 'xformers'. Proceeding without it.
Oct 25 01:40:08 ip-20-0-1-115.ec2.internal sh[949]: ControlNet preprocessor location: /home/ec2-user/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
Oct 25 01:40:24 ip-20-0-1-115.ec2.internal sh[949]: 2024-10-25 01:40:24,757 - ControlNet - INFO - ControlNet v1.1.455
Oct 25 01:40:38 ip-20-0-1-115.ec2.internal sh[949]: 01:40:38 - ReActor - STATUS - Running v0.7.1-b1 on Device: CUDA
Oct 25 01:40:38 ip-20-0-1-115.ec2.internal sh[949]: Loading weights [bc2f30f4ad] from /home/ec2-user/stable-diffusion-webui/models/Stable-diffusion/beautifulRealistic_v60.safetensors
Oct 25 01:40:41 ip-20-0-1-115.ec2.internal sh[949]: 2024-10-25 01:40:41,227 - ControlNet - INFO - ControlNet UI callback registered.
Oct 25 01:40:48 ip-20-0-1-115.ec2.internal sh[949]: Traceback (most recent call last):
Oct 25 01:40:48 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/launch.py", line 48, in <module>
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: main()
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/launch.py", line 44, in main
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: start()
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/modules/launch_utils.py", line 469, in start
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: webui.webui()
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/webui.py", line 79, in webui
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: app, local_url, share_url = shared.demo.launch(
Oct 25 01:40:50 ip-20-0-1-115.ec2.internal sh[949]: ^^^^^^^^^^^^^^^^^^^
Oct 25 01:40:50 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1896, in launch
Oct 25 01:40:51 ip-20-0-1-115.ec2.internal sh[949]: ) = networking.start_server(
Oct 25 01:40:52 ip-20-0-1-115.ec2.internal sh[949]: ^^^^^^^^^^^^^^^^^^^^^^^^
Oct 25 01:40:52 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/networking.py", line 169, in start_server
Oct 25 01:40:52 ip-20-0-1-115.ec2.internal sh[949]: raise OSError(
Oct 25 01:40:52 ip-20-0-1-115.ec2.internal sh[949]: OSError: Cannot find empty port in range: 7860-7860. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`.
Oct 25 01:40:54 ip-20-0-1-115.ec2.internal sh[949]: Creating model from config: /home/ec2-user/stable-diffusion-webui/configs/v1-inference.yaml
Oct 25 01:41:43 ip-20-0-1-115.ec2.internal sh[949]: Applying attention optimization: Doggettx... done.
Oct 25 01:41:53 ip-20-0-1-115.ec2.internal sh[949]: Model loaded in 74.7s (load weights from disk: 15.3s, create model: 1.1s, apply weights to model: 48.5s, load textual inversion embeddings: 1.5s, calculate empty prompt: 8.1s).
Oct 25 01:42:07 ip-20-0-1-115.ec2.internal systemd[1]: start-sdw.service: Deactivated successfully.
Oct 25 01:42:07 ip-20-0-1-115.ec2.internal systemd[1]: start-sdw.service: Consumed 17.646s CPU time.
Oct 25 01:42:27 ip-20-0-1-115.ec2.internal systemd[1]: start-sdw.service: Scheduled restart job, restart counter is at 1.
Oct 25 01:43:56 ip-20-0-1-115.ec2.internal systemd[1]: Stopped Run stable diffusion webui.
Oct 25 01:43:56 ip-20-0-1-115.ec2.internal systemd[1]: start-sdw.service: Consumed 17.646s CPU time.
Oct 25 01:43:56 ip-20-0-1-115.ec2.internal systemd[1]: Started Run stable diffusion webui.
```
### Additional information
_No response_ | open | 2024-10-25T02:48:22Z | 2024-10-25T07:21:13Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16589 | [
"asking-for-help-with-local-system-issues"
] | PiPyL | 1 |
davidteather/TikTok-Api | api | 207 | Why can't I get the user object? | Hi, can you help me? Why doesn't it work?
code
```
from TikTokApi import TikTokApi
api = TikTokApi()
tiktoks = api.byUsername('donelij')
print(tiktoks)
for tiktok in tiktoks:
print(tiktok)
```
error
> `https://m.tiktok.com/api/user/detail/?uniqueId=donelij&language=en&verifyFp=verify_kdf52cly_U2LFCuwM_JFHm_4Fk3_8l68_fGaEzZPtjX4a&_signature=_02B4Z6wo00f01zGrCoQAAIBAWvEwRGVyrasxqy4AAJNLc1
> Converting response to JSON failed response is below (probably empty)
>
> Traceback (most recent call last):
> .....
> raise Exception('Invalid Response')
> Exception: Invalid Response` | closed | 2020-08-03T23:24:39Z | 2020-08-09T18:22:42Z | https://github.com/davidteather/TikTok-Api/issues/207 | [
"bug",
"question"
] | markdrrr | 7 |
Buuntu/fastapi-react | fastapi | 45 | Documentation for deployment options | Maybe start with Heroku and Docker Swarm? https://dockerswarm.rocks/ | closed | 2020-05-27T13:21:25Z | 2020-08-05T14:49:46Z | https://github.com/Buuntu/fastapi-react/issues/45 | [
"documentation",
"enhancement",
"good first issue"
] | Buuntu | 0 |
laughingman7743/PyAthena | sqlalchemy | 45 | Handle InvalidRequestException errors raised | I have a script using Athena + SQLAlchemy to run a query and have the results of that query read in as a pandas DataFrame. However, for some queries (which are long), I get this error:
```
botocore.errorfactory.InvalidRequestException: An error occurred (InvalidRequestException) when calling the StartQueryExecution operation: Your query has exceeded the maximum query length of 262144 bytes. Please reduce the length of your query and try again. If you continue to see this issue after reducing your query length, contact customer support for further assistance.
```
But I can't do this
```
try:
run_query()
except botocore.errorfactory.InvalidRequestException:
run_query_differently()
```
because it says
```
AttributeError: module 'botocore.errorfactory' has no attribute 'InvalidRequestException'
```
The way it's usually handled is to use `client.exceptions.InvalidRequestException` but that requires access to the same client that was used to run the query.
Any ideas on how to do this using the connections that PyAthena creates?
mljar/mljar-supervised | scikit-learn | 640 | How to select models for more SHAP plots? | Hello MLJAR Team! I followed the attached tutorial, and my question is: how do I use a specific model for predictions and for more detailed Shapley value plots? After completing the following tutorial:
```python
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from supervised.automl import AutoML
#> IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
import supervised
supervised.__version__
#> '1.0.2'
data = datasets.load_iris()
X = pd.DataFrame(data["data"], columns=data["feature_names"])
y = pd.Series(data["target"], name="target").map({i:v for i, v in enumerate(data["target_names"])})
# Use 70% for training
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.3)
automl = AutoML(total_time_limit=5*60)
automl.fit(X_train, y_train)
#> AutoML directory: AutoML_2
#> The task is multiclass_classification with evaluation metric logloss
#> AutoML will use algorithms: ['Baseline', 'Linear', 'Decision Tree', 'Random Forest', 'Xgboost', 'Neural Network']
#> AutoML will ensemble available models
#> AutoML steps: ['simple_algorithms', 'default_algorithms', 'ensemble']
#> * Step simple_algorithms will try to check up to 3 models
#> 1_Baseline logloss 1.098612 trained in 0.29 seconds
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/supervised/utils/shap.py:116: UserWarning: The figure layout has changed to tight
#> DecisionTreeAlgorithm should either be a classifier to be used with response_method=predict_proba or the response_method should be 'predict'. Got a regressor with response_method=predict_proba instead.
#> Problem during computing permutation importance. Skipping ...
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/preprocessing/_encoders.py:972: FutureWarning: `sparse` was renamed to `sparse_output` in version 1.2 and will be removed in 1.4. `sparse_output` is ignored unless you leave `sparse` to its default value.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> 2_DecisionTree logloss 0.013075 trained in 4.52 seconds
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/shap/plots/_beeswarm.py:925: UserWarning: The figure layout has changed to tight
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/supervised/utils/shap.py:116: UserWarning: The figure layout has changed to tight
#> LinearAlgorithm should either be a classifier to be used with response_method=predict_proba or the response_method should be 'predict'. Got a regressor with response_method=predict_proba instead.
#> Problem during computing permutation importance. Skipping ...
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/preprocessing/_encoders.py:972: FutureWarning: `sparse` was renamed to `sparse_output` in version 1.2 and will be removed in 1.4. `sparse_output` is ignored unless you leave `sparse` to its default value.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> 3_Linear logloss 0.163424 trained in 5.84 seconds
#> * Step default_algorithms will try to check up to 3 models
#> XgbAlgorithm should either be a classifier to be used with response_method=predict_proba or the response_method should be 'predict'. Got a regressor with response_method=predict_proba instead.
#> Problem during computing permutation importance. Skipping ...
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/shap/plots/_beeswarm.py:925: UserWarning: The figure layout has changed to tight
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/supervised/utils/shap.py:116: UserWarning: The figure layout has changed to tight
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/preprocessing/_encoders.py:972: FutureWarning: `sparse` was renamed to `sparse_output` in version 1.2 and will be removed in 1.4. `sparse_output` is ignored unless you leave `sparse` to its default value.
#> 4_Default_Xgboost logloss 0.010908 trained in 5.33 seconds
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> MLPAlgorithm should either be a classifier to be used with response_method=predict_proba or the response_method should be 'predict'. Got a regressor with response_method=predict_proba instead.
#> Problem during computing permutation importance. Skipping ...
#> 5_Default_NeuralNetwork logloss 0.263295 trained in 0.33 seconds
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/supervised/utils/shap.py:116: UserWarning: The figure layout has changed to tight
#> RandomForestAlgorithm should either be a classifier to be used with response_method=predict_proba or the response_method should be 'predict'. Got a regressor with response_method=predict_proba instead.
#> Problem during computing permutation importance. Skipping ...
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/preprocessing/_encoders.py:972: FutureWarning: `sparse` was renamed to `sparse_output` in version 1.2 and will be removed in 1.4. `sparse_output` is ignored unless you leave `sparse` to its default value.
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> 6_Default_RandomForest logloss 0.027566 trained in 4.44 seconds
#> * Step ensemble will try to check up to 1 model
#> /Users/michaelmazzucco/Desktop/stiffness_ml/venv/lib/python3.10/site-packages/sklearn/metrics/_classification.py:2916: UserWarning: The y_pred values do not sum to one. Starting from 1.5 thiswill result in an error.
#> ... (the same UserWarning repeated 28 more times) ...
#> Ensemble logloss 0.010908 trained in 0.35 seconds
#> AutoML fit time: 29.33 seconds
#> AutoML best model: 4_Default_Xgboost
#> AutoML(total_time_limit=300)
# Predict
y_predicted = automl.predict(X_test)
result = pd.DataFrame({"Predicted": y_predicted, "Target": np.array(y_test)})
filtro = result.Predicted == result.Target
print(filtro.value_counts(normalize=True))
#> True 0.955556
#> False 0.044444
#> Name: proportion, dtype: float64
```
How could I select a specific model for further use? Be it another XGBoost model or even a neural net, how can I directly select that model to generate something like:
```python
import xgboost
import shap
#> Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
# train XGBoost model
X,y = shap.datasets.adult()
model = xgboost.XGBClassifier().fit(X, y)
# compute SHAP values
explainer = shap.Explainer(model, X)
shap_values = explainer(X)
#> 
#> 16%|=== | 5186/32561 [00:11<00:58]
#> ... (shap Explainer progress-bar output trimmed) ...
#> 99%|===================| 32381/32561 [01:05<00:00]
shap.plots.waterfall(shap_values[0])
```

Any direction is much appreciated! | open | 2023-07-28T23:01:34Z | 2023-10-18T07:34:07Z | https://github.com/mljar/mljar-supervised/issues/640 | [] | michael-mazzucco | 5 |
strawberry-graphql/strawberry | fastapi | 3,412 | enhancing robustness of int/float castings for string values containing commas | I run into this problem again and again, and it is always annoying to fix.
## Feature Request Type
- [x] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
```
ValueError: could not convert string to float: '3,3'
File "graphql/type/scalars.py", line 123, in serialize_float
num = output_value if isinstance(output_value, float) else float(output_value)
```
Currently, values with `,` will break int/float resolvers, but most of the time this is not a real error.
It would be awesome if we could handle `,` like `.` when the value is a string, like this:
```
output_value = output_value.replace(',', '.').strip() if isinstance(output_value, str) else output_value
num = output_value if isinstance(output_value, float) else float(output_value)
```
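As a standalone sketch of that coercion (hypothetical helper names, not part of strawberry; note that always treating `,` as a decimal separator would mis-read thousands-separated values like `"1,000"`):

```python
def tolerant_float(value):
    """Coerce to float, treating ',' in strings as a decimal separator."""
    if isinstance(value, str):
        value = value.replace(",", ".").strip()
    return float(value)

def tolerant_int(value):
    """Coerce to int, stripping surrounding whitespace from strings."""
    if isinstance(value, str):
        value = value.strip()
    return int(value)
```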
The same happens with integers | closed | 2024-03-19T14:14:27Z | 2025-03-20T15:56:37Z | https://github.com/strawberry-graphql/strawberry/issues/3412 | [] | Speedy1991 | 1 |
deepinsight/insightface | pytorch | 2,060 | Should I change brightness for best results in face recognition? | If "yes", HOW should I do it? Maybe I should normalize brightness or something like that?! | open | 2022-07-28T11:48:58Z | 2022-08-07T15:04:07Z | https://github.com/deepinsight/insightface/issues/2060 | [] | IamSVP94 | 1 |
streamlit/streamlit | python | 10,750 | Support expandable blocks in markdown | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Support expandable content blocks in markdown similar to how its supported in Notion and Github, e.g.:
<details><summary>Toggle me!</summary>Peek a boo!</details>
### Why?
_No response_
### How?
Github markdown flavor supports this via basic HTML tags:
```
<details><summary>Toggle me!</summary>Peek a boo!</details>
```
Mkdocs material supports this via [collapsible blocks](https://squidfunk.github.io/mkdocs-material/reference/admonitions/#collapsible-blocks):
```
??? Toggle me!
Peek a boo!
```
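In the meantime, the HTML form above can be generated programmatically and passed to `st.markdown` with `unsafe_allow_html=True`; a tiny sketch (hypothetical helper name):

```python
def details_block(summary: str, body: str) -> str:
    """Build the GitHub-style <details> HTML that st.markdown can render
    when called with unsafe_allow_html=True."""
    return f"<details><summary>{summary}</summary>\n\n{body}\n\n</details>"

html = details_block("Toggle me!", "Peek a boo!")
# e.g. st.markdown(html, unsafe_allow_html=True)
```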
### Additional Context
Related to https://github.com/streamlit/streamlit/issues/10751 | open | 2025-03-12T16:49:06Z | 2025-03-12T17:01:12Z | https://github.com/streamlit/streamlit/issues/10750 | [
"type:enhancement",
"feature:markdown"
] | lukasmasuch | 1 |
python-gitlab/python-gitlab | api | 2,630 | Add support for compliance_frameworks | ## Description of the problem, including code/CLI snippet
Please add support for compliance_frameworks
https://docs.gitlab.com/ee/api/projects.html
## Expected Behavior
## Actual Behavior
## Specifications
- python-gitlab version:
- API version you are using (v3/v4):
- Gitlab server version (or gitlab.com):
| closed | 2023-08-02T00:03:40Z | 2024-08-26T01:34:07Z | https://github.com/python-gitlab/python-gitlab/issues/2630 | [
"upstream"
] | coffeecoco | 1 |
microsoft/nni | machine-learning | 5,318 | How to use DDP in multi-trial NAS? | Hi, is there an easy way to use DDP in multi-trial NAS?
I tried multi-trial NAS based on this example: https://github.com/microsoft/nni/blob/master/examples/nas/multi-trial/mnist/search.py. Is it possible to wrap it with DDP? | closed | 2023-01-17T02:56:54Z | 2023-03-08T09:34:32Z | https://github.com/microsoft/nni/issues/5318 | [] | heibaidaolx123 | 3 |
bmoscon/cryptofeed | asyncio | 866 | deribit L2_BOOK raising ValueError Authenticated channel | **Describe the bug**
I am trying to subscribe to the Deribit L2_BOOK channel (public data), and Cryptofeed tells me that the channel is authenticated and needs auth keys.
**To Reproduce**
from cryptofeed import FeedHandler
from cryptofeed.defines import L2_BOOK
from cryptofeed.exchanges import Deribit

f = FeedHandler()
f.add_feed(Deribit(symbols=['BTC-USD-PERP'], channels=[L2_BOOK], callbacks={L2_BOOK: book}))
f.run()
**Expected behavior**
As with other exchanges, I would expect to receive L2 book data.
**Screenshots**
Traceback (most recent call last):
File "/home/docek/PycharmProjects/hb-core/noapp/option_price.py", line 28, in <module>
main()
File "/home/docek/PycharmProjects/hb-core/noapp/option_price.py", line 22, in main
f.add_feed(Deribit(symbols=SYMBOLS, channels=[L2_BOOK], callbacks={L2_BOOK: book}))
File "/home/docek/.cache/pypoetry/virtualenvs/hb-core-koz5fVag-py3.10/lib/python3.10/site-packages/cryptofeed/feed.py", line 109, in __init__
raise ValueError("Authenticated channel subscribed to, but no auth keys provided")
ValueError: Authenticated channel subscribed to, but no auth keys provided
**Operating System:**
Linux under ChromeOS
**Cryptofeed Version**
2.2.3 using Poetry
| closed | 2022-06-29T11:21:17Z | 2022-06-29T13:23:05Z | https://github.com/bmoscon/cryptofeed/issues/866 | [
"bug"
] | docek | 1 |
skypilot-org/skypilot | data-science | 4,861 | [Doc] document how to deploy multiple API servers and deploy server using existing ingress | Deploying our helm chart to k8s cluster created by `sky local up` raises the following errors:
```
Error: Unable to continue with install: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "skypilot"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "skypilot"
```
We need to document how to reuse existing ingress / deploy multiple API servers. | open | 2025-03-01T09:20:24Z | 2025-03-01T09:20:34Z | https://github.com/skypilot-org/skypilot/issues/4861 | [] | aylei | 0 |
pydantic/pydantic-core | pydantic | 1,437 | Is it possible to publish pydantic-core to crates.io | Would it be possible to publish pydantic-core to crates.io? This would allow Rust developers to directly use and benefit from pydantic-core's functionality in their Rust projects. | closed | 2024-09-04T16:03:23Z | 2024-09-17T12:02:07Z | https://github.com/pydantic/pydantic-core/issues/1437 | [] | Folyd | 2 |
ets-labs/python-dependency-injector | flask | 56 | Review docs: Providers | closed | 2015-05-08T14:39:31Z | 2015-07-13T07:31:58Z | https://github.com/ets-labs/python-dependency-injector/issues/56 | [
"docs"
] | rmk135 | 0 | |
pallets-eco/flask-sqlalchemy | flask | 1,189 | __bind_key__ not working | Hello,
the bind key is not working for me. Is this a bug, or a problem with my code?
All data is written to `database.db`, but should be separated into the two databases. The `database_logging.db` was created but is empty.
The relevant extract of the code. I need the declarative_base because I want to seperate the table definitions over multiple files.
database.py
```
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
```
app.py
```
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'database/database.db')
app.config['SQLALCHEMY_BINDS'] = {
'logging': 'sqlite:///' + os.path.join(basedir, 'database/database_logging.db')
}
db = SQLAlchemy(app, model_class=Base)
with app.app_context():
db.create_all()
```
data.py
```
from sqlalchemy import *
from database.database import Base
class atable(Base):
__bind_key__ = "logging"
__tablename__ = "a"
id = Column(Integer, primary_key=True)
abc = Column(Text, nullable=False, index=True)
def __repr__(self):
return f'<a {self.abc}>'
class btable(Base):
__tablename__ = "b"
id = Column(Integer, primary_key=True)
abc = Column(Text, nullable=False, index=True)
def __repr__(self):
return f'<b {self.abc}>'
```
Environment:
- Python version: 3.10
- Flask-SQLAlchemy version: 3.0.3
- SQLAlchemy version: 2.0.9
| closed | 2023-04-08T17:22:21Z | 2023-04-23T01:10:28Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1189 | [] | Laserlicht | 2 |
pyg-team/pytorch_geometric | deep-learning | 9,393 | Error converting "to_dense_adj" from "from_scipy_sparse_matrix" | ### 🐛 Describe the bug
I'm trying to convert to a dense matrix, but going via networkx is too memory intensive with the real file. I was hoping that I could do it via the CSR representation rather than via networkx, as it is more efficient. Oddly, this is occurring even with very small files (example attached).
```python
import pandas as pd
from sklearn.neighbors import radius_neighbors_graph
from torch_geometric.utils.convert import from_scipy_sparse_matrix
from torch_geometric.utils import to_dense_adj
df = pd.read_csv("example.csv")
A = radius_neighbors_graph(df.values, 1, mode='connectivity',include_self=False)
g = from_scipy_sparse_matrix(A)
g = to_dense_adj(g)
```
This generates the following error
```
Traceback (most recent call last):
File "d:\[edited for privacy]\gnn_precaculated_inputs.py", line 30, in <module>
g = to_dense_adj(g)
^^^^^^^^^^^^^^^
File "D:\[env path]\Lib\site-packages\torch_geometric\utils\_to_dense_adj.py", line 64, in
to_dense_adj
max_index = int(edge_index.max()) + 1 if edge_index.numel() > 0 else 0
^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'numel'
```
[example.csv](https://github.com/user-attachments/files/15567879/example.csv)
Thanks.
### Versions
Collecting environment information...
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:42:21) [MSC v.1916 64 bit
(AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5000 Laptop GPU
Nvidia driver version: 538.27
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2611
DeviceID=CPU0
Family=179
L2CacheSize=10240
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2611
Name=Intel(R) Xeon(R) W-11955M CPU @ 2.60GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.2
[pip3] torch_geometric==2.5.2
[pip3] torch_scatter==2.1.2+pt22cu121
[pip3] torch_sparse==0.6.18+pt22cu121
[pip3] torchaudio==2.2.2
[pip3] torchvision==0.17.2
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-service 2.4.0 py312h2bbff1b_1
[conda] mkl_fft 1.3.8 py312h2bbff1b_0
[conda] mkl_random 1.2.4 py312h59b6b97_0
[conda] numpy 1.26.4 py312hfd52020_0
[conda] numpy-base 1.26.4 py312h4dde369_0
[conda] pyg 2.5.2 py312_torch_2.2.0_cu121 pyg
[conda] pytorch 2.2.2 py3.12_cuda12.1_cudnn8_0 pytorch
[conda] pytorch-cuda 12.1 hde6ce7c_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-scatter 2.1.2+pt22cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt22cu121 pypi_0 pypi
[conda] torchaudio 2.2.2 pypi_0 pypi
[conda] torchvision 0.17.2 pypi_0 pypi | closed | 2024-06-04T16:57:09Z | 2024-06-11T11:27:20Z | https://github.com/pyg-team/pytorch_geometric/issues/9393 | [
"bug"
] | timdnewman | 2 |
numpy/numpy | numpy | 27,994 | ENH: Support of new str.format() syntax instead of the old %-formatting in numpy.savetxt | ### Proposed new feature or change:
Currently `numpy.savetxt` uses old %-formatting to format strings. But old formatting doesn't support some new features. For example: control of negative zeros ([PEP-682](https://peps.python.org/pep-0682/)), or binary output (#20755). It would be great to support the new formatting style via `str.format()`. Probably a good way to ask for the new formatting would be to use the colon `:` character instead of the percent `%` character as the first character of the formatting string. | open | 2024-12-13T14:45:13Z | 2024-12-19T15:32:27Z | https://github.com/numpy/numpy/issues/27994 | [
"01 - Enhancement"
] | PavelStishenko | 5 |
encode/databases | asyncio | 422 | DatabaseUrl bug when using Unix domain socket | I'm deploying a FastAPI application on Google Cloud Run which connects to a Cloud SQL instance using this package. The crux of the issue is that connecting with:
```python
db = databases.Database(url)
await db.connect()
```
fails whereas connecting through sqlalchemy's `create_engine` with
```python
engine = create_engine(url)
engine.connect()
```
works.
The connection url uses `unix_sock` structure (docs [here](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#unix-domain-connections)) rather than the regular sqlalchemy connection url, something like this:
```python
# all these urls work fine when connecting with sqlalchemy create_engine
"postgresql://user:pass@/db_name?host=/path/to/sock"
"postgresql+psycopg2://user:pass@/db_name?host=/path/to/sock"
"postgresql+pg8000://user:pass@/db_name?unix_sock=/path/to/sock/.s.PGSQL.5432"
```
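For what it's worth, a quick standard-library parse shows why such URLs are easy to mishandle: the host component of the URL is empty, and the socket path only appears as a query option that the URL layer must forward to the driver (illustrative sketch only):

```python
from urllib.parse import urlsplit, parse_qs

url = "postgresql://user:pass@/db_name?host=/path/to/sock"
parts = urlsplit(url)
options = parse_qs(parts.query)

# The URL carries no network host; the socket directory
# lives only in the query string.
print(parts.hostname)   # empty host component
print(options["host"])  # ['/path/to/sock']
```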
I'm unsure whether this would be an issue with using async in the Google Cloud environment or something about how connection urls like the one above get translated in this package to work with sqlalchemy. I've posted on Stack Overflow about it [here](https://stackoverflow.com/questions/69963202/issues-connecting-to-a-google-cloud-sql-instance-from-google-cloud-run?noredirect=1#comment123679622_69963202) but thought I'd raise an issue here as well in case it was the latter. | closed | 2021-11-15T11:57:51Z | 2021-11-16T09:22:47Z | https://github.com/encode/databases/issues/422 | [
"bug"
] | dbatten5 | 13 |
huggingface/datasets | deep-learning | 6,640 | Sign Language Support | ### Feature request
Currently, there are only several Sign Language labels. I would like to propose adding all the Signed Languages described in this ISO standard as new labels: https://www.evertype.com/standards/iso639/sign-language.html
### Motivation
Datasets currently only have labels for several signed languages, but there are more signed languages in the world. Furthermore, some signed languages that have a lot of online data cannot be found for this reason. For instance, there is no German Sign Language label on Hugging Face datasets, even though a lot of readily available German Sign Language datasets exist and are used very frequently in Sign Language Processing papers and models.
### Your contribution
I can submit a PR for this as well, adding the ISO codes and languages to the labels in datasets. | open | 2024-02-02T21:54:51Z | 2024-02-02T21:54:51Z | https://github.com/huggingface/datasets/issues/6640 | [
"enhancement"
] | Merterm | 0 |
graphql-python/gql | graphql | 222 | RequestsHTTPTransport: retries option with POST method does not take effect | - RequestsHTTPTransport's retries option uses the requests library.
- RequestsHTTPTransport's HTTP(S) request uses POST method by default.
- `requests`'s max_retries option does not effects with POST method by default.
- requests.adapters.Retry == urllib3.util.retry.Retry
- https://stackoverflow.com/a/35707701
- https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.retry.Retry
- https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.Retry.DEFAULT_ALLOWED_METHODS
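For comparison, the urllib3 `Retry` API (1.26+) has to be told explicitly that POST is retryable; whether gql should construct such a policy from its integer `retries` option is essentially what this issue is about:

```python
from urllib3.util.retry import Retry

# Default policy: POST is not in the retryable method set.
default_retry = Retry(total=3)

# Opting POST in explicitly (only safe when the operation is idempotent):
post_retry = Retry(
    total=3,
    backoff_factor=0.5,
    status_forcelist=[500, 502, 503, 504],
    allowed_methods=frozenset({"POST"}),
)
```

Mounting an `HTTPAdapter(max_retries=post_retry)` on the transport's underlying `requests.Session` would apply this policy.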
----
1. Run tshark to capture packets.
```
$ sudo tshark -i eth1 'dst host httpstat.us'
```
2. open another terminal and run retry.py
```
$ mkdir xxx && cd xxx
$ pipenv install gql
$ pipenv run python -V
3.9.2
$ vi retry.py
$ pipenv run python ./retry.py
```
3. Confirm the output of tshark; only 1 POST request is visible.
```
Running as user "root" and group "root". This could be dangerous.
Capturing on 'eth1'
1 0.000000000 xxx → xxx TCP 94 57164 → 80 [SYN] Seq=0 Win=64800 Len=0 MSS=1440 SACK_PERM=1 TSval=4064012293 TSecr=0 WS=128
2 0.003444162 xxx → xxx TCP 74 57164 → 80 [ACK] Seq=1 Ack=1 Win=64896 Len=0
3 0.003465623 xxx → xxx TCP 274 POST /500 HTTP/1.1 [TCP segment of a reassembled PDU]
4 0.003475453 xxx → xxx HTTP/JSON 1491 POST /500 HTTP/1.1 , JavaScript Object Notation (application/json)
5 0.233443595 xxx → xxx TCP 74 57164 → 80 [ACK] Seq=1618 Ack=1174 Win=64128 Len=0
6 0.239596636 xxx → xxx TCP 74 57164 → 80 [FIN, ACK] Seq=1618 Ack=1174 Win=64128 Len=0
7 0.243453262 xxx → xxx TCP 74 57164 → 80 [ACK] Seq=1619 Ack=1175 Win=64128 Len=0
```
retry.py (to receive a 500 response, use httpstat.us):
```
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport
resp = Client(
transport=RequestsHTTPTransport(
url="http://httpstat.us/500",
retries=3
),
fetch_schema_from_transport=True,
).execute(gql("query { xxx { value }}"))
print(resp)
```
| closed | 2021-07-30T12:53:05Z | 2021-10-26T08:46:27Z | https://github.com/graphql-python/gql/issues/222 | [
"type: bug"
] | khiker | 4 |
ghtmtt/DataPlotly | plotly | 55 | Translation updates | With the new UI all the new strings have to be pushed to transifex | closed | 2017-11-26T11:42:37Z | 2017-11-27T10:09:09Z | https://github.com/ghtmtt/DataPlotly/issues/55 | [
"docs"
] | ghtmtt | 1 |
pytorch/pytorch | machine-learning | 149,616 | intermittent torch.compiler failures when running gemma model | ### 🐛 Describe the bug
Hi, I'm trying to fix intermittent torch.compile failures with the cpp wrapper when running the Gemma model, and I wonder if someone can help by providing some clues for debugging or a minimal reproducer. The error is not specific to Arm; it is also reproducible on Intel machines.
```
TORCHINDUCTOR_CPP_WRAPPER=1 \
TORCHINDUCTOR_FREEZING=1 \
ONEDNN_DEFAUL_FPMATH_MODE=BF16 \
OMP_NUM_THREADS=16 \
IDEEP_CACHE_MATMUL_REORDERS=1 \
LRU_CACHE_CAPACITY=256
```

```python
model.forward = torch.compile(model.forward, backend='inductor',
                              dynamic=True, fullgraph=False)
```
When compiling the generated C++ code, the compiler reports that a variable is not declared:
```
error: ‘s9’ was not declared in this scope; did you mean ‘s1’?
2163 | const int64_t int_array_34[] = {1L, 4L, s9, 256L};
```
The following is what I tried in order to debug:
Setting `TORCH_COMPILE_DEBUG=1`, it seems there is something wrong in the generated `torchinductor/model__2_inference_2.2/fx_graph_readable.py`. In short, the KV cache tensors are marked as `self._frozen_param` in the FX graph, and the corresponding C++ code for `torch.ops.aten.sym_size.int` is not generated in `output_code.py`.
The corresponding code in the Python file is:
```
if self.key_cache[layer_idx].device.type == "meta":
self.key_cache[layer_idx] = torch.zeros_like(self.key_cache[layer_idx], device=key_states.device)
self.value_cache[layer_idx] = torch.zeros_like(self.value_cache[layer_idx], device=value_states.device)
```
In `fx_graph_readable.py`:
```
# No stacktrace found for following nodes
arg0_1: "bf16[256000, 2304]" = self._frozen_param0
arg292_1: "bf16[1, 4, s9, 256]" = self._frozen_param292
arg296_1: "bf16[1, 4, s13, 256]" = self._frozen_param296
arg300_1: "bf16[1, 4, s17, 256]" = self._frozen_param300
arg304_1: "bf16[1, 4, s21, 256]" = self._frozen_param304
arg308_1: "bf16[1, 4, s25, 256]" = self._frozen_param308
arg312_1: "bf16[1, 4, s29, 256]" = self._frozen_param312
arg316_1: "bf16[1, 4, s33, 256]" = self._frozen_param316
arg320_1: "bf16[1, 4, s37, 256]" = self._frozen_param320
arg324_1: "bf16[1, 4, s41, 256]" = self._frozen_param324
arg328_1: "bf16[1, 4, s45, 256]" = self._frozen_param328
arg332_1: "bf16[1, 4, s49, 256]" = self._frozen_param332
arg336_1: "bf16[1, 4, s53, 256]" = self._frozen_param336
arg340_1: "bf16[1, 4, s57, 256]" = self._frozen_param340
# File: /home/ubuntu/workspace/torch_dev/lib/python3.10/site-packages/transformers/cache_utils.py:1736 in update, code: self.value_cache[layer_idx] = torch.zeros_like(self.value_cache[layer_idx], device=value_states.device)
sym_size_int_25: "Sym(s9)" = torch.ops.aten.sym_size.int(arg292_1, 2); arg292_1 = None
full_7: "bf16[1, 4, 40, 256]" = torch.ops.aten.full.default([1, 4, sym_size_int_25, 256], 0, dtype = torch.bfloat16, layout = torch.strided, device = device(type='cpu'), pin_memory = False)
```
### Error logs
```
I0313 11:56:01.742000 5716 torch/_dynamo/convert_frame.py:1121] [0/2] run_gc_after_compile: running gc
Traceback (most recent call last):
File "/home/ubuntu/workspace/scratchs/torch_compiler/run_gemma.py", line 98, in <module>
e2e, no_output_tokens = measure_end_to_end_latency()
File "/home/ubuntu/workspace/scratchs/torch_compiler/run_gemma.py", line 65, in measure_end_to_end_latency
model.generate(model_inputs, do_sample=False, max_new_tokens=30, min_new_tokens=30)
File "/home/ubuntu/workspace/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/workspace/torch_dev/lib/python3.10/site-packages/transformers/generation/utils.py", line 2223, in generate
result = self._sample(
File "/home/ubuntu/workspace/torch_dev/lib/python3.10/site-packages/transformers/generation/utils.py", line 3211, in _sample
outputs = self(**model_inputs, return_dict=True)
File "/home/ubuntu/workspace/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1432, in __call__
return self._torchdynamo_orig_callable(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1213, in __call__
result = self._inner_convert(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 598, in __call__
return _compile(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1059, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/ubuntu/workspace/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 761, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 257, in _fn
return fn(*args, **kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/convert_frame.py", line 715, in transform
tracer.run()
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 3701, in RETURN_VALUE
self._return(inst)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 3686, in _return
self.output.compile_subgraph(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1179, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1437, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1487, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1544, in _call_user_compiler
raise BackendCompilerFailed(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/output_graph.py", line 1519, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/ubuntu/workspace/pytorch/torch/__init__.py", line 2349, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 1777, in compile_fx
return compile_fx(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 2089, in compile_fx
return aot_autograd(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/home/ubuntu/workspace/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 775, in load
compiled_fn = dispatch_and_compile()
File "/home/ubuntu/workspace/pytorch/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/ubuntu/workspace/pytorch/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/ubuntu/workspace/pytorch/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/ubuntu/workspace/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 1629, in fw_compiler_freezing
optimized_function = inner_compile(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 628, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/ubuntu/workspace/pytorch/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 735, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 1295, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/compile_fx.py", line 1197, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/home/ubuntu/workspace/pytorch/torch/_inductor/graph.py", line 2083, in compile_to_module
return self._compile_to_module()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/graph.py", line 2130, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2747, in load_by_key_path
mod = _reload_python_module(key, path)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/runtime/compile_tasks.py", line 36, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_ubuntu/g6/cg6xubpp3mgot5ujrcq7ns7f2kmcmg6agwntl66tbytrm3dtpaym.py", line 24158, in <module>
inductor_entry = CppWrapperCodeCache.load_pybinding(
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2250, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2242, in future
result = get_result()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2051, in load_fn
result = worker_fn()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/codecache.py", line 2079, in _worker_compile_cpp
cpp_builder.build()
File "/home/ubuntu/workspace/pytorch/torch/_inductor/cpp_builder.py", line 1596, in build
run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/cpp_builder.py", line 355, in run_compile_cmd
_run_compile_cmd(cmd_line, cwd)
File "/home/ubuntu/workspace/pytorch/torch/_inductor/cpp_builder.py", line 350, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_SVE -D CPU_CAPABILITY_SVE256 -D AT_BUILD_ARM_VEC256_WITH_SLEEF -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/usr/include/python3.10 -I/home/ubuntu/workspace/pytorch/torch/include -I/home/ubuntu/workspace/pytorch/torch/include/torch/csrc/api/include -march=armv8-a+sve -msve-vector-bits=256 -D_GLIBCXX_USE_CXX11_ABI=1 -ltorch -ltorch_cpu -ltorch_python -lgomp -L/usr/lib/aarch64-linux-gnu -L/home/ubuntu/workspace/pytorch/torch/lib -o /tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.so
Output:
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp: In function ‘void inductor_entry_impl(AtenTensorOpaque**, AtenTensorOpaque**)’:
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2163:45: error: ‘s9’ was not declared in this scope; did you mean ‘s1’?
2163 | const int64_t int_array_34[] = {1L, 4L, s9, 256L};
| ^~
| s1
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2373:45: error: ‘s13’ was not declared in this scope; did you mean ‘s10’?
2373 | const int64_t int_array_38[] = {1L, 4L, s13, 256L};
| ^~~
| s10
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2580:45: error: ‘s17’ was not declared in this scope; did you mean ‘s10’?
2580 | const int64_t int_array_41[] = {1L, 4L, s17, 256L};
| ^~~
| s10
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2787:45: error: ‘s21’ was not declared in this scope; did you mean ‘s1’?
2787 | const int64_t int_array_44[] = {1L, 4L, s21, 256L};
| ^~~
| s1
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:2994:45: error: ‘s25’ was not declared in this scope
2994 | const int64_t int_array_47[] = {1L, 4L, s25, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:3201:45: error: ‘s29’ was not declared in this scope
3201 | const int64_t int_array_50[] = {1L, 4L, s29, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:3408:45: error: ‘s33’ was not declared in this scope
3408 | const int64_t int_array_53[] = {1L, 4L, s33, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:3615:45: error: ‘s37’ was not declared in this scope
3615 | const int64_t int_array_56[] = {1L, 4L, s37, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:3822:45: error: ‘s41’ was not declared in this scope; did you mean ‘s1’?
3822 | const int64_t int_array_59[] = {1L, 4L, s41, 256L};
| ^~~
| s1
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:4029:45: error: ‘s45’ was not declared in this scope
4029 | const int64_t int_array_62[] = {1L, 4L, s45, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:4236:45: error: ‘s49’ was not declared in this scope
4236 | const int64_t int_array_65[] = {1L, 4L, s49, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:4443:45: error: ‘s53’ was not declared in this scope
4443 | const int64_t int_array_68[] = {1L, 4L, s53, 256L};
| ^~~
/tmp/torchinductor_ubuntu/nk/cnkyayvmejrdlywox7k463stqyf476wkr3bykahxynmbmpsy2bce.cpp:4652:45: error: ‘s57’ was not declared in this scope
4652 | const int64_t int_array_71[] = {1L, 4L, s57, 256L};
      |                                               ^~~
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitf349304
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1024-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.14.0
[pip3] torch==2.7.0a0+gitf349304
[conda] Could not collect
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @chauhang @penguinwu @voznesenskym @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov | closed | 2025-03-20T10:50:48Z | 2025-03-24T17:16:10Z | https://github.com/pytorch/pytorch/issues/149616 | [
"module: cpu",
"triaged",
"oncall: pt2",
"module: inductor"
] | taoye9 | 2 |
streamlit/streamlit | data-visualization | 10,415 | Multi-Index Columns with brackets get renamed to square brackets | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Hey guys,
When using brackets in column titles and applying a MultiIndex, they are replaced with square brackets.
When defining column configs, and especially the column order of st.dataframe, this automatic renaming leads to a mismatch between what the user thinks the column names are and the actual column names. I'm using column config and column order, and it took quite a while of debugging to identify why some of my columns were missing from the displayed dataframe, for example.
### Reproducible Code Example
```Python
import pandas as pd
import streamlit as st
df_regular = pd.DataFrame({"Example": [1, 2, 3], "Example (1)": [4, 5, 6]})
multi_index = pd.MultiIndex.from_tuples([("Test", "Example"), ("Test", "Example (1)")])
df_multi = pd.DataFrame(df_regular.values, columns=multi_index)
st.dataframe(df_regular)
st.dataframe(df_multi)
```
### Steps To Reproduce
_No response_
### Expected Behavior
I expect the column names to stay the same
### Current Behavior
The column names are changed without my knowledge
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.0
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | open | 2025-02-17T12:01:30Z | 2025-03-06T16:13:31Z | https://github.com/streamlit/streamlit/issues/10415 | [
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P4"
] | itsToggle | 3 |
vchaptsev/cookiecutter-django-vue | graphql | 28 | Subdomain support | Hi, cool project. Do you have plans to support subdomains? I'm working on a similar project and I am working on how to set up subdomain support. From what I have seen, I think it can be done with a middleware class on the Django side, some processing of the URL in Vue's router and editing nginx to catch the subdomains. | closed | 2018-10-07T23:48:16Z | 2019-03-15T13:58:58Z | https://github.com/vchaptsev/cookiecutter-django-vue/issues/28 | [
"question"
] | briancaffey | 2 |
FujiwaraChoki/MoneyPrinter | automation | 149 | [BUG] Error: float division by zero | it does this

Here are my .env settings, because I think they might be the cause:

| closed | 2024-02-10T14:24:13Z | 2024-02-11T09:01:28Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/149 | [] | N3xi0 | 7 |
taverntesting/tavern | pytest | 875 | Try to use the external function in the URL | Hi
I want to get an `ID` value from another program and splice it into the URL.
Similar to: http://localhost:7200/api/manager/system/organizations/{id}
So I tried to use an external function:
- Tavern `2.2.0`
```
- name: Delete a user
request:
method: DELETE
url:
$ext:
function: web.common:id_query
response:
status_code: 200
```
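For context, the external function itself just builds the URL from an id fetched elsewhere. A stubbed-out sketch of it (the real id lookup is replaced by an injectable callable):

```python
# web/common.py (sketch: the real id lookup is stubbed out as a callable)
def id_query(fetch_id=lambda: 42):
    org_id = fetch_id()
    return f"http://localhost:7200/api/manager/system/organizations/{org_id}"
```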
But this does not seem to work as expected.
Is this usage supported? | closed | 2023-07-04T11:39:08Z | 2023-09-18T14:50:52Z | https://github.com/taverntesting/tavern/issues/875 | [] | IrisBlume-dot | 1 |
nolar/kopf | asyncio | 684 | Scoping and liveness in embedded mode? | ## Question
<!-- What problem do you currently face and see no solution for it? -->
Hi, so I recently refactored some of my code to use [embedding](https://kopf.readthedocs.io/en/stable/embedding/) but now that I don't have access to the `kopf run` CLI flags, I feel a bit like a second-class citizen 😞
I'm currently trying to find out how to resolve this warning:
> /.venv/lib/python3.8/site-packages/kopf/reactor/running.py:157: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility. > warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
My understanding from checking the docs is that I basically need to do what's documented in the [scopes](https://kopf.readthedocs.io/en/stable/scopes/) section, but that's all using CLI flags...
I looked around the code and deeper into the docs but the closest thing I found was the [ScanningSettings](https://kopf.readthedocs.io/en/stable/packages/kopf.structs.configuration/#kopf.structs.configuration.ScanningSettings) which seems to be a different thing... so yeah, I don't know how to set specific namespace(s) or cluster-wide without the flags.
The other issue I'm having is setting the liveness probes which are also only [documented](https://kopf.readthedocs.io/en/stable/probing/?highlight=liveness#liveness-endpoints) via CLI flags.
I can probably survive without the liveness but the namespaces are ~critical~ more important... what can I do here?
P.S. Using the latest kopf 1.29.2
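What I'm hoping exists is a set of keyword arguments mirroring the flags. A sketch of what I'd like to write (I'm assuming `kopf.operator()` accepts `namespaces`, `clusterwide`, and `liveness_endpoint` keyword arguments matching `-n`, `-A`, and `--liveness`; I haven't verified these names):

```python
def operator_kwargs(namespaces=None, clusterwide=False, liveness_endpoint=None):
    """Map the familiar CLI flags onto keyword arguments for the embedded entry points."""
    kwargs = {}
    if namespaces:
        kwargs["namespaces"] = list(namespaces)          # like repeated -n/--namespace
    else:
        kwargs["clusterwide"] = clusterwide              # like -A/--all-namespaces
    if liveness_endpoint:
        kwargs["liveness_endpoint"] = liveness_endpoint  # like --liveness
    return kwargs


def run_embedded(**flags):
    import asyncio
    import kopf  # deferred import so the sketch stays importable without a cluster

    asyncio.run(kopf.operator(**operator_kwargs(**flags)))
```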
<!-- If possible, explain what other ways did you try to solve the problem? -->
## Checklist
- [x] I have read the [documentation](https://kopf.readthedocs.io/en/latest/) and searched there for the problem
- [x] I have searched in the [GitHub Issues](https://github.com/nolar/kopf/issues?utf8=%E2%9C%93&q=) for similar questions
## Keywords
<!-- Which keywords did you search for in the documentation/issue for this problem? -->
- liveness
- namespaces
- cluster-wide
- OperatorSettings | closed | 2021-02-16T20:07:17Z | 2021-02-17T04:21:00Z | https://github.com/nolar/kopf/issues/684 | [
"question"
] | OmegaVVeapon | 2 |
healthchecks/healthchecks | django | 289 | psycopg2-binary warning | When I run the project, I get the following warning from psycopg2:
````
/usr/local/lib/python3.7/dist-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
````
The underlying issue and cause of this warning are discussed here: https://github.com/psycopg/psycopg2/issues/674
and a fix is mentioned here https://github.com/psycopg/psycopg2/issues/674#issuecomment-401840661.
After reading it I am still not quite sure why, but I think the ``requirements.txt`` needs to be updated with ``psycopg2==2.7.5 --no-binary psycopg2``. Now my question: should the ``requirements.txt`` be updated? | closed | 2019-09-29T14:38:35Z | 2019-09-30T13:08:49Z | https://github.com/healthchecks/healthchecks/issues/289 | [] | SuperSandro2000 | 1 |
keras-team/keras | deep-learning | 20,437 | Add `ifft2` method to ops | I'm curious why there is no `ops.ifft2`. Given that there is already `fft` and `fft2`, implementing one is trivial.
Here is an example of what an `ifft2` would look like:
```python
from keras import ops


def keras_ops_ifft2(fft_real, fft_imag):
    """
    Inputs are the real and imaginary parts of an array
    of shape [..., H, W], [..., H, W], where the last two
    dimensions correspond to the image dimensions.

    Returns a tuple containing the real and imaginary parts
    of the ifft2.

    Test:
        import numpy as np
        from keras import ops
        X = np.random.rand(1, 1, 11, 11).astype(np.float32)
        X_real, X_imag = ops.real(X), ops.imag(X)
        X_fft_real, X_fft_imag = ops.fft2((X_real, X_imag))
        X_recon, _ = keras_ops_ifft2(X_fft_real, X_fft_imag)
        np.allclose(X, X_recon, atol=1e-6)
    """
    H = ops.cast(ops.shape(fft_real)[-2], 'float32')  # height
    W = ops.cast(ops.shape(fft_real)[-1], 'float32')  # width
    # Conjugate the input
    real_conj, imag_conj = fft_real, -fft_imag
    # Compute the FFT of the conjugate
    fft = ops.fft2((real_conj, imag_conj))
    # Conjugate again and normalize: ifft(x) = conj(fft(conj(x))) / (H * W)
    return fft[0] / (H * W), -fft[1] / (H * W)
``` | closed | 2024-11-01T18:32:58Z | 2024-11-05T00:14:56Z | https://github.com/keras-team/keras/issues/20437 | [
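The implementation above leans on the standard conjugation identity `ifft(x) = conj(fft(conj(x))) / N`, which is easy to sanity-check with plain NumPy, independently of any Keras backend:

```python
import numpy as np

def ifft2_from_fft2(x):
    """ifft2 expressed through fft2 via the conjugation identity (same trick as above)."""
    h, w = x.shape[-2], x.shape[-1]
    return np.conj(np.fft.fft2(np.conj(x))) / (h * w)

x = np.random.rand(2, 8, 8) + 1j * np.random.rand(2, 8, 8)
assert np.allclose(ifft2_from_fft2(x), np.fft.ifft2(x))
```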
"stat:contributions welcome",
"type:feature"
] | markveillette | 1 |
mljar/mercury | jupyter | 275 | cant donwload files from output directory for private notebooks and s3 storage | Steps to reproduce:
- create private Site,
- create notebook with OutputDir,
- use s3 as storage,
When downloading a file you will get an error because token is added to requests to s3 server. We should send request to s3 without authentication token. | closed | 2023-05-16T10:21:00Z | 2023-05-16T11:03:35Z | https://github.com/mljar/mercury/issues/275 | [
"bug"
] | pplonski | 0 |
pallets-eco/flask-sqlalchemy | flask | 377 | Doc site missing link to PDF download | I was able to figure out the link is http://flask-sqlalchemy.pocoo.org/2.1/.latex/Flask-SQLAlchemy.pdf and successfully downloaded it.
Looking at the source, the intent was clearly to have the link in the sidebar.
(Don't have a chance to figure out how to make the change myself at the moment. I'll look into it as soon as I can.)
| closed | 2016-02-24T23:44:18Z | 2020-12-05T21:31:06Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/377 | [] | vassudanagunta | 3 |
slackapi/bolt-python | fastapi | 520 | How to know team_id in app_home_opened event listeners | Unable to see view in app_home_opened event:
- When someone opens a home tab, I am fetching the team_id from the Event payload (inside view) to identify the workspace and sending the corresponding home tab but It seems like there is no view section in the Event payload.
```python
@bolt_app.event("app_home_opened")
def handle_home_tab(client, event, logger):
print('Event: ', event)
# get event['view']
```
```bash
Event: {'type': 'app_home_opened', 'user': 'U0*********', 'channel': 'D0*********', 'tab': 'home', 'event_ts': '163*****.*******'},
```
No view is there, what I read is for first time when this event occur since no view was there we didn't get view field but I need team_id to identify that workspace in my backend. Is there any way I can fetch team_id before I publish my home view for the first time | closed | 2021-11-10T12:10:38Z | 2021-11-10T12:44:28Z | https://github.com/slackapi/bolt-python/issues/520 | [
"question"
] | Cyb-Nikh | 2 |
amidaware/tacticalrmm | django | 1,764 | 0.17.5 fails to install properly and requires some manual changes to shape it | **Server Info (please complete the following information):**
- OS: Ubuntu 22.04.3 LTS
- RMM Version (as shown in top left of web UI): 0.17.5
**Installation Method:**
- [x] Standard
- [ ] Docker
**Describe the bug**
1. It's a fresh new Ubuntu VPS.
2. Our first installation somehow 'completed'; however, it showed an error during install:
```
Creating meshcentral account and group
--------------------------------------------------------------------------------
Done.
Done. This command will only work if MeshCentral is stopped.
Mesh Central not ready yet...
ok mesh//e4E4O@9HJQBwrYZfcVt64rDPC9a5LJCFJwNQ3nrZ04mNHnEboMfXnv7ok1uziQ7j
Created symlink /etc/systemd/system/multi-user.target.wants/nats.service → /etc/systemd/system/nats.service.
Traceback (most recent call last):
  File "/rmm/api/env/lib/python3.11/site-packages/redis/connection.py", line 264, in connect
    sock = self.retry.call_with_retry(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/rmm/api/env/lib/python3.11/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
           ^^^^
  File "/rmm/api/env/lib/python3.11/site-packages/redis/connection.py", line 265, in <lambda>
    lambda: self._connect(), lambda error: self.disconnect(error)
            ^^^^^^^^^^^^^^^
  File "/rmm/api/env/lib/python3.11/site-packages/redis/connection.py", line 627, in _connect
    raise err
  File "/rmm/api/env/lib/python3.11/site-packages/redis/connection.py", line 615, in _connect
    sock.connect(socket_address)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/rmm/api/tacticalrmm/manage.py", line 21, in <module>
    main()
  File "/rmm/api/tacticalrmm/manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "/rmm/api/env/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "/rmm/api/env/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/rmm/api/env/lib/python3.11/site-packages/django/core/management/base.py", line 412, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/rmm/api/env/lib/python3.11/site-packages/django/core/management/base.py", line 458, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/rmm/api/tacticalrmm/core/management/commands/initial_db_setup.py", line 15, in handle
    CoreSettings().save()
  File "/rmm/api/tacticalrmm/core/models.py", line 111, in save
    cache.delete(CORESETTINGS_CACHE_KEY)
  File "/rmm/api/env/lib/python3.11/site-packages/django/core/cache/backends/redis.py", line 199, in delete
    return self._cache.delete(key)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/rmm/api/env/lib/python3.11/site-packages/django/core/cache/backends/redis.py", line 119, in delete
    return bool(client.delete(key))
                ^^^^^^^^^^^^^^^^^^
  File "/rmm/api/env/lib/python3.11/site-packages/redis/commands/core.py", line 1712, in delete
    return self.execute_command("DEL", *names)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/rmm/api/env/lib/python3.11/site-packages/redis/client.py", line 533, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/rmm/api/env/lib/python3.11/site-packages/redis/connection.py", line 1291, in get_connection
    connection.connect()
  File "/rmm/api/env/lib/python3.11/site-packages/redis/connection.py", line 270, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 111 connecting to localhost:6379. Connection refused.
Reloading NATs configuration...
NATs configuration reloaded
```
and the installation was completely broken; however, when we tried to launch redis, it was working fine,
so we had to roll back everything except redis and restart the installation from scratch.
3. After the install it was still unusable, until we found out that the 'TacticalRMM' group was not created inside the mesh.
4. I don't know why, but the mesh web interface just doesn't work when trying to push the 'add group' button, so we had to create it manually via the mesh CLI.
5. There was an error in the nginx rmm.conf in the /natws section: until we added `proxy_set_header Upgrade "websocket"`, we were getting an 'error in Upgrade header' from the nats server.
| closed | 2024-02-21T15:34:48Z | 2024-02-21T17:07:47Z | https://github.com/amidaware/tacticalrmm/issues/1764 | [] | optiproplus | 2 |
pyeventsourcing/eventsourcing | sqlalchemy | 138 | Rebuilding Aggregate root | Hey @johnbywater
First of all a big thanks, this library is awesome!
I've got a question about rebuilding the aggregate root.
I've got this simple hangman web API and I make different calls to that API for guessing letters. I just noticed that every time I make a call to the API, I get a new Aggregate Root id, with the consequence that I can never guess the word, letters, etc.
Is there a way to rebuild the aggregate to its latest state?
Many thanks in advance! | closed | 2018-02-15T22:51:44Z | 2018-02-15T23:17:18Z | https://github.com/pyeventsourcing/eventsourcing/issues/138 | [] | weemen | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1172 | Is this lib capable of TTS? | Hi, I think this is the only lib out here that can synthesize high-quality voice. I was wondering if it can also generate TTS. | open | 2023-03-10T18:15:56Z | 2023-03-10T18:15:56Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1172 | [] | destlaver | 0 |
microsoft/hummingbird | scikit-learn | 305 | Generate automatically the schema for ONNX models. | ONNX models contain all the input information, therefore it should be possible to automatically generate the schema definition \ inputs starting just from the model.
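If it helps, a sketch of what that extraction could look like, assuming the standard protobuf layout (`graph.input` → `type.tensor_type.shape.dim`). It is duck-typed, so it only needs an `onnx`-loaded `ModelProto` at call time:

```python
def onnx_input_schema(model):
    """Return [(input_name, [dim, ...])] for each graph input of an ONNX model.

    Fixed axes come back as ints; dynamic axes as their dim_param symbol (or None).
    """
    schema = []
    for inp in model.graph.input:
        dims = []
        for d in inp.type.tensor_type.shape.dim:
            if d.HasField("dim_value"):
                dims.append(d.dim_value)           # fixed axis
            else:
                dims.append(d.dim_param or None)   # symbolic/dynamic axis
        schema.append((inp.name, dims))
    return schema
```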
If this is true, we can remove `ONNX_INITIAL_TYPES` from the supported input parameters. | closed | 2020-09-22T23:31:11Z | 2020-10-20T23:10:51Z | https://github.com/microsoft/hummingbird/issues/305 | [] | interesaaat | 0 |
keras-rl/keras-rl | tensorflow | 388 | Value error when running DQN.fit | I tried teaching an AI how to play Breakout, but my code crashes when I try to train the DQN model.
```python
import gym
import numpy as np
import tensorflow as tf
from rl.agents.dqn import DQNAgent
from rl.policy import LinearAnnealedPolicy, EpsGreedyQPolicy
from rl.memory import SequentialMemory
from keras.layers import Dense, Flatten, Convolution2D

env = gym.make('ALE/Breakout-v5', render_mode='rgb_array')
height, width, channels = env.observation_space.shape
actions = env.action_space.n

episodes = 10
for episode in range(1, episodes + 1):
    env.reset()
    done = False
    score = 0

def buildModel(height, width, channels, actions):
    model = tf.keras.Sequential()
    model.add(Convolution2D(32, (8, 8), strides=(4, 4), activation='relu', input_shape=(3, height, width, channels)))
    model.add(Convolution2D(64, (4, 4), strides=(2, 2), activation='relu'))
    model.add(Convolution2D(64, (3, 3), activation='relu'))
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dense(256, activation='relu'))
    model.add(Dense(actions, activation='linear'))
    return model

def buildAgent(model, actions):
    policy = LinearAnnealedPolicy(EpsGreedyQPolicy(), attr='eps', value_max=1., value_min=.1, value_test=.2, nb_steps=10000)
    memory = SequentialMemory(limit=1000, window_length=3)
    dqn = DQNAgent(model=model, memory=memory, policy=policy,
                   enable_dueling_network=True, dueling_type='avg',
                   nb_actions=actions, nb_steps_warmup=1000)
    return dqn

model = buildModel(height, width, channels, actions)
DQN = buildAgent(model, actions)
DQN.compile(tf.keras.optimizers.Adam(learning_rate=1e-4), metrics=['mae'])
DQN.fit(env, nb_steps=1000000, visualize=True, verbose=1)

scores = DQN.test(env, nb_episodes=1000, visualize=True)
print(np.mean(scores.history['episode_reward']))
```
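One possible cause of the `ValueError` below: `ALE/Breakout-v5` uses the newer Gym API, where `reset()` returns `(obs, info)` and `step()` returns a 5-tuple, while keras-rl was written against the old 4-tuple API. A shim along these lines (an untested sketch, not verified against keras-rl itself) adapts the new API back to the old one:

```python
class OldGymAPIWrapper:
    """Adapts the new Gym API (tuple reset, 5-tuple step) to the old 4-tuple API keras-rl expects."""

    def __init__(self, env):
        self.env = env

    def __getattr__(self, name):
        # Delegate everything else (action_space, render, ...) to the wrapped env.
        return getattr(self.env, name)

    def reset(self):
        out = self.env.reset()
        # New API returns (observation, info); old API returns just the observation.
        return out[0] if isinstance(out, tuple) else out

    def step(self, action):
        out = self.env.step(action)
        if len(out) == 5:  # new API: obs, reward, terminated, truncated, info
            obs, reward, terminated, truncated, info = out
            return obs, reward, terminated or truncated, info
        return out
```

With this, `env = OldGymAPIWrapper(gym.make('ALE/Breakout-v5', render_mode='rgb_array'))` would be passed to `DQN.fit` instead.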
Error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() | open | 2022-04-16T19:25:24Z | 2022-09-29T17:56:02Z | https://github.com/keras-rl/keras-rl/issues/388 | [] | GravermanDev | 2 |
tiangolo/uvicorn-gunicorn-fastapi-docker | fastapi | 43 | Support for nvidia | Hello,
Thanks for your work! Do you plan to add a Docker image starting from nvidia/cuda so the CUDA toolkit is installed?
| closed | 2020-05-15T08:05:04Z | 2020-06-15T12:58:48Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/43 | [
"answered"
] | griseau | 1 |
Gozargah/Marzban | api | 1021 | Marzban-node "unable to get local issuer certificate" | I have a server with a few Marzban nodes that work normally.
I tried to add a new one in the same way, but the central Marzban server cannot connect to the node.
I get this error:
`[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)`
Can you help me understand? The certificates were issued normally and stored in the right place. | open | 2024-05-30T18:31:16Z | 2024-05-30T18:31:16Z | https://github.com/Gozargah/Marzban/issues/1021 | [
"Bug"
] | ultraleks | 0 |
albumentations-team/albumentations | deep-learning | 1,698 | Unexpected Kernel Size Application in `Blur` Transformation | I've discovered what seems to be a bug where the `Blur` transformation applies a smaller kernel size than specified by the `blur_limit`. This behavior occurs even when large values are set for the `blur_limit`, leading to blurring effects that do not match the expected intensity.
### Steps to Reproduce
I observed this issue while attempting to blur an image with significantly different kernel sizes. Specifically, when setting a `blur_limit` of `(13, 15)`, the blurring effect was similar to that produced by a much smaller kernel size of 3x3, suggesting that the transformation always uses `3` as kernel size.
To reproduce this behavior, follow these steps:
1. Generate a random image of size 512x512 with 3 color channels.
2. Apply the `Blur` transformation using a `blur_limit` of `(13, 15)`.
3. Apply OpenCV's `blur` function to the same image with a kernel size of 3x3.
4. Compare the two results to observe the discrepancy in blurring intensity.
Here is the Python code that demonstrates this issue:
```python
import albumentations as A
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Generate a random image
image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
# Apply Albumentations Blur with a large kernel size range
albumentations_blur = A.Blur(p=1, blur_limit=(13, 15))
transformed_image = albumentations_blur(image=image)['image']
# Apply OpenCV Blur with a small kernel size
cv2_blurred_image = cv2.blur(image, (3, 3))
# Check if the images are identical
are_identical = np.array_equal(transformed_image, cv2_blurred_image)
print(f"Are the blurred images identical? {are_identical}")
```
### Expected Behavior
I expected that the image blurred with a `blur_limit` of `(13, 15)` would show a significantly more pronounced blur effect compared to one blurred with a kernel size of 3x3.
### Actual Behavior
The image blurred with the Albumentations `Blur` transformation (`blur_limit` of `(13, 15)`) showed a blur intensity similar to that of the image processed with OpenCV's 3x3 kernel.
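For reference (not part of the original report), the expected gap between kernel sizes can be shown without Albumentations at all: under a mean filter, the peak of an impulse drops roughly as 1/k, so a 15-tap blur should look very different from a 3-tap one. A minimal 1-D sketch:

```python
def box_blur_1d(signal, k):
    """Edge-clamped 1-D mean filter with window size k (odd)."""
    r = k // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - r), min(len(signal), i + r + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

impulse = [0.0] * 10 + [1.0] + [0.0] * 10
print(box_blur_1d(impulse, 3)[10])   # 1/3: a 3-tap blur barely flattens the peak
print(box_blur_1d(impulse, 15)[10])  # 1/15: a 15-tap blur flattens it far more
```

If `Blur` with `blur_limit=(13, 15)` really matches a 3x3 result bit-for-bit, that 1/k gap is missing, which supports the bug hypothesis.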
### Environment
- **Operating System:** Ubuntu 20.04
- **Python Version:** 3.8.10
- **Albumentations Version:** 1.4.5
| closed | 2024-05-03T12:36:39Z | 2024-05-04T05:34:23Z | https://github.com/albumentations-team/albumentations/issues/1698 | [
"bug"
] | LeLub | 4 |
ARM-DOE/pyart | data-visualization | 1,037 | Remove numpy import in setup.py | When installing pyart in a brand new virtual env, errors appear when installing it along with other requirements (with pip).
The reason is that pyart needs `numpy` installed prior to its installation. We can see that `numpy` is required within the `setup.py` file 2 times :
`from numpy.distutils.misc_util import Configuration` (line 147)
`from numpy.distutils.core import setup` (line 165)
Because numpy is not installed by default in a brand-new env, its absence makes the installation of the package fail.
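For context (not part of the original issue): the standard fix for build-time numpy imports in `setup.py` is to declare numpy as a build dependency under PEP 518, so pip installs it into the isolated build environment before `setup.py` runs. A minimal `pyproject.toml` sketch (the exact package list is illustrative):

```toml
[build-system]
# Installed by pip before setup.py is executed, so
# `from numpy.distutils.core import setup` can succeed.
requires = ["setuptools", "wheel", "numpy"]
build-backend = "setuptools.build_meta"
```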
Is there a way not to use `numpy` in `setup.py`? | closed | 2022-02-23T08:56:57Z | 2022-02-23T09:01:29Z | https://github.com/ARM-DOE/pyart/issues/1037 | [] | Vforcell | 0 |
deepspeedai/DeepSpeed | deep-learning | 6,987 | model.parameters() return [Parameter containing: tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)] when using zero3 | **Describe the bug**
Trying to print model.parameters() in the transformers Trainer, but it returns Parameter containing: tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True) for all layers.
In fact, I am trying to return the correct model.parameters() in DeepSpeed Zero-3 mode and use the EMA model. Could you suggest any ways to solve the above issue, or any other methods to use the EMA model under Zero-3?
**System Info**
transformers 4.44.2
accelerate 1.2.1
deepspeed 0.12.2
torch 2.2.2
torchaudio 2.2.2
torchvision 0.17.2
**Expected behavior**
Expect to see the gathered parameters
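For context (not part of the original report): under ZeRO-3 each rank holds only a shard, so `model.parameters()` shows empty tensors until the full parameters are gathered; DeepSpeed's `deepspeed.zero.GatheredParameters` context manager is the usual tool for this (verify the exact signature against your DeepSpeed version). The EMA update rule itself is framework-free; a minimal sketch with the DeepSpeed call left as a comment:

```python
import math

# Sketch of gathering under ZeRO-3 before touching parameter data
# (assumption -- check against your DeepSpeed version):
#
#   with deepspeed.zero.GatheredParameters(list(model.parameters()), modifier_rank=0):
#       for p, e in zip(model.parameters(), ema_model.parameters()):
#           e.data.mul_(decay).add_(p.data, alpha=1.0 - decay)

def ema_update(ema_params, params, decay=0.999):
    """ema <- decay * ema + (1 - decay) * current, elementwise."""
    return [decay * e + (1.0 - decay) * p for e, p in zip(ema_params, params)]

ema = ema_update([0.0, 1.0], [1.0, 3.0], decay=0.9)
print(ema)  # approximately [0.1, 1.2]
```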
| closed | 2025-01-31T16:48:27Z | 2025-02-14T00:48:09Z | https://github.com/deepspeedai/DeepSpeed/issues/6987 | [
"bug",
"training"
] | fanfanffff1 | 1 |
robinhood/faust | asyncio | 324 | Autodiscovery: venusian.scan should ignore `__main__.py` | If venusian.scan imports `__main__.py` it causes the app to be created twice | closed | 2019-03-28T16:48:58Z | 2019-03-28T16:59:48Z | https://github.com/robinhood/faust/issues/324 | [] | ask | 0 |
AirtestProject/Airtest | automation | 346 | Why does image match confidence differ so much between images in the script panel and screenshots of the actual program? | **(Important! Issue category)**
* Image recognition / device control issue -> follow the steps below

**Describe the bug**
Connected AirtestIDE directly to the entire Windows screen.
Tested image recognition at several different resolutions (after applying the method mentioned in [#4](https://github.com/AirtestProject/Airtest/issues/4), everything ran well with no problems).
I had previously noticed that when the whole screen is connected, images inside the script panel are also picked up by recognition.
But after running at different resolutions, I found that **images recognized at 99.9% confidence in other programs reach only 40-60% confidence inside the script panel**.
The image in the script panel should be identical to the screenshot of the actual program, so why do the match scores differ?
| closed | 2019-04-08T07:02:43Z | 2019-07-10T09:15:35Z | https://github.com/AirtestProject/Airtest/issues/346 | [] | niuniuprice | 1 |
matplotlib/matplotlib | data-visualization | 29,659 | [Bug]: Unnecessary double start of `Animation` in some circumstances | ### Bug summary
When a figure has already had a draw event, and an Animation object is then initialized, the animation will loop once more after `animation.save` has finished.
This is due to this line: https://github.com/matplotlib/matplotlib/blob/964355130c5389926641a03154f56f8e081fbfd3/lib/matplotlib/animation.py#L897 as discussed [here](https://discourse.matplotlib.org/t/how-to-prevent-funcanimation-looping-a-single-time-after-save/21680/3?u=ianhi)
### Code for reproduction
In a jupyter notebook:
```Python
%matplotlib ipympl
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import animation
N = 100
slider = widgets.IntSlider(value=0, min=0, max=N)
tau = np.linspace(0.1, 2, N)
fig, ax = plt.subplots()
x = np.linspace(0, 20, 500)
lines = ax.plot(x, np.sin(tau[0] * x))
def update_lines(change):
lines[0].set_data(x, np.sin(tau[change["new"]] * x))
slider.observe(update_lines, names="value")
display(slider)
```
and then, in a second cell, pausing between cells to allow the plot to finish rendering:
```python
def animate(i):
update_lines({"new": i})
return []
anim = animation.FuncAnimation(fig, animate, frames=N, interval=20, repeat=False)
anim.save("anim.gif")
# neither of the below stop the extra loop :(
fig.canvas.flush_events()
anim.event_source.stop()
```
### Actual outcome
Animation loops once after calling save

### Expected outcome
Animation does not run a second time after the save.
### Additional information
Discussed here: https://discourse.matplotlib.org/t/how-to-prevent-funcanimation-looping-a-single-time-after-save/21680/2
recently revived by a commenter, and now I am making good on what I ought to have done 4 years ago.
This is most easily seen in a notebook backend, but the root cause is in `matplotlib` proper.
### Operating system
_No response_
### Matplotlib Version
3.10.0
### Matplotlib Backend
ipympl
### Python version
_No response_
### Jupyter version
_No response_
### Installation
pip | open | 2025-02-22T07:02:28Z | 2025-02-24T08:07:36Z | https://github.com/matplotlib/matplotlib/issues/29659 | [] | ianhi | 1 |
apify/crawlee-python | automation | 721 | Implement option for persistent context to PlaywrightCrawler | - Implement an option for persistent context (`user_data_dir`) to PlaywrightCrawler in a similar way as it is in the Crawlee JS.
- https://crawlee.dev/api/browser-pool/interface/BrowserPluginOptions#userDataDir
- Before implementation sync with @barjin, as he can provide further context and also suggest potential improvements. | closed | 2024-11-21T19:51:51Z | 2025-02-25T09:12:11Z | https://github.com/apify/crawlee-python/issues/721 | [
"enhancement",
"t-tooling"
] | vdusek | 0 |
onnx/onnx | tensorflow | 6,380 | Invalid protobuf error when loading successfully exported onnx model | # Bug Report
### Is the issue related to model conversion?
<!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. -->
Error reported in the after-stage of model conversion but can possibly be caused by unreported flaws during conversion.
### Describe the bug
<!-- Please describe the bug clearly and concisely -->
On GCP Vertex VM, my UNet classifier model has been successfully exported to onnx format without raising a warning:
```
import torch
import torch.onnx
from model.UNet3D import UNETCLF
model = UNETCLF(in_channels=7 ,out_channels=1, features=[64, 128, 256, 512, 1024])
model.load_state_dict(torch.load('clf.pth'))
model.eval()
model = model.to('cuda')
dummy_input = torch.randn(1, 7, 16, 64, 64, device='cuda')
torch.onnx.export(model,
dummy_input,
"clf.onnx",
export_params=True,
opset_version=18,
do_constant_folding=True,
input_names=['input'],
output_names=['output'],
dynamic_axes={'input': {0: 'batch_size'},
'output': {0: 'batch_size'}})
```
I then tried to load it locally on VCS with:
```
def get_model():
client = storage.Client()
bucket = client.get_bucket('model-storage-bucket')
blob = bucket.blob('clf.onnx')
model = io.BytesIO()
blob.download_to_file(model)
model.seek(0)
return InferenceSession(model.read(), providers=["CPUExecutionProvider"])
```
It threw an invalid protobuf error:
```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Failed to load model because protobuf parsing failed.
```
When I looked more closely by:
```
onnx_model = onnx.load(local_file_path)
onnx.checker.check_model(onnx_model)
```
It threw this error message:
```
Error parsing message with type 'onnx.ModelProto'
```
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Ubuntu 22.04.4 LTS
- ONNX version (*e.g. 1.13*): 1.16.2 (same on both GCP VM and local device)
- Python version: 3.10.13
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version: 3.20.2 (same on both GCP VM and local device)
- Visual Studio version (if applicable): 1.89.1
### Reproduction instructions
<!--
Please let me know how you would reproduce the bug if required. I am happy to provide more information and instructions.
- Describe the code to reproduce the behavior.
```
import onnx
model = onnx.load('model.onnx')
...
```
- Attach the ONNX model to the issue (where applicable)-->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The model to be successfully loaded locally with the above `get_model` function.
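One sanity check worth adding (not part of the original report): compare a checksum of the exported file on the VM with the bytes downloaded locally; a mismatch would mean the blob was truncated or corrupted in transit rather than mis-exported. A stdlib sketch:

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Hex digest of raw model bytes, for comparing the VM vs. local copies."""
    return hashlib.sha256(data).hexdigest()

# On the VM:  sha256_bytes(open("clf.onnx", "rb").read())
# Locally:    sha256_bytes(model.getvalue())   # `model` is the BytesIO from get_model
print(sha256_bytes(b"example"))  # shown here only as a smoke test
```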
### Notes
<!-- Any additional information -->
I have spent quite some time modifying the model so that onnx export finally worked without throwing a warning. It might help if I provide the model here
```
class DoubleConv(nn.Module):
def __init__(self, in_channels, out_channels):
super(DoubleConv, self).__init__()
self.conv = nn.Sequential(
nn.Conv3d(in_channels, out_channels, 3, 1, 1, bias=False),
nn.BatchNorm3d(out_channels),
nn.ReLU(inplace=True),
nn.Conv3d(out_channels, out_channels, 3, 1, 1, bias=False),
nn.BatchNorm3d(out_channels),
nn.ReLU(inplace=True),
)
def forward(self, x):
return self.conv(x)
class UNETCLF(nn.Module):
def __init__(self, in_channels=7, out_channels=1, features=[64, 128, 256, 512, 1024]):
super(UNETCLF, self).__init__()
self.downs = nn.ModuleList()
self.ups = nn.ModuleList()
self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2))
# Down Part of UNET
for feature in features:
self.downs.append(DoubleConv(in_channels, feature))
in_channels = feature
# Up Part of UNET
for feature in reversed(features):
self.ups.append(
nn.ConvTranspose3d(feature*2, feature, kernel_size=(1, 2, 2), stride=(1, 2, 2))
)
self.ups.append(DoubleConv(feature*2, feature))
self.bottleneck = DoubleConv(features[-1], features[-1]*2)
self.final_cov = nn.Conv3d(features[0], out_channels, kernel_size=1)
def forward(self, x):
skip_connections = []
for down in self.downs:
x = down(x)
skip_connections.append(x)
x = self.pool(x)
x = self.bottleneck(x)
skip_connections = skip_connections[::-1]
for idx in range(0, len(self.ups), 2):
x = self.ups[idx](x)
skip_connection = skip_connections[idx//2]
# Use padding instead of dynamic resizing
x = F.pad(x, [0, skip_connection.shape[4] - x.shape[4],
0, skip_connection.shape[3] - x.shape[3],
0, skip_connection.shape[2] - x.shape[2]])
concat_skip = torch.cat((skip_connection, x), dim=1)
x = self.ups[idx+1](concat_skip)
x = self.final_cov(x)
# Use a fixed index for the time dimension
o = x[:, :, x.shape[2]//2, :, :]
return o
``` | closed | 2024-09-21T00:27:56Z | 2024-11-01T14:57:29Z | https://github.com/onnx/onnx/issues/6380 | [] | DagonArises | 2 |
pallets/quart | asyncio | 274 | can't load variables with render_template | The bug stops me from even trying to load any kind of variable
1. Set up a basic app
2. Make an HTML page in the templates folder
3. Import `render_template` and try to load a variable into the template
4. It errors out after running and visiting the page
```python
from quart import Quart, render_template

app = Quart(__name__)

@app.route("/")
async def index():
    str = ""
    return render_template("index.html", string=str)

if __name__ == "__main__":
    app.run("0.0.0.0", 8080)
```
### Error
```
[2023-09-20 15:28:58,478] ERROR in app: Exception on request GET /
Traceback (most recent call last):
  File "/home/runner/tests/.pythonlibs/lib/python3.10/site-packages/quart/app.py", line 1650, in handle_request
    return await self.full_dispatch_request(request_context)
  File "/home/runner/tests/.pythonlibs/lib/python3.10/site-packages/quart/app.py", line 1676, in full_dispatch_request
    return await self.finalize_request(result, request_context)
  File "/home/runner/tests/.pythonlibs/lib/python3.10/site-packages/quart/app.py", line 1733, in finalize_request
    response = await self.make_response(result)
  File "/home/runner/tests/.pythonlibs/lib/python3.10/site-packages/quart/app.py", line 1635, in make_response
    raise TypeError(f"The response value type ({type(value).__name__}) is not valid")
TypeError: The response value type (coroutine) is not valid
```
I believe it should've loaded a variable into the template, not stop it from running
Environment:
- Python version: 3.10
- Quart version: 0.18.4
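For context (not part of the original report): in Quart, `render_template` is a coroutine function, so the handler above returns a coroutine object instead of a string unless it is awaited, which matches the `TypeError: The response value type (coroutine) is not valid`. A stdlib sketch of the same failure mode (the `render_template` here is a stand-in, not Quart's implementation):

```python
import asyncio

# Stand-in for Quart's render_template, which is a coroutine function:
# calling it returns a coroutine object until it is awaited.
async def render_template(name, **context):
    return f"rendered {name} with {context}"

def index_without_await():
    # Bug from the report: the coroutine object itself is returned.
    return render_template("index.html", string="")

async def index_with_await():
    # Fix: await the coroutine so a string is returned.
    return await render_template("index.html", string="")

bad = index_without_await()
print(asyncio.iscoroutine(bad))  # True -> not a valid response value
bad.close()  # silence the "coroutine was never awaited" warning

good = asyncio.run(index_with_await())
print(good)  # rendered index.html with {'string': ''}
```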
| closed | 2023-09-20T16:00:15Z | 2023-10-05T00:17:27Z | https://github.com/pallets/quart/issues/274 | [] | Arcader717 | 2 |
vitalik/django-ninja | rest-api | 834 | [BUG] `servers` are null, but should be empty list/non-existent | **Describe the bug**
When exporting the OpenAPI schema, `servers` is emitted as `null`, which some tools (e.g. [openapi-generator-cli](https://github.com/OpenAPITools/openapi-generator-cli)) cannot handle. A null value is not part of the documented behavior, whereas an empty list or omitting the property entirely is ([OpenAPI specs](https://spec.openapis.org/oas/v3.1.0#oasServers)).
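A post-export workaround sketch (assumption: you normalize the exported schema dict/JSON yourself before feeding it to a generator; the schema shown is a made-up minimal example):

```python
import json

# Hypothetical exported schema illustrating the problem.
schema = {
    "openapi": "3.1.0",
    "info": {"title": "Example", "version": "1.0.0"},
    "servers": None,   # what gets emitted; generators may choke on this
    "paths": {},
}

# Per the OpenAPI spec, `servers` may be an empty list or omitted, never null.
if schema.get("servers") is None:
    schema.pop("servers", None)   # alternatively: schema["servers"] = []

print(json.dumps(schema, indent=2))
```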
**Versions (please complete the following information):**
- Python version: 3.11.4
- Django version: 4.2.4
- Django-Ninja version: 0.22.2
- Pydantic version: 1.10.12 | closed | 2023-08-28T10:56:46Z | 2023-08-28T11:50:31Z | https://github.com/vitalik/django-ninja/issues/834 | [] | pgronkievitz | 2 |
AntonOsika/gpt-engineer | python | 596 | Run benchmark for `--steps tdd` and compare to "default" benchmark results | If --steps tdd doesn't work well, we should remove it.
If we see obvious ways to improve it, we can of course first consider fixing and see if it helps as part of the investigation.
| closed | 2023-08-15T21:19:46Z | 2023-09-06T11:08:55Z | https://github.com/AntonOsika/gpt-engineer/issues/596 | [
"good first issue"
] | AntonOsika | 5 |
comfyanonymous/ComfyUI | pytorch | 6,493 | Didn't see Nvidia Cosmos workflow | ### Feature Idea
Nvidia Cosmos 7B and 14B: text to video and image to video diffusion model support.
### Existing Solutions
_No response_
### Other
_No response_ | closed | 2025-01-17T01:51:14Z | 2025-01-18T17:43:45Z | https://github.com/comfyanonymous/ComfyUI/issues/6493 | [
"Feature"
] | IAFFeng | 1 |
vanna-ai/vanna | data-visualization | 447 | Why is "add_question_sql" storing vectors as question+sql? SQL isn't natural language, so it wouldn't affect the semantic matching of input queries? | 
Why is "add_question_sql" storing vectors as question+sql? SQL isn't natural language, so it wouldn't affect the semantic matching of input queries? | closed | 2024-05-17T01:50:31Z | 2024-05-23T02:32:05Z | https://github.com/vanna-ai/vanna/issues/447 | [] | qingwu11 | 2 |
pytest-dev/pytest-html | pytest | 750 | Captured stdout in a subtest (pytest-subtests) is not displayed properly | Captured stdout in a subtest is not displayed as expected. Tested with v3.2.0 and v4.0.2 - both have the issue in a different way. Also the issue behavior changes depending on the `--capture` option value. I tried `fd` (the default value) and `tee-sys`. Only v3.2.0 with `--capture tee-sys` seems to work as expected.
- Test code to reproduce:
```
def test_something(subtests):
print("main test")
with subtests.test("subtest"):
print("sub test")
```
- Results:
- v3.2.0
1. command: pytest --html fd_3.2.0.html
-> Captured stdout from the subtest is missing
2. command: pytest --html tee-sys_3.2.0.html --capture tee-sys
-> Looks good
- v4.0.2
1. command: pytest --html fd_4.0.2.html
-> main test and subtest are displayed as separate tests. One has captured stdout from the main test, the other has captured stdout from the subtest
2. command: pytest --html tee-sys_4.0.2.html --capture tee-sys
-> main test and subtest are displayed as separate tests. One has no captured stdout, the other has captured stdout from both main test and subtest




| closed | 2023-10-20T18:27:23Z | 2023-10-23T15:36:25Z | https://github.com/pytest-dev/pytest-html/issues/750 | [] | yugokato | 2 |
cvat-ai/cvat | tensorflow | 8,494 | Error in django request | Hi,
I have installed cvat v12.6.2.
Everything else is working fine, but I am getting these errors in Cvat_opa container logs
```typescript
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp: lookup cvat-server on 127.0.0.11:53: no such host","name":"cvat","plugin":"bundle","time":"2024-10-01T11:15:58Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp: lookup cvat-server on 127.0.0.11:53: no such host","name":"cvat","plugin":"bundle","time":"2024-10-01T11:15:59Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp: lookup cvat-server on 127.0.0.11:53: no such host","name":"cvat","plugin":"bundle","time":"2024-10-01T11:15:59Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp: lookup cvat-server on 127.0.0.11:53: no such host","name":"cvat","plugin":"bundle","time":"2024-10-01T11:15:59Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp: lookup cvat-server on 127.0.0.11:53: no such host","name":"cvat","plugin":"bundle","time":"2024-10-01T11:15:59Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp 172.19.0.17:8080: connect: connection refused","name":"cvat","plugin":"bundle","time":"2024-10-01T11:16:03Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp 172.19.0.17:8080: connect: connection refused","name":"cvat","plugin":"bundle","time":"2024-10-01T11:16:04Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp 172.19.0.17:8080: connect: connection refused","name":"cvat","plugin":"bundle","time":"2024-10-01T11:16:05Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp 172.19.0.17:8080: connect: connection refused","name":"cvat","plugin":"bundle","time":"2024-10-01T11:16:08Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp 172.19.0.17:8080: connect: connection refused","name":"cvat","plugin":"bundle","time":"2024-10-01T11:16:12Z"}
{"level":"error","msg":"Bundle load failed: request failed: Get \"http://cvat-server:8080/api/auth/rules\": dial tcp 172.19.0.17:8080: connect: connection refused","name":"cvat","plugin":"bundle","time":"2024-10-01T11:16:19Z"}
```
The cvat-server container shows no errors, and the healthcheck commands for all containers run fine.
Also, I am not able to send POST requests to CVAT (for example this one: http://ip_adress:8080/api/auth/login); it returns a server error.

And these are the errors I get in cvat_server when the above POST is sent:
```
[2024-10-01 11:17:41,839] ERROR django.request: Internal Server Error: /api/auth/login
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 518, in thread_handler
raise exc_info[1]
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 42, in inner
response = await get_response(request)
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 284, in _get_response_async
response = await sync_to_async(
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 468, in __call__
ret = await asyncio.shield(exec_coro)
File "/opt/venv/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 40, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 522, in thread_handler
return func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/template/response.py", line 114, in render
self.content = self.rendered_content
File "/opt/venv/lib/python3.10/site-packages/rest_framework/response.py", line 70, in rendered_content
ret = renderer.render(self.data, accepted_media_type, context)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/renderers.py", line 723, in render
context = self.get_context(data, accepted_media_type, renderer_context)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/renderers.py", line 701, in get_context
'filter_form': self.get_filter_form(data, view, request),
File "/opt/venv/lib/python3.10/site-packages/rest_framework/renderers.py", line 629, in get_filter_form
queryset = view.get_queryset()
File "/opt/venv/lib/python3.10/site-packages/rest_framework/generics.py", line 63, in get_queryset
assert self.queryset is not None, (
AssertionError: 'LoginViewEx' should either include a `queryset` attribute, or override the `get_queryset()` method.
```
Is this related to the OPA issue?
Can someone help me with the issue? | open | 2024-10-01T11:56:12Z | 2024-10-08T16:02:50Z | https://github.com/cvat-ai/cvat/issues/8494 | [
"bug"
] | ShreenidhiH | 7 |