| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
koxudaxi/fastapi-code-generator | fastapi | 84 | ValueError: 'template/main.jinja2' is not in the subpath | I'm trying to make my project compatible with poetry so I have to add a start method where I invoke uvicorn so that I can have the following in the poetry run script:
```
[tool.poetry.scripts]
run = "skeleton_python_api.main:start"
```
I have the following structure of a project:
```
├── README.md
├── openapi.yaml
├── poetry.lock
├── pyproject.toml
├── skeleton_python_api
│   ├── __init__.py
│   ├── main.py
│   └── models.py
├── template
│   └── main.jinja2
└── tests
    ├── __init__.py
    └── test_skeleton_python_api.py
```
While running the following command:
```
(skeleton-python-api-PB31_aPS-py3.9) ➜ skeleton-python-api git:(master) ✗ fastapi-codegen --input openapi.yaml --output skeleton_python_api -t template
```
I'm getting an error:
```
Traceback (most recent call last):
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/bin/fastapi-codegen", line 8, in <module>
sys.exit(app())
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/lib/python3.9/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/lib/python3.9/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/lib/python3.9/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/lib/python3.9/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/lib/python3.9/site-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/lib/python3.9/site-packages/fastapi_code_generator/__main__.py", line 28, in main
return generate_code(input_name, input_text, output_dir, template_dir)
File "/Users/bartosz.nadworny/Library/Caches/pypoetry/virtualenvs/skeleton-python-api-PB31_aPS-py3.9/lib/python3.9/site-packages/fastapi_code_generator/__main__.py", line 50, in generate_code
relative_path = target.relative_to(template_dir.absolute())
File "/usr/local/Cellar/python@3.9/3.9.0_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/pathlib.py", line 928, in relative_to
raise ValueError("{!r} is not in the subpath of {!r}"
ValueError: 'template/main.jinja2' is not in the subpath of '/Users/bartosz.nadworny/workspace/space/skeleton-python-api/template' OR one path is relative and the other is absolute.
```
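The error comes straight from `pathlib`: `Path.relative_to` refuses to relate a relative path to an absolute one. The failing call in `generate_code` can be reproduced in isolation (a minimal sketch of the same situation, not the tool's actual code):

```python
from pathlib import Path

# A relative target combined with an absolutized template_dir,
# as in generate_code(), raises the same ValueError:
target = Path("template/main.jinja2")
template_dir = Path("template")
try:
    target.relative_to(template_dir.absolute())
except ValueError as exc:
    print(exc)  # explains the relative/absolute mismatch
```

Passing the template directory as an absolute path (e.g. `-t "$(pwd)/template"`) is one way to sidestep the mismatch, assuming the tool does not normalize the path itself.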
Template:
```
from __future__ import annotations

import uvicorn
from fastapi import FastAPI

{{imports}}

app = FastAPI(
    {% if info %}
    {% for key,value in info.items() %}
    {{ key }} = "{{ value }}",
    {% endfor %}
    {% endif %}
)

{% for operation in operations %}
@app.{{operation.type}}('{{operation.snake_case_path}}', response_model={{operation.response}})
def {{operation.function_name}}({{operation.snake_case_arguments}}) -> {{operation.response}}:
    {%- if operation.summary %}
    """
    {{ operation.summary }}
    """
    {%- endif %}
    pass
{% endfor %}

def start():
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
# Env
macos
python 3.9.0
fastapi-code-generator 0.1.0 | closed | 2021-01-07T12:13:38Z | 2021-01-13T09:29:49Z | https://github.com/koxudaxi/fastapi-code-generator/issues/84 | [] | nadworny | 1 |
miguelgrinberg/python-socketio | asyncio | 338 | Python server & Node client: on authentication failure the client always receives the same fixed error | Hello,
I have a scenario where the HTTPS server is written in Python and the client in Node.js.
On the client I need to show two different messages for the scenarios below:
- If the client tries to connect to an invalid URL like 'abc.com', I need to show the message "server not found".
- If the user enters a valid URL but passes an invalid token, I need to show the message "Invalid token".
On authentication failure, whatever type of error I raise on the server, the client always receives the same fixed error: "**websocket error**". I cannot receive on the client side the error object, with its text, that the server sent.
For Node, I have just one default handler where I can receive any type of connection error...
```
socket.on("connect_error", (data) => {
    console.log((data));
})
```
So if the server returns _ConnectionRefusedError('authentication failed')_ (see the server code below), then in Node.js I could compare the data type and text of the handler's parameter, and on that basis decide which message to show to the user.
> My Python server HTTPS server +SocketIO code looks like:
```
import socketio
from aiohttp import web
import asyncio
import eventlet
import ssl
import jwt
from urllib.parse import urlparse, parse_qs

sio = socketio.AsyncServer(cors_allowed_origins="*")
app = web.Application()
sio.attach(app)

@sio.event
async def connect(sid, environ):
    print('>>>> connect <<<<< ')
    print(sid)
    raise ConnectionRefusedError('authentication failed')

@sio.event
def disconnect(sid):
    print('>>>> disconnect <<<<< ')

@sio.event
def error(sid, data):
    print('>>>> error <<<<< ')

@sio.event
async def message(sid, data):
    print('>>>> message <<<<< ')
    print(data)
    return 'Acknowledgement From Server'

ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.load_cert_chain('server.cert', 'server.key')
web.run_app(app, host='localhost', port=2499, ssl_context=ctx)
```
#################################################################
> My client code is:
```
const io = require('socket.io-client');
var jwt = require('jsonwebtoken');

let token = jwt.sign({ username: 'git' }, '2YyZ?&qLkKus`pGV', { expiresIn: 60 * 60 });

const socket = io.connect('wss://localhost:2499', {
    forceNew: true,
    autoConnect: true,
    rejectUnauthorized: false,
    reconnection: false,
    secure: true,
    transports: ['websocket'],
    hostname: 'localhost',
    port: 2499,
    upgrade: false,
    query: {
        token
    },
})

socket.on("connect", (data, aaaaa) => {
    console.log("connect====>");
    console.log(data);
})

socket.on("connect_error", (data) => {
    console.log("connect_error====>");
    console.log((data));
})

socket.on("message", (data) => {
    console.log("message===>", data);
})

socket.on("error", (data) => {
    console.log("Error===>", data);
})

socket.on("disconnect", (data) => {
    console.log("disconnect===>", data);
});
```
What's wrong here? Why am I not getting the same error on the client side?
How can I achieve the desired output?
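One thing worth checking (an assumption on my part, not something confirmed in the issue): Python has a *built-in* `ConnectionRefusedError`, an `OSError` subclass, so the bare `raise ConnectionRefusedError(...)` in the server code raises the builtin rather than python-socketio's own `socketio.exceptions.ConnectionRefusedError`, which is the exception the library documents for rejecting a connection with a message:

```python
# The bare name resolves to Python's builtin, an OSError subclass:
print(issubclass(ConnectionRefusedError, OSError))  # True

# python-socketio ships its own exception of the same name; raising
# that one is what lets the server attach a message to the rejection:
#
#   from socketio.exceptions import ConnectionRefusedError
#   raise ConnectionRefusedError('invalid token')
```

Whether the client then surfaces that message in `connect_error` is worth verifying against the python-socketio and socket.io-client docs for the versions in use.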
Thanks in advance | closed | 2019-08-21T08:38:00Z | 2019-08-29T05:46:39Z | https://github.com/miguelgrinberg/python-socketio/issues/338 | [
"question"
] | harshkoralwala | 6 |
bmoscon/cryptofeed | asyncio | 332 | OHLCV Aggregation Coinbase fails with `unexpected keyword argument 'order_type'` | I use the script `examples/demo_ohlcv.py`
```
from cryptofeed import FeedHandler
from cryptofeed.backends.aggregate import OHLCV
from cryptofeed.callback import Callback
from cryptofeed.defines import TRADES
from cryptofeed.exchanges import Coinbase
from cryptofeed.exchanges import Binance
async def ohlcv(data=None):
    print(data)

def main():
    f = FeedHandler()
    f.add_feed(Coinbase(pairs=['BTC-USD', 'ETH-USD', 'BCH-USD'], channels=[TRADES], callbacks={TRADES: OHLCV(Callback(ohlcv), window=30)}))
    # f.add_feed(Binance(pairs=['BTC-USDT'], channels=[TRADES], callbacks={TRADES: OHLCV(Callback(ohlcv), window=30)}))
    f.run()

if __name__ == '__main__':
    main()
```
Binance or FTX works, however, for Coinbase I get:
```
TypeError: __call__() got an unexpected keyword argument 'order_type'
```
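The traceback says the callback was invoked with an `order_type` keyword it does not accept. A common defensive pattern (a sketch, not necessarily how cryptofeed expects callbacks to be written) is to let the callback absorb extra keyword arguments:

```python
import asyncio

async def ohlcv(data=None, **kwargs):
    # **kwargs absorbs extra fields such as order_type that an
    # exchange adapter may pass through to the callback
    return data

# the call no longer raises TypeError when order_type is supplied:
print(asyncio.run(ohlcv(data={'BTC-USD': {}}, order_type='limit')))  # {'BTC-USD': {}}
```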
I use the current dev version 1.6.2.
Am I using it incorrectly?
Thanks for a quick reply
| closed | 2020-11-20T18:43:57Z | 2020-11-21T01:35:42Z | https://github.com/bmoscon/cryptofeed/issues/332 | [
"bug"
] | degloff | 1 |
coqui-ai/TTS | pytorch | 3,799 | [Bug] Demo Inference Produces Distorted Audio Output | ### Describe the bug
I followed the demo code provided by Coqui to create a simple dataset and fine-tune a model using Gradio. However, when I load the model and perform inference, the output audio is heavily distorted, resembling the sound of a hair shaving machine.
You can listen to the output at the following link: [Distorted Audio Output](https://voca.ro/12zTyyaafKBF).
Steps to Reproduce:
1. Create dataset: followed the instructions to create a simple dataset using the demo code.
2. Fine-tune model: used the Gradio interface as provided in the demo to fine-tune the model.
3. Load model and inference: loaded the fine-tuned model and performed inference using the Gradio interface with the following setup:
   `py TTS/TTS/demos/xtts_ft_demo/xtts_demo.py`

Expected Result:
The model should produce a clear and intelligible speech output corresponding to the input text.
Actual Result:
The output audio is distorted and unintelligible. You can hear the output here: [Distorted Audio Output](https://voca.ro/12zTyyaafKBF).
Additional Information:
I verified that CUDA and the NVIDIA drivers are correctly installed and operational.
The nvidia-smi command confirms that the GPU is recognized and utilized by the system.
Other models and libraries utilizing CUDA work as expected.
Logs and Error Messages:
No explicit error messages were encountered during the execution. The process completes without any exceptions.
Request:
Could you please provide guidance on how to resolve this issue or if there are any specific configurations required to avoid such distortion in the output?
Thank you for your assistance.
### To Reproduce
`py TTS/TTS/demos/xtts_ft_demo/xtts_demo.py`
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
- Operating System: Window 11
- Python Version: 3.10.4
- CUDA Version: 11.5
- PyTorch Version: 1.11.0+cu115
- coqui-ai Version: Last Update on github
```
### Additional context
_No response_ | closed | 2024-06-25T18:29:54Z | 2025-01-03T08:49:11Z | https://github.com/coqui-ai/TTS/issues/3799 | [
"bug",
"wontfix"
] | Heshamtr | 1 |
aio-libs/aiomysql | sqlalchemy | 195 | unable to perform operation on <TCPTransport closed=True reading=False 0x1e41248>; the handler is closed` | hi,i use
python3.5.3
aiohttp 2.0.7
aiomysql 0.0.9
sqlalchemy1.1.10
When i open the application for a long time.Throw the following error
```
2017-07-27 10:28:13 rtgroom.py[line:141] ERROR Traceback (most recent call last):
  File "/home/wwwroot/ykrealtime/rtgame/models/mysql/rtgroom.py", line 137, in get_invite_me_count
    RTG_room.select().where(RTG_room.c.create_by == create_by).where(RTG_room.c.status == 1))
  File "/home/wwwroot/ykrealtime/venv/lib/python3.5/site-packages/aiomysql/utils.py", line 66, in __await__
    resp = yield from self._coro
  File "/home/wwwroot/ykrealtime/venv/lib/python3.5/site-packages/aiomysql/sa/connection.py", line 107, in _execute
    yield from cursor.execute(str(compiled), post_processed_params[0])
  File "/home/wwwroot/ykrealtime/venv/lib/python3.5/site-packages/aiomysql/cursors.py", line 239, in execute
    yield from self._query(query)
  File "/home/wwwroot/ykrealtime/venv/lib/python3.5/site-packages/aiomysql/cursors.py", line 460, in _query
    yield from conn.query(q)
  File "/home/wwwroot/ykrealtime/venv/lib/python3.5/site-packages/aiomysql/connection.py", line 397, in query
    yield from self._execute_command(COMMAND.COM_QUERY, sql)
  File "/home/wwwroot/ykrealtime/venv/lib/python3.5/site-packages/aiomysql/connection.py", line 627, in _execute_command
    self._write_bytes(prelude + sql[:chunk_size - 1])
  File "/home/wwwroot/ykrealtime/venv/lib/python3.5/site-packages/aiomysql/connection.py", line 568, in _write_bytes
    return self._writer.write(data)
  File "/usr/local/lib/python3.5/asyncio/streams.py", line 294, in write
    self._transport.write(data)
  File "uvloop/handles/stream.pyx", line 632, in uvloop.loop.UVStream.write (uvloop/loop.c:74612)
  File "uvloop/handles/handle.pyx", line 150, in uvloop.loop.UVHandle._ensure_alive (uvloop/loop.c:54917)
RuntimeError: unable to perform operation on <TCPTransport closed=True reading=False 0x1e41248>; the handler is closed
```
| closed | 2017-07-27T03:44:36Z | 2018-12-06T03:38:03Z | https://github.com/aio-libs/aiomysql/issues/195 | [] | larryclean | 4 |
taverntesting/tavern | pytest | 564 | Support using custom function in request.auth | As stated in the [requests document](https://requests.readthedocs.io/en/master/user/advanced/#custom-authentication), user may pass a sub class of AuthBase as the auth parameter. This is very useful when the authentication is a little bit more complicated than passing the session token or basic auth.
I guess this can be worked around by allowing user to pass a custom function which return AuthBase subclass? Something similar to the below.
```
request:
  auth:
    $ext:
      function: security:prepare_auth
      extra_kwargs:
        username: "{username_variable}"
```
In the `security.py`, the caller can do the below
```
from requests.auth import AuthBase

class PizzaAuth(AuthBase):
    """Attaches HTTP Pizza Authentication to the given Request object."""
    def __init__(self, username):
        # setup any auth-related data here
        self.username = username

    def __call__(self, r):
        # modify and return the request
        r.headers['X-Pizza'] = self.username
        return r

def prepare_auth(**kwargs):
    username = kwargs['username']
    return PizzaAuth(username)
```
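The auth object is just a callable that mutates the outgoing request before it is sent. The same pattern can be exercised without `requests` installed by using a stand-in request object (names here are illustrative, not part of tavern or requests):

```python
class FakeRequest:
    """Stand-in for requests' PreparedRequest: only has headers."""
    def __init__(self):
        self.headers = {}

class PizzaAuth:
    def __init__(self, username):
        self.username = username

    def __call__(self, r):
        # an AuthBase-style callable mutates and returns the request
        r.headers['X-Pizza'] = self.username
        return r

req = PizzaAuth('kenneth')(FakeRequest())
print(req.headers['X-Pizza'])  # kenneth
```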
I hope this makes sense?
I do have a fix which can be submitted shortly. Let me know what you think. | open | 2020-06-28T13:08:51Z | 2021-01-13T13:36:33Z | https://github.com/taverntesting/tavern/issues/564 | [] | sohoffice | 2 |
allenai/allennlp | data-science | 5,105 | Build Fairness Library | **Motivation:** As models and datasets become increasingly large and complex, it is critical to evaluate the fairness of models according to multiple definitions of fairness and mitigate bias in learned representations. This library aims to make fairness metrics, fairness training tools, and bias mitigation algorithms extremely easy to use and accessible to researchers and practitioners of all levels.
**Success Criteria:**
* Create a fairness library, and apply it to the Textual Entailment model, publishing an analysis for where the present models fall short and where they should improve.
* Write a blog post and guide chapter and add a model and demo for the implementations of the fairness metrics and bias mitigation algorithms, and explain the broader impact.
**Milestones**
Implement the following:
Fairness Metrics
- [x] Independence, Separation, Sufficiency
- [x] Sparse Annotations for Ground-Truth
- [ ] Dataset Bias Amplification, Model Bias Amplification
Training-Time Fairness Algorithms (with and without Demographics):
- [x] Through Adversarial Learning (with Demographics)
- [ ] Minimax (without Demographics)
- [ ] Repeated Loss Minimization (without Demographics)
Bias Mitigation Algorithms:
- [x] Linear projection, Hard debiasing, OSCaR, Iterative Null Space Projection
- [x] Bias direction methods: Classification Normal, Two Means, Paired PCA, PCA
- [x] Contextualized word embeddings
Bias Metrics:
- [x] WEAT, Embedding Coherence Test, NLI
Communication:
- [x] blog post
- [x] guide chapter
- [x] demo
- [x] contribute binary gender bias-mitigated model for SNLI to allennlp-models
- [x] contribute binary gender bias-mitigated model for SNLI to demos
| open | 2021-04-08T21:25:54Z | 2022-12-15T16:09:49Z | https://github.com/allenai/allennlp/issues/5105 | [] | ArjunSubramonian | 38 |
jacobgil/pytorch-grad-cam | computer-vision | 491 | the example in README need to update | this link ๏ผhttps://jacobgil.github.io/pytorch-gradcam-book/Class%20Activation%20Maps%20for%20Semantic%20Segmentation.html

I found that the code can now automatically use the same device as the model:
```
class BaseCAM:
    def __init__(self,
                 model: torch.nn.Module,
                 target_layers: List[torch.nn.Module],
                 reshape_transform: Callable = None,
                 compute_input_gradient: bool = False,
                 uses_gradients: bool = True,
                 tta_transforms: Optional[tta.Compose] = None) -> None:
        self.model = model.eval()
        self.target_layers = target_layers
        # Use the same device as the model.
        self.device = next(self.model.parameters()).device
        xxx
``` | open | 2024-03-14T07:33:48Z | 2024-03-14T07:37:37Z | https://github.com/jacobgil/pytorch-grad-cam/issues/491 | [] | 578223592 | 1 |
pywinauto/pywinauto | automation | 1,171 | {AttributeError}'EditWrapper' object has no attribute 'is_editable' | ## Expected Behavior
I get an edit control from a window and want to check whether the control is editable.
https://pywinauto.readthedocs.io/en/latest/code/pywinauto.controls.uia_controls.html?highlight=is_editable#pywinauto.controls.uia_controls.EditWrapper.is_editable
According to the documentation, the method `is_editable` is available on pywinauto.controls.uia_controls.EditWrapper.
## Actual Behavior
When I use is_editable, it throws the error "{AttributeError}'EditWrapper' object has no attribute 'is_editable'".
## Steps to Reproduce the Problem
1. open software 7zFM
2. get descendants whose control type is "Edit"
3. choose edit control "uia_controls.EditWrapper - 'C:', Edit"
4. call method is_editable()
## Short Example of Code to Demonstrate the Problem
## Specifications
- Pywinauto version:0.6.8
- Python version and bitness:python 3.8
- Platform and OS:windows 10
| open | 2022-01-25T01:09:29Z | 2022-01-25T01:09:29Z | https://github.com/pywinauto/pywinauto/issues/1171 | [] | jiliguluss | 0 |
jschneier/django-storages | django | 1,243 | S3Boto3Storage.exists() always returns False | Hey guys, I need a little help again.
I'm having an issue with S3Boto3Storage.exists(): it always returns False even though the directory is present in the bucket.
I need to know what is going wrong, because when a user uploads a new file with the same content but a different file name, I want only the newly uploaded file to be visible.
I'm attaching the AWS config, storages_backends.py, and views.py.
settings.py
```
# AWS Config
AWS_ACCESS_KEY_ID = 'AWS_ACCESS_KEY_ID '
AWS_SECRET_ACCESS_KEY = 'AWS_SECRET_ACCESS_KEY'
AWS_STORAGE_BUCKET_NAME = 'AWS_STORAGE_BUCKET_NAME'
AWS_S3_SIGNATURE_NAME = 's3v4',
AWS_S3_REGION_NAME = 'ap-south-1'
AWS_S3_FILE_OVERWRITE = True
AWS_DEFAULT_ACL = None
AWS_S3_VERITY = True
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
```
storages_backends.py
```
from storages.backends.s3boto3 import S3Boto3Storage

class MediaStorage(S3Boto3Storage):
    bucket_name = 'bucket-name'
```
views.py (where the logic goes wrong):
```
if request.method == 'POST':
    print('INSIDE POST DOC UPLOAD')
    doc_storage = MediaStorage()
    team = Team.objects.filter(teamID=user.username).get()
    media_storage = MediaStorage()
    file_path_bucket = 'documents/{0}'.format(request.user.username)
    print('Path is: ', file_path_bucket)
    # The line below always returns False
    print('Check Dir', media_storage.exists(file_path_bucket))
```
For example, if I upload a file to documents/CSE-001, the directory is present. So when I pass this directory to exists() it should return True instead of False, because that directory exists inside the bucket. I'm attaching a screenshot of the directory which is created when a user uploads a file.

Please help me with that...
I'm not sure where I have gone wrong
Thank you | closed | 2023-04-24T16:08:38Z | 2023-05-20T18:06:14Z | https://github.com/jschneier/django-storages/issues/1243 | [] | bphariharan1301 | 1 |
holoviz/panel | jupyter | 7,150 | global loading spinner static asset not available | #### ALL software version info
panel 1.3.8
Docker version 26.1.3, build b72abbb
conda 24.1.2
the app is running locally within a docker container
#### Description of expected behavior and the observed behavior
I would expect the loading spinner to be loaded successfully.
#### Complete, minimal, self-contained example code that reproduces the issue
```
panel serve /home/jovy/work/notebooks/A.ipynb --port 5006 --address 0.0.0.0 --allow-websocket-origin=0.0.0.0:5006 --log-level debug --autoreload --reuse-sessions --global-loading-spinner
```
```
2024-08-15 16:02:06,114 Uncaught exception GET /static/extensions/panel//arc_spinner.svg (172.17.0.1)
HTTPServerRequest(protocol='http', host='0.0.0.0:5006', method='GET', uri='/static/extensions/panel//arc_spinner.svg', version='HTTP/1.1', remote_ip='172.17.0.1')
Traceback (most recent call last):
  File "/opt/conda/envs/myenv/lib/python3.11/site-packages/tornado/web.py", line 1792, in _execute
    self.finish()
  File "/opt/conda/envs/myenv/lib/python3.11/site-packages/tornado/web.py", line 1218, in finish
    self.set_etag_header()
  File "/opt/conda/envs/myenv/lib/python3.11/site-packages/tornado/web.py", line 1702, in set_etag_header
    etag = self.compute_etag()
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/myenv/lib/python3.11/site-packages/tornado/web.py", line 2775, in compute_etag
    assert self.absolute_path is not None
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
2024-08-15 16:02:06,115 500 GET /static/extensions/panel//arc_spinner.svg (172.17.0.1) 1.75ms
2024-08-15 16:02:06,120 Subprotocol header received
2024-08-15 16:02:06,121 WebSocket connection opened
```
| closed | 2024-08-15T16:09:39Z | 2024-08-24T12:15:02Z | https://github.com/holoviz/panel/issues/7150 | [] | updiversity | 0 |
LAION-AI/Open-Assistant | machine-learning | 3,747 | Not able to get to the dashboard | There is no way for me to access the dashboard tools. I will see the dashboard for a split second, and then it just goes back to the main page, naming off contributors and affiliates. | closed | 2024-01-31T01:26:35Z | 2024-01-31T05:14:23Z | https://github.com/LAION-AI/Open-Assistant/issues/3747 | [] | RayneDrip | 1 |
labmlai/annotated_deep_learning_paper_implementations | deep-learning | 71 | Question about the framework | Thanks for your excellent work on so many implementations. I was wondering whether you would accept algorithms implemented using TensorFlow, MXNet, or PaddlePaddle rather than PyTorch? | closed | 2021-07-27T05:07:38Z | 2021-08-07T02:17:25Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/71 | [
"question"
] | littletomatodonkey | 2 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 142 | Extracting the human-voice content from recordings | I have a set of training data in which each audio file contains both blank (silent) segments and segments of a person's voice. Is there any way to extract just the human-voice segments and save them as wav files? | closed | 2019-09-18T06:46:32Z | 2021-11-22T14:06:12Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/142 | [] | zraul | 5 |
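For splitting recordings into silent and voiced segments, here is a standard-library sketch of energy-based voice-activity detection (the frame size and RMS threshold are assumed tuning values, not anything from ASRT itself):

```python
import math
import struct
import wave

def frame_rms(samples):
    """Root-mean-square energy of one frame of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def voiced_regions(path, frame_ms=30, threshold=500):
    """Return [(start_sec, end_sec)] regions whose frame RMS exceeds
    threshold. Assumes 16-bit mono PCM; frame_ms and threshold are
    tuning values that depend on the recordings."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        data = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(data) // 2), data)
    frame_len = max(1, int(rate * frame_ms / 1000))
    regions, start = [], None
    for i in range(0, len(samples), frame_len):
        loud = frame_rms(samples[i:i + frame_len]) > threshold
        t = i / rate
        if loud and start is None:
            start = t
        elif not loud and start is not None:
            regions.append((start, t))
            start = None
    if start is not None:
        regions.append((start, len(samples) / rate))
    return regions
```

Each (start, end) region can then be cut out with `wave` and written to its own file; for real data a trained VAD (e.g. WebRTC's) is usually more robust than a fixed threshold.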
ultralytics/yolov5 | machine-learning | 12,527 | speed estimation using YOLOv5 - put coordinates of cars in an XML or CSV file | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, I am using YOLOv5 with PyTorch and can detect cars fine, but I have code for speed detection that needs the cars' coordinates. Can I do something with only YOLO to estimate speed in my video?
Thank you for your help.

### Additional
_No response_ | closed | 2023-12-19T22:06:32Z | 2024-10-20T19:34:52Z | https://github.com/ultralytics/yolov5/issues/12527 | [
"question"
] | gchinta1 | 6 |
bmoscon/cryptofeed | asyncio | 217 | [Feature request] support to record "Market price" or "index" in bitmex | seems they are terribly out of line when things get funny | open | 2020-03-13T03:55:55Z | 2020-08-01T00:49:43Z | https://github.com/bmoscon/cryptofeed/issues/217 | [
"Feature Request"
] | xiandong79 | 7 |
jmcnamara/XlsxWriter | pandas | 193 | Problem with one formula | Hello.
I sorry, I don't speak English very well (I am French)
I have a small problem with a formula.
I reduced my program easier to explain my problem.
``` python
import xlsxwriter
workbook = xlsxwriter.Workbook('test.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write('A1', 'SUCCEED')
worksheet.write('A2', 'FAILED')
worksheet.write('A3', 'SUCCEED')
#worksheet.write_formula('A5', '=NB.SI(A1:A3;"SUCCEED")') #French
worksheet.write_formula('B5', '=COUNTIF(A1:A3;"SUCCEED")') #English
workbook.close()
```
I want to count the number of times that there is "succed" from my results.
But I am unable to open excel when the .xlsx is generated.
The error (in french):
"Dรฉsolรฉ... Nous avons trouvรฉ un problรจme dans le contenu de "test.xlsx". mais nous pouvons essayer de rรฉcupรฉrer le maximum de contenu. Si la source de ce classeur est fiable cliquer sur oui"
I thinks the english error is:
"We're sorry. We can't open test.xlsx because we found a problem with its contents."
The repport:
``` xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<recoveryLog xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"><logFileName>error087360_01.xml</logFileName><summary>Des erreurs ont รฉtรฉ dรฉtectรฉes dans le fichier ยซย C:\Users\ArnaudF\Desktop\test.xlsxย ยป</summary><removedRecords summary="Liste des enregistrements supprimรฉs ci-dessousย :"><removedRecord>Enregistrements supprimรฉs: Formule dans la partie /xl/worksheets/sheet1.xml</removedRecord></removedRecords></recoveryLog>
```
And LibreOffice wrote "Err :508" instead of result
Yet I don't see error in the python code ...
An idea of the problem?
Thank you and good day/evening
| closed | 2014-12-12T14:58:03Z | 2019-10-17T14:14:32Z | https://github.com/jmcnamara/XlsxWriter/issues/193 | [] | Keevar | 4 |
strawberry-graphql/strawberry | asyncio | 3,617 | Unable to import strawberry.django since v0.236.0 | I receive an error when trying to build the project since [`v0.236.0`](https://github.com/strawberry-graphql/strawberry/releases/tag/0.236.0)
## Describe the Bug
```
File "/Users/boesch/.pyenv/versions/project/lib/python3.10/site-packages/strawberry/django/__init__.py", line 16, in __getattr__
raise AttributeError(
AttributeError: Attempted import of strawberry.django.type failed. Make sure to install the'strawberry-graphql-django' package to use the Strawberry Django extension API.
```
## System Information
```
# requirements.in
strawberry-graphql[asgi]==0.236.0
strawberry-graphql-django==0.37.0
```
Note that I'm keeping strawberry django at `v0.37.0` even though their latest release is [`v0.47.2`](https://github.com/strawberry-graphql/strawberry-django/releases/tag/v0.47.2) because, from what I can tell without digging into it too much yet, [`v0.37.1`](https://github.com/strawberry-graphql/strawberry-django/releases/tag/v0.37.1) forces asgi 3.8+ when django 4.2 wants 3.7
Not the problem to resolve here, but fyi, I don't _think_ I can upgrade strawberry django yet. Will probably make an issue over there
## Additional Context
Tbh I'm not seeing anything at first glance in https://github.com/strawberry-graphql/strawberry/pull/3546/files that would cause this (though it is a big PR ๐
). Nothing significant changed within the `strawberry/django` package at least ๐คท
Running `strawberry upgrade update-imports` just updates some unset types for me, still have the issue, fwiw. | closed | 2024-09-04T14:48:25Z | 2025-03-20T15:56:51Z | https://github.com/strawberry-graphql/strawberry/issues/3617 | [
"bug"
] | bradleyoesch | 3 |
gradio-app/gradio | python | 10,658 | Events injecting function instead of called function value for gr.State | ### Describe the bug
I've noticed that the render function is injecting the value of gr.State before the state value is called AND after the state value is called. It should only inject the called value not the callable itself if I understand correctly
### Have you searched existing issues? ๐
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
input_text = gr.Textbox(label="input")
input_state = gr.State(
lambda: bool()
)
@gr.render(inputs=[input_text, input_state])
def show_split(text, state):
print(state)
if len(text) == 0:
gr.Markdown("## No Input Provided")
else:
for letter in text:
gr.Textbox(letter)
demo.launch()
```
### Logs
Here is the output of the above code from the print statement inside the decorated render function:
```shell
<function <lambda> at 0x000001F66D8CCE00>
False
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.17.1
gradio_client version: 1.7.1
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.8
ffmpy: 0.5.0
gradio-client==1.7.1 is not installed.
httpx: 0.28.1
huggingface-hub: 0.29.0
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.3
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.0
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
I can't work around it | open | 2025-02-23T00:57:41Z | 2025-03-03T07:31:16Z | https://github.com/gradio-app/gradio/issues/10658 | [
"bug"
] | brycepg | 1 |
robinhood/faust | asyncio | 535 | Consumer thread not yet started when enable_kafka = False | I'd like to run a Faust worker without doing anything with Kafka, for example to run timers.
## Steps to reproduce
```
import faust
from faust.app.base import BootStrategy
class App(faust.App):
class BootStrategy(BootStrategy):
enable_kafka = False
app = App('test')
```
## Expected behavior
App starts.
## Actual behavior
App crashes initializing the TableManager:
```
[^Worker]: Error: ConsumerNotStarted('Consumer thread not yet started')
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/mode/worker.py", line 273, in execute_from_commandline
self.loop.run_until_complete(self._starting_fut)
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 612, in run_until_complete
return future.result()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 736, in start
await self._default_start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 743, in _default_start
await self._actually_start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 767, in _actually_start
await child.maybe_start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 795, in maybe_start
await self.start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 736, in start
await self._default_start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 743, in _default_start
await self._actually_start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 767, in _actually_start
await child.maybe_start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 795, in maybe_start
await self.start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 736, in start
await self._default_start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 743, in _default_start
await self._actually_start()
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 760, in _actually_start
await self.on_start()
File "/usr/local/lib/python3.8/site-packages/faust/tables/manager.py", line 143, in on_start
await self._update_channels()
File "/usr/local/lib/python3.8/site-packages/faust/tables/manager.py", line 162, in _update_channels
tp for tp in self.app.consumer.assignment()
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 1292, in assignment
return self._thread.assignment()
File "/usr/local/lib/python3.8/site-packages/faust/transport/drivers/aiokafka.py", line 754, in assignment
return ensure_TPset(self._ensure_consumer().assignment())
File "/usr/local/lib/python3.8/site-packages/faust/transport/drivers/aiokafka.py", line 792, in _ensure_consumer
raise ConsumerNotStarted('Consumer thread not yet started')
faust.exceptions.ConsumerNotStarted: Consumer thread not yet started
```
# Versions
* Python version: 3.8.1
* Faust version: 1.10.3
* Operating system: Debian Buster | closed | 2020-02-24T13:41:29Z | 2020-02-26T23:28:57Z | https://github.com/robinhood/faust/issues/535 | [] | joekohlsdorf | 1 |
aleju/imgaug | machine-learning | 669 | cval not behaving correctly when given float value | According to [the docs](https://imgaug.readthedocs.io/en/latest/source/api_augmenters_geometric.html) `cval` should accept float values and create new pixels according to the given value:
> **cval** (number ... ) - The constant value to use when filling in newly created pixels. ... _It may be a float value._
However in practice (with imgaug.augmenters.Affine at least) this does not work. It appears that the actual value being returned is `int(cval)`. My particular use case is with `float32` images ranging from `[0.0, 1.0]`.
The issue can be reproduced this way:
```
import numpy as np
import matplotlib.pyplot as plt
import imgaug as ia
import imgaug.augmenters as iaa
im = np.array(ia.quokka(size=(256,256)),dtype=np.float32)
im = im/(2**8-1)
print("First pixel = " + str(im[0,0,:]))
aug = iaa.Affine(scale=0.8,cval=0.4)
aug_im = aug(image=im)
print("First pixel = " + str(aug_im[0,0,:]))
plt.imsave('./regular.png',im)
plt.imsave('./scaled.png',aug_im)
```
Output:
```
First pixel = [0.19215687 0.30588236 0.32156864]
First pixel = [0. 0. 0.]
```
Where this should now be `[0.4 0.4 0.4]`.
Resulting images:


If this can't be fixed, please update the documentation, as this is currently not the expected behavior.
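For what it's worth, the observed `[0. 0. 0.]` fill is exactly what integer truncation of the float `cval` would produce. A minimal check (this is my assumption about the cause, not confirmed from the library source):

```python
# If the augmenter internally does int(cval), a float cval in [0, 1) becomes 0,
# which matches the all-zero fill pixels observed above.
cval = 0.4
truncated = int(cval)
print(truncated)         # 0
print(float(truncated))  # 0.0
```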
| closed | 2020-05-20T15:18:42Z | 2020-05-25T19:47:49Z | https://github.com/aleju/imgaug/issues/669 | [
"bug"
] | cdjameson | 1 |
deepinsight/insightface | pytorch | 1,855 | RAM | ```
# our RAM is 256G
mount -t tmpfs -o size=140G tmpfs /train_tmp
```
How do I find out how much memory my machine has?
htop? Mem? | open | 2021-12-11T09:31:01Z | 2021-12-13T02:57:32Z | https://github.com/deepinsight/insightface/issues/1855 | [] | alicera | 2 |
wkentaro/labelme | deep-learning | 987 | [Question] Why does the Labelme GUI not add an option to open flags.txt? | closed | 2022-02-15T06:00:50Z | 2022-02-25T21:09:07Z | https://github.com/wkentaro/labelme/issues/987 | [] | YuaXan | 1 | 
pyjanitor-devs/pyjanitor | pandas | 570 | [ENH] Series toset() functionality | # Brief Description
I would like to propose toset() functionality similar to [tolist()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.tolist.html).
Basically, it will call `tolist()` and then convert the result to a set.
Note: if the collection has a non-hashable member, it will raise an exception.
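A minimal sketch of the proposed behavior (plain pandas here; the name `toset` mirrors the proposal, and an actual pyjanitor implementation would presumably register it as a Series method):

```python
import pandas as pd

def toset(series: pd.Series) -> set:
    """Convert a Series to a set via tolist(); non-hashable members raise TypeError."""
    return set(series.tolist())

s = pd.Series([1, 2, 2, 3])
print(toset(s))  # {1, 2, 3}
```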
# Example API
```python
# convert series to a set
df['col1'].toset()
```
| closed | 2019-09-15T07:26:08Z | 2019-09-24T13:28:33Z | https://github.com/pyjanitor-devs/pyjanitor/issues/570 | [
"enhancement",
"good first issue",
"good intermediate issue",
"being worked on"
] | eyaltrabelsi | 5 |
hzwer/ECCV2022-RIFE | computer-vision | 332 | Question about tracking a point | Thank you for this library: I tried it with the `Dockerfile` - without `GPU` - and I was able to generate a new file right away.
Let's say I have an input video with two contiguous frames: `frame_1` and `frame_2`.
On the input video, on `frame_1`, I have a point with known coordinates.
On the output video, is there a way to know the coordinates of the point on:
1) the generated frames between `frame_1` and `frame_2`
2) on `frame_2`
?
Thank you very much again. | open | 2023-08-01T10:37:46Z | 2023-08-03T12:33:49Z | https://github.com/hzwer/ECCV2022-RIFE/issues/332 | [] | carlok | 2 |
comfyanonymous/ComfyUI | pytorch | 6,652 | Image generation on 3090 is sometimes broken and worse than on 2060, and can't reproduce it | ### Expected Behavior
I have a workflow that produces extremely different results on a 2060 GPU on a different PC and on my 3090. This image looks correct; it was generated on the 2060.

### Actual Behavior
This is what gets generated on my 3090, no matter what I do: updating pytorch, drivers, changing VAE, changing attention options, changing fp32/bf16 settings. This results in slight changes in the image but it remains broken. Btw, generating on CPU is completely broken.

### Steps to Reproduce
[ComfyUI_01254_.json](https://github.com/user-attachments/files/18609546/ComfyUI_01254_.json)
The model used is obsessionIllustrious_v31.safetensors, https://civitai.com/models/820208?modelVersionId=1136462
No custom nodes required
### Debug Logs
```powershell
@:~/github/ComfyUI$ python main.py --use-sage-attention --highvram --disable-all-custom-nodes
...
Checkpoint files will always be loaded safely.
Total VRAM 24135 MB, total RAM 64001 MB
pytorch version: 2.6.0+cu126
Set vram state to: HIGH_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using sage attention
ComfyUI version: 0.3.13
...
Skipping loading of custom nodes
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
loaded diffusion model directly to GPU
Requested to load SDXL
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
100%|██████████| 25/25 [00:07<00:00,  3.40it/s]
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
Prompt executed in 9.84 seconds
```
### Other
_No response_ | open | 2025-01-30T22:52:53Z | 2025-02-04T15:24:05Z | https://github.com/comfyanonymous/ComfyUI/issues/6652 | [
"Potential Bug"
] | Nekotekina | 4 |
jumpserver/jumpserver | django | 14,718 | [Question] JumpServer server requirement | ### Product Version
Latest
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Linux Server
### ๐ค Question Description
What are the specifications or server requirements for jumpserver?
For example: OS version, CPU cores, RAM, etc., covering minimum, recommended, and best-practice configurations.
### Expected Behavior
_No response_
### Additional Information
_No response_ | closed | 2024-12-24T02:09:35Z | 2024-12-24T09:33:30Z | https://github.com/jumpserver/jumpserver/issues/14718 | [
"๐ค Question"
] | aryasenawiryady | 2 |
jumpserver/jumpserver | django | 14,383 | [Bug] Local Golang code fails to connect to PgSQL | ### Product Version
v3.10.13
### Product Edition
- [ ] Community Edition
- [X] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [x] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
JumpServer version: JumpServer Enterprise Edition Version: v3.10.13
Deployment architecture: jumpserver server -> public-network gateway -> pgsql
### ๐ Bug Description
Local Golang code cannot connect using the "Database connect info" obtained from the JumpServer workbench.
### Recurrence Steps
Log in to JumpServer, go to the workbench, and select the PgSQL asset to connect to.
For the connection info, choose Native -> DB Guide. Local Golang code fails to connect when calling the database with the obtained DB Connect info.
### Expected Behavior
Local code can connect to the database using the information obtained from DB Guide.
### Additional Information
_No response_
### Attempted Solutions
1) Already tried TCP packet capture and analysis; the TCP connection cannot be established.
2) Already checked the network firewall and confirmed it is not the reason the TCP handshake cannot be established. | closed | 2024-10-30T09:44:40Z | 2024-11-28T08:41:59Z | https://github.com/jumpserver/jumpserver/issues/14383 | [
"๐ Bug",
"๐ Inactive"
] | ChenTitan49 | 3 |
thunlp/OpenPrompt | nlp | 309 | import break when using latest transformers | file: pipeline_base.py
line: 4
code: `from transformers.generation_utils import GenerationMixin`
This import is broken with recent transformers releases and should be replaced with `from transformers import GenerationMixin`. | open | 2024-05-02T19:24:14Z | 2024-05-02T19:24:14Z | https://github.com/thunlp/OpenPrompt/issues/309 | [] | xiyang-aads-lilly | 0 |
Lightning-AI/pytorch-lightning | pytorch | 20,530 | Batch size finder code example in dark mode is light instead of dark | ### ๐ Documentation
On the [Batch size finder advanced tricks](https://lightning.ai/docs/pytorch/stable/advanced/training_tricks.html#batch-size-finder) page, the example code is rendered in light mode even when dark mode is enabled, which makes it hard to read:
<img width="891" alt="Hard to read light mode code example" src="https://github.com/user-attachments/assets/84f54224-1281-4faf-8228-580d2f8db566" />
The code example should look instead look like this in dark mode:
<img width="974" alt="Screenshot 2025-01-06 at 12 59 06โฏPM" src="https://github.com/user-attachments/assets/0455b527-faae-4125-b2a7-e63db02519f0" />
cc @lantiga @borda | open | 2025-01-06T18:03:51Z | 2025-01-06T18:05:21Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20530 | [
"docs",
"needs triage"
] | nicolasperez19 | 0 |
Yorko/mlcourse.ai | seaborn | 169 | Missing image on Lesson 3 notebook | Hey,
Image _credit_scoring_toy_tree_english.png_ is missing in the topic3_decision_trees_kNN notebook.
"minor_fix"
] | henriqueribeiro | 3 |
alteryx/featuretools | data-science | 2,399 | Refactor computation of primitive lists in `DeepFeatureSynthesis` `__init__` | When building the following lists, there is a lot of code duplication:
- `self.groupby_trans_primitives`
- `self.agg_primitives`
- `self.where_primitives`
- `self.trans_primitives`
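A sketch of the kind of shared helper the refactor could extract (all names here are hypothetical, not the actual featuretools API):

```python
def resolve_primitives(primitives, registry):
    """Normalize a mixed list of primitive names, classes, and instances (sketch).

    This is the logic currently duplicated for each of the four lists above.
    """
    resolved = []
    for prim in primitives or []:
        if isinstance(prim, str):
            prim = registry[prim.lower()]  # name -> class
        if isinstance(prim, type):
            prim = prim()                  # class -> instance
        resolved.append(prim)
    return resolved


class Sum:
    """Stand-in primitive class (hypothetical)."""

mixed = ["sum", Sum, Sum()]
resolved = resolve_primitives(mixed, {"sum": Sum})
print([type(p).__name__ for p in resolved])  # ['Sum', 'Sum', 'Sum']
```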
Furthermore, refactoring this logic outside of the `__init__` would help make the code more expressive and testable. | open | 2022-12-13T05:03:45Z | 2023-03-15T22:48:59Z | https://github.com/alteryx/featuretools/issues/2399 | [
"enhancement",
"refactor",
"tech debt"
] | sbadithe | 0 |
oegedijk/explainerdashboard | dash | 220 | Feature value input to get_contrib_df | Hello,
I understand that the get_contrib_df function can be used to get the contributions of various features to the final prediction for a particular data index from the table. However, is it possible to get the contribution calculation/table by passing a list/array of data points to this function? I guess this is possible, since the Input Feature table and contributions plot work in the explainer dashboard; I am just not sure how to call this function, as there are no input arguments that accept a list/array.
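If `get_contrib_df` only takes a single index (my reading of the API), one workaround is to call it per index and concatenate the results; a sketch with a stand-in explainer object (the stub and its columns are hypothetical):

```python
import pandas as pd

class StubExplainer:
    """Stand-in for an explainerdashboard explainer (hypothetical)."""
    def get_contrib_df(self, index):
        return pd.DataFrame({"col": ["f1", "f2"], "contribution": [0.1, 0.2]})

explainer = StubExplainer()
indices = [3, 7, 12]
contribs = pd.concat(
    [explainer.get_contrib_df(index=i).assign(index=i) for i in indices],
    ignore_index=True,
)
print(contribs.shape)  # (6, 3)
```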
Thank you,
Andy | open | 2022-05-25T05:24:51Z | 2022-05-25T05:24:51Z | https://github.com/oegedijk/explainerdashboard/issues/220 | [] | andypatrac | 0 |
aeon-toolkit/aeon | scikit-learn | 2,043 | [ajb/remove_metric] is STALE | @TonyBagnall,
ajb/remove_metric has had no activity for 143 days.
This branch will be automatically deleted in 32 days. | closed | 2024-09-09T01:28:04Z | 2024-09-12T07:26:31Z | https://github.com/aeon-toolkit/aeon/issues/2043 | [
"stale branch"
] | aeon-actions-bot[bot] | 1 |
LAION-AI/Open-Assistant | machine-learning | 2,760 | No function to delete history sessions | Deleting a history session, as prompted by the assistant, is not executable. | closed | 2023-04-19T15:44:02Z | 2023-04-23T20:02:49Z | https://github.com/LAION-AI/Open-Assistant/issues/2760 | [] | taskmgr0 | 1 |
tflearn/tflearn | tensorflow | 882 | About loss in Tensorboard | Hello everyone,
I ran the Multi-layer Perceptron example and visualized the loss in TensorBoard.
Does "Loss" refer to the training loss on each batch? Does "Loss/Validation" refer to the loss on the validation set? And what does "Loss_var_loss" refer to?

| open | 2017-08-22T14:57:32Z | 2017-08-26T07:15:47Z | https://github.com/tflearn/tflearn/issues/882 | [] | zhao62 | 3 |
pytest-dev/pytest-xdist | pytest | 1,063 | Enable configuring numprocesses's default `tx` command | Over the years I've used and introduced xdist whenever possible to speed up pytest runs.
Usually just using the `-n X` notation was sufficient. But in our current application, we have to use the `--tx` notation to ensure we're using eventlet.
```
--tx '4*popen//execmodel=eventlet'
```
This is a lot to type if you 'just' want to speed up tests. And combining it with `-n` reverts it to just using `popen`.
Ideally I'd configure it to default to using `popen//execmodel=eventlet` and then scale up the processes using the `-n X` notation.
So my feature request would be:
Enable configuring what 'executing method' `-n` actually uses. With `popen` being the default.
So that in your pytest.ini you can do something like this:
```ini
[pytest]
addopts = --default-tx popen//execmodel=eventlet
```
And then add more workers as desired with the `-n` notation.
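Until something like `--default-tx` exists, a possible stopgap (my sketch, using only flags xdist already has; it hard-codes the worker count, so `-n` no longer applies) is to put the full spec in the ini:

```ini
[pytest]
addopts = --dist load --tx 4*popen//execmodel=eventlet
```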
| open | 2024-04-12T14:19:07Z | 2024-04-16T09:52:35Z | https://github.com/pytest-dev/pytest-xdist/issues/1063 | [] | puittenbroek | 5 |
vaexio/vaex | data-science | 2,183 | [BUG-REPORT]"Unknown variables or column: ' while using jit | **Description**
Hi, I got this "Unknown variables or column: ' error while using jit_numba() / jit_cuda()
I am just trying a simplify version of your jit turtorial guide, and the code as below:
```
df = vaex.example()
def arc_distance(theta_1, phi_1, theta_2, phi_2):
"""
Calculates the pairwise arc distance
between all points in vector a and b.
"""
temp = (np.sin((theta_2-2-theta_1)/2)**2
+ np.cos(theta_1)*np.cos(theta_2) * np.sin((phi_2-phi_1)/2)**2)
distance_matrix = 2 * np.arctan2(np.sqrt(temp), np.sqrt(1-temp))
return distance_matrix
#without jit
df['arc_distance'] = arc_distance(df.x * np.pi/180,
df.y * np.pi/180,
df.z * np.pi/180,
df.vx * np.pi/180)
df.mean(df.arc_distance) # works fine here
df['arc_distance_cuda'] = df.arc_distance.jit_numba() # **Errorr here**
df.mean(df.arc_distance_cuda)
```

**Software information**
- Numpy: 1.22.0 / Numba: 0.56.0 / python: 3.9.13
- Vaex version {'vaex': '4.11.1', 'vaex-core': '4.11.1', 'vaex-viz': '0.5.2', 'vaex-hdf5': '0.12.3', 'vaex-server': '0.8.1', 'vaex-astro': '0.9.1', 'vaex-jupyter': '0.8.0', 'vaex-ml': '0.18.0'}
- Vaex was installed via: pip
- OS: Win10
I didn't encounter this problem in another env of mine (Python 3.8), and I think the package versions aren't too different from the current env. Though @jit acceleration isn't a must-have feature for me (at least for now), I still want to know how to avoid this mistake. | closed | 2022-08-24T06:03:54Z | 2022-08-26T09:18:59Z | https://github.com/vaexio/vaex/issues/2183 | [] | GMfatcat | 5 |
docarray/docarray | pydantic | 1,601 | Handle `max_elements` from HNSWLibIndexer | By default, `max_elements` is set to 1024. I believe `max_elements` should be recomputed and the indexes resized dynamically. | closed | 2023-05-31T13:08:32Z | 2023-06-01T08:00:59Z | https://github.com/docarray/docarray/issues/1601 | [] | JoanFM | 0 |
supabase/supabase-py | fastapi | 717 | Test failures on Python 3.12 | # Bug report
## Describe the bug
Tests are broken against Python 3.12.
```AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?```
## To Reproduce
Run the test script in a Python 3.12 environment.
## Expected behavior
Tests should not fail.
## Logs
```bash
ERROR: invocation failed (exit code 1), logfile: /Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/log/py312-3.log
========================================================================== log start ===========================================================================
ERROR: Exception:
Traceback (most recent call last):
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 167, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/commands/install.py", line 315, in run
session = self.get_default_session(options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 98, in get_default_session
self._session = self.enter_context(self._build_session(options))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 125, in _build_session
session = PipSession(
^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/network/session.py", line 343, in __init__
self.headers["User-Agent"] = user_agent()
^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/network/session.py", line 175, in user_agent
setuptools_dist = get_default_environment().get_distribution("setuptools")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 180, in get_distribution
return next(matches, None)
^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 175, in <genexpr>
matches = (
^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/base.py", line 594, in iter_all_distributions
for dist in self._iter_distributions():
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 168, in _iter_distributions
for dist in finder.find_eggs(location):
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 136, in find_eggs
yield from self._find_eggs_in_dir(location)
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 103, in _find_eggs_in_dir
from pip._vendor.pkg_resources import find_distributions
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2164, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/__main__.py", line 31, in <module>
sys.exit(_main())
^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/main.py", line 70, in main
return command.main(cmd_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 101, in main
return self._main(args)
^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 223, in _main
self.handle_pip_version_check(options)
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 179, in handle_pip_version_check
session = self._build_session(
^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/cli/req_command.py", line 125, in _build_session
session = PipSession(
^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/network/session.py", line 343, in __init__
self.headers["User-Agent"] = user_agent()
^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/network/session.py", line 175, in user_agent
setuptools_dist = get_default_environment().get_distribution("setuptools")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 180, in get_distribution
return next(matches, None)
^^^^^^^^^^^^^^^^^^^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 175, in <genexpr>
matches = (
^
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/base.py", line 594, in iter_all_distributions
for dist in self._iter_distributions():
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 168, in _iter_distributions
for dist in finder.find_eggs(location):
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 136, in find_eggs
yield from self._find_eggs_in_dir(location)
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_internal/metadata/importlib/_envs.py", line 103, in _find_eggs_in_dir
from pip._vendor.pkg_resources import find_distributions
File "/Users/harish/Workspaces/oss/supabase/supabase-py/.tox/py312/lib/python3.12/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2164, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
```
## System information
- OS: macOS
## Additional context
Launched tests via a `tox` script. (see https://github.com/supabase-community/supabase-py/issues/696)
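The `pkgutil.ImpImporter` attribute was removed in Python 3.12, and the traceback shows it being hit inside pip's vendored `pkg_resources`, which suggests the tox-created virtualenv was provisioned with a pip/setuptools too old for 3.12. A possible mitigation (an assumption about this tox setup, not a verified fix) is to have tox pull current packaging tools into each env:

```ini
[testenv]
# upgrade pip/setuptools/wheel inside each tox env so they support Python 3.12
download = true
```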
| closed | 2024-03-03T05:12:43Z | 2024-03-23T13:24:45Z | https://github.com/supabase/supabase-py/issues/717 | [
"bug"
] | tinvaan | 2 |
QingdaoU/OnlineJudge | django | 476 | "system error" displayed when submitting code | 

ๅจๆไบคissueไนๅ่ฏท
- ่ฎค็้
่ฏปๆๆกฃ http://docs.onlinejudge.me/#/
- ๆ็ดขๅๆฅ็ๅๅฒissues
- ๅฎๅ
จ็ฑป้ฎ้ข่ฏทไธ่ฆๅจ GitHub ไธๅ
ฌๅธ๏ผ่ฏทๅ้้ฎไปถๅฐ `admin@qduoj.com`๏ผๆ นๆฎๆผๆดๅฑๅฎณ็จๅบฆๅ้็บขๅ
ๆ่ฐขใ
็ถๅๆไบคissue่ฏทๅๆธ
ๆฅไธๅไบ้กน
ย - ่ฟ่กไปไนๆไฝ็ๆถๅ้ๅฐไบไปไน้ฎ้ข๏ผๆๅฅฝ่ฝๆๅค็ฐๆญฅ้ชค
ย - ้่ฏฏๆ็คบๆฏไปไน๏ผๅฆๆ็ไธๅฐ้่ฏฏๆ็คบ๏ผ่ฏทๅปdataๆไปถๅคนๆฅ็็ธๅบlogๆไปถใๅคงๆฎต็้่ฏฏๆ็คบ่ฏทๅ
ๅจไปฃ็ ๅๆ ่ฎฐ้้ขใ
- ไฝ ๅฐ่ฏไฟฎๅค้ฎ้ข็ๆไฝ
- ้กต้ข้ฎ้ข่ฏทๅๆธ
ๆต่งๅจ็ๆฌ๏ผๅฐฝ้ๆๆชๅพ
| open | 2024-09-15T12:54:37Z | 2024-09-15T12:54:37Z | https://github.com/QingdaoU/OnlineJudge/issues/476 | [] | leeway-z | 0 |
hankcs/HanLP | nlp | 1,059 | After Traditional Chinese segmentation, ้ซฎ becomes ็ผ | <!--
The notes and the version number are required; otherwise there will be no reply. If you hope to get a reply as soon as possible, please fill in the template carefully, thank you.
-->
## Notes
Please confirm the following:
* I have carefully read the following documentation and found no answer:
  - [Home page documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer either.
* I understand that the open-source community is a free community of volunteers brought together by shared interest and bears no responsibilities or obligations. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in this checkbox, confirming that the items above are done.
## Version
<!-- For releases, note the jar filename with the extension removed; for the GitHub repo, note whether it is the master or portable branch -->
The current latest version is: 1.7.1
The version I am using is: 1.6.8
<!-- The items above are required; below you may write freely -->
## My question
Using TraditionalChineseTokenizer.segment to segment "้ฃๅฉๆตฆๆด้ซฎ้ ๅๅน้ขจๆขณ"
### Expected output
```
[้ฃๅฉๆตฆ/ntc, ๆด้ซฎ/v, ้ ๅ/n, ๅน้ขจ/vn, ๆขณ/v]
```
### Actual output
```
[้ฃๅฉๆตฆ/ntc, ๆด็ผ/v, ้ ๅ/n, ๅน้ขจ/vn, ๆขณ/v]
```
### Other information
Segmenting with NLPTokenizer.analyze gives the expected output.
| closed | 2018-12-25T06:25:13Z | 2018-12-25T19:54:56Z | https://github.com/hankcs/HanLP/issues/1059 | [
"improvement"
] | gunblues | 1 |
nteract/papermill | jupyter | 405 | Using papermill to test notebooks | Hi,
I am using papermill to check that some notebooks run without problems. I don't need to output any notebook. Is there a way to run a notebook without output? | open | 2019-07-26T06:54:23Z | 2021-03-11T22:54:23Z | https://github.com/nteract/papermill/issues/405 | [
"question"
] | argenisleon | 3 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,376 | [Feature Request]: add support for stablediffusion.cpp inference. | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
stablediffusion.cpp works fast on cpu, use less memory than pytorch and support of quantized models which take much less space.
### Proposed workflow
1. Go to settings
2. set inference method to stablediffusion.cpp
### Additional information
_No response_ | open | 2024-08-13T03:48:05Z | 2024-08-13T03:48:05Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16376 | [
"enhancement"
] | sss123next | 0 |
gradio-app/gradio | data-visualization | 10,667 | gr.load_chat has no documentation on gradio.app | ### Describe the bug
The [How to Create a Chatbot with Gradio](https://www.gradio.app/guides/creating-a-chatbot-fast) guide references a URL that does not exist, this is the documentation for the `gr.load_chat` function.
https://www.gradio.app/docs/gradio/load_chat
https://github.com/gradio-app/gradio/blob/f0a920c4934880645fbad783077ae9c7519856ce/guides/05_chatbots/01_creating-a-chatbot-fast.md?plain=1#L27
### Have you searched existing issues? ๐
- [x] I have searched and found no existing issues
### Reproduction
https://www.gradio.app/docs/gradio/load_chat does not exist and will 404
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
n/a
```
### Severity
I can work around it | closed | 2025-02-24T17:46:19Z | 2025-02-25T00:49:49Z | https://github.com/gradio-app/gradio/issues/10667 | [
"bug",
"docs/website"
] | alexandercarruthers | 1 |
sgl-project/sglang | pytorch | 4,436 | [Feature] enable SGLang custom all reduce by default | ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
We need community users to help test these cases. After confirming that there are no issues, we will default to using the custom all reduce implemented in SGLang. You can reply with your test results below this issue. Thanks!
**GPU Hardware Options**:
- H100/H200/H20/H800/A100
**Model Configurations with Tensor Parallelism (TP) Settings**:
- Llama 8B with TP 1/2/4/8
- Llama 70B with TP 4/8
- Qwen 7B with TP 1/2/4/8
- Qwen 32B with TP 4/8
- DeepSeek V3 with TP 8/16
**Environment Variables**:
```
export USE_VLLM_CUSTOM_ALLREDUCE=0
export USE_VLLM_CUSTOM_ALLREDUCE=1
```
**Benchmarking Commands**:
```bash
python3 -m sglang.bench_one_batch --model-path model --batch-size --input 128 --output 8
python3 -m sglang.bench_serving --backend sglang
```
### Related resources
_No response_ | open | 2025-03-14T19:46:52Z | 2025-03-18T08:29:14Z | https://github.com/sgl-project/sglang/issues/4436 | [
"good first issue",
"help wanted",
"high priority",
"performance"
] | zhyncs | 5 |
autokey/autokey | automation | 659 | Add return codes to mouse.wait_for_click and keyboard.wait_for_keypress | ### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Which Linux distribution did you use?
N/A
### Which AutoKey GUI did you use?
_No response_
### Which AutoKey version did you use?
N/A
### How did you install AutoKey?
N/A
### Can you briefly describe the issue?
I want to know how a wait for click or keypress completed so I can use that information for flow control in scripts.
E.g. a loop continues indefinitely until the mouse is clicked. This has to distinguish between a timeout and a click.
The same thing with a loop terminated by a keypress.
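A sketch of the proposed semantics, with a stubbed stand-in for the real blocking call (all names and the `0`/`1` codes here are hypothetical; this illustrates the requested behavior, not the current AutoKey API):

```python
SUCCESS, TIMEOUT = 0, 1

def wait_for_click(button, timeOut=10.0, _clicked=False):
    """Stand-in for mouse.wait_for_click; returns a status code as proposed."""
    return SUCCESS if _clicked else TIMEOUT

# Flow control: distinguish a click from a timeout inside a loop.
status = wait_for_click(1, timeOut=5, _clicked=False)
if status == SUCCESS:
    print("clicked")
else:
    print("timed out; continue looping")
```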
### Can the issue be reproduced?
N/A
### What are the steps to reproduce the issue?
N/A
### What should have happened?
These API calls should return 0 for success and one or more defined non-zero values to cover any alternatives. So far, 1 for timeout/failure is all that comes to mind.
### What actually happened?
AFAIK, they do not return any status code - which is equivalent to returning 0 no matter what happened.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | open | 2022-02-08T20:34:56Z | 2023-06-18T16:59:58Z | https://github.com/autokey/autokey/issues/659 | [
"enhancement",
"scripting",
"good first issue"
] | josephj11 | 21 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,792 | Stop making some kind of Frankenstein out of a normal language | ### Describe the bug
Stop making some kind of Frankenstein out of a normal language.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.2
### DBAPI (i.e. the database driver)
psycopg2
### Database Vendor and Major Version
PostgreSQL 15
### Python Version
3.11
### Operating system
OSX
### To Reproduce
```python
.
```
### Error
```
# Copy the complete stack trace and error message here, including SQL log output if applicable.
```
### Additional context
_No response_ | closed | 2023-12-26T11:54:45Z | 2023-12-26T12:00:04Z | https://github.com/sqlalchemy/sqlalchemy/issues/10792 | [] | undergroundenemy616 | 0 |
horovod/horovod | machine-learning | 3,091 | horovod installation: tensorflow not detected when using intel-tensorflow-avx512. | **Environment:**
1. Framework: TensorFlow
2. Framework version: intel-tensorflow-avx512==2.5.0
3. Horovod version: 0.22.1
4. MPI version: openmpi 4.0.3
5. CUDA version: N/A, cpu only
6. NCCL version: N/A, cpu only
7. Python version: 3.8
10. OS and version: Ubuntu focal
11. GCC version: 9.3.0
12. CMake version: 3.16.3
**Bug report:**
I'm trying to install Horovod after installing `intel-tensorflow-avx512`, but Horovod's build fails to detect that TensorFlow installation.
singularity buildfile is here:
https://github.com/kaufman-lab/build_containers/blob/8145f3c58d237e0c3953d45ff58cf750397bc781/geospatial_plus_ml_horovod4.1.0.def
in particular:
```
HOROVOD_WITH_TENSORFLOW=1 HOROVOD_WITH_MPI=1 HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITHOUT_MXNET=1 HOROVOD_CPU_OPERATIONS=MPI HOROVOD_WITHOUT_PYTORCH=1 pip install --no-cache-dir horovod[tensorflow]==0.22.1 --no-dependencies --force-reinstall
```
build log is here:
https://github.com/kaufman-lab/build_containers/runs/3268356819?check_suite_focus=true
in particular, note the successful installation of tensorflow (specifically the intel-tensorflow-avx512 variant)
```
+ python3 -m pip freeze
absl-py==0.13.0
astunparse==1.6.3
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
charset-normalizer==2.0.4
cloudpickle==1.6.0
flatbuffers==1.12
future==0.18.2
gast==0.4.0
GDAL==3.0.4
google-auth==1.34.0
google-auth-oauthlib==0.4.5
google-pasta==0.2.0
grpcio==1.34.1
h5py==3.1.0
idna==3.2
intel-tensorflow-avx512==2.5.0
keras-nightly==2.5.0.dev2021032900
Keras-Preprocessing==1.1.2
Markdown==3.3.4
numpy==1.19.5
oauthlib==3.1.1
opt-einsum==3.3.0
packaging==21.0
Pillow==8.3.1
protobuf==3.17.3
psutil==5.8.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pyparsing==2.4.7
PyYAML==5.4.1
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
scipy==1.7.1
six==1.15.0
tensorboard==2.5.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow-estimator==2.5.0
termcolor==1.1.0
typing==3.7.4.3
typing-extensions==3.7.4.3
urllib3==1.26.6
Werkzeug==2.0.1
wrapt==1.12.1
```
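Before rerunning the build, it can help to replicate the kind of check a CMake `FindTensorflow` module performs: run a short Python snippet in a subprocess and treat failure as "not found". The snippets below are illustrative guesses at such probes, not Horovod's actual detection code; `tf.sysconfig.get_lib()` is a real TensorFlow call, but whether the build uses exactly that is an assumption:

```python
import subprocess
import sys

def probe(snippet):
    """Run a short Python snippet the way a build-system check would:
    return its stdout on success, or None if the snippet fails."""
    proc = subprocess.run([sys.executable, "-c", snippet],
                          capture_output=True, text=True)
    return proc.stdout.strip() if proc.returncode == 0 else None

# Probes similar in spirit to what a FindTensorflow module runs (hypothetical):
tf_version = probe("import tensorflow as tf; print(tf.__version__)")
tf_libdir = probe("import tensorflow as tf; print(tf.sysconfig.get_lib())")
print("version:", tf_version, "| libdir:", tf_libdir)
```

If the version probe already fails in the build environment (for example because pip installed into a different interpreter than the one CMake invokes), that alone would explain the missing `Tensorflow_LIBRARIES`.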
and the message saying that tensorflow couldn't be found:
```
CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
Could NOT find Tensorflow (missing: Tensorflow_LIBRARIES) (Required is at
least version "1.15.0")
``` | open | 2021-08-07T07:31:46Z | 2021-08-09T15:37:07Z | https://github.com/horovod/horovod/issues/3091 | [
"bug"
] | myoung3 | 4 |
Significant-Gravitas/AutoGPT | python | 8,740 | Add `integer` to `NodeHandle` type list | The JSON schema type `integer` is not defined in the type list in `<NodeHandle>`, causing it to show up as `(any)` rather than `(integer)` on block inputs/outputs with that type.
[https://github.com/Significant-Gravitas/AutoGPT/blob/86535b5811f8d1cc0bdde2232693919c4b1115e3/autogpt_platform/frontend/src/components/NodeHandle.tsx#L22-L29](https://github.com/Significant-Gravitas/AutoGPT/blob/86535b5811f8d1cc0bdde2232693919c4b1115e3/autogpt_platform/frontend/src/components/NodeHandle.tsx#L22-L29)
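In Python terms (the actual component is TypeScript), the failure mode is a membership check against a hard-coded list that simply lacks `integer`, so the lookup falls through to the generic label. The entries below are illustrative, not the component's real type list:

```python
# Toy model of the NodeHandle type list; these entries are illustrative,
# not the component's actual mapping.
KNOWN_TYPES = {"string", "number", "boolean", "object", "array", "null"}

def display_type(json_schema_type):
    # anything missing from the list falls through to the generic label
    return json_schema_type if json_schema_type in KNOWN_TYPES else "any"

print(display_type("number"))   # prints number: a listed type keeps its name
print(display_type("integer"))  # prints any: the missing entry is generic
```

Adding `integer` to the list would make the lookup return the specific label instead of the fallback.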
<img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/3d4e9efe-7804-441e-83ef-53dab7c32832/d668d7b5-430e-4d76-bfc8-4fbc9bd0668d?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi8zZDRlOWVmZS03ODA0LTQ0MWUtODNlZi01M2RhYjdjMzI4MzIvZDY2OGQ3YjUtNDMwZS00ZDc2LWJmYzgtNGZiYzliZDA2NjhkIiwiaWF0IjoxNzMyMjEwNDc4LCJleHAiOjMzMzAyNzcwNDc4fQ.a8Bl1Ua7JVzcIWUSQI0DD9XOKYhydUZorNSpK6TLKXg " alt="image.png" width="348" height="231" /> | closed | 2024-11-21T17:34:39Z | 2024-12-10T17:46:17Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8740 | [
"platform/frontend"
] | Pwuts | 0 |
howie6879/owllook | asyncio | 70 | Could you update the image on Docker Hub? | I'm a bit unfamiliar with Python and have never managed to get it set up myself.
The version on Docker Hub is a bit old. | closed | 2019-09-01T12:11:34Z | 2019-09-02T02:56:40Z | https://github.com/howie6879/owllook/issues/70 | [] | henri001 | 2 |
gradio-app/gradio | data-science | 10,046 | HTML component issue | 
There is only one small problem left, and I believe it lies in the front-end code: when both the container and show_label attributes are set to True at the same time, the two visibly conflict.
I think the cause is that the two use different label elements: the label of the HTML component uses `<span>`, while other components use `<label>`.
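A minimal stand-in for the inconsistency described above (the tag choice is taken from this report; the component's actual Svelte markup is more involved):

```python
def render_label(text, component):
    """Per the report, the HTML component wraps its label in <span> while
    other components use <label>, so styling aimed at <label> misses it."""
    tag = "span" if component == "html" else "label"
    return f"<{tag}>{text}</{tag}>"

print(render_label("My label", "html"))     # prints <span>My label</span>
print(render_label("My label", "textbox"))  # prints <label>My label</label>
```

Any CSS rule scoped to `label` elements then applies to every component except HTML, which is consistent with the conflict in the screenshots.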


_Originally posted by @nuclearrockstone in https://github.com/gradio-app/gradio/issues/10014#issuecomment-2492948301_
| closed | 2024-11-27T06:06:52Z | 2024-11-28T19:13:57Z | https://github.com/gradio-app/gradio/issues/10046 | [
"bug"
] | nuclearrockstone | 0 |
airtai/faststream | asyncio | 1,414 | Bug: The Kafka consumer remains blocked indefinitely after commit failures, unable to recover | **Describe the bug**
In version 0.5.x, when using aiokafka with auto_commit=false, if Kafka rebalances causing consumer commit failures, the consumer remains indefinitely blocked, unable to resume normal consumption.
However, when I set auto_commit=true, or revert to version 0.4.7, the issue does not occur, and the consumer is able to quickly recover consumption after commit failures.
**How to reproduce**
My code be like:
```python
from fastapi import FastAPI
from faststream.kafka.fastapi import KafkaRouter, KafkaMessage
router = KafkaRouter(bootstrap_servers='10.0.3.61:9092')
subscriber = router.subscriber('in_topic', group_id='test', auto_commit=False)
publisher = router.publisher('out_topic')
@subscriber
async def handle(msg: dict, kafka_msg: KafkaMessage):
    # do something...
    await publisher.publish(msg)
app = FastAPI(lifespan=router.lifespan_context)
app.include_router(router)
```
And/Or steps to reproduce the behavior:
1. version 0.5.x
2. set auto_commit=false
3. When Kafka rebalances leading to consumer commit failures
**Expected behavior**
The consumer should be able to quickly recover consumption.
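To make the expectation concrete, here is a stdlib-only sketch of "a failed commit is retried and consumption resumes". This is not FastStream's or aiokafka's actual implementation, just an illustration of the recovery behavior described above:

```python
import asyncio

async def commit_with_retry(commit, retries=3):
    """Retry a failing commit a bounded number of times instead of stalling forever."""
    for attempt in range(retries):
        try:
            await commit()
            return True               # committed -> the consumer moves on
        except RuntimeError:          # e.g. a rebalance invalidated the assignment
            await asyncio.sleep(0)    # yield; a real client would back off
    return False                      # give up; the caller can resubscribe

async def demo(fail_times=2):
    failures = [RuntimeError("CommitFailedError")] * fail_times
    async def flaky_commit():
        if failures:
            raise failures.pop(0)
    return await commit_with_retry(flaky_commit)

print(asyncio.run(demo()))               # prints True: two failures, then success
print(asyncio.run(demo(fail_times=5)))   # prints False: never succeeds in 3 tries
```

Either outcome lets the consumer make progress, in contrast to the indefinite block observed on 0.5.x.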
**Observed behavior**
the consumer remains indefinitely blocked, unable to resume normal consumption.
**Screenshots**
When Kafka rebalances leading to consumer commit failures

The consumer remains blocked thereafter until it goes offline.
**Environment**
faststream==0.5.x
**Additional context**
Provide any other relevant context or information about the problem here.
| closed | 2024-05-02T07:34:01Z | 2024-05-04T16:51:41Z | https://github.com/airtai/faststream/issues/1414 | [
"bug"
] | JohannT9527 | 0 |
InstaPy/InstaPy | automation | 6,000 | Setting timeout on join_pods function | Hi & Happy new year!
Can you please let me know if there is a way to stop `join_pods` interaction after some specified time?
The point is that currently it infinitely engages in interaction with the pods which results in Instagram blocking my activity.
I would therefore like to set a time limit on that function.
Is this somehow possible currently?
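As far as I can tell there is no documented timeout parameter for `join_pods` (hence the question), but a generic stdlib pattern can bound how long a script waits on any blocking call: run it in a daemon thread and stop waiting after a deadline. Caveat: the call itself keeps running in the background; this only unblocks the rest of the script. The `session.join_pods(...)` usage in the comment is hypothetical:

```python
import threading
import time

def run_with_time_limit(fn, limit_s, *args, **kwargs):
    """Run fn in a daemon thread; give up waiting after limit_s seconds.
    Returns (finished, result). Note: fn itself is NOT cancelled."""
    box = {}
    done = threading.Event()

    def target():
        box["result"] = fn(*args, **kwargs)
        done.set()

    threading.Thread(target=target, daemon=True).start()
    finished = done.wait(limit_s)
    return finished, box.get("result")

print(run_with_time_limit(lambda: 2 + 2, 1.0))  # prints (True, 4)
# e.g. run_with_time_limit(session.join_pods, 600, topic="general")  # hypothetical
```

A real timeout argument inside `join_pods` would still be preferable, since the wrapper cannot stop the interaction itself.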
Many thanks. | open | 2021-01-01T20:15:06Z | 2021-07-21T03:19:20Z | https://github.com/InstaPy/InstaPy/issues/6000 | [
"wontfix"
] | alinakhay | 1 |
d2l-ai/d2l-en | computer-vision | 1,737 | Search doesn't appear to work | Currently, the search page shows no results and just "Preparing search"...
http://d2l.ai/search.html?q=transformer

Possibly related to this error in the console:

| closed | 2021-04-26T22:16:35Z | 2021-05-17T03:12:17Z | https://github.com/d2l-ai/d2l-en/issues/1737 | [
"bug"
] | indigoviolet | 3 |
nolar/kopf | asyncio | 1,018 | Problem in walkthrough diff example | ### Long story short
In the [diff]() example, the handler only works if the `labels` field already exists. As things stand, `labels` has not been created at that point (and would likely be pruned if it were created empty).
### Kopf version
1.36.0
### Kubernetes version
1.24.8
### Python version
3.10
### Code
```python
@kopf.on.field('ephemeralvolumeclaims', field='metadata.labels')
def relabel(diff, status, namespace, **kwargs):
labels_patch = {field[0]: new for op, field, old, new in diff}
pvc_name = status['create_fn']['pvc-name']
pvc_patch = {'metadata': {'labels': labels_patch}}
api = kubernetes.client.CoreV1Api()
obj = api.patch_namespaced_persistent_volume_claim(
namespace=namespace,
name=pvc_name,
body=pvc_patch,
)
```
### Logs
```none
/home/jsolbrig/anaconda3/envs/kopf/lib/python3.10/site-packages/kopf/_core/reactor/running.py:176: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility.
warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
[2023-03-28 05:47:57,738] kopf._core.reactor.r [DEBUG ] Starting Kopf 1.36.0.
[2023-03-28 05:47:57,738] kopf._core.engines.a [INFO ] Initial authentication has been initiated.
[2023-03-28 05:47:57,738] kopf.activities.auth [DEBUG ] Activity 'login_via_client' is invoked.
[2023-03-28 05:47:57,746] kopf.activities.auth [DEBUG ] Client is configured via kubeconfig file.
[2023-03-28 05:47:57,747] kopf.activities.auth [INFO ] Activity 'login_via_client' succeeded.
[2023-03-28 05:47:57,747] kopf._core.engines.a [INFO ] Initial authentication has finished.
[2023-03-28 05:47:57,854] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for customresourcedefinitions.v1.apiextensions.k8s.io cluster-wide.
[2023-03-28 05:47:57,855] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for ephemeralvolumeclaims.v1.cira.colostate.edu cluster-wide.
[2023-03-28 05:48:07,429] kopf.objects [DEBUG ] [default/my-claim] Creation is in progress: {'apiVersion': 'cira.colostate.edu/v1', 'kind': 'EphemeralVolumeClaim', 'metadata': {'annotations': {'kubectl.kubernetes.io/last-applied-configuration': '{"apiVersion":"cira.colostate.edu/v1","kind":"EphemeralVolumeClaim","metadata":{"annotations":{},"name":"my-claim","namespace":"default"},"spec":{"size":"1G"}}\n'}, 'creationTimestamp': '2023-03-28T05:48:07Z', 'generation': 1, 'managedFields': [{'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:annotations': {'.': {}, 'f:kubectl.kubernetes.io/last-applied-configuration': {}}}, 'f:spec': {'.': {}, 'f:size': {}}}, 'manager': 'kubectl-client-side-apply', 'operation': 'Update', 'time': '2023-03-28T05:48:07Z'}], 'name': 'my-claim', 'namespace': 'default', 'resourceVersion': '15968803', 'uid': '73d3190c-c027-488a-9485-a1c7dea97d2e'}, 'spec': {'size': '1G'}}
[2023-03-28 05:48:07,429] kopf.objects [DEBUG ] [default/my-claim] Handler 'create_fn' is invoked.
[2023-03-28 05:48:07,430] root [INFO ] A handler is called with spec: {'size': '1G'}
[2023-03-28 05:48:07,450] kubernetes.client.re [DEBUG ] response body: {"kind":"PersistentVolumeClaim","apiVersion":"v1","metadata":{"name":"my-claim","namespace":"default","uid":"e5fefaa4-3f5a-49e7-9a10-716364e4deab","resourceVersion":"15968804","creationTimestamp":"2023-03-28T05:48:07Z","annotations":{"volume.beta.kubernetes.io/storage-class":"standard"},"finalizers":["kubernetes.io/pvc-protection"],"managedFields":[{"manager":"OpenAPI-Generator","operation":"Update","apiVersion":"v1","time":"2023-03-28T05:48:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volume.beta.kubernetes.io/storage-class":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1G"}},"volumeMode":"Filesystem"},"status":{"phase":"Pending"}}
[2023-03-28 05:48:07,452] kopf.objects [INFO ] [default/my-claim] PVC child is created: {'api_version': 'v1',
'kind': 'PersistentVolumeClaim',
'metadata': {'annotations': {'volume.beta.kubernetes.io/storage-class': 'standard'},
'creation_timestamp': datetime.datetime(2023, 3, 28, 5, 48, 7, tzinfo=tzlocal()),
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': ['kubernetes.io/pvc-protection'],
'generate_name': None,
'generation': None,
'labels': None,
'managed_fields': [{'api_version': 'v1',
'fields_type': 'FieldsV1',
'fields_v1': {'f:metadata': {'f:annotations': {'.': {},
'f:volume.beta.kubernetes.io/storage-class': {}}},
'f:spec': {'f:accessModes': {},
'f:resources': {'f:requests': {'.': {},
'f:storage': {}}},
'f:volumeMode': {}}},
'manager': 'OpenAPI-Generator',
'operation': 'Update',
'subresource': None,
'time': datetime.datetime(2023, 3, 28, 5, 48, 7, tzinfo=tzlocal())}],
'name': 'my-claim',
'namespace': 'default',
'owner_references': None,
'resource_version': '15968804',
'self_link': None,
'uid': 'e5fefaa4-3f5a-49e7-9a10-716364e4deab'},
'spec': {'access_modes': ['ReadWriteOnce'],
'data_source': None,
'data_source_ref': None,
'resources': {'claims': None,
'limits': None,
'requests': {'storage': '1G'}},
'selector': None,
'storage_class_name': None,
'volume_mode': 'Filesystem',
'volume_name': None},
'status': {'access_modes': None,
'allocated_resources': None,
'capacity': None,
'conditions': None,
'phase': 'Pending',
'resize_status': None}}
[2023-03-28 05:48:07,454] kopf.objects [INFO ] [default/my-claim] Handler 'create_fn' succeeded.
[2023-03-28 05:48:07,454] kopf.objects [INFO ] [default/my-claim] Creation is processed: 1 succeeded; 0 failed.
[2023-03-28 05:48:07,454] kopf.objects [DEBUG ] [default/my-claim] Patching with: {'status': {'create_fn': {'pvc-name': 'my-claim'}}, 'metadata': {'annotations': {'kopf.zalando.org/last-handled-configuration': '{"spec":{"size":"1G"}}\n'}}}
[2023-03-28 05:48:07,565] kopf.objects [DEBUG ] [default/my-claim] Something has changed, but we are not interested (the essence is the same).
[2023-03-28 05:48:07,565] kopf.objects [DEBUG ] [default/my-claim] Handling cycle is finished, waiting for new changes.
[2023-03-28 05:48:21,344] kopf.objects [DEBUG ] [default/my-claim] Updating is in progress: {'apiVersion': 'cira.colostate.edu/v1', 'kind': 'EphemeralVolumeClaim', 'metadata': {'annotations': {'kopf.zalando.org/last-handled-configuration': '{"spec":{"size":"1G"}}\n', 'kubectl.kubernetes.io/last-applied-configuration': '{"apiVersion":"cira.colostate.edu/v1","kind":"EphemeralVolumeClaim","metadata":{"annotations":{},"name":"my-claim","namespace":"default"},"spec":{"size":"1G"}}\n'}, 'creationTimestamp': '2023-03-28T05:48:07Z', 'generation': 2, 'labels': {'key1': 'value1'}, 'managedFields': [{'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:annotations': {'f:kopf.zalando.org/last-handled-configuration': {}}}, 'f:status': {'.': {}, 'f:create_fn': {'.': {}, 'f:pvc-name': {}}}}, 'manager': 'kopf', 'operation': 'Update', 'time': '2023-03-28T05:48:07Z'}, {'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:annotations': {'.': {}, 'f:kubectl.kubernetes.io/last-applied-configuration': {}}}, 'f:spec': {'.': {}, 'f:size': {}}}, 'manager': 'kubectl-client-side-apply', 'operation': 'Update', 'time': '2023-03-28T05:48:07Z'}, {'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:key1': {}}}}, 'manager': 'kubectl-edit', 'operation': 'Update', 'time': '2023-03-28T05:48:21Z'}], 'name': 'my-claim', 'namespace': 'default', 'resourceVersion': '15968849', 'uid': '73d3190c-c027-488a-9485-a1c7dea97d2e'}, 'spec': {'size': '1G'}, 'status': {'create_fn': {'pvc-name': 'my-claim'}}}
[2023-03-28 05:48:21,345] kopf.objects [DEBUG ] [default/my-claim] Updating diff: (('add', ('metadata',), None, {'labels': {'key1': 'value1'}}),)
[2023-03-28 05:48:21,345] kopf.objects [DEBUG ] [default/my-claim] Handler 'update_fn' is invoked.
[2023-03-28 05:48:21,360] kubernetes.client.re [DEBUG ] response body: {"kind":"PersistentVolumeClaim","apiVersion":"v1","metadata":{"name":"my-claim","namespace":"default","uid":"e5fefaa4-3f5a-49e7-9a10-716364e4deab","resourceVersion":"15968850","creationTimestamp":"2023-03-28T05:48:07Z","annotations":{"volume.beta.kubernetes.io/storage-class":"standard"},"finalizers":["kubernetes.io/pvc-protection"],"managedFields":[{"manager":"OpenAPI-Generator","operation":"Update","apiVersion":"v1","time":"2023-03-28T05:48:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volume.beta.kubernetes.io/storage-class":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1G"}},"volumeMode":"Filesystem"},"status":{"phase":"Pending"}}
[2023-03-28 05:48:21,362] kopf.objects [INFO ] [default/my-claim] PVC child is updated: {'api_version': 'v1',
'kind': 'PersistentVolumeClaim',
'metadata': {'annotations': {'volume.beta.kubernetes.io/storage-class': 'standard'},
'creation_timestamp': datetime.datetime(2023, 3, 28, 5, 48, 7, tzinfo=tzlocal()),
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': ['kubernetes.io/pvc-protection'],
'generate_name': None,
'generation': None,
'labels': None,
'managed_fields': [{'api_version': 'v1',
'fields_type': 'FieldsV1',
'fields_v1': {'f:metadata': {'f:annotations': {'.': {},
'f:volume.beta.kubernetes.io/storage-class': {}}},
'f:spec': {'f:accessModes': {},
'f:resources': {'f:requests': {'.': {},
'f:storage': {}}},
'f:volumeMode': {}}},
'manager': 'OpenAPI-Generator',
'operation': 'Update',
'subresource': None,
'time': datetime.datetime(2023, 3, 28, 5, 48, 7, tzinfo=tzlocal())}],
'name': 'my-claim',
'namespace': 'default',
'owner_references': None,
'resource_version': '15968850',
'self_link': None,
'uid': 'e5fefaa4-3f5a-49e7-9a10-716364e4deab'},
'spec': {'access_modes': ['ReadWriteOnce'],
'data_source': None,
'data_source_ref': None,
'resources': {'claims': None,
'limits': None,
'requests': {'storage': '1G'}},
'selector': None,
'storage_class_name': None,
'volume_mode': 'Filesystem',
'volume_name': None},
'status': {'access_modes': None,
'allocated_resources': None,
'capacity': None,
'conditions': None,
'phase': 'Pending',
'resize_status': None}}
[2023-03-28 05:48:21,363] kopf.objects [INFO ] [default/my-claim] Handler 'update_fn' succeeded.
[2023-03-28 05:48:21,364] kopf.objects [DEBUG ] [default/my-claim] Patching with: {'metadata': {'annotations': {'kopf.zalando.org/update_fn': '{"started":"2023-03-28T05:48:21.344858","stopped":"2023-03-28T05:48:21.363784","purpose":"update","retries":1,"success":true,"failure":false}', 'kopf.zalando.org/relabel.metadata.labels': '{"started":"2023-03-28T05:48:21.344870","purpose":"update","retries":0,"success":false,"failure":false}'}}, 'status': {'kopf': {'progress': {'update_fn': {'started': '2023-03-28T05:48:21.344858', 'stopped': '2023-03-28T05:48:21.363784', 'delayed': None, 'purpose': 'update', 'retries': 1, 'success': True, 'failure': False, 'message': None, 'subrefs': None}, 'relabel/metadata.labels': {'started': '2023-03-28T05:48:21.344870', 'stopped': None, 'delayed': None, 'purpose': 'update', 'retries': 0, 'success': False, 'failure': False, 'message': None, 'subrefs': None}}}}}
[2023-03-28 05:48:21,473] kopf.objects [DEBUG ] [default/my-claim] Updating is in progress: {'apiVersion': 'cira.colostate.edu/v1', 'kind': 'EphemeralVolumeClaim', 'metadata': {'annotations': {'kopf.zalando.org/last-handled-configuration': '{"spec":{"size":"1G"}}\n', 'kopf.zalando.org/relabel.metadata.labels': '{"started":"2023-03-28T05:48:21.344870","purpose":"update","retries":0,"success":false,"failure":false}', 'kopf.zalando.org/update_fn': '{"started":"2023-03-28T05:48:21.344858","stopped":"2023-03-28T05:48:21.363784","purpose":"update","retries":1,"success":true,"failure":false}', 'kubectl.kubernetes.io/last-applied-configuration': '{"apiVersion":"cira.colostate.edu/v1","kind":"EphemeralVolumeClaim","metadata":{"annotations":{},"name":"my-claim","namespace":"default"},"spec":{"size":"1G"}}\n'}, 'creationTimestamp': '2023-03-28T05:48:07Z', 'generation': 3, 'labels': {'key1': 'value1'}, 'managedFields': [{'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:annotations': {'.': {}, 'f:kubectl.kubernetes.io/last-applied-configuration': {}}}, 'f:spec': {'.': {}, 'f:size': {}}}, 'manager': 'kubectl-client-side-apply', 'operation': 'Update', 'time': '2023-03-28T05:48:07Z'}, {'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:annotations': {'f:kopf.zalando.org/last-handled-configuration': {}, 'f:kopf.zalando.org/relabel.metadata.labels': {}, 'f:kopf.zalando.org/update_fn': {}}}, 'f:status': {'.': {}, 'f:create_fn': {'.': {}, 'f:pvc-name': {}}, 'f:kopf': {'.': {}, 'f:progress': {'.': {}, 'f:relabel/metadata.labels': {'.': {}, 'f:failure': {}, 'f:purpose': {}, 'f:retries': {}, 'f:started': {}, 'f:success': {}}, 'f:update_fn': {'.': {}, 'f:failure': {}, 'f:purpose': {}, 'f:retries': {}, 'f:started': {}, 'f:stopped': {}, 'f:success': {}}}}}}, 'manager': 'kopf', 'operation': 'Update', 'time': '2023-03-28T05:48:21Z'}, {'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 
'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:key1': {}}}}, 'manager': 'kubectl-edit', 'operation': 'Update', 'time': '2023-03-28T05:48:21Z'}], 'name': 'my-claim', 'namespace': 'default', 'resourceVersion': '15968853', 'uid': '73d3190c-c027-488a-9485-a1c7dea97d2e'}, 'spec': {'size': '1G'}, 'status': {'create_fn': {'pvc-name': 'my-claim'}, 'kopf': {'progress': {'relabel/metadata.labels': {'failure': False, 'purpose': 'update', 'retries': 0, 'started': '2023-03-28T05:48:21.344870', 'success': False}, 'update_fn': {'failure': False, 'purpose': 'update', 'retries': 1, 'started': '2023-03-28T05:48:21.344858', 'stopped': '2023-03-28T05:48:21.363784', 'success': True}}}}}
[2023-03-28 05:48:21,473] kopf.objects [DEBUG ] [default/my-claim] Updating diff: (('add', ('metadata',), None, {'labels': {'key1': 'value1'}}),)
[2023-03-28 05:48:21,474] kopf.objects [DEBUG ] [default/my-claim] Handler 'relabel/metadata.labels' is invoked.
[2023-03-28 05:48:21,474] kopf.objects [ERROR ] [default/my-claim] Handler 'relabel/metadata.labels' failed with an exception. Will retry.
Traceback (most recent call last):
File "/home/jsolbrig/anaconda3/envs/kopf/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 279, in execute_handler_once
result = await invoke_handler(
File "/home/jsolbrig/anaconda3/envs/kopf/lib/python3.10/site-packages/kopf/_core/actions/execution.py", line 374, in invoke_handler
result = await invocation.invoke(
File "/home/jsolbrig/anaconda3/envs/kopf/lib/python3.10/site-packages/kopf/_core/actions/invocation.py", line 139, in invoke
await asyncio.shield(future) # slightly expensive: creates tasks
File "/home/jsolbrig/anaconda3/envs/kopf/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/local/home/jsolbrig/learning/kopf/ephemeral.py", line 94, in relabel
labels_patch = {field[0]: new for op, field, old, new in diff}
File "/local/home/jsolbrig/learning/kopf/ephemeral.py", line 94, in <dictcomp>
labels_patch = {field[0]: new for op, field, old, new in diff}
IndexError: tuple index out of range
[2023-03-28 05:48:21,475] kopf.objects [DEBUG ] [default/my-claim] Patching with: {'metadata': {'annotations': {'kopf.zalando.org/relabel.metadata.labels': '{"started":"2023-03-28T05:48:21.344870","delayed":"2023-03-28T05:49:21.475354","purpose":"update","retries":1,"success":false,"failure":false,"message":"tuple index out of range"}'}}, 'status': {'kopf': {'progress': {'relabel/metadata.labels': {'started': '2023-03-28T05:48:21.344870', 'stopped': None, 'delayed': '2023-03-28T05:49:21.475354', 'purpose': 'update', 'retries': 1, 'success': False, 'failure': False, 'message': 'tuple index out of range', 'subrefs': None}}}}}
[2023-03-28 05:48:21,485] kopf.objects [DEBUG ] [default/my-claim] Sleeping was skipped because of the patch, 59.999822 seconds left.
[2023-03-28 05:48:21,586] kopf.objects [DEBUG ] [default/my-claim] Updating is in progress: {'apiVersion': 'cira.colostate.edu/v1', 'kind': 'EphemeralVolumeClaim', 'metadata': {'annotations': {'kopf.zalando.org/last-handled-configuration': '{"spec":{"size":"1G"}}\n', 'kopf.zalando.org/relabel.metadata.labels': '{"started":"2023-03-28T05:48:21.344870","delayed":"2023-03-28T05:49:21.475354","purpose":"update","retries":1,"success":false,"failure":false,"message":"tuple index out of range"}', 'kopf.zalando.org/update_fn': '{"started":"2023-03-28T05:48:21.344858","stopped":"2023-03-28T05:48:21.363784","purpose":"update","retries":1,"success":true,"failure":false}', 'kubectl.kubernetes.io/last-applied-configuration': '{"apiVersion":"cira.colostate.edu/v1","kind":"EphemeralVolumeClaim","metadata":{"annotations":{},"name":"my-claim","namespace":"default"},"spec":{"size":"1G"}}\n'}, 'creationTimestamp': '2023-03-28T05:48:07Z', 'generation': 4, 'labels': {'key1': 'value1'}, 'managedFields': [{'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:annotations': {'.': {}, 'f:kubectl.kubernetes.io/last-applied-configuration': {}}}, 'f:spec': {'.': {}, 'f:size': {}}}, 'manager': 'kubectl-client-side-apply', 'operation': 'Update', 'time': '2023-03-28T05:48:07Z'}, {'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:annotations': {'f:kopf.zalando.org/last-handled-configuration': {}, 'f:kopf.zalando.org/relabel.metadata.labels': {}, 'f:kopf.zalando.org/update_fn': {}}}, 'f:status': {'.': {}, 'f:create_fn': {'.': {}, 'f:pvc-name': {}}, 'f:kopf': {'.': {}, 'f:progress': {'.': {}, 'f:relabel/metadata.labels': {'.': {}, 'f:delayed': {}, 'f:failure': {}, 'f:message': {}, 'f:purpose': {}, 'f:retries': {}, 'f:started': {}, 'f:success': {}}, 'f:update_fn': {'.': {}, 'f:failure': {}, 'f:purpose': {}, 'f:retries': {}, 'f:started': {}, 'f:stopped': {}, 'f:success': {}}}}}}, 'manager': 'kopf', 'operation': 
'Update', 'time': '2023-03-28T05:48:21Z'}, {'apiVersion': 'cira.colostate.edu/v1', 'fieldsType': 'FieldsV1', 'fieldsV1': {'f:metadata': {'f:labels': {'.': {}, 'f:key1': {}}}}, 'manager': 'kubectl-edit', 'operation': 'Update', 'time': '2023-03-28T05:48:21Z'}], 'name': 'my-claim', 'namespace': 'default', 'resourceVersion': '15968857', 'uid': '73d3190c-c027-488a-9485-a1c7dea97d2e'}, 'spec': {'size': '1G'}, 'status': {'create_fn': {'pvc-name': 'my-claim'}, 'kopf': {'progress': {'relabel/metadata.labels': {'delayed': '2023-03-28T05:49:21.475354', 'failure': False, 'message': 'tuple index out of range', 'purpose': 'update', 'retries': 1, 'started': '2023-03-28T05:48:21.344870', 'success': False}, 'update_fn': {'failure': False, 'purpose': 'update', 'retries': 1, 'started': '2023-03-28T05:48:21.344858', 'stopped': '2023-03-28T05:48:21.363784', 'success': True}}}}}
[2023-03-28 05:48:21,586] kopf.objects [DEBUG ] [default/my-claim] Updating diff: (('add', ('metadata',), None, {'labels': {'key1': 'value1'}}),)
[2023-03-28 05:48:21,587] kopf.objects [DEBUG ] [default/my-claim] Sleeping for 59.888298 seconds for the delayed handlers.
[2023-03-28 05:48:48,343] kopf._core.reactor.r [INFO ] Signal SIGTERM is received. Operator is stopping.
[2023-03-28 05:48:48,343] kopf._core.reactor.r [DEBUG ] Credentials retriever is cancelled.
[2023-03-28 05:48:48,343] kopf._core.reactor.r [DEBUG ] Admission webhook server is cancelled.
[2023-03-28 05:48:48,343] kopf._core.reactor.r [DEBUG ] Admission validating configuration manager is cancelled.
[2023-03-28 05:48:48,343] kopf._core.reactor.r [DEBUG ] Poster of events is cancelled.
[2023-03-28 05:48:48,344] kopf._cogs.clients.w [DEBUG ] Stopping the watch-stream for customresourcedefinitions.v1.apiextensions.k8s.io cluster-wide.
[2023-03-28 05:48:48,344] kopf._core.reactor.r [DEBUG ] Admission mutating configuration manager is cancelled.
[2023-03-28 05:48:48,344] kopf._core.reactor.r [DEBUG ] Admission insights chain is cancelled.
[2023-03-28 05:48:48,344] kopf._core.reactor.r [DEBUG ] Namespace observer is cancelled.
[2023-03-28 05:48:48,345] kopf._cogs.clients.w [DEBUG ] Stopping the watch-stream for ephemeralvolumeclaims.v1.cira.colostate.edu cluster-wide.
[2023-03-28 05:48:48,345] kopf._core.reactor.r [DEBUG ] Daemon killer is cancelled.
[2023-03-28 05:48:48,346] kopf._core.reactor.r [DEBUG ] Resource observer is cancelled.
[2023-03-28 05:48:50,348] kopf._core.reactor.q [WARNING ] Unprocessed streams left for [(ephemeralvolumeclaims.v1.cira.colostate.edu, '73d3190c-c027-488a-9485-a1c7dea97d2e')].
[2023-03-28 05:48:50,349] kopf._core.reactor.o [DEBUG ] Streaming tasks are stopped: finishing normally; tasks left: set()
[2023-03-28 05:48:50,349] kopf._core.reactor.r [DEBUG ] Multidimensional multitasker is cancelled.
[2023-03-28 05:48:50,350] kopf._core.reactor.r [DEBUG ] Root tasks are stopped: finishing normally; tasks left: set()
[2023-03-28 05:48:50,350] kopf._core.reactor.r [DEBUG ] Hung tasks stopping is skipped: no tasks given.
```
### Additional information
This can be fixed using the following:
```python
@kopf.on.field("ephemeralvolumeclaims", field="metadata.labels")
def relabel(old, new, diff, status, namespace, logger, **kwargs):
logger.info(f"OLD: {old}, NEW: {new}, DIFF: {diff}")
    labels_patch = {}
    for _, field_path, _old_val, _new_val in diff:
        if not field_path:
            # the whole labels object was added or replaced at once
            labels_patch.update(_new_val or {})
        else:
            # a single label key changed
            labels_patch[field_path[0]] = _new_val
pvc_name = status["create_fn"]["pvc-name"]
pvc_patch = {"metadata": {"labels": labels_patch}}
api = kubernetes.client.CoreV1Api()
obj = api.patch_namespaced_persistent_volume_claim(
pvc_name,
namespace,
body=pvc_patch,
)
``` | open | 2023-03-28T05:50:42Z | 2023-03-28T05:51:12Z | https://github.com/nolar/kopf/issues/1018 | [
"bug"
] | jsolbrig | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 897 | When running scripts/inference/inference_hf.py, a "RuntimeError: Boolean value of Tensor with more than one value is ambiguous" occurs in the seq_len > self.max_seq_len_cached branch | ### Check the following items before submitting
- [X] Make sure you are using the latest code from this repository (git pull); some issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题), searched the existing issues, and found no similar problem or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
- [X] Model correctness check: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, expected results and normal operation cannot be guaranteed
### Issue type
Model inference
### Base model
LLaMA-7B
### Operating system
Linux
### Describe the issue in detail
_No response_
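The error in the title usually means a multi-element tensor ended up where Python needs a single boolean, e.g. `if seq_len > self.max_seq_len_cached:` with `seq_len` being a tensor instead of an int (this suggested cause is an assumption based on the error message, not on inspecting the script). A dependency-free mock of the mechanism:

```python
class FakeTensor:
    """Minimal mock of the relevant torch.Tensor behavior."""
    def __init__(self, values):
        self.values = list(values)

    def __gt__(self, other):
        # comparisons are elementwise and yield another tensor
        return FakeTensor(v > other for v in self.values)

    def __bool__(self):
        if len(self.values) != 1:
            raise RuntimeError(
                "Boolean value of Tensor with more than one value is ambiguous")
        return bool(self.values[0])

def is_ambiguous(tensor):
    try:
        bool(tensor)
        return False
    except RuntimeError:
        return True

seq_len = FakeTensor([3, 7])          # multi-element, e.g. a batched seq_len
print(is_ambiguous(seq_len > 5))      # prints True: the reported error
print(bool(FakeTensor([7]) > 5))      # prints True: a scalar compares fine
```

Converting the value to a plain Python int before the comparison avoids the ambiguity in such cases.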
### Dependency status (must be provided for code-related issues)
```
# Paste your dependency status here
```
### Run logs or screenshots
```
# Paste your run logs here
``` | closed | 2024-05-29T12:07:45Z | 2024-06-19T22:03:07Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/897 | [
"stale"
] | WWWWWWLLLL | 2 |
deezer/spleeter | tensorflow | 468 | Used conda to install, doesn't work | ````
Traceback (most recent call last):
File "C:\Users\admin\miniconda3\envs\py36\Scripts\spleeter-script.py", line 9, in <module>
sys.exit(entrypoint())
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\spleeter\__main__.py", line 40, in main
from .commands.separate import entrypoint
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\spleeter\commands\separate.py", line 15, in <module>
from ..separator import Separator
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\spleeter\separator.py", line 23, in <module>
from librosa.core import stft, istft
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\librosa\__init__.py", line 12, in <module>
from . import core
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\librosa\core\__init__.py", line 109, in <module>
from .time_frequency import * # pylint: disable=wildcard-import
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\librosa\core\time_frequency.py", line 10, in <module>
from ..util.exceptions import ParameterError
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\librosa\util\__init__.py", line 71, in <module>
from . import decorators
File "C:\Users\admin\miniconda3\envs\py36\lib\site-packages\librosa\util\decorators.py", line 9, in <module>
from numba.decorators import jit as optional_jit
ModuleNotFoundError: No module named 'numba.decorators'
````
I used conda to install spleeter, and I installed librosa and numba, but it still doesn't work 😡
````
conda activate spleeter
Could not find conda environment: spleeter
You can list all discoverable environments with `conda info --envs`.
Invoke-Expression : Cannot bind argument to parameter 'Command' because it is an empty string.
At C:\Users\admin\miniconda3\shell\condabin\Conda.psm1:101 char:36
+ Invoke-Expression -Command $activateCommand;
+ ~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (:) [Invoke-Expression], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationErrorEmptyStringNotAllowed,Microsoft.PowerShell.Commands.Invo
keExpressionCommand
````
````
conda install -c numba numba
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
```` | closed | 2020-08-08T13:38:48Z | 2020-08-29T19:49:44Z | https://github.com/deezer/spleeter/issues/468 | [
"bug",
"invalid"
] | ghost | 1 |
sktime/sktime | data-science | 7,596 | [ENH] Interface `TiDE` from `darts` library | **Is your feature request related to a problem? Please describe.**
`TiDE` is similar to Transformers, but attempts to provide better performance in time series forecasting at lower computational cost by introducing multilayer perceptron (MLP)-based encoder-decoders without attention.
References:
https://arxiv.org/pdf/2304.08424
https://cloud.google.com/blog/products/ai-machine-learning/vertex-ai-forecasting
Currently it is being implemented as an API by `darts`:
https://unit8co.github.io/darts/generated_api/darts.models.forecasting.tide_model.html
**Describe the solution you'd like**
This could be interfaced in `sktime` as an addition to the existing suite of forecasting models. If not, a new implementation can be made.
| open | 2025-01-03T05:56:58Z | 2025-01-06T13:51:05Z | https://github.com/sktime/sktime/issues/7596 | [
"interfacing algorithms",
"module:forecasting",
"enhancement"
] | PranavBhatP | 1 |
aio-libs/aiopg | sqlalchemy | 411 | Broken compatibility with new release of SQLAlchemy 1.2.0 | Hello,
Yesterday a new version of SQLAlchemy (1.2.0) was released, and the new release is incompatible with aiopg:
```
mymodule.py:42: in fetchone
result = await conn.execute(query)
.tox/py36-tests/lib/python3.6/site-packages/aiopg/utils.py:72: in __await__
resp = yield from self._coro
.tox/py36-tests/lib/python3.6/site-packages/aiopg/sa/connection.py:116: in _execute
return ResultProxy(self, cursor, self._dialect, result_map)
.tox/py36-tests/lib/python3.6/site-packages/aiopg/sa/result.py:234: in __init__
self._metadata = ResultMetaData(self, cursor.description)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <aiopg.sa.result.ResultMetaData object at 0x7f0417f6a550>
result_proxy = <aiopg.sa.result.ResultProxy object at 0x7f0417f6a5c0>
metadata = (Column(name='column_0', type_code=23, display_size=None, internal_size=4, precision=None, scale=None, null_ok=None), ...Column(name='column_2', type_code=3802, display_size=None, internal_size=-1, precision=None, scale=None, null_ok=None))
def __init__(self, result_proxy, metadata):
self._processors = processors = []
result_map = {}
if result_proxy._result_map:
result_map = {elem[0]: elem[3] for elem in
result_proxy._result_map}
# We do not strictly need to store the processor in the key mapping,
# though it is faster in the Python version (probably because of the
# saved attribute lookup self._processors)
self._keymap = keymap = {}
self.keys = []
dialect = result_proxy.dialect
> typemap = dialect.dbapi_type_map
E AttributeError: 'PGDialect_psycopg2' object has no attribute 'dbapi_type_map'
``` | closed | 2017-12-28T06:15:27Z | 2018-01-03T20:13:36Z | https://github.com/aio-libs/aiopg/issues/411 | [] | Gr1N | 0 |
sgl-project/sglang | pytorch | 4,421 | [Bug] Docker run lmsysorg/sglang:v0.4.4.post1-rocm630 Error: no TensileLibrary_lazy_gfx90a.dat file. | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Hi dear developers and community members, I'm running `python3 -m sglang.launch_server --model-path /models/DeepSeek-R1-Distill-Qwen-7B/ --host 0.0.0.0 --port 30000` using [lmsysorg/sglang:v0.4.4.post1-rocm630](https://hub.docker.com/layers/lmsysorg/sglang/v0.4.4.post1-rocm630/images/sha256-655fe497a319987617b43008385a1470127115a7be3698ba801d0ea3fc0cfb18) on AMD MI210 with the host rocm version being 6.3.4.
Here is the raised error:
> Loading safetensors checkpoint shards: 0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 50% Completed | 1/2 [00:06<00:06, 6.21s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:14<00:00, 7.53s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:14<00:00, 7.33s/it]
>
> [2025-03-14 07:43:29 TP0] Load weight end. type=Qwen2ForCausalLM, dtype=torch.bfloat16, avail mem=49.31 GB, mem usage=14.50 GB.
[2025-03-14 07:43:29 TP0] KV Cache is allocated. #tokens: 779916, K size: 20.83 GB, V size: 20.83 GB
[2025-03-14 07:43:29 TP0] Memory pool end. avail mem=6.15 GB
>
> rocblaslt error: Cannot read /opt/rocm/lib/hipblaslt/library/TensileLibrary_lazy_gfx90a.dat: No such file or directory
>
> rocblaslt error: Could not load /opt/rocm/lib/hipblaslt/library/TensileLibrary_lazy_gfx90a.dat
[2025-03-14 07:43:29 TP0] Scheduler hit an exception: Traceback (most recent call last):
File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 1748, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 218, in __init__
self.tp_worker = TpWorkerClass(
^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 63, in __init__
self.worker = TpModelWorker(server_args, gpu_id, tp_rank, dp_rank, nccl_port)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker.py", line 74, in __init__
self.model_runner = ModelRunner(
^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 166, in __init__
self.initialize(min_per_gpu_memory)
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 205, in initialize
self.init_cublas()
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 798, in init_cublas
c = a @ b
~~^~~
RuntimeError: CUDA error: HIPBLAS_STATUS_INVALID_VALUE when calling `hipblasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Cdesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
>
> [2025-03-14 07:43:29] Received sigquit from a child process. It usually means the child failed.
Killed
In the container, the mentioned file `TensileLibrary_lazy_gfx90a.dat` doesn't exist. However, it exists in the host.
> root@server-02:/sgl-workspace# ll /opt/rocm/lib/hipblaslt/library/ | grep lazy
-rw-r--r-- 1 root root 348628 Mar 6 03:22 TensileLibrary_lazy_gfx942.dat
Is the problem related to the `GPU_ARCHS=gfx942` defined in [sglang/docker/Dockerfile.rocm](https://github.com/sgl-project/sglang/blob/main/docker/Dockerfile.rocm#L60)?
In the host, files listed in the aforementioned directory is:
> root@server-02:~# ll /opt/rocm/lib/hipblaslt/library/ | grep lazy
-rw-r--r-- 1 root root 29476 Mar 4 11:19 TensileLibrary_lazy_gfx1100.dat
-rw-r--r-- 1 root root 34430 Mar 4 11:19 TensileLibrary_lazy_gfx1101.dat
-rw-r--r-- 1 root root 76911 Mar 4 11:21 TensileLibrary_lazy_gfx1200.dat
-rw-r--r-- 1 root root 76911 Mar 4 11:19 TensileLibrary_lazy_gfx1201.dat
-rw-r--r-- 1 root root 32333 Mar 4 11:19 TensileLibrary_lazy_gfx908.dat
-rw-r--r-- 1 root root 55365 Mar 4 11:21 TensileLibrary_lazy_gfx90a.dat
-rw-r--r-- 1 root root 206837 Mar 4 11:19 TensileLibrary_lazy_gfx942.dat
Thanks very much for your time. Waiting for the kind reply from developers and community!
Thanks for all your support!
### Reproduction
I'm running `python3 -m sglang.launch_server --model-path /models/DeepSeek-R1-Distill-Qwen-7B/ --host 0.0.0.0 --port 30000` using [lmsysorg/sglang:v0.4.4.post1-rocm630](https://hub.docker.com/layers/lmsysorg/sglang/v0.4.4.post1-rocm630/images/sha256-655fe497a319987617b43008385a1470127115a7be3698ba801d0ea3fc0cfb18) on AMD MI210 with the host rocm version being 6.3.4.
### Environment
root@server-02:/sgl-workspace# python3 -m sglang.check_env
Successfully preprocessed all matching files.
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/torch/utils/cpp_extension.py", line 2209, in _run_ninja_build
subprocess.run(
File "/usr/lib/python3.12/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/sgl-workspace/sglang/python/sglang/check_env.py", line 306, in <module>
check_env()
File "/sgl-workspace/sglang/python/sglang/check_env.py", line 285, in check_env
env_info.update(get_package_versions(PACKAGE_LIST))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/check_env.py", line 62, in get_package_versions
module = importlib.import_module(package_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/usr/local/lib/python3.12/dist-packages/xgrammar/__init__.py", line 1, in <module>
from . import testing
File "/usr/local/lib/python3.12/dist-packages/xgrammar/testing.py", line 11, in <module>
from .matcher import GrammarMatcher, bitmask_dtype
File "/usr/local/lib/python3.12/dist-packages/xgrammar/matcher.py", line 13, in <module>
from .kernels import apply_token_bitmask_inplace_kernels
File "/usr/local/lib/python3.12/dist-packages/xgrammar/kernels/__init__.py", line 12, in <module>
from .apply_token_bitmask_inplace_cuda import apply_token_bitmask_inplace_cuda
File "/usr/local/lib/python3.12/dist-packages/xgrammar/kernels/apply_token_bitmask_inplace_cuda.py", line 54, in <module>
_load_torch_ops()
File "/usr/local/lib/python3.12/dist-packages/xgrammar/kernels/apply_token_bitmask_inplace_cuda.py", line 42, in _load_torch_ops
torch.utils.cpp_extension.load_inline(
File "/usr/local/lib/python3.12/dist-packages/torch/utils/cpp_extension.py", line 1723, in load_inline
return _jit_compile(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/cpp_extension.py", line 1798, in _jit_compile
_write_ninja_file_and_build_library(
File "/usr/local/lib/python3.12/dist-packages/torch/utils/cpp_extension.py", line 1926, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/usr/local/lib/python3.12/dist-packages/torch/utils/cpp_extension.py", line 2225, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'xgrammar': [1/3] /opt/rocm/bin/hipcc -DWITH_HIP -DTORCH_EXTENSION_NAME=xgrammar -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /usr/local/lib/python3.12/dist-packages/torch/include -isystem /usr/local/lib/python3.12/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.12/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.12/dist-packages/torch/include/THC -isystem /usr/local/lib/python3.12/dist-packages/torch/include/THH -isystem /opt/rocm/include -isystem /usr/include/python3.12 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -O3 -Wno-switch-bool -fPIC -D__HIP_PLATFORM_AMD__=1 -DUSE_ROCM=1 -DHIPBLAS_V2 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O3 -std=c++17 --threads 4 -use_fast_math --offload-arch=gfx90a --offload-arch=gfx942 -fno-gpu-rdc -c /root/.cache/torch_extensions/py312_cpu/xgrammar/hip.hip -o hip.cuda.o
FAILED: hip.cuda.o
/opt/rocm/bin/hipcc -DWITH_HIP -DTORCH_EXTENSION_NAME=xgrammar -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /usr/local/lib/python3.12/dist-packages/torch/include -isystem /usr/local/lib/python3.12/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.12/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.12/dist-packages/torch/include/THC -isystem /usr/local/lib/python3.12/dist-packages/torch/include/THH -isystem /opt/rocm/include -isystem /usr/include/python3.12 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -O3 -Wno-switch-bool -fPIC -D__HIP_PLATFORM_AMD__=1 -DUSE_ROCM=1 -DHIPBLAS_V2 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O3 -std=c++17 --threads 4 -use_fast_math --offload-arch=gfx90a --offload-arch=gfx942 -fno-gpu-rdc -c /root/.cache/torch_extensions/py312_cpu/xgrammar/hip.hip -o hip.cuda.o
clang++: error: unknown argument '--threads'; did you mean '-mthreads'?
clang++: error: no such file or directory: '4'
failed to execute:/opt/rocm/lib/llvm/bin/clang++ --offload-arch=gfx90a --offload-arch=gfx942 -DWITH_HIP -DTORCH_EXTENSION_NAME=xgrammar -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /usr/local/lib/python3.12/dist-packages/torch/include -isystem /usr/local/lib/python3.12/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.12/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.12/dist-packages/torch/include/THC -isystem /usr/local/lib/python3.12/dist-packages/torch/include/THH -isystem /opt/rocm/include -isystem /usr/include/python3.12 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -O3 -Wno-switch-bool -fPIC -D__HIP_PLATFORM_AMD__=1 -DUSE_ROCM=1 -DHIPBLAS_V2 -DCUDA_HAS_FP16=1 -D__HIP_NO_HALF_OPERATORS__=1 -D__HIP_NO_HALF_CONVERSIONS__=1 -O3 -std=c++17 --threads 4 -use_fast_math -fno-gpu-rdc -c -x hip /root/.cache/torch_extensions/py312_cpu/xgrammar/hip.hip -o "hip.cuda.o"
[2/3] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=xgrammar -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1016\" -isystem /usr/local/lib/python3.12/dist-packages/torch/include -isystem /usr/local/lib/python3.12/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.12/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.12/dist-packages/torch/include/THC -isystem /usr/local/lib/python3.12/dist-packages/torch/include/THH -isystem /opt/rocm/include -isystem /usr/include/python3.12 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -O3 -Wno-switch-bool -c /root/.cache/torch_extensions/py312_cpu/xgrammar/main.cpp -o main.o -fPIC -D__HIP_PLATFORM_AMD__=1 -DUSE_ROCM=1 -DHIPBLAS_V2
ninja: build stopped: subcommand failed. | open | 2025-03-14T08:58:46Z | 2025-03-20T06:31:30Z | https://github.com/sgl-project/sglang/issues/4421 | [
"high priority"
] | luciaganlulu | 4 |
explosion/spaCy | machine-learning | 13,057 | Equals TypeError |
## How to reproduce the behaviour
```python
import spacy

nlp = spacy.load("en_core_web_lg")
text = "The quick brown fox jumps over the lazy dog"
doc = nlp(text)
token = doc[0]
span = doc[0:1]
print(span == token)
```
Actual Result: `TypeError: Argument 'other' has incorrect type (expected spacy.tokens.span.Span, got spacy.tokens.token.Token)`
Expected Result: `True` or `False`
## Info about spaCy
- **spaCy version:** 3.6.1
- **Platform:** Linux-6.2.0-34-generic-x86_64-with-glibc2.35
- **Python version:** 3.10.10
- **Pipelines:** en_core_web_lg (3.6.0)
| closed | 2023-10-11T03:07:47Z | 2023-11-12T00:02:23Z | https://github.com/explosion/spaCy/issues/13057 | [
"bug",
"feat / doc"
] | TristynAlxander | 2 |
tensorly/tensorly | numpy | 274 | API Typo? | Regarding the API reference for non_negative_parafac: I believe it is no longer equivalent to calling parafac(non_negative=True), because non_negative no longer seems to be one of parafac's parameters.
| closed | 2021-05-27T01:45:48Z | 2021-06-02T20:23:37Z | https://github.com/tensorly/tensorly/issues/274 | [] | VoliCrank | 1 |
onnx/onnx | pytorch | 6,590 | cumprod operation | ### System information
ONNX version: 1.17.0
### Notes
I just encountered while trying to serialize a torch model into ONNX that this operation is not yet supported. Like, is it such a strange operation?
I did a workaround by `x.log().cumsum().exp()` but it's hella slower.
Also how is it possible that cumsum is supported but cumprod isn't?
Thank you so much. | open | 2024-12-19T16:39:56Z | 2025-02-19T17:33:26Z | https://github.com/onnx/onnx/issues/6590 | [
"topic: operator",
"topic: enhancement"
] | claverru | 6 |
huggingface/transformers | tensorflow | 35,981 | Docs: return type of `get_default_model_and_revision` might be incorrectly documented? | The return type here is documented as `Union[str, Tuple[str, str]]`
https://github.com/huggingface/transformers/blob/d7188ba600e36d3fd191b12e19f1b3bb81a8404f/src/transformers/pipelines/base.py#L385-L387
The docstring just says `str`
https://github.com/huggingface/transformers/blob/d7188ba600e36d3fd191b12e19f1b3bb81a8404f/src/transformers/pipelines/base.py#L404
But I think that only `Tuple[str, str]` might be correct?
For example, if I run
```python
from transformers import Pipeline
# from pair_classification import PairClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import pipeline
from transformers.utils import direct_transformers_import, is_tf_available, is_torch_available
import numpy as np
def softmax(outputs):
maxes = np.max(outputs, axis=-1, keepdims=True)
shifted_exp = np.exp(outputs - maxes)
return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)
class PairClassificationPipeline(Pipeline):
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
if "second_text" in kwargs:
preprocess_kwargs["second_text"] = kwargs["second_text"]
return preprocess_kwargs, {}, {}
def preprocess(self, text, second_text=None):
return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)
def _forward(self, model_inputs):
return self.model(**model_inputs)
def postprocess(self, model_outputs):
logits = model_outputs.logits[0].numpy()
probabilities = softmax(logits)
best_class = np.argmax(probabilities)
label = self.model.config.id2label[best_class]
score = probabilities[best_class].item()
logits = logits.tolist()
return {"label": label, "score": score, "logits": logits}
PIPELINE_REGISTRY.register_pipeline(
"custom-text-classification",
pipeline_class=PairClassificationPipeline,
pt_model=AutoModelForSequenceClassification if is_torch_available() else None,
tf_model=TFAutoModelForSequenceClassification if is_tf_available() else None,
default={"pt": ("hf-internal-testing/tiny-random-distilbert", "2ef615d")},
type="text",
)
assert "custom-text-classification" in PIPELINE_REGISTRY.get_supported_tasks()
_, task_def, _ = PIPELINE_REGISTRY.check_task("custom-text-classification")
classifier = pipeline('custom-text-classification')
```
then I get
```python
ValueError Traceback (most recent call last)
<ipython-input-6-0cc5199a8521> in <cell line: 53>()
51 _, task_def, _ = PIPELINE_REGISTRY.check_task("custom-text-classification")
52
---> 53 classifier = pipeline('custom-text-classification')
/usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
898 if model is None:
899 # At that point framework might still be undetermined
--> 900 model, default_revision = get_default_model_and_revision(targeted_task, framework, task_options)
901 revision = revision if revision is not None else default_revision
902 logger.warning(
ValueError: too many values to unpack (expected 2)
```
It looks like `pipeline` expects a tuple, not a string
---
Looks like this may have just been forgotten during #17667? | closed | 2025-01-31T10:34:48Z | 2025-02-13T10:59:16Z | https://github.com/huggingface/transformers/issues/35981 | [] | MarcoGorelli | 1 |
jupyterlab/jupyter-ai | jupyter | 326 | generate fails if self.serverapp.root_dir not writable | Hello and thank you for this great extension!
## Description
We are facing a `Permission denied`-issue when jupyter-ai is asked to generate a notebook.
It tries to generate the file in the directory set by `self.serverapp.root_dir` which is not writable.
https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai%20root_dir&type=code
## Reproduce
If one starts JupyterLab on a multi-user system and wants to be able to browse all files, this is set to
`c.ServerApp.root_dir = '/'`
Of course this root_dir is not writable and so `jupyter-ai` fails with `Permission denied`.
## Possible solution
Instead of `self.serverapp.root_dir` the current directory of the filebrowser could be used.
(perhaps [similar to the jupyterlab-git extension](https://github.com/jupyterlab/jupyterlab-git/blob/v0.41.0/src/cloneCommand.ts#L60)?)
| closed | 2023-08-09T09:19:56Z | 2025-03-03T20:41:54Z | https://github.com/jupyterlab/jupyter-ai/issues/326 | [
"bug",
"status:triaged"
] | jhgoebbert | 2 |
MaxHalford/prince | scikit-learn | 151 | prince.PCA vs. sklearn.decomposition.PCA? | I'm comparing the PCA functionality from sklearn ([sklearn.decomposition.PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)) with the one from prince. I cannot fully understand why the results differ.
**Example:**
```python
import prince
data = prince.datasets.load_energy_mix(year=2019, normalize=True)
from sklearn.decomposition import PCA
pca_sklearn = PCA(n_components=2)
pca_sklearn = pca_sklearn.fit(data)
pca_prince = prince.PCA(n_components=2)
pca_prince = pca_prince.fit(data)
```
I was looking at `pca_sklearn.singular_values_` or `pca_sklearn.explained_variance_ratio_`, but could not relate those values to `pca_prince.eigenvalues_` or `pca_prince.cumulative_percentage_of_variance_`.
Aren't these methods supposed to be equivalent? | closed | 2023-05-30T00:38:37Z | 2023-05-30T18:34:12Z | https://github.com/MaxHalford/prince/issues/151 | [] | normanius | 3 |
strawberry-graphql/strawberry-django | graphql | 559 | prefetch_related and filtering in custom resolver | I need to filter related models. The query optimizer works fine, but I cannot get it working with filtering inside a custom resolver (without using @strawberry.django.filter)
1. When I define my own Prefetch, I get duplicate prefetch queries: one is mine, the other comes from the optimizer.
```python
@strawberry.django.field(
prefetch_related=[
lambda info: Prefetch(
"downloadables",
queryset=Downloadable.objects.filter(is_published=True).all(),
to_attr="downloadables_prefetched",
)
],
)
def downloadables(self) -> List[Annotated["DownloadableType", strawberry.lazy("vfxtricks.common.schema")]]:
return self.downloadables_prefetched
```
2. When I don't define my own Prefetch with a custom name, the optimizer does make a prefetch query, but the query in my resolver does not take advantage of it, so I end up with way too many queries.
```python
@strawberry.django.field()
def downloadables(self) -> List[Annotated["DownloadableType", strawberry.lazy("vfxtricks.common.schema")]]:
return self.downloadables.filter(is_published=True)
```
thank you | closed | 2024-06-15T04:51:31Z | 2025-03-20T15:57:32Z | https://github.com/strawberry-graphql/strawberry-django/issues/559 | [
"enhancement"
] | tasiotas | 4 |
keras-team/autokeras | tensorflow | 1,078 | Enable limiting model size based on Keras Tuner | ### Bug Description
ImageRegressor training stops at random when training on dual RTX Titan GPUs. Error Message:
ResourceExhaustedError: OOM when allocating tensor with shape[32,1280,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node model/separable_conv2d_15/separable_conv2d (defined at C:\Anaconda3\envs\automl\lib\site-packages\autokeras\engine\tuner.py:71) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[Op:__inference_distributed_function_169665]
Function call stack:
distributed_function
### Bug Reproduction
Code for reproducing the bug:
```python
import autokeras as ak

model = ak.ImageRegressor(metrics=['mae', 'mape'],
                          max_trials=20)
model.fit(x_train, y_train, epochs=200)
```
Data used by the code:
custom image dataset.
x.shape = (715, 128, 128, 3)
y.shape = (715,)
Did a 80:20 train-test-split:
x_train = (572, 128, 128, 3).
y_train = (572,)
### Expected Behavior
Training to continue until 20 trials completed.
### Setup Details
Include the details about the versions of:
- OS type and version: Windows 10 Pro 64-bit
- Python: 3.7.6
- autokeras: 1.0.2
- keras-tuner: 1.0.1
- scikit-learn: 0.22.1
- numpy: 1.18.1
- pandas: 1.0.1
- tensorflow-gpu: 2.1.0
### Additional context
I tried AutoModel as well, but the same OOM message appears. I have not been able to train beyond 5 trials without running into this error on either ImageRegressor or AutoModel. Is there a way to prevent AK from fitting networks too large for GPU memory?
| closed | 2020-04-01T23:06:53Z | 2020-04-29T00:19:13Z | https://github.com/keras-team/autokeras/issues/1078 | [
"feature request",
"pinned"
] | ghost | 7 |
getsentry/sentry | python | 87,044 | Set Up Code Mapping fails when the file is at the top of the source repository | [On this event](https://demo.sentry.io/issues/6395093418/events/latest/?project=4508969830973440&query=is%3Aunresolved%20issue.priority%3A%5Bhigh%2C%20medium%5D&referrer=latest-event&sort=date&stream_index=1), I clicked on:


I left a [feedback in Sentry](https://sentry.sentry.io/feedback/?alert_rule_id=15210908&alert_type=issue&feedbackSlug=javascript%3A6396716556¬ification_uuid=af8c61f6-3d09-4457-9bef-7005521df3db&project=11276&referrer=slack&statsPeriod=90d)
Replay [at that point in time](https://sentry.sentry.io/replays/de7f9f80e83a4cfcb4f2fdd4bb238aad/?referrer=%2Freplays%2F%3AreplaySlug%2F&t=392&t_main=breadcrumbs)
Note, I had just added mappings on the [GitHub config manually](https://demo.sentry.io/settings/integrations/github/179466/?tab=codeMappings) | open | 2025-03-13T21:34:23Z | 2025-03-19T14:04:36Z | https://github.com/getsentry/sentry/issues/87044 | [
"Product Area: Issues"
] | bruno-garcia | 6 |
vitalik/django-ninja | rest-api | 341 | applying renderers and parsers to TestClient | Hey, how can I apply my own renderer and parser to TestClient? The API itself works correctly, but the TestClient only accepts and returns JSON. I should add that when using a custom renderer (for example, XML), the generated Swagger UI still sends a content type of application/json in the headers | closed | 2022-01-28T10:59:23Z | 2022-10-01T17:04:21Z | https://github.com/vitalik/django-ninja/issues/341 | [] | VityasZV | 1 |
littlecodersh/ItChat | api | 522 | About retrieving ActualNickName from group message packets | When receiving a group message, itchat adds three keys to the message packet:
```
isAt: whether this account is @-mentioned
ActualNickName: the actual NickName
Content: the actual Content
```
While capturing messages I found that the ActualNickName key is often empty. Since it was added later, I believe this feature is handled in the itchat source code rather than coming from WeChat's raw message data.
Here is a group message packet I captured:
```
msg = {
'MsgId': '3059713934041007946',
'FromUserName': '@1df538f516955a2d80a095506964426d',
'ToUserName': '@@68a508917eba302fc4c8c5a5a300a5fefdac7329ed6b55d2822a6c7f5b3cb0b4',
'MsgType': 1,
'Content': '测试',
'Status': 3,
'ImgStatus': 1,
'CreateTime': 1506328942,
'VoiceLength': 0,
'PlayLength': 0,
'FileName': '',
'FileSize': '',
'MediaId': '',
'Url': '',
'AppMsgType': 0,
'StatusNotifyCode': 0,
'StatusNotifyUserName': '',
'RecommendInfo': {'UserName': '',
'NickName': '',
'QQNum': 0,
'Province': '',
'City': '',
'Content': '',
'Signature': '',
'Alias': '',
'Scene': 0,
'VerifyFlag': 0,
'AttrStatus': 0,
'Sex': 0,
'Ticket': '',
'OpCode': 0},
'ForwardFlag': 0,
'AppInfo': {'AppID': '', 'Type': 0},
'HasProductId': 0,
'Ticket': '',
'ImgHeight': 0,
'ImgWidth': 0,
'SubMsgType': 0,
'NewMsgId': 3059713934041007946,
'OriContent': '',
'ActualNickName': '',
'IsAt': False,
'ActualUserName': '@1df538f516955a2d80a095506964426d',
'User': {'Chatroom': {'UserName': '@@68a508917eba302fc4c8c5a5a300a5fefdac7329ed6b55d2822a6c7f5b3cb0b4',
'MemberList': ''}},
'Type': 'Text',
'Text': '测试'}
```
If this is indeed a bug in itchat, could you let me know where the problem lies? After all, reverse lookup from FromUserName is rather troublesome, since it requires fetching the user list first | closed | 2017-09-25T09:00:53Z | 2019-07-03T03:09:37Z | https://github.com/littlecodersh/ItChat/issues/522 | [
"question"
] | HardGaming01 | 5 |
ansible/awx | automation | 15,016 | duplicate key value violates unique constraint "pg_type_typname_nsp_index" | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
Hi,
it seems there is a regression in 24.0.0.
I have multiple schedules (for the same template with different limits set) that run at the same time. This used to work fine until the recent 24.0.0 update.
Now I seem to get an error:
```
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 87, in _execute
return self.cursor.execute(sql)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/psycopg/cursor.py", line 723, in execute
raise ex.with_traceback(None)
psycopg.errors.UniqueViolation: duplicate key value violates unique constraint "pg_type_typname_nsp_index"
DETAIL: Key (typname, typnamespace)=(main_jobevent_20240320_22, 2200) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/jobs.py", line 499, in run
self.pre_run_hook(self.instance, private_data_dir)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/jobs.py", line 1066, in pre_run_hook
super(RunJob, self).pre_run_hook(job, private_data_dir)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/jobs.py", line 427, in pre_run_hook
create_partition(instance.event_class._meta.db_table, start=instance.created)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/utils/common.py", line 1154, in create_partition
cursor.execute(
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 87, in _execute
return self.cursor.execute(sql)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/psycopg/cursor.py", line 723, in execute
raise ex.with_traceback(None)
django.db.utils.IntegrityError: duplicate key value violates unique constraint "pg_type_typname_nsp_index"
DETAIL: Key (typname, typnamespace)=(main_jobevent_20240320_22, 2200) already exists.
```
Is this a regression from the changes in #14910?
I will try to work around the issue by putting the schedules a minute apart.
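For reference, the usual fix for this class of race is to make the partition-creating DDL idempotent, either via `IF NOT EXISTS` or by catching the duplicate-key error and treating it as success. A minimal sketch of the idea, with sqlite standing in for Postgres (this is not AWX's actual code; real Postgres partitions would use `CREATE TABLE IF NOT EXISTS ... PARTITION OF ...`):

```python
import sqlite3

def ensure_partition(conn, table_name):
    # IF NOT EXISTS makes the DDL idempotent: when two jobs race to create
    # the same hourly partition, the loser becomes a no-op instead of
    # raising a duplicate-key IntegrityError.
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table_name} (id INTEGER)")

conn = sqlite3.connect(":memory:")
for _ in range(2):  # two concurrent jobs targeting the same partition
    ensure_partition(conn, "main_jobevent_20240320_22")
```

Note that on Postgres even `IF NOT EXISTS` can race under high concurrency, so wrapping the statement in a try/except that ignores the unique-violation error is the belt-and-braces variant.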
Greetings
Klaas
### AWX version
24.0.0
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
awx-ee 24.0.0
### Operating system
RHEL8
### Web browser
Firefox
### Steps to reproduce
Have two schedules that start at the same time with the same template (the job needs to allow concurrent runs). I am not sure if the "same template" part is important, but in my use case it's always the same template.
### Expected results
Works
### Actual results
```
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 87, in _execute
return self.cursor.execute(sql)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/psycopg/cursor.py", line 723, in execute
raise ex.with_traceback(None)
psycopg.errors.UniqueViolation: duplicate key value violates unique constraint "pg_type_typname_nsp_index"
DETAIL: Key (typname, typnamespace)=(main_jobevent_20240321_13, 2200) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/jobs.py", line 499, in run
self.pre_run_hook(self.instance, private_data_dir)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/jobs.py", line 1066, in pre_run_hook
super(RunJob, self).pre_run_hook(job, private_data_dir)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/tasks/jobs.py", line 427, in pre_run_hook
create_partition(instance.event_class._meta.db_table, start=instance.created)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/utils/common.py", line 1154, in create_partition
cursor.execute(
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/db/backends/utils.py", line 87, in _execute
return self.cursor.execute(sql)
File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/psycopg/cursor.py", line 723, in execute
raise ex.with_traceback(None)
django.db.utils.IntegrityError: duplicate key value violates unique constraint "pg_type_typname_nsp_index"
DETAIL: Key (typname, typnamespace)=(main_jobevent_20240321_13, 2200) already exists.
```
### Additional information
_No response_ | closed | 2024-03-21T13:51:33Z | 2024-03-27T12:48:52Z | https://github.com/ansible/awx/issues/15016 | [
"type:bug",
"component:api",
"needs_triage",
"community"
] | Klaas- | 10 |
harry0703/MoneyPrinterTurbo | automation | 577 | When editing the config file, can only OpenAI's API be configured, or can DeepSeek's API be used as well? | ### Does a similar issue already exist?
- [ ] I have searched the existing issues
### Current behavior
After configuring the OpenAI API I went on to configure everything that comes after it, and then found that the key I had configured was only a test key... a real key costs twenty, so I wanted to ask: can DeepSeek's API be used instead?
### Expected behavior
Please tell me how to resolve this.
### Steps to reproduce
None
### Stack trace / logs
None
### Python version
v3.12.0
### Operating system
macOS 12.7.6
### MoneyPrinterTurbo version
wu
### Other information
_No response_ | closed | 2025-01-25T12:13:16Z | 2025-02-05T06:52:30Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/577 | [
"bug"
] | heizhijin | 1 |
xlwings/xlwings | automation | 1,642 | api.merge() freezes the program | #### OS Windows 10
#### Versions of xlwings, Excel and Python (0.23.4, Office 365, Python 3.9.6)
Hi,
I am trying to merge a range of cells using api.merge() and the program freezes and terminates after a few minutes. No tracebacks. If I remove api.merge() it just works.
```python
import xlwings as xw

app = xw.App(visible=False)
res_wbook = app.books.add()
a_sheet = res_wbook.sheets.add('Test')
# head_start and head_end are the cell values derived from other variables
a_sheet.range(f'{head_start}:{head_end}').api.merge()
``` | closed | 2021-07-02T06:20:51Z | 2021-07-09T10:09:27Z | https://github.com/xlwings/xlwings/issues/1642 | [] | kameaplli | 1 |
snarfed/granary | rest-api | 150 | Atom titles when parsing microformats, 'note' type | So, this is more a support request. I'm trying to integrate with fediverse and want to expose an atom feed from my microformats page.
https://granary.io/url?url=https://realize.be/timeline&input=html&output=atom
It's almost good, apart from two things I can't put my finger on:
1. The titles for entries which are 'note' posts. They do not include a 'p-name' class, as that shouldn't normally be there; that's something Aaron told me. However, looking at his page, he does include the p-name class for his notes, on the same wrapper that also carries e-content, so maybe I should start doing that as well. Right now the entry titles in the atom feed contain too much garbage: they include content which seems to be extracted from the node__meta class inside my feed. I remember seeing this in the logs with brid.gy as well when publishing, but it's ok there. Is there a workaround that would only affect the atom feed but not the microformats parsing (e.g. http://xray.p3k.io/parse?expect=feed&url=https%3A%2F%2Frealize.be%2Ftimeline is fine)?
2. The main title of the feed. For Aaron, it nicely says 'User feed for Aaron Parecki', while mine seems to use the title of the first entry. So I wonder what I'm missing here.
Feel free to ping me on IRC on one of the indieweb channels if I'm there, it's probably easier, and I can do live tests then as well :) | closed | 2018-05-25T10:08:40Z | 2018-05-25T18:13:46Z | https://github.com/snarfed/granary/issues/150 | [] | swentel | 4 |
mwaskom/seaborn | pandas | 3,697 | Split violin plots not working | Hi,
I am trying to re-run some previous code that worked very well to create grouped asymmetrical violin plots. I am getting several errors that were not happening before (maybe 6 months ago), and now I am trying to narrow the errors down. I think one of the issues is that I am not providing an x= value (because when I run [Seaborn's example](https://seaborn.pydata.org/examples/grouped_violinplots.html), it works, albeit with deprecation warnings).
The code is rather complicated because it's two violin plots with another tiny one zoomed into a range I want to show. The error is happening very early, when I try to run the sns.violinplot. This is the full code:
```
# Applying the custom configurations
plt.rcParams.update(plotpars_1x2)
# Create a figure with two subplots
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
# First violin plot for age_median_gyr
sns.violinplot(ax=axes[0], y='age_median_gyr', hue='Type', data=catplot_bpsm_01,
split=True,
inner="quart",
palette={"Observed": palette[1], "Sim01": palette[-1]},
bw_method=.3, cut=1, linewidth=1., alpha=alpha, saturation=saturation)
axes[0].set_ylabel(r"$\langle t_{\star} \rangle$ (Gyr)")
axes[0].set_title(r"Before PSM - $\langle t_{\star} \rangle$ (Gyr)")
axes[0].get_legend().remove()
axes[0].set_xticks([])
# Customize the legend for the first plot
leg = axes[0].legend(title=r"Dataset", loc="lower left")
new_labels = [r"Gaia-ESO", 'Simulation 01']
for t, l in zip(leg.get_texts(), new_labels):
t.set_text(l)
# Second violin plot for FEH
sns.violinplot(ax=axes[1], y='FEH', hue='Type', data=catplot_bpsm_01,
split=True,
inner="quart",
palette={"Observed": palette[1], "Sim01": palette[-1]},
bw_method=.3, cut=1, linewidth=1., alpha=alpha, saturation=saturation)
axes[1].set_ylabel(r"[Fe/H]")
axes[1].set_title(r"Before PSM - [Fe/H]")
axes[1].get_legend().remove()
axes[1].set_xticks([])
axins = inset_axes(axes[1], width="35%", height="35%", loc=4)
sns.violinplot(y='FEH', hue='Type', data=catplot_bpsm_01,
split=True,
inner="quart",
palette={"Observed": palette[1], "Sim01": palette[-1]},
bw_method=.3, cut=1, linewidth=1., alpha=alpha, saturation=saturation)
axins.set_ylim([-1.1, 0.7])
# axins.set_ylabel(r"[Fe/H]")
axins.set_ylabel("")
axins.get_legend().remove()
axins.set_yticks([0.5, 0., -0.5, -1])
axins.set_yticklabels(axins.get_yticks(), fontsize=14)
axins.tick_params(axis='y', which='major', labelsize=14)
plt.tight_layout(w_pad=2.)
plt.show()
```
The error is happening here already:
```
# First violin plot for age_median_gyr
sns.violinplot(ax=axes[0], y='age_median_gyr', hue='Type', data=catplot_bpsm_01,
split=True,
inner="quart",
palette={"Observed": palette[1], "Sim01": palette[-1]},
bw_method=.3, cut=1, linewidth=1., alpha=alpha, saturation=saturation)
```
When I simplify this with:
`sns.violinplot(data=catplot_bpsm_01, y="age_median_gyr", hue="Type", inner="quart", split=True)`
I am not getting a split violin, I am getting a regular violin. It is completely ignoring the split part. When I add the palette part, `palette={"Observed": palette[1], "Sim01": palette[-1]}`, it gives me this message:
> ---------------------------------------------------------------------------
> TypeError Traceback (most recent call last)
> Cell In[110], line 2
> 1 plt.rcParams.update(plotpars_1x1)
> ----> 2 sns.violinplot(data=catplot_bpsm_01, y="age_median_gyr", hue="Type", inner="quart", split=True, palette={"Observed": palette[1], "Sim01": palette[-1]})
> 3 plt.show()
>
> TypeError: 'NoneType' object is not subscriptable
I have no idea why this is happening. This is the image I was previously generating with the original code above:

Also, this is the shape of the data I am using:

**Current Seaborn version: 0.12.2** | closed | 2024-05-25T19:14:32Z | 2024-05-29T22:18:29Z | https://github.com/mwaskom/seaborn/issues/3697 | [] | mlldantas | 7 |
marcomusy/vedo | numpy | 1,055 | Error Encountered While Decimating Mesh with Default Function (Quadric) | Hi @marcomusy ,
I hope you remember me from the POLBIAS 2023 conference in Dresden last year. I work with @jo-mueller and @haesleinhuepf.
I am encountering an error when attempting to decimate my mesh using the default function, which employs quadric decimation. Below is the traceback of the error:
AttributeError: 'vtkmodules.vtkFiltersCore.vtkQuadricDecimation' object has no attribute 'MapPointDataOn'
This happened yesterday after I upgraded to the latest version of Vedo. Below is the line of code that throws the error:
decimated_mesh = mesh.decimate(n=10000)
Any insights or suggestions on resolving this issue would be greatly appreciated.
Best,
Maleeha
| closed | 2024-02-19T12:14:16Z | 2024-03-09T14:12:51Z | https://github.com/marcomusy/vedo/issues/1055 | [] | maleehahassan | 8 |
huggingface/transformers | tensorflow | 36,272 | Device Movement Error with 4-bit Quantized LLaMA 3.1 Model Loading | ### System Info
```shell
I'm running into a persistent issue when trying to load the LLaMA 3.1 8B model with 4-bit quantization. No matter what configuration I try, I get this error during initialization:
ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Environment:
Python: 3.10
Transformers: Latest version
PyTorch: Latest version
GPU: 85.05 GB memory available
CUDA: Properly installed and available
What I've tried:
Loading with a BitsAndBytesConfig:
```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_has_fp16_weight=True
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    trust_remote_code=True,
    use_cache=True,
    device_map='auto',
    max_memory={0: "24GiB"}
)
```
Loading without device mapping:
```python
model_kwargs = {
    "trust_remote_code": True,
    "load_in_4bit": True,
    "torch_dtype": torch.float16,
    "use_cache": True
}
```
### Expected behavior
```shell
Clearing CUDA cache and running garbage collection beforehand.
Experimenting with different device mapping strategies.
Even with an ample GPU memory (85.05 GB) and confirmed CUDA availability, I still can't seem to get the model to load without running into this device movement error. Other models load fine when using quantization, so I'm not sure what's special about this setup.
Any ideas on how to resolve this or work around the error? Thanks in advance for your help!
```
### Checklist
- [x] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ ] I checked if a related official extension example runs on my machine. | open | 2025-02-19T07:33:39Z | 2025-03-13T13:29:01Z | https://github.com/huggingface/transformers/issues/36272 | [] | Pritidhrita | 2 |
modoboa/modoboa | django | 3,130 | Cannot enable DKIM on new domains. | # Impacted versions
* OS Type: Ubuntu
* OS Version: 22.04.3 LTS
* Database Type: MySQL
* Database version: mariadb Ver 15.1
* Modoboa: 2.2.2
* installer used: Yes
* Webserver: Nginx
# Steps to reproduce
Upgraded a while back, attempted to add a new domain - but I cannot enable DKIM. I found that I can create them if DKIM is disabled, but when I edit the domain to enable it - I get a 500 server error on the post.
# Current behavior
Can't enable DKIM at creation of domain, or editing an existing domain
# Expected behavior
it works
# Video/Screenshot link (optional)
<img width="531" alt="image" src="https://github.com/modoboa/modoboa/assets/310899/958a3810-3cc0-4fe3-a044-66afb60e8f28">
| closed | 2023-12-02T23:45:00Z | 2024-01-28T23:31:32Z | https://github.com/modoboa/modoboa/issues/3130 | [] | stutteringp0et | 2 |
tqdm/tqdm | pandas | 971 | TypeError with Iterators using the GUI | - [ ] I have marked all applicable categories:
+ [X] exception-raising bug
+ [ ] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [ ] I have visited the [source website], and in particular
read the [known issues]
- [X] I have searched through the [issue tracker] for duplicates
+ I searched for "gui", "TypeError", "Iterator", and various combinations.
- [X] I have mentioned version numbers, operating system and
environment, where applicable:
```python
>>> import tqdm, sys
>>> print(tqdm.__version__, sys.version, sys.platform)
4.42.0 3.8.1 (default, Jan 8 2020, 15:55:49) [MSC v.1916 64 bit (AMD64)] win32
```
Hopefully I searched enough that this isn't a duplicate issue. I'm getting a `TypeError` when I try to run `tqdm_gui` with iterators.
```python
TypeError: 'NoneType' object cannot be interpreted as an integer
```
The offending line is here: https://github.com/tqdm/tqdm/blob/master/tqdm/gui.py#L56
I fixed it by adding a try/except and the gui worked fine after that. The `len` call raises a `TypeError` whenever the object has no usable length, for example when it defines no `__len__` at all, or when `__len__` returns `None`, which can't be interpreted as an integer.
```python
try:
total = len(self)
except TypeError:
total = None
```
It looks like the intent was to support iterators as there is an `if total is None`, but using an unguarded `len` may have been an oversight? Is it worth it to issue a PR with the gui being experimental?
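The failure mode is easy to reproduce outside of tqdm: `len()` raises `TypeError` both for objects that define no length (generators) and for a `__len__` that returns `None`, and the guarded pattern degrades both cases to `total = None`. A minimal sketch (generic code, not tqdm's; `NoTotal` is a stand-in for an object whose total is unknown):

```python
def safe_len(obj):
    """Return len(obj), or None when obj has no usable length."""
    try:
        return len(obj)
    except TypeError:
        return None

class NoTotal:
    def __len__(self):
        return None  # mirrors a __len__ that forwards an unset total

print(safe_len([1, 2, 3]))            # 3
print(safe_len(x for x in range(3)))  # None: generators have no len()
print(safe_len(NoTotal()))            # None: __len__ returned a non-integer
```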
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| closed | 2020-05-14T22:40:45Z | 2020-06-28T22:25:09Z | https://github.com/tqdm/tqdm/issues/971 | [
"p0-bug-critical โข",
"submodule โ",
"to-merge โฐ",
"c1-quick ๐"
] | rwhitt2049 | 1 |
babysor/MockingBird | deep-learning | 487 | Error during preprocessing for the ppg model | Globbed 891 wav files.
Loaded encoder "pretrained_bak_5805000.pt" trained to step 5805001
Preprocessing: 0%| | 0/891 [00:00<?, ?wav/s]multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\CHOPY\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "Z:\deeplearing_project\MockingBird-main\ppg2mel\preprocess.py", line 72, in preprocess_one
wav = resampy.resample(wav, sr, SAMPLE_RATE)
File "C:\Users\CHOPY\AppData\Local\Programs\Python\Python39\lib\site-packages\resampy\core.py", line 97, in resample
raise ValueError('Input signal length={} is too small to '
ValueError: Input signal length=2 is too small to resample from 44100->16000
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "Z:\deeplearing_project\MockingBird-main\pre4ppg.py", line 49, in <module>
preprocess_dataset(**vars(args))
File "Z:\deeplearing_project\MockingBird-main\ppg2mel\preprocess.py", line 96, in preprocess_dataset
list(tqdm(job, "Preprocessing", len(wav_file_list), unit="wav"))
File "C:\Users\CHOPY\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\_tqdm.py", line 1017, in __iter__
for obj in iterable:
File "C:\Users\CHOPY\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 870, in next
raise value
ValueError: Input signal length=2 is too small to resample from 44100->16000
It looks like the sample rate is the cause, but I just checked in Format Factory and the files are at a 44100 sample rate. Do they have to be converted to 16000 before training? It seems they cannot be converted to 16000, though.

| closed | 2022-04-04T03:49:02Z | 2022-09-21T09:36:54Z | https://github.com/babysor/MockingBird/issues/487 | [
"help wanted"
] | Chopin68 | 8 |
streamlit/streamlit | streamlit | 10,193 | Version information doesn't show in About dialog in 1.41 | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
We used to show which Streamlit version is running in the About dialog, but apparently that's broken in 1.41:

### Reproducible Code Example
_No response_
### Steps To Reproduce
Run any Streamlit app and go to app menu > About.
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | closed | 2025-01-15T19:31:52Z | 2025-01-16T15:26:20Z | https://github.com/streamlit/streamlit/issues/10193 | [
"type:bug",
"status:awaiting-user-response"
] | jrieke | 3 |
scikit-optimize/scikit-optimize | scikit-learn | 658 | Interaction with Python logging module | When setting
> verbose = True
I would like to have the print outs go to my Python logger. Is there an easy way to do this? | open | 2018-04-09T08:50:22Z | 2018-04-10T10:07:15Z | https://github.com/scikit-optimize/scikit-optimize/issues/658 | [] | bstemper | 1 |
Nemo2011/bilibili-api | api | 728 | [Question] Support for switching from the soon-to-be-deprecated UID to open_id | **Python version:** 3.11.5
**Module version:** bilibili-api-python 16.2.0
**Runtime environment:** Windows
---
Hey, friends maintaining bilibili-api-python!
I saw that Bilibili's API has been updated: after April 25, UID will retire from the stage of history, and open_id is about to take center stage. I have been developing with your library all along, and at the moment the library does not seem to support open_id yet.
If you already have an update planned, that's great; feel free to close this issue and pretend I never said anything. If it is still in the planning stage, I just want to give you a heads-up. If there happens to be a timetable for the update, sharing it would be great, so that I can keep my own development in step with it.
Thanks, and looking forward to hearing back!
Happy coding!
bilibili -( ゜- ゜)つロ Cheers~
Official announcement link: https://www.bilibili.com/opus/911220549299994641
 | closed | 2024-03-24T11:30:38Z | 2024-03-31T01:03:41Z | https://github.com/Nemo2011/bilibili-api/issues/728 | [
"question",
"need update"
] | oldip | 4 |
oegedijk/explainerdashboard | plotly | 75 | Sorting of variables in FeatureInputComponent | How is it working? Would it be possible to sort it manually?
In my current use case I have two normal categorical variables A & B and one multivalued, i.e. 0-1-encoded, variable C featured as C_1, C_2, ... . A and B are somewhere in the middle and the first row of the FeatureInputComponent is C_23, C_25, C_10, C_1, C_20, C_19 ...
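For what it's worth, the scrambled C_* order looks like plain lexicographic sorting of the encoded column names. A natural-sort key (a generic stdlib sketch, independent of explainerdashboard's internals) orders them the way one would expect, should manual sorting become possible:

```python
import re

def natural_key(name):
    """Split 'C_10' into ['C_', 10, ''] so numeric suffixes compare as numbers."""
    return [int(part) if part.isdigit() else part for part in re.split(r"(\d+)", name)]

columns = ["C_23", "C_25", "C_10", "A", "C_1", "B", "C_20"]
ordered = sorted(columns, key=natural_key)
print(ordered)  # ['A', 'B', 'C_1', 'C_10', 'C_20', 'C_23', 'C_25']
```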
Not sure if a special category for multivalued (categorical) variables is worth the work? | closed | 2021-01-29T10:46:09Z | 2021-02-03T12:49:47Z | https://github.com/oegedijk/explainerdashboard/issues/75 | [] | hkoppen | 5 |
recommenders-team/recommenders | deep-learning | 1,264 | How can I add exclude items while using BPR | ### Adding exclude_items list when using BPR.recommend or recommend_all
So, I have a need where I must exclude a certain list of items from the recommendations when using recommend or recommend_all, mainly due to time constraints.
How do I implement this?
I thought about doing top_k * 3 and then filtering on my own side, but that method is very ugly and might not even work: all top_k * 3 recommendations might be in the exclude list.
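In the absence of a built-in parameter, the over-generate-then-filter idea can at least be factored into a small helper (a generic sketch, not the recommenders/BPR API; the item ids are stand-ins):

```python
def filter_excluded(recommendations, exclude_items, top_k):
    """Keep the first top_k recommended items that are not in the exclude list."""
    excluded = set(exclude_items)
    return [item for item in recommendations if item not in excluded][:top_k]

# Ask the model for extra candidates (e.g. top_k * 3), then post-filter:
ranked_candidates = ["i1", "i2", "i3", "i4", "i5", "i6"]  # stand-in for model output
top = filter_excluded(ranked_candidates, exclude_items={"i1", "i4"}, top_k=3)
print(top)  # ['i2', 'i3', 'i5']
```

The caveat above still applies: if fewer than top_k candidates survive the filter, the caller has to request a larger candidate pool and retry.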
| open | 2020-12-24T08:18:58Z | 2020-12-24T08:24:31Z | https://github.com/recommenders-team/recommenders/issues/1264 | [
"help wanted"
] | bipinkc19 | 1 |
deepinsight/insightface | pytorch | 2,072 | insightface.data get_image() function tries to fetch images from library directories | 
| open | 2022-08-10T09:49:54Z | 2022-08-10T13:22:46Z | https://github.com/deepinsight/insightface/issues/2072 | [] | usmancheema89 | 1 |
littlecodersh/ItChat | api | 921 | Dropped offline | The bot automatically drops offline... | closed | 2020-06-01T06:59:02Z | 2020-07-20T02:43:58Z | https://github.com/littlecodersh/ItChat/issues/921 | [] | 2905683882 | 1
plotly/dash | flask | 2,971 | Add a function to directly retrieve component property values in callbacks | For example, I currently have a dcc.Store component in the application. The Store component stores the data that is required for most callbacks. In the current application, I have to add the Store component to the State in each callback.
Just like below
```
@app.callback(
Output(...),
Input(...),
State('store', 'data')
)
def callback1(...):
...
@app.callback(
Output(...),
Input(...),
State('store', 'data')
)
def callback2(...):
...
@app.callback(
Output(...),
Input(...),
State('store', 'data')
)
def callback3(...):
...
```
If there is a get_props function, I can encapsulate a universal function that directly retrieves the data of the Store component for processing, without the need to retrieve the data of the Store component through the State in each callback.
Just like below
```
def global_deal_func():
    store_data = dash.get_props('store', 'data')
...
@app.callback(
Output(...),
Input(...)
)
def callback1(...):
global_deal_func()
...
@app.callback(
Output(...),
Input(...)
)
def callback2(...):
global_deal_func()
...
@app.callback(
Output(...),
Input(...)
)
def callback3(...):
global_deal_func()
...
```
This is just a tentative feature request, perhaps there will be a better solution, thank you very much. | open | 2024-08-29T01:37:06Z | 2024-09-12T14:09:39Z | https://github.com/plotly/dash/issues/2971 | [
"feature",
"P3"
] | insistence | 8 |
ydataai/ydata-profiling | jupyter | 1,633 | Add new metrics or report capability for descriptive, predictive and prescriptive | ### Missing functionality
No descriptive, predictive, or prescriptive analysis is available for creating a combined report.
### Proposed feature
IDEA: if descriptive, predictive, and prescriptive analysis could be done alongside the exploratory data analysis, it would make ydata even more useful and allow generating a single combined report for all of them.
### Alternatives considered
0
### Additional context
0 | open | 2024-07-30T13:48:19Z | 2024-08-01T10:17:33Z | https://github.com/ydataai/ydata-profiling/issues/1633 | [
"feature request ๐ฌ"
] | rohanot | 0 |
NullArray/AutoSploit | automation | 916 | Divided by zero exception292 | Error: Attempted to divide by zero.292 | closed | 2019-04-19T16:03:19Z | 2019-04-19T16:37:02Z | https://github.com/NullArray/AutoSploit/issues/916 | [] | AutosploitReporter | 0 |
nolar/kopf | asyncio | 730 | An alternative way to use indexes without propagating them through the call stack |
## Problem
I'm using in-memory indexes and overall I think they work really well. One thing that's been nagging me though is that you can only get to them through the kwargs injected in handlers. My pain point with this approach is that while practical, it gets ugly when working with nested indices.
Take this slightly changed example from the docs:
```python
@kopf.index("pods")
def primary(namespace, name, spec, **_):
container_names = {container["name"] for container in spec["containers"]}
return {(namespace, name): container_names}
@kopf.index("pods")
def secondary(namespace, name, **_):
return {namespace: name}
def get_value(
primary: kopf.Index,
secondary: kopf.Index,
namespace: str
):
...
@kopf.timer(...)
async def handler(
namespace,
primary: kopf.Index,
secondary: kopf.Index,
# other args ..
):
value = get_value(
primary,
secondary,
# some other lookup arguments
)
...
```
In practice you tend to have descriptive names so, for example:
- `primary` might turn into `containers_by_namespace_and_pod_name`.
- `secondary` might turn into `pod_name_by_namespace`.
- `get_value` might turn into `get_monitored_containers`.
.. which makes everything repetitive, verbose and arguably harder to read.
## Proposal
Provide an alternative way of accessing indexes while running in the context of a handler without having to propagate all needed indexes through the call stack. One thing that comes to mind could be to access the index similarly to how you would access a `contextvar` that is set in the context of the handler. With this in place the above could be rewritten as:
```python
@kopf.index("pods")
def containers_by_namespace_and_pod_name(namespace, name, spec, **_):
container_names = {container["name"] for container in spec["containers"]}
return {(namespace, name): container_names}
@kopf.index("pods")
def pods_by_namespace(namespace, name, **_):
return {namespace: name}
def get_monitored_containers(
namespace: str
):
primary = kopf.indexes.get("containers_by_namespace_and_pod_name")
secondary = kopf.indexes.get("pods_by_namespace")
# or maybe:
# primary = containers_by_namespace_and_pod_name.get_index()
# secondary = pods_by_namespace.get_index()
# use primary and secondary
@kopf.timer(...)
async def handler(
namespace,
# other args ..
):
value = get_monitored_containers(namespace)
...
```
With this approach:
- The verbosity is hidden away in the function that makes use of the index (`get_monitored_containers` in this case).
- Repetition is decreased because you don't have to pass the indexes through the call stack.
- The handler is easier to read because of decreased verbosity and repetition.
What are your thoughts on this?
## Checklist
- [x] Many users can benefit from this feature, it is not a one-time case
- [x] The proposal is related to the K8s operator framework, not to the K8s client libraries | open | 2021-04-02T12:15:38Z | 2021-07-12T19:08:18Z | https://github.com/nolar/kopf/issues/730 | [
"enhancement"
] | zoopp | 3 |
gunthercox/ChatterBot | machine-learning | 2,055 | Integrating this python chatbot on a PHP website | Hello,
The bot I've built works fine and I want to integrate it on my PHP website.
What I'm doing as of now is collecting a question from the user through PHP, passing it as a parameter to the Python chatbot script, getting a response, and displaying it on the website.
Even though it works, there isn't much flexibility in what the bot can do. For example, if I want to display an element from my website, say the current time on the website (I know about the time logic adapter, just taking this as an example), I can't do that with the bot.
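One pattern that adds this kind of flexibility is to run the bot as a small long-lived HTTP service that the PHP site calls per request, instead of spawning the Python script each time. A stdlib-only sketch (`get_bot_reply` is a placeholder for the actual ChatterBot call, and the port is arbitrary):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_bot_reply(question):
    # Placeholder: the real app would call chatbot.get_response(question) here.
    return f"echo: {question}"

class BotHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"reply": get_bot_reply(payload.get("question", ""))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

# To serve for real: HTTPServer(("127.0.0.1", 8001), BotHandler).serve_forever()
```

The PHP side would then POST JSON to the service (with cURL, for example) and render the reply; because the Python process stays alive, it can also expose any other dynamic data the site needs.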
Is there a way to completely integrate my chatbot with my PHP website like it is in Django? | closed | 2020-10-13T10:30:59Z | 2025-02-25T23:15:45Z | https://github.com/gunthercox/ChatterBot/issues/2055 | [] | Siddikulus | 2 |
allenai/allennlp | data-science | 5,448 | Predictor.from_path('coref-spanbert-large-2021.03.10.tar.gz') downloads model into cache though I provide a local copy of the model | I am trying to load a local copy of the `coref-spanbert` model using `Predictor.from_path` but it starts downloading the model again into cache/huggingface. How do I fix this.
>>> from allennlp.predictors import Predictor
>>> coref_model = Predictor.from_path('coref-spanbert-large-2021.03.10.tar.gz')
Downloading: 100%|███████████████████████████████████████████| 414/414 [00:00<00:00, 436kB/s]
Downloading: 100%|███████████████████████████████████████████| 213k/213k [00:00<00:00, 239kB/s]
Downloading: 34%|███████████████ | closed | 2021-10-26T06:29:24Z | 2021-11-24T16:09:48Z | https://github.com/allenai/allennlp/issues/5448 | [
"question",
"stale"
] | irshadbhat | 3 |
numba/numba | numpy | 9,835 | Large overhead when launching kernel with torch tensors | When launching a CUDA kernel using a torch array as input, there is a significant overhead in the `as_cuda_array` call. Using this example (with `torch==2.5.1 numba==0.60.0`):
```python
import math

import numba.cuda
import torch
from tqdm import tqdm
N_RUNS = 10_000
@numba.cuda.jit(
numba.void(
numba.types.Array(numba.uint8, 3, "C"),
numba.types.Array(numba.uint8, 3, "C", readonly=True),
numba.types.Array(numba.boolean, 2, "C", readonly=True),
numba.int32,
numba.int32,
),
fastmath=True,
)
def get_masked_crop(out, frame, mask, ymin, xmin):
i, j = numba.cuda.grid(2)
fi, fj = ymin + i, xmin + j
if mask[fi, fj]:
out[i, j, 0] = frame[fi, fj, 0]
out[i, j, 1] = frame[fi, fj, 1]
        out[i, j, 2] = frame[fi, fj, 2]


def main() -> None:
frame = torch.ones((1080, 1920, 3), dtype=torch.uint8, device="cuda:0")
mask = torch.ones((1080, 1920), dtype=torch.bool, device="cuda:0")
crop = torch.zeros((300, 300, 3), dtype=torch.uint8, device="cuda:0")
# frame = numba.cuda.as_cuda_array(frame, sync=False)
# mask = numba.cuda.as_cuda_array(mask, sync=False)
# crop = numba.cuda.as_cuda_array(crop, sync=False)
threads_per_block = (32, 32)
blocks_per_grid = (
math.ceil(crop.shape[0] / threads_per_block[0]),
math.ceil(crop.shape[1] / threads_per_block[1]),
)
for _ in tqdm(range(N_RUNS)):
get_masked_crop[blocks_per_grid, threads_per_block](crop, frame, mask, 50, 50)
    torch.cuda.synchronize()


if __name__ == "__main__":
main()
```
When profiling with `nsys`, I get that each iteration takes around 350µs:


If I remove the commented lines and do the conversion once before the loop, each iteration takes ~110µs:


Looking at the profiling trace it seems that most of the time is spent in the call to `as_cuda_array`. | closed | 2024-12-09T10:00:51Z | 2024-12-30T14:13:57Z | https://github.com/numba/numba/issues/9835 | [
"needtriage",
"CUDA"
] | materight | 2 |
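For reference, a stdlib-only sketch of the workaround the commented lines point at: wrap each tensor once before the hot loop and reuse the wrapper inside it. In the real code the stand-in below would be `numba.cuda.as_cuda_array(tensor, sync=False)`; here it is a hypothetical no-op so the shape arithmetic can run anywhere:

```python
import math

# Stand-in for numba.cuda.as_cuda_array(tensor, sync=False); the point is
# only that the (expensive) wrap happens once, outside the launch loop.
def as_device_array(tensor):
    return tensor  # hypothetical zero-copy wrapper

crop_shape = (300, 300)
threads_per_block = (32, 32)
blocks_per_grid = (
    math.ceil(crop_shape[0] / threads_per_block[0]),
    math.ceil(crop_shape[1] / threads_per_block[1]),
)

frame = as_device_array("frame-tensor")  # wrap once, before the loop
for _ in range(3):
    # get_masked_crop[blocks_per_grid, threads_per_block](crop, frame, mask, 50, 50)
    pass

print(blocks_per_grid)  # (10, 10)
```

This matches the profile: the per-launch cost drops once the `as_cuda_array` conversion is hoisted out of the loop.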
dgtlmoon/changedetection.io | web-scraping | 1,684 | [feature] RSS feeds include BaseURL not set | **Version and OS**
- Change detection: v0.43.2 on Synology NAS Docker
**Is your feature request related to a problem? Please describe.**
I use FreshRSS (Docker on Synology NAS) for my feeds and Change detection creates those RSS feed for websites which has no feeds.
As it can be seen on the screenshot the link refers not to https://git-fork.com/relesenoteswin. Link refers to `https://changedetection.io/<base-url-env-var-not-set>` -> 404 error after clicking on it.


It would be great if it referred to https://git-fork.com/relesenoteswin instead.
"enhancement"
] | update-freak | 11 |
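The placeholder in the generated link suggests the container's base-URL setting was never configured. A minimal sketch of how such a feed link is typically assembled (variable names here are assumptions for illustration, not changedetection.io internals):

```python
import os

# Hypothetical reconstruction: when the base-URL setting is unset, the feed
# link falls back to a placeholder instead of the watched site's address.
base_url = os.environ.get("BASE_URL", "<base-url-env-var-not-set>")
watch_uuid = "example-uuid"  # hypothetical watch id
link = f"{base_url}/preview/{watch_uuid}"
print(link)
```

Setting the base-URL environment variable on the container (or the equivalent option in the UI) should make the generated links resolve correctly.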
activeloopai/deeplake | computer-vision | 2,932 | [FEATURE] Upgrade pillow >= 10.3.0 | ### Description
Consider updating to Pillow >= 10.3.0 due to [CVE-2024-28219](https://github.com/advisories/GHSA-44wm-f244-xhp3)
https://github.com/python-pillow/Pillow/releases/tag/10.3.0
### Use Cases
_No response_ | closed | 2024-08-26T08:42:44Z | 2024-09-17T17:25:51Z | https://github.com/activeloopai/deeplake/issues/2932 | [
"enhancement"
] | daniel-code | 1 |
ultralytics/yolov5 | machine-learning | 12,996 | The accuracy of the .pt model will decrease after being converted to .engine model. | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Detection
### Bug
Inference with the .pt model and the exported .engine model produces different results.
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-05-10T06:09:07Z | 2024-07-01T00:26:31Z | https://github.com/ultralytics/yolov5/issues/12996 | [
"bug",
"Stale"
] | arkerman | 6 |
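One frequent cause of a small accuracy drop after TensorRT export is half-precision (FP16) rounding. A stdlib-only sketch of the error float16 introduces; this illustrates the mechanism only, it is not the YOLOv5 export path:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

# 1.0 survives exactly; fractional values pick up rounding error that can
# nudge confidences and box coordinates across thresholds.
for v in (1.0, 0.1234567, 100.123):
    print(v, to_fp16(v), abs(v - to_fp16(v)))
```

If the engine was built in FP16 (e.g. via the export half-precision option), rebuilding it in FP32 is a quick way to check whether precision explains the difference.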
AirtestProject/Airtest | automation | 1,028 | ไฝฟ็จ็ธๅฏน่ทฏๅพ้ๆฉๅพ็ๅฎไฝๆถ, logtohtmlๆนๆณๆฅ้ | win10 ,
airtest ็ๆฌ1.2.4 ,
python3.9 ,
ไฝฟ็จpycharm่ฟ่ก ,
ๅจๆไฝ็ๆถๅๅฆๆไฝฟ็จไบ็ธๅฏน่ทฏๅพ , ๅจ็ๆๆฅๅ็ๆถๅๅฐฑไผๅบ้
ไฝฟ็จ็ปๅฏน่ทฏๅพๅฏไปฅๆญฃๅธธ็ๆๆฅๅ


| open | 2022-02-24T08:44:19Z | 2022-02-24T08:44:19Z | https://github.com/AirtestProject/Airtest/issues/1028 | [] | helei0411 | 0 |
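A possible workaround sketch while relative paths trip up report generation: resolve template-image paths to absolute paths before they are used (the helper name below is hypothetical, not an Airtest API):

```python
import os

def resolve(path: str, script_dir: str = ".") -> str:
    """Return `path` unchanged if absolute, else anchor it to `script_dir`."""
    if os.path.isabs(path):
        return path
    return os.path.abspath(os.path.join(script_dir, path))

print(resolve("tpl_001.png"))
```

This mirrors the observation in the report: absolute paths work, so normalizing every path up front sidesteps the logtohtml error.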