| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ivy-llc/ivy | tensorflow | 28,495 | Fix Frontend Failing Test: paddle - math.paddle.diff | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-03-06T21:57:49Z | 2024-04-02T09:25:04Z | https://github.com/ivy-llc/ivy/issues/28495 | [
"Sub Task"
] | ZJay07 | 0 |
albumentations-team/albumentations | deep-learning | 1,509 | PadIfNeeded seems to not correctly work with ReplayCompose in certain cases | ## 🐛 Bug
Using `ReplayCompose` with `PadIfNeeded` does not appear to reproduce the transform when a non-default value of `position` is used.
## To Reproduce
Steps to reproduce the behavior:
```python
import numpy as np
from albumentations import ReplayCompose
from albumentations.augmentations.geometric.transforms import PadIfNeeded

IM_HEIGHT = 124
SQUARE_SIZE = 256
img = np.random.rand(IM_HEIGHT, SQUARE_SIZE, 3)

transform_pad_br = ReplayCompose([
    PadIfNeeded(
        min_height=SQUARE_SIZE,
        min_width=SQUARE_SIZE,
        position=PadIfNeeded.PositionType.BOTTOM_RIGHT,  # <--- default is CENTER
        border_mode=0,  # AKA cv2.BORDER_CONSTANT
        value=0,
        mask_value=0,
    ),
])

# original
test_pad = transform_pad_br(image=img)

# replay
replay_data = test_pad['replay']
test_replay_pad = ReplayCompose.replay(replay_data, image=img)
```
## Expected behavior
Original pad has a top bar (BOTTOM_RIGHT position)
```python
import matplotlib.pyplot as plt

plt.imshow(test_pad['image'])
plt.show()
```

But, the replayed pad has top/bottom bars (default CENTER position)
```python
plt.imshow(test_replay_pad['image'])
plt.show()
```

## Environment
- Albumentations version (e.g., 0.1.8): 1.3.0 (also tested 1.3.1)
- Python version (e.g., 3.7): 3.7.10
- OS (e.g., Linux): Ubuntu 22.04.3 LTS
- How you installed albumentations (`conda`, `pip`, source): pip
- Any other relevant information:
## Additional context
hacking 'position' back in seems to help
```python
# fix
import copy

new_replay_data = copy.deepcopy(replay_data)
new_replay_data['transforms'][0]['position'] = PadIfNeeded.PositionType.BOTTOM_RIGHT
fixed_replay_pad = ReplayCompose.replay(new_replay_data, image=img)
assert np.all(fixed_replay_pad['image'] == test_pad['image'])

# note:
# maybe a fix is to update PadIfNeeded.get_transform_init_args_names() to also return 'position'
# but I'm not sure what else this affects
```
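The suspected root cause can be demonstrated without albumentations at all: if `position` is not among the serialized init-arg names, the replay rebuilds the transform with the class default. A minimal, library-free sketch (class and method names are illustrative, not the actual albumentations internals):

```python
# A fake transform whose serialization omits "position", mimicking the
# suspected bug: replaying reconstructs it with the default value.
class FakePad:
    def __init__(self, min_height=256, position="center"):
        self.min_height = min_height
        self.position = position

    # Suppose serialization only records these names...
    def get_transform_init_args_names(self):
        return ("min_height",)  # "position" is missing

    def serialize(self):
        return {name: getattr(self, name) for name in self.get_transform_init_args_names()}

original = FakePad(min_height=256, position="bottom_right")
replayed = FakePad(**original.serialize())  # what a replay effectively does

print(original.position, replayed.position)  # bottom_right center
```

Adding `"position"` to the serialized names in this sketch makes the replayed object match the original, which is exactly what the hack above does by hand.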
| closed | 2024-01-20T02:17:45Z | 2024-03-21T00:39:03Z | https://github.com/albumentations-team/albumentations/issues/1509 | [
"bug"
] | fhung65 | 4 |
ipython/ipython | data-science | 14,697 | Avoid warning on `ipython.utils.text.dedent`? | Currently, ipython code itself uses `ipython.utils.text.dedent` at a few places, for example https://github.com/ipython/ipython/blob/main/IPython/core/magic_arguments.py#L92
But the function raises a warning.
I think it's best if the function does not emit a warning until it is completely unused within the IPython codebase itself; otherwise users see a warning they can do nothing about.
Related https://github.com/ipython/ipython/issues/14280 | closed | 2025-01-30T11:44:53Z | 2025-02-26T10:54:42Z | https://github.com/ipython/ipython/issues/14697 | [] | user202729 | 2 |
allure-framework/allure-python | pytest | 339 | Allure displays date and time as "unknown" on the summary overview page if all test cases are skipped or failed (none passed) |
#### I'm submitting a ...
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
When all my test cases fail or are skipped (so there are no passed test cases), Allure displays the date and time as "unknown" on the summary overview page.
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
Mark all test cases as skipped, or write only failing test cases.
#### What is the expected behavior?
It should display the date and time correctly on the overview page.
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
| Allure version | 2.2.0 |
| --- | --- |
| Test framework | pytest@3.6.3 |
| Allure adaptor | allure-pytest@2.5.0 |
| Generate report using | allure-commandline@2.6.0 |
#### Other information
| closed | 2018-08-13T12:14:36Z | 2019-02-20T15:05:41Z | https://github.com/allure-framework/allure-python/issues/339 | [
"bug",
"theme:pytest",
"work:backlog"
] | yili1992 | 1 |
LAION-AI/Open-Assistant | machine-learning | 3,139 | Feedback button during chat | https://github.com/LAION-AI/Open-Assistant/assets/95025816/d7ed34ab-e706-491b-a790-9379edfc3c72 Hard to explain in words, so here is a video that demonstrates the issue; the same thing happens with voting down. | closed | 2023-05-12T20:20:32Z | 2023-05-13T19:56:34Z | https://github.com/LAION-AI/Open-Assistant/issues/3139 | [
"bug",
"website",
"UI/UX"
] | sryu1 | 0 |
itamarst/eliot | numpy | 86 | Convert eliot output into kcachegrind-readable format | Eliot logging can be used for performance measurement, since it has start and stop (wall clock) timing for actions. kcachegrind is a useful tool for visualizing profiling data, and quite possibly Eliot output could be loaded into it.
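The callgrind text format kcachegrind reads is simple enough that such a converter is a small script. A hypothetical sketch of the mapping (the field names of the eliot records are assumptions, and the cost model is wall-clock self-time in microseconds):

```python
# Sketch (my own, not part of eliot): flatten eliot-style nested action
# records into callgrind text, using durations as the cost metric.
def to_callgrind(actions):
    """actions: list of dicts with 'action_type', 'duration' (seconds),
    and optional 'children' (nested actions)."""
    lines = ["events: Microseconds"]

    def emit(action):
        children = action.get("children", [])
        # self time = total duration minus time spent in child actions
        self_cost = action["duration"] - sum(c["duration"] for c in children)
        lines.append(f"fn={action['action_type']}")
        lines.append(f"0 {int(self_cost * 1e6)}")
        for child in children:
            lines.append(f"cfn={child['action_type']}")
            lines.append("calls=1 0")
            lines.append(f"0 {int(child['duration'] * 1e6)}")
        for child in children:
            emit(child)

    for action in actions:
        emit(action)
    return "\n".join(lines)

log = [{"action_type": "request", "duration": 0.5,
        "children": [{"action_type": "db_query", "duration": 0.25}]}]
print(to_callgrind(log))
```

A real converter would parse eliot's `task_uuid`/`task_level` fields to rebuild the action tree first; this only shows the output side.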
| closed | 2014-05-25T23:24:38Z | 2019-05-09T18:18:28Z | https://github.com/itamarst/eliot/issues/86 | [
"enhancement"
] | itamarst | 1 |
aiortc/aiortc | asyncio | 1,207 | kind = "audio" How to use queue output for Audio? Audio 如何用队列输出? | The sound problem generated when creating virtual digital humans/在做虚拟数字人时 时时生成的声音问题
How to use queue output for Audio? Audio 如何用队列输出?

```python
# Full code
import asyncio
import json
import os
import uuid
import math
import numpy as np
import cv2
from av import AudioFrame, VideoFrame  # audio and video are handled separately
import queue
import threading
from pydub import AudioSegment
#
from aiortc import MediaStreamTrack, RTCPeerConnection, RTCSessionDescription, VideoStreamTrack
from aiortc.contrib.media import MediaPlayer, MediaRelay
#
import uvicorn
from fastapi import FastAPI, Request
#
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates

app = FastAPI()
fids = set()  # set of active peer connections (missing from the original snippet)
templates = Jinja2Templates(directory="templates")  # assumed directory; not defined in the original snippet

# Create two queues: one for video frames, one for audio samples
video_queue = queue.Queue(maxsize=50000)
audio_queue = queue.Queue(maxsize=50000)

# Simulated audio frame generation
def load_audio(audio_path):
    # Load the audio file
    audio = AudioSegment.from_file(audio_path)  # ("yongen_diy.wav", format="wav") / ("1.mp3", format="mp3")
    # Convert to a numpy array
    samples = np.array(audio.get_array_of_samples())
    frame_rate = audio.frame_rate  # e.g. 44100 means 44100 samples per second
    frame_size = frame_rate // 25  # spread one second of audio over 25 video frames
    # Split the audio into frames
    for i in range(0, len(samples), frame_size):  # iterate 0..len(samples) in steps of frame_size
        frame = samples[i:i + frame_size]
        # print(str(i) + ':' + str(i + frame_size))  # 0:600 600:1200 1200:1800 1800:2400
        if len(frame) == frame_size:  # make sure every frame has the same length
            audio_queue.put(frame)
        # time.sleep(0.02)  # simulate a 50 fps audio frame rate

# Simulated video frame generation - 25 frames per second
def load_video(video_path):
    cap = cv2.VideoCapture(video_path)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        if not video_queue.full():
            video_queue.put(frame)  # put the video frame into the queue
        # time.sleep(0.04)  # simulate a 25 fps video frame rate

# Assume we have a video file and an audio file
audio_path = 'yongen_diy.wav'
video_path = '1.mp4'

# Create threads to load the data
video_thread = threading.Thread(target=load_video, args=(video_path,))
audio_thread = threading.Thread(target=load_audio, args=(audio_path,))
video_thread.start()
audio_thread.start()

# Custom video track fed from the queue - works
class diyKindVideo(VideoStreamTrack):
    kind = "video"

    def __init__(self):
        super().__init__()  # VideoStreamTrack must be initialized
        self.cap = video_queue  # cv2.VideoCapture(0, cv2.CAP_DSHOW)  # local camera

    async def recv(self):
        # video_frame = self.cap.get(timeout=1)  # .read() was for a locally opened video; the queue is used below instead
        video_frame = video_queue.get(timeout=1)  # take a frame from the video queue
        print(f"------Video frame shape: {video_frame.shape}")
        # 1. Push the video stream
        # # edges image-processing variant - re-streaming the processed video works
        # frame2 = cv2.cvtColor(cv2.Canny(video_frame, 100, 200), cv2.COLOR_GRAY2BGR)
        new_frame = VideoFrame.from_ndarray(video_frame, format="bgr24")
        pts, time_base = await self.next_timestamp()
        new_frame.pts = pts  # int(self._index * self._time_base / self._frame_rate)
        new_frame.time_base = time_base  # '1/' + str(time_base)
        return new_frame

# Custom audio track fed from the queue - ------------------ not OK, why? ------------------
class diyKindAudio(MediaStreamTrack):
    kind = "audio"

    def __init__(self):
        super().__init__()  # MediaStreamTrack must be initialized
        self.cap = audio_queue

    async def recv(self):
        audio_frame = audio_queue.get(timeout=1)  # take samples from the audio queue
        print(f"------Audio data shape: {audio_frame.shape}")
        # 2. Push the audio stream
        frame = audio_frame.astype(np.int16)
        new_frame = AudioFrame(format='s16', layout='mono', samples=frame.shape[0])  # create a mono AudioFrame
        new_frame.planes[0].update(frame.tobytes())
        new_frame.sample_rate = 24000  # was: 16000
        return new_frame

#####################################################################################
# Async handler - extract parameters from the JSON request, create an RTCPeerConnection, generate a unique ID
@app.post("/offer")
async def offer(request: Request):
    fid = RTCPeerConnection()  # connection created for this browser
    fids.add(fid)  # add the new connection to the fids set
    print('-----------Total number of connections (fids)----------:', fids)

    # RTC heartbeat
    @fid.on("connectionstatechange")
    async def on_connectionstatechange():
        print("Connection state is %s" % fid.connectionState)
        if fid.connectionState == "failed":
            await fid.close()
            fids.discard(fid)

    fid.addTrack(diyKindVideo())  # video OK
    fid.addTrack(diyKindAudio())  # audio OK

    # ----- continue -------
    params = await request.json()
    # Assume the remote description has already been received
    _desc = RTCSessionDescription(sdp=params["sdp"], type=params["type"])
    await fid.setRemoteDescription(_desc)
    # Create an answer
    answer = await fid.createAnswer()
    await fid.setLocalDescription(answer)
    return {"sdp": fid.localDescription.sdp, "type": fid.localDescription.type}

# Option 1: template rendered by the backend
# Route that renders the HTML template - for some reason only HTML rendered by Python is reachable; a fully separated front end does not work
@app.get("/")
def index(request: Request):
    return templates.TemplateResponse("index_fastapi_dev2.html", {"request": request, "name": "World"})

static_dir = os.path.join(os.path.dirname(__file__), "static")
app.mount("/", StaticFiles(directory=static_dir, html=True), name="static")

# Option 2: separated mode
if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080, log_level='info', loop="asyncio")
``` | closed | 2025-01-13T10:32:44Z | 2025-01-29T11:50:50Z | https://github.com/aiortc/aiortc/issues/1207 | [] | gg22mm | 0 |
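A likely culprit in the aiortc question above: `diyKindVideo.recv()` calls `self.next_timestamp()` and sets `pts`/`time_base`, but `diyKindAudio.recv()` returns frames with no timestamps, so aiortc cannot pace the audio. The bookkeeping an audio track needs can be sketched with the standard library alone (the class below is illustrative, not aiortc API):

```python
from fractions import Fraction

class AudioTimestamper:
    """Tracks pts for fixed-size audio frames: pts advances by the number
    of samples in each frame, with time_base = 1/sample_rate."""

    def __init__(self, sample_rate: int):
        self.sample_rate = sample_rate
        self.time_base = Fraction(1, sample_rate)
        self._pts = 0

    def stamp(self, n_samples: int):
        pts = self._pts
        self._pts += n_samples
        return pts, self.time_base

ts = AudioTimestamper(sample_rate=24000)
# Three frames of 960 samples each (40 ms at 24 kHz)
stamps = [ts.stamp(960)[0] for _ in range(3)]
print(stamps)  # [0, 960, 1920]
```

In the real `recv()` one would then do `new_frame.pts, new_frame.time_base = ts.stamp(frame.shape[0])` before returning the frame.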
praw-dev/praw | api | 1,951 | Docs: Font color of method names is unreasonably white on a white background when using dark theme | ### Describe the Documentation Issue
Hey Praw maintainers, thanks for the great work.
I'm about to use this API and I'm really happy with what I've found so far.
The only sad part is I'll have to read the documentation on light theme. This is because of the issue in the title, pictured below, or [directly in the site but turn on **dark mode**](https://praw.readthedocs.io/en/stable/code_overview/reddit_instance.html#praw.Reddit.request):

### Attributes
- [X] Yes
### Location of the issue
https://praw.readthedocs.io/en/stable/code_overview/reddit_instance.html#praw.Reddit.request
### What did you expect to see?
method names a bit easier to read
### What did you actually see?
method names hard to read
### Proposed Fix
Gotta be a code color somewhere or a css rule to fix it
### Operating System/Web Browser
_No response_
### Anything else?
_No response_ | closed | 2023-04-04T20:36:28Z | 2023-07-04T17:45:36Z | https://github.com/praw-dev/praw/issues/1951 | [] | vitorcodesalittle | 4 |
learning-at-home/hivemind | asyncio | 281 | Try to migrate from CircleCI to GitHub Actions | Seems easy, but testing needed.
Before doing so, check that the benchmarks still behave well under GitHub Actions.
"ci"
] | yhn112 | 0 |
chainer/chainer | numpy | 7,681 | The average function for float16 can overflow | The average function for float16 [can overflow](https://github.com/chainer/chainer/blob/6fef53f3f9fcae9de0643d677b89349d58f20cad/chainer/functions/math/average.py#L62) due to the same reason described in #6702. | closed | 2019-07-03T06:07:48Z | 2019-11-29T05:19:21Z | https://github.com/chainer/chainer/issues/7681 | [
"stale",
"prio:low"
] | gwtnb | 4 |
huggingface/peft | pytorch | 1,596 | TypeError: ChatGLMForConditionalGeneration.forward() got an unexpected keyword argument 'decoder_input_ids' | ### System Info
python 3.10.8
peft 0.7.1
transformers 4.38.1
datasets 2.18.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
code:
```python
tokenized_dataset = dataset.map(
    preprocess_function, batched=True, remove_columns=["content", "summary"]
)
print(f"Keys of tokenized dataset: {list(tokenized_dataset['train'].features)}")

# save datasets to disk for later easy loading
# tokenized_dataset["train"].save_to_disk("data/train")
# tokenized_dataset["test"].save_to_disk("data/eval")

# load model from the hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)

# Define LoRA Config
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q", "v"],
    lora_dropout=0.1,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM,
)
# prepare int-8 model for training
# model = prepare_model_for_int8_training(model)
# add LoRA adaptor
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# trainable params: 18874368 || all params: 11154206720 || trainable%: 0.16921300163961817

# we want to ignore tokenizer pad token in the loss
label_pad_token_id = -100
# Data collator
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    # model=model,
    label_pad_token_id=label_pad_token_id,
    pad_to_multiple_of=8,
    # batch_size=8,
)

# Define training args
training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    auto_find_batch_size=True,
    learning_rate=1e-3,  # higher learning rate
    num_train_epochs=10,
    # per_device_train_batch_size=8,
    logging_dir=f"{output_dir}/logs",
    logging_strategy="steps",
    logging_steps=500,
    save_strategy="no",
    report_to="tensorboard",
    lr_scheduler_type="constant",
    # using only a portion of the dataset
    # train = 30,
    # max_eval_samples = 30,
)

# Create Trainer instance
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=tokenized_dataset["train"],
)
model.config.use_cache = (
    False  # silence the warnings. Please re-enable for inference!
)

# train model
trainer.train()

# Save our LoRA model & tokenizer results
trainer.model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```
error:
```
trainable params: 1,949,696 || all params: 6,245,533,696 || trainable%: 0.031217444255383614
0%| | 0/530 [00:00<?, ?it/s]Traceback (most recent call last):
File "/root/test01/test01.py", line 164, in <module>
trainer.train()
File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1624, in train
return inner_training_loop(
File "/root/miniconda3/lib/python3.10/site-packages/accelerate/utils/memory.py", line 136, in decorator
return function(batch_size, *args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 1961, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2902, in training_step
loss = self.compute_loss(model, inputs)
File "/root/miniconda3/lib/python3.10/site-packages/transformers/trainer.py", line 2925, in compute_loss
outputs = model(**inputs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/peft/peft_model.py", line 1249, in forward
return self.base_model(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 103, in forward
return self.model.forward(*args, **kwargs)
TypeError: ChatGLMForConditionalGeneration.forward() got an unexpected keyword argument 'decoder_input_ids'
0%| | 0/530 [00:00<?, ?it/s]
```
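The traceback above is characteristic of pairing a decoder-only model (ChatGLM) with `TaskType.SEQ_2_SEQ_LM`: the seq2seq training path feeds `decoder_input_ids` into a `forward()` that has no such parameter. Switching to `TaskType.CAUSAL_LM` (with a causal-LM collator and plain `Trainer`) likely avoids building that key at all, which matches the report that CAUSAL_LM works. A dependency-free sketch of just the failure mechanism:

```python
# A decoder-only (causal) model's forward() accepts no decoder_input_ids.
def causal_forward(input_ids, attention_mask=None, labels=None):
    return {"loss": 0.0}

batch = {"input_ids": [1, 2, 3], "labels": [1, 2, 3], "decoder_input_ids": [0, 1, 2]}

try:
    causal_forward(**batch)          # what the seq2seq path effectively does
except TypeError as e:
    print(e)                         # ... unexpected keyword argument 'decoder_input_ids'

batch.pop("decoder_input_ids")       # a causal-LM setup never builds this key
out = causal_forward(**batch)        # works
```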
### Expected behavior
When fine-tuning the chatGLM3-6B model with PEFT + LoRA using task type SEQ_2_SEQ_LM, I encountered a `TypeError: ChatGLMForConditionalGeneration.forward() got an unexpected keyword argument 'decoder_input_ids'` error. I am not sure where the problem is, but everything works normally with CAUSAL_LM and FEATURE_EXTRACTION. | closed | 2024-03-27T14:39:54Z | 2024-03-28T10:09:09Z | https://github.com/huggingface/peft/issues/1596 | [] | 12915494174 | 4 |
keras-team/autokeras | tensorflow | 1,924 | Bug: trouble loading and using tensorflow savedmodel with custom input layer in keras, has been working fine for 2+ years and now currently looking for the solutions with my team | ### Bug Description
Title: Trouble Loading and Using a TensorFlow SavedModel with Custom Input Layer in Keras
Body:
Hi everyone,
I'm currently working on a project where I need to load a TensorFlow SavedModel and use it with a custom input layer in Keras. However, I'm running into issues when trying to perform inference with the model. Here’s what I’ve done so far:
1. Loaded the SavedModel as an inference-only layer:
```python
import tensorflow as tf
from keras.layers import TFSMLayer
from keras import Input
from keras.models import Model

base_model = TFSMLayer("path/to/savedmodel", call_endpoint='serving_default')
input_layer = Input(shape=(32,), dtype='float64')
output_layer = base_model(input_layer)
model = Model(inputs=input_layer, outputs=output_layer)
```
2. Converted test data to the required float64 dtype:
```python
import pandas as pd

test_data = pd.read_csv("path/to/testdata.csv")
test_data_float64 = tf.cast(test_data.values, tf.float64)
```
3. Attempted to use the model for inference:
```python
predictions = model(test_data_float64)
```
However, I’m encountering issues with the input data type and shape compatibility.
### My Questions:
1. Data Type Compatibility: How can I ensure that the input data is correctly formatted and compatible with the expected input dtype of the TFSMLayer?
2. Shape Issues: Are there any common pitfalls or best practices when dealing with custom input layers in Keras models that load TensorFlow SavedModels?
3. Inference with Custom Layers: Is there a better approach to modify the input layer of a pre-trained TensorFlow SavedModel for inference in Keras?
Any guidance or suggestions on how to resolve these issues would be greatly appreciated. Thank you!
| open | 2024-06-27T21:21:55Z | 2024-06-27T23:05:43Z | https://github.com/keras-team/autokeras/issues/1924 | [
"bug report"
] | IOIntInc | 1 |
openapi-generators/openapi-python-client | fastapi | 1,125 | Incorrect generation of `_parse_response ` | Lovely package BTW!
**Describe the bug**
Code generation for parsing the API response does not emit the code that constructs the Python model from the response body. This happens when the response is defined via a `$ref` into `components/responses` instead of inline in the path definition.
**OpenAPI Spec File**
```yml
openapi: 3.0.3
info:
  title: Example
  description: |
    Example API definition.
  version: 1.0.0
security:
  - BearerAuth: []
paths:
  /v2/task/{task_id}:
    get:
      summary: Get the summary of a task.
      responses:
        '200':
          $ref: '#/components/responses/task_summary'
        '400':
          $ref: '#/components/responses/bad_request'
        '403':
          $ref: '#/components/responses/not_authorized'
      parameters:
        - $ref: '#/components/parameters/task_id'
components:
  securitySchemes:
    BearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
  parameters:
    task_id:
      name: task_id
      in: path
      description: The unique identifier of the task targeted by the request.
      required: true
      schema:
        $ref: '#/components/schemas/TaskId'
  responses:
    task_summary:
      description: Summary of the task at the time the request was made.
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/TaskSummary'
    bad_submit_task_request:
      description: |
        The request is invalid. This may indicate an error when parsing a parameter, or an error when parsing or validating the request body. The response body may contain a JSON array with a list of validation errors.
      content:
        application/json:
          schema:
            type: object
            properties:
              validation_errors:
                type: array
                minItems: 1
                items:
                  type: string
    not_authorized:
      description: |
        User is not authorized to perform this task.
    bad_request:
      description: |
        The request is invalid. This may indicate an error when parsing a parameter.
  schemas:
    TaskId:
      type: string
      minLength: 1
      maxLength: 256
      pattern: "^[a-zA-Z0-9][a-zA-Z0-9-]*$"
      example: '80cf75f2-4700-4006-9203-4376b091ee4e'
      description: A unique identifier for a task. Must not be an empty string.
    TaskSummary:
      type: "object"
      description: Summary of existing task, including task status.
      properties:
        task_status:
          $ref: '#/components/schemas/TaskStatus'
      required:
        - task_status
    TaskStatus:
      type: string
      enum:
        - Created
        - PayloadProcessing
        - Scheduled
        - Completed
        - Cancelled
        - Failed
      description: |
        Reports status of the task. Lifecycle of a normal task is Created -> PayloadProcessing -> Scheduled -> Completed,
        where PayloadProcessing means the task is going through additional compilation/validation/optimization,
        Scheduled means the task is in the qpu queue, and Completed means the task has been executed.
```
the code that gets generated here is:
```python
def _parse_response(*, client: Union[AuthenticatedClient, Client], response: httpx.Response) -> Optional[Any]:
    if response.status_code == HTTPStatus.OK:
        return None
    if response.status_code == HTTPStatus.BAD_REQUEST:
        return None
    if response.status_code == HTTPStatus.FORBIDDEN:
        return None
    if client.raise_on_unexpected_status:
        raise errors.UnexpectedStatus(response.status_code, response.content)
    else:
        return None
```
which is not correct. The additional context shows the expected behavior.
**Desktop (please complete the following information):**
- OS: macOS: 14.4.1
- Python Version: 3.12.1
- openapi-python-client: 0.21.5
**Additional context**
Note if I modify the path:
```yml
/v2/task/{task_id}:
  get:
    summary: Get the summary of a task.
    responses:
      '200':
        description: Summary of the task at the time the request was made.
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/TaskSummary'
      '400':
        $ref: '#/components/responses/bad_request'
      '403':
        $ref: '#/components/responses/not_authorized'
    parameters:
      - $ref: '#/components/parameters/task_id'
```
it generates the correct code:
```python
def _parse_response(
    *, client: Union[AuthenticatedClient, Client], response: httpx.Response
) -> Optional[Union[Any, TaskSummary]]:
    if response.status_code == HTTPStatus.OK:
        response_200 = TaskSummary.from_dict(response.json())
        return response_200
    if response.status_code == HTTPStatus.BAD_REQUEST:
        response_400 = cast(Any, None)
        return response_400
    if response.status_code == HTTPStatus.FORBIDDEN:
        response_403 = cast(Any, None)
        return response_403
    if client.raise_on_unexpected_status:
        raise errors.UnexpectedStatus(response.status_code, response.content)
    else:
        return None
```
| open | 2024-09-24T16:44:01Z | 2024-09-24T16:52:54Z | https://github.com/openapi-generators/openapi-python-client/issues/1125 | [] | weinbe58 | 0 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 408 | [BUG] TikTok msToken generation fails | My local network can access TikTok normally, but while debugging the project, generating the TikTok msToken always times out.

| closed | 2024-05-23T06:39:03Z | 2024-05-26T05:38:17Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/408 | [
"BUG"
] | meepolove | 4 |
taverntesting/tavern | pytest | 647 | Clarify API boundaries | Move most of the code into a `_tavern` folder and move any existing test helpers into a top-level `helpers` file to clarify where the API boundary is (as well as general cleanup around folders - for example, 'testutil' is a really unhelpful name)
| closed | 2021-02-20T11:59:55Z | 2021-10-03T12:37:01Z | https://github.com/taverntesting/tavern/issues/647 | [] | michaelboulton | 0 |
stitchfix/hamilton | numpy | 19 | Enhancement: Add capability to use a DataFrame as a template for the target output. | Currently if the caller has a DataFrame structure that they are targeting then they need to ensure they match the names of the columns correctly and manually convert the Series types. If the `output_columns` or other parameter of the `execute` function took a DataFrame as a template then the output columns would match the data columns and each series can be delivered using `astype` conversion.
You will probably need something like a `DictionaryError` for the scenario where there is a column in the DataFrame template that is not in the data columns available.
There is also the option of processing compound column names from the DataFrame, mapping into a more structured DataFrame; this would involve a join character, e.g. `_`.
| closed | 2021-10-23T13:47:33Z | 2022-03-24T04:05:58Z | https://github.com/stitchfix/hamilton/issues/19 | [
"enhancement",
"product idea"
] | straun | 2 |
scrapy/scrapy | web-scraping | 6,185 | `FEED_EXPORT_BATCH_ITEM_COUNT` not working |
### Description
The Scrapy setting `FEED_EXPORT_BATCH_ITEM_COUNT` does not work when its value is higher than the number of yielded items. For example: if `FEED_EXPORT_BATCH_ITEM_COUNT` is 5 and only 3 items are yielded, the scraped items are never stored at `FEED_URI` (the specified file). No file is generated at all; instead, when fewer items than the batch size are yielded, the exporter should still dump them.
### Steps to Reproduce
1. FEED_EXPORT_BATCH_ITEM_COUNT = 5, FEED_FORMAT = csv, FEED_URI = ***.csv
2. Total scraped items: 3
3. **BUG:** No file gets generated
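The flush-on-close behavior the report expects can be sketched with a plain batching writer (illustrative only, not Scrapy's actual feed-export implementation):

```python
# A batch writer that flushes the final partial batch when closed.
class BatchWriter:
    def __init__(self, batch_count: int):
        self.batch_count = batch_count
        self._buffer = []
        self.batches = []          # stands in for files written to FEED_URI

    def write(self, item):
        self._buffer.append(item)
        if len(self._buffer) >= self.batch_count:
            self._flush()

    def _flush(self):
        if self._buffer:           # never emit an empty batch
            self.batches.append(list(self._buffer))
            self._buffer.clear()

    def close(self):
        self._flush()              # the step the report says is missing

writer = BatchWriter(batch_count=5)
for item in range(3):              # only 3 items scraped
    writer.write(item)
writer.close()
print(writer.batches)  # [[0, 1, 2]]
```

Without the `_flush()` call in `close()`, the three buffered items would be silently dropped — which matches the reported symptom.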
**Expected behavior:** Even when fewer records than `batch_count` are scraped, the exporter should still write them to a file when the spider closes.
**Actual behavior:** No file is generated when the item count is below `batch_count`.
**Reproduces how often:** Everytime
### Versions
2.11.0
| closed | 2023-12-23T14:36:25Z | 2024-01-12T12:58:40Z | https://github.com/scrapy/scrapy/issues/6185 | [
"not reproducible",
"needs more info"
] | Mhassanniazi | 3 |
healthchecks/healthchecks | django | 1,135 | Alert if `/start` signal does not arrive at expected time (with a separate grace time setting for it) | Currently, the set 'grace time' is used to both measure the time between schedule and initial ping AND between start and finish ping.
This means that if you send start pings for your timers (enabling you to see the run time of each run) you have to set the grace time to the max of the two values: maximum expected delay from schedule, and maximum run time.
We have quite a few timers that run for a long time, but we don't expect their start time to deviate from their respective schedule too much, and we'd rather know earlier that they haven't started yet.
Would it be possible to specify the expected run time separately? | closed | 2025-03-20T12:37:53Z | 2025-03-20T13:31:34Z | https://github.com/healthchecks/healthchecks/issues/1135 | [
"feature"
] | Riscky | 4 |
pywinauto/pywinauto | automation | 566 | How to get certain value of cell in a table if the rows are dynamic. | Hi,
I am working with a window that contains a table with columns and rows. The number of columns is fixed, but the number of rows is dynamic. I need to find a certain cell value if it exists.
---------------------------------------------------------------------------------------------
Name | LastName | OrderID
-- | -- | --
smita1 | |
Smita2 | |
**Smita3** | |
I need to find the cell value 'Smita3', which may appear at row 3, 4, or 5. How can I do that?
-------------------------------------------------------------------------------
I tried following code...
```python
cTable = MyApp1.child_window(title="orderTable", control_type="Table").wrapper_object()
dItem = pywinauto.findwindows.find_windows(title_re="Name.*", control_type="DataItem", found_index=0)
nName = int(len(cTable.descendants(title="Name", control_type="DataItem")))
print(nName)
for i in range(nName):
    cell = MyApp1.cTable.dItem
    print(cell.iface_value.CurrentValue)
    i += i + 1
```
When I run this, I get the following error traceback...
```python
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 232, in 'calling callback function'
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 138, in enum_window_proc
if control_type is not None and control_type != element.control_type:
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 235, in control_type
return self.__get_control_type(full=False)
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 216, in __get_control_type
remote_mem = RemoteMemoryBlock(self, size=length*2)
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 105, in __init__
if hex(self.mem_address) == '0xffffffff80000000' or hex(self.mem_address).upper() == '0xFFFFFFFF00000000':
TypeError: 'NoneType' object cannot be interpreted as an integer
Exception ignored in: <function RemoteMemoryBlock.__del__ at 0x0000021C61266AE8>
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 163, in __del__
self.CleanUp()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 140, in CleanUp
self.CheckGuardSignature()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 279, in CheckGuardSignature
ctypes.c_void_p(self.mem_address + self.size),
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 232, in 'calling callback function'
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 138, in enum_window_proc
if control_type is not None and control_type != element.control_type:
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 235, in control_type
return self.__get_control_type(full=False)
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 216, in __get_control_type
remote_mem = RemoteMemoryBlock(self, size=length*2)
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 105, in __init__
if hex(self.mem_address) == '0xffffffff80000000' or hex(self.mem_address).upper() == '0xFFFFFFFF00000000':
TypeError: 'NoneType' object cannot be interpreted as an integer
Exception ignored in: <function RemoteMemoryBlock.__del__ at 0x0000021C61266AE8>
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 163, in __del__
self.CleanUp()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 140, in CleanUp
self.CheckGuardSignature()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 279, in CheckGuardSignature
ctypes.c_void_p(self.mem_address + self.size),
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 232, in 'calling callback function'
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 138, in enum_window_proc
if control_type is not None and control_type != element.control_type:
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 235, in control_type
return self.__get_control_type(full=False)
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 216, in __get_control_type
remote_mem = RemoteMemoryBlock(self, size=length*2)
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 105, in __init__
if hex(self.mem_address) == '0xffffffff80000000' or hex(self.mem_address).upper() == '0xFFFFFFFF00000000':
TypeError: 'NoneType' object cannot be interpreted as an integer
Exception ignored in: <function RemoteMemoryBlock.__del__ at 0x0000021C61266AE8>
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 163, in __del__
self.CleanUp()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 140, in CleanUp
self.CheckGuardSignature()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 279, in CheckGuardSignature
ctypes.c_void_p(self.mem_address + self.size),
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 232, in 'calling callback function'
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 138, in enum_window_proc
if control_type is not None and control_type != element.control_type:
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 235, in control_type
return self.__get_control_type(full=False)
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 216, in __get_control_type
remote_mem = RemoteMemoryBlock(self, size=length*2)
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 89, in __init__
process_id)
pywinauto.remote_memory_block.AccessDenied: ('[WinError 5] Access is denied.process: %d', 14884)
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 232, in 'calling callback function'
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 138, in enum_window_proc
if control_type is not None and control_type != element.control_type:
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 235, in control_type
return self.__get_control_type(full=False)
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 216, in __get_control_type
remote_mem = RemoteMemoryBlock(self, size=length*2)
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 105, in __init__
if hex(self.mem_address) == '0xffffffff80000000' or hex(self.mem_address).upper() == '0xFFFFFFFF00000000':
TypeError: 'NoneType' object cannot be interpreted as an integer
Exception ignored in: <function RemoteMemoryBlock.__del__ at 0x0000021C61266AE8>
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 163, in __del__
self.CleanUp()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 140, in CleanUp
self.CheckGuardSignature()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 279, in CheckGuardSignature
ctypes.c_void_p(self.mem_address + self.size),
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 232, in 'calling callback function'
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 138, in enum_window_proc
if control_type is not None and control_type != element.control_type:
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 235, in control_type
return self.__get_control_type(full=False)
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 216, in __get_control_type
remote_mem = RemoteMemoryBlock(self, size=length*2)
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 105, in __init__
if hex(self.mem_address) == '0xffffffff80000000' or hex(self.mem_address).upper() == '0xFFFFFFFF00000000':
TypeError: 'NoneType' object cannot be interpreted as an integer
Exception ignored in: <function RemoteMemoryBlock.__del__ at 0x0000021C61266AE8>
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 163, in __del__
self.CleanUp()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 140, in CleanUp
self.CheckGuardSignature()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 279, in CheckGuardSignature
ctypes.c_void_p(self.mem_address + self.size),
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 232, in 'calling callback function'
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 138, in enum_window_proc
if control_type is not None and control_type != element.control_type:
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 235, in control_type
return self.__get_control_type(full=False)
File "C:\Python37\lib\site-packages\pywinauto\win32_element_info.py", line 216, in __get_control_type
remote_mem = RemoteMemoryBlock(self, size=length*2)
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 105, in __init__
if hex(self.mem_address) == '0xffffffff80000000' or hex(self.mem_address).upper() == '0xFFFFFFFF00000000':
TypeError: 'NoneType' object cannot be interpreted as an integer
3
Smita1
Smita1
Smita1
Exception ignored in: <function RemoteMemoryBlock.__del__ at 0x0000021C61266AE8>
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 163, in __del__
self.CleanUp()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 140, in CleanUp
self.CheckGuardSignature()
File "C:\Python37\lib\site-packages\pywinauto\remote_memory_block.py", line 279, in CheckGuardSignature
ctypes.c_void_p(self.mem_address + self.size),
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
```
Thanks
Smita
| open | 2018-09-12T18:30:33Z | 2023-03-16T11:45:39Z | https://github.com/pywinauto/pywinauto/issues/566 | [
"question"
] | smitagodbole | 17 |
koxudaxi/datamodel-code-generator | fastapi | 2,179 | --snake-case-field etc should check ConfigDict on specified base-class | **Is your feature request related to a problem? Please describe.**
Certain options like --snake-case-field and --allow-extra-fields don't check whether the base class has set them in its ConfigDict.
**Describe the solution you'd like**
I am supplying a custom base class via --base-class, whose ConfigDict supplies an alias_generator. Using --snake-case-field causes each field to have a value of Field(..., alias="..."). I'd like --snake-case-field to check for the presence of an alias_generator on the base before adding the per-field alias. This could also apply to options such as --allow-extra-fields: if specified in a custom base class's ConfigDict, don't explicitly add it.
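For illustration, here is the kind of snake_case-to-camelCase converter such an alias_generator might be (a pure-Python sketch; `to_camel` is a hypothetical name, not datamodel-code-generator or pydantic API):

```python
def to_camel(snake: str) -> str:
    # Hypothetical alias generator: "my_field_name" -> "myFieldName".
    head, *rest = snake.split("_")
    return head + "".join(part.title() for part in rest)

print(to_camel("my_field_name"))  # myFieldName
```

With a generator like this configured on the base class, a per-field `Field(..., alias=...)` emitted by --snake-case-field is redundant.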
**Describe alternatives you've considered**
Adding my own custom behaviour
**Additional context**
| closed | 2024-11-21T22:01:59Z | 2024-11-22T18:04:57Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2179 | [] | nickyoung-github | 1 |
scikit-learn/scikit-learn | data-science | 30,413 | Identical branches in the conditional statement in "svm.cpp" | ### Describe the bug
File svm/src/libsvm/svm.cpp, lines 1895-1903 contain the same statements. Is it correct?
### Steps/Code to Reproduce
```cpp
if(fabs(alpha[i]) > 0)
{
    ++nSV;
    if(prob->y[i] > 0)
    {
        if(fabs(alpha[i]) >= si.upper_bound[i])
            ++nBSV;
    }
    else
    {
        if(fabs(alpha[i]) >= si.upper_bound[i])
            ++nBSV;
    }
}
```
### Expected Results
none
### Actual Results
none
### Versions
```shell
1.5.2
```
| closed | 2024-12-05T12:01:22Z | 2025-01-27T14:16:19Z | https://github.com/scikit-learn/scikit-learn/issues/30413 | [
"Bug"
] | ayv19 | 2 |
CorentinJ/Real-Time-Voice-Cloning | python | 631 | Another problem... | 
There is a gap between words... and also a weird noise
The audio: https://drive.google.com/file/d/1Qm2Y_zt2bJJqZVGsB70yA1K0Judkn4Mp/view?usp=sharing
| closed | 2021-01-18T19:22:11Z | 2021-01-22T21:08:07Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/631 | [] | notluke27 | 5 |
akfamily/akshare | data-science | 5656 | AKShare Interface Issue Report | AKShare Interface Issue Report: does stock_zh_a_hist have a limit? | stock_zh_a_hist
akshare version 1.15.96
Sometimes a progress bar appears and sometimes it doesn't; after looping through a dozen or so fetches it starts returning None, but starting a new process makes fetching work again. There seems to be some kind of limit.
"bug"
] | caihua | 1 |
recommenders-team/recommenders | machine-learning | 1,249 | Multinomial VAE - performance | I've noticed some differences in training time and performance between this tensorflow 2 implementation and the original version:
- lower ratings should be removed
> user-to-movie interactions with rating <=3.5 are filtered out
but they are used to generate test_data_te_ratings, val_data_te_ratings that are used to compute the metrics for the model. Could I ask why?
- huge spike in memory consumption: using the 20m dataset, this version will gobble up ~25GB RAM once training starts, while the initial version only uses ~4GB RAM. Is this a bug?
Thank you
| open | 2020-11-25T07:44:32Z | 2021-01-31T21:11:29Z | https://github.com/recommenders-team/recommenders/issues/1249 | [
"help wanted"
] | PaulCristina | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 829 | UVR | ValueError: zero-size array to reduction operation maximum which has no identity
If this error persists, please contact the developers with the error details.
Would you like to open the error log for more details?
| open | 2023-09-29T04:02:58Z | 2023-09-29T04:02:58Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/829 | [] | Fay0857 | 0 |
pallets/flask | flask | 4,520 | Flask(1.0.x) and Jinja2(3.1.1) are not compatible | <!--
This issue tracker is a tool to address bugs in Flask itself. Please use
Pallets Discord or Stack Overflow for questions about your own code.
Replace this comment with a clear outline of what the bug is.
-->
<!--
Describe how to replicate the bug.
Include a minimal reproducible example that demonstrates the bug.
Include the full traceback if there was an exception.
-->
<!--
Describe the expected behavior that should have happened but didn't.
-->
Environment:
- Python version: python3.7
- Flask version: flask 1.0.2
it will throw error like this:
`importerror: cannot import name 'markup' from 'jinja2'`
| closed | 2022-04-05T14:38:13Z | 2022-04-20T00:05:41Z | https://github.com/pallets/flask/issues/4520 | [] | neozhao98 | 2 |
statsmodels/statsmodels | data-science | 9,427 | sm.Logit and sm.GLM do not handle alpha the same way | To get same results when using
```
model = sm.GLM(y, X, family=sm.families.Binomial())
results = model.fit_regularized(alpha=alpha_glm, ...)
```
and
```
model = sm.Logit(y, X)
results = model.fit_regularized(alpha=alpha_logit)
```
one needs to set `alpha_logit = alpha_glm * len(X)` because scaling is not done the same way.
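A tiny helper makes the reported relationship concrete (a hypothetical function that simply restates the scaling above; presumably one implementation penalizes the summed log-likelihood and the other the averaged one):

```python
def logit_alpha_from_glm(alpha_glm: float, n_obs: int) -> float:
    # Rescale a GLM.fit_regularized alpha so that Logit.fit_regularized
    # gives matching results, per the relationship reported above.
    return alpha_glm * n_obs

# With 200 observations, alpha_glm=0.5 corresponds to alpha_logit=100.0:
print(logit_alpha_from_glm(0.5, 200))  # 100.0
```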
This should not be the case. | open | 2024-11-19T15:00:54Z | 2024-11-19T15:21:41Z | https://github.com/statsmodels/statsmodels/issues/9427 | [] | louisabraham | 1 |
mitmproxy/pdoc | api | 192 | Fix simple typo: beloing -> belonging | There is a small typo in pdoc/doc.py.
Should read `belonging` rather than `beloing`.
| closed | 2019-12-14T10:53:52Z | 2020-04-12T21:21:25Z | https://github.com/mitmproxy/pdoc/issues/192 | [] | timgates42 | 0 |
JaidedAI/EasyOCR | deep-learning | 1,204 | Easyocr terminates without any error and the readtext function doesn't work. | While trying to read text from an image, the program terminates without any error and without any output.
Code:
```python
import easyocr
import cv2

img = cv2.imread('abc.jpg')
reader = easyocr.Reader(['hi', 'en'], gpu=False)
results = reader.readtext(img, detail=1, paragraph=False)
print(results)
```
Output:
```
Using CPU. Note: This module is much faster with a GPU
```
| open | 2024-01-25T10:12:37Z | 2025-01-15T02:21:33Z | https://github.com/JaidedAI/EasyOCR/issues/1204 | [] | ArqamNisar | 3 |
numpy/numpy | numpy | 27,654 | BUG: AttributeError: module 'numpy' has no attribute '_SupportsBuffer' | ### Describe the issue:
Trying to use `_SupportsBuffer` for type hinting results in the following cryptic error message: `AttributeError: module 'numpy' has no attribute '_SupportsBuffer'`.
This is strange since hinting succeeds seamlessly in Pylance, but on closer inspection one can realize that `_SupportsBuffer` is defined in a **`__init__.pyi`** stub instead of a regular **`.py`** file.
I'm quite new to type hinting so these kind of subtleties are completely lost on me. Searching the internet has come up empty and no idea of quite where to start working around this.
### Reproduce the code example:
```python
import numpy as np
def read(buffer: np._SupportsBuffer):
return np.frombuffer(buffer)
```
### Error message:
```shell
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\gonca\Projects\github\harp-tech\harp-python\example.py", line 3, in <module>
def read(buffer: np._SupportsBuffer):
^^^^^^^^^^^^^^^^^^
File "C:\Users\gonca\Projects\github\harp-tech\harp-python\.venv\Lib\site-packages\numpy\__init__.py", line 428, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute '_SupportsBuffer'
```
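Since the name exists only in the type stubs, the usual workaround is to guard the import with `typing.TYPE_CHECKING`, so type checkers see it but the interpreter never executes it. A stdlib-only sketch (the guarded module name is a placeholder, not a real package):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated by type checkers (Pylance, mypy) only -- the interpreter
    # skips this block, so stub-only names cause no runtime AttributeError.
    from some_stub_only_module import StubOnlyName  # hypothetical placeholder

def read(buffer: "bytes | bytearray") -> bytes:
    # The annotation is a plain string at runtime, so nothing is looked up.
    return bytes(buffer)

print(read(b"abc"))   # b'abc'
print(TYPE_CHECKING)  # False -- the guarded import never ran
```

For the numpy case, the guarded import would bring in the stub-only name, with the function annotation written as a string.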
### Python and NumPy Versions:
2.1.2
3.11.2 (tags/v3.11.2:878ead1, Feb 7 2023, 16:38:35) [MSC v.1934 64 bit (AMD64)]
### Runtime Environment:
_No response_
### Context for the issue:
Sadly this single type hinting failure is compromising the typing of an entire API since it affects a single function at the very bottom of the stack. At least documenting a workaround or reason for the issue would be extremely valuable. | closed | 2024-10-28T14:16:53Z | 2024-10-30T02:10:28Z | https://github.com/numpy/numpy/issues/27654 | [
"00 - Bug"
] | glopesdev | 4 |
onnx/onnxmltools | scikit-learn | 588 | ValueError: Unable to create node 'TreeEnsembleClassifier' with name='WrappedLightGbmBoosterClassifier'. | I am trying to convert LightGBM model into ONNX.
I am using the following code. It worked a few months back, but now throws the
"ValueError: Unable to create node 'TreeEnsembleClassifier' with name='WrappedLightGbmBoosterClassifier'." error.
Please let me know where this is going wrong.
https://github.com/Bhuvanamitra/LightGBMToONNX/tree/main
I have also attached the error trace in the above GitHub link.
| closed | 2022-10-03T11:50:36Z | 2022-10-13T09:15:32Z | https://github.com/onnx/onnxmltools/issues/588 | [] | Bhuvanamitra | 0 |
lk-geimfari/mimesis | pandas | 1,471 | generic.pyi has incorrect attribute for providers.Cryptographic | # Bug report
## What's wrong
generic.pyi has incorrect attribute for providers.Cryptographic
https://github.com/lk-geimfari/mimesis/blob/346f62471345186164f06dee6b83020a7ac3c8ba/mimesis/providers/generic.pyi#L21
## How it should be
```python
cryptographic: providers.Cryptographic
```
## System information
| closed | 2024-01-23T16:22:18Z | 2024-01-24T08:25:50Z | https://github.com/lk-geimfari/mimesis/issues/1471 | [] | MarcelWilson | 1 |
taverntesting/tavern | pytest | 160 | Module already imported so cannot be rewritten: tavern | Hello. I get this warning at the end of the test. Where could the problem be? Or is there at least some way to ignore warnings? Thank you. | closed | 2018-07-26T14:09:39Z | 2018-07-26T16:00:20Z | https://github.com/taverntesting/tavern/issues/160 | [] | zurek11 | 4
kennethreitz/responder | flask | 50 | Lean on Starlette. | Opening this issue first for discussion rather than just jumping in, because I want to understand where @kennethreitz would like to draw the boundaries on this.
There's currently quite a lot of duplication between Starlette and Responder. Starting with one class to consider there's [`responder.Response`](https://github.com/kennethreitz/responder/blob/master/responder/models.py#L103)...
* `headers` - Exposes a case-insensitive dict. Starlette already has a `request.headers` which is a case-insensitive multidict.
* `method` - Exposes the request method, lowercased. Starlette already exposes this, uppercased, as `request.method`.
* `.full_url`, `.url` - Exposes the str-URL and parsed URL. Starlette already exposes a str-like interface that also allows accessing parsed components on it. `request.url`, `request.url.path`, etc...
* `.params` - Exposes a QueryDict. Starlette already exposes a query param multidict, as `request.query_params`.
(For reference here's Starlette's request docs... https://www.starlette.io/requests/)
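As a rough illustration of the kind of data structure under discussion, here is a minimal case-insensitive mapping (a sketch only, not Starlette's implementation -- its Headers class is a multidict that allows repeated keys rather than a plain dict):

```python
class CaseInsensitiveDict(dict):
    """Minimal sketch of a case-insensitive header mapping.

    Illustrative only -- not Starlette's implementation, which is a
    multidict (repeated keys allowed) rather than a plain dict.
    """

    def __setitem__(self, key, value):
        super().__setitem__(key.lower(), value)

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def __contains__(self, key):
        return super().__contains__(key.lower())

headers = CaseInsensitiveDict()
headers["Content-Type"] = "application/json"
print(headers["content-type"])    # application/json
print("CONTENT-TYPE" in headers)  # True
```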
Ideally we'd try to minimize reproducing too much stuff in differing interfaces. Or at least work towards having the data-structures be Starlette's low-level stuff, while Responder uses whatever high-level `request` interface it wants to stitch those together. (Ie. try to ensure that Starlette is to Responder as Werkzeug is to Flask.)
Initial thoughts? | closed | 2018-10-15T11:19:56Z | 2018-10-16T10:39:00Z | https://github.com/kennethreitz/responder/issues/50 | [] | tomchristie | 1 |
piskvorky/gensim | data-science | 3,360 | KeyedVectors.load_word2vec_format() can't load GoogleNews-vectors-negative300.bin | #### Problem description
KeyedVectors.load_word2vec_format() can't load GoogleNews-vectors-negative300.bin.
This is my code.
```
from gensim.models.keyedvectors import KeyedVectors
gensim_model = KeyedVectors.load_word2vec_format(
'./GoogleNews-vectors-negative300.bin', binary=True, limit=300000)
```
This is the error:
```
Traceback (most recent call last):
File "D:\desktop\2\word2vec.py", line 4, in <module>
gensim_model = KeyedVectors.load_word2vec_format(
File "C:\Users\admin\anaconda3\envs\dl\lib\site-packages\gensim\models\keyedvectors.py", line 1723, in load_word2vec_format
return _load_word2vec_format(
File "C:\Users\admin\anaconda3\envs\dl\lib\site-packages\gensim\models\keyedvectors.py", line 2063, in _load_word2vec_format
vocab_size, vector_size = [int(x) for x in header.split()] # throws for invalid file format
ValueError: not enough values to unpack (expected 2, got 0)
```
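The failing line parses the first line of the file as `<vocab_size> <vector_size>`. A stdlib-only sketch (a hypothetical helper, not gensim code) shows that an empty or corrupted first line reproduces exactly this error -- often the sign of an incomplete download:

```python
import io

def read_w2v_header(fileobj):
    # word2vec binary files start with an ASCII line: "<vocab> <dim>\n"
    header = fileobj.readline()
    # Same parsing as the line shown in the traceback above:
    vocab_size, vector_size = [int(x) for x in header.split()]
    return vocab_size, vector_size

# A healthy file header parses fine:
print(read_w2v_header(io.BytesIO(b"3000000 300\n")))  # (3000000, 300)

# An empty first line reproduces the reported error:
try:
    read_w2v_header(io.BytesIO(b"\n"))
except ValueError as exc:
    print(exc)  # not enough values to unpack (expected 2, got 0)
```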
#### Versions
python 3.9.1
gensim 4.2.0
| closed | 2022-07-01T12:54:46Z | 2022-07-02T06:30:12Z | https://github.com/piskvorky/gensim/issues/3360 | [] | xwz-19990627 | 2 |
Farama-Foundation/Gymnasium | api | 858 | [Question] Want some help in implementing sampling with masking for Box spaces? | ### Question
I am willing to contribute to implementing masking in sampling for `gymnasium.spaces.Box` and was curious if this is something on the roadmap. Also, would very much appreciate any help/advice on how to tackle this.
| closed | 2023-12-22T17:53:45Z | 2023-12-25T20:44:08Z | https://github.com/Farama-Foundation/Gymnasium/issues/858 | [
"question"
] | fracapuano | 3 |
pyppeteer/pyppeteer | automation | 124 | How to solve the ValueError: too many file descriptors in select() error of asyncio in Windows? Excuse me | How to solve the ValueError: too many file descriptors in select() error of asyncio in Windows? Excuse me | open | 2020-06-03T06:23:05Z | 2020-06-03T07:52:38Z | https://github.com/pyppeteer/pyppeteer/issues/124 | [
"waiting for info",
"can't reproduce"
] | pythonlw | 1 |
vvbbnn00/WARP-Clash-API | flask | 46 | Can v2rayA be supported? | I run v2rayA on my home soft router; could this project add support for it? | closed | 2024-02-22T06:31:38Z | 2024-02-28T06:25:41Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/46 | [
"enhancement"
] | sillypy | 6 |
joeyespo/grip | flask | 232 | Suggestion for how to enter field values in ~/.grip/settings.py | Regarding https://github.com/joeyespo/grip#configuration, it may be helpful to explicitly instruct the user to add field values inside single quotes.
Can be done by either linking to https://github.com/joeyespo/grip/blob/master/grip/settings.py for example syntax, or providing an example, e.g.
```
USERNAME = 'thisismyusername'
PASSWORD = '1234512345'
```
This is sort of obvious if you use Python but may not be obvious to everyone. | closed | 2017-03-15T20:09:48Z | 2017-09-24T15:54:04Z | https://github.com/joeyespo/grip/issues/232 | [
"readme"
] | erikr | 2 |
saulpw/visidata | pandas | 1,804 | First sheet seen should be first arg on CLI | `a.json`:
```json
[{"a": 1, "b": 1}]
```
`b.json`:
```json
[{"b": 2, "c": 2}]
```
Then
`vd a.json b.json`.
The 1st table will be `b.json` and the 2nd table will be `a.json`. However, in the command-line arguments, `a.json` is first and `b.json` is second. Why not make the order the same?
"By Design"
] | Freed-Wu | 1 |
InstaPy/InstaPy | automation | 5,923 | You have too few comments, please set at least 10 distinct comments to avoid looking suspicious. |
## Expected Behavior
to just run normally
## Current Behavior
`
ERROR [2020-11-24 18:05:01] [my account name] You have too few comments, please set at least 10 distinct comments to avoid looking suspicious.
`
## Possible Solution (optional)
nothing worked with me
## InstaPy configuration
```py
photo_comments = ['Nice shot! @{}',
'I love your profile! @{}',
'Your feed is an inspiration :thumbsup:',
'Just incredible :open_mouth:',
'What camera did you use @{}?',
'Love your posts @{}',
'Looks awesome @{}',
'Getting inspired by you @{}',
':raised_hands: Yes!',
'I can feel your passion @{} :muscle:',
'niceeeee @{}',
'well this is pretty interesting @{}',
'amazing']
# let's go! :>
with smart_run(session):
# settings
session.set_user_interact(amount=3, randomize=True, percentage=100,
media='Photo')
session.set_relationship_bounds(enabled=True,
potency_ratio=None,
delimit_by_numbers=True,
max_followers=3000,
max_following=900,
min_followers=50,
min_following=50)
session.set_simulation(enabled=False)
session.set_do_like(enabled=True, percentage=100)
session.set_ignore_users([])
session.set_comments(photo_comments)
session.set_do_comment(enabled=True, percentage=35)
session.set_do_follow(enabled=True, percentage=25, times=1)
session.set_ignore_if_contains([])
session.set_action_delays(enabled=True, like=40)
# activity
session.interact_user_followers([], amount=340)
""" Joining Engagement Pods...
"""
session.join_pods(topic='entertainment', engagement_mode='no_comments')
```
## I tried using other templates and nothing worked
I don't know what to do now...
| open | 2020-11-24T16:27:23Z | 2022-03-08T18:07:30Z | https://github.com/InstaPy/InstaPy/issues/5923 | [
"wontfix"
] | AdhamHisham | 7 |
neuml/txtai | nlp | 469 | Add PyTorch ANN Backend | Add ANN backend that uses a PyTorch array. | closed | 2023-05-02T19:41:37Z | 2023-05-03T11:35:17Z | https://github.com/neuml/txtai/issues/469 | [] | davidmezzetti | 0 |
modoboa/modoboa | django | 2,517 | Display domain alarms in new UI | Domain alarms are currently not displayed in the new UI. | closed | 2022-05-18T15:33:31Z | 2022-06-13T07:38:35Z | https://github.com/modoboa/modoboa/issues/2517 | [
"enhancement",
"new-ui"
] | tonioo | 0 |
PrefectHQ/prefect | automation | 17,444 | Show task logs when clicking on task node in UI | ### Describe the current behavior
In the UI, currently if you click on a task node, the log window still only shows the flow run logs.
### Describe the proposed behavior
It would be convenient if the log window would show the logs for the task
### Example Use
_No response_
### Additional context
_No response_ | open | 2025-03-11T14:50:56Z | 2025-03-11T16:26:13Z | https://github.com/PrefectHQ/prefect/issues/17444 | [
"enhancement",
"ui"
] | cBournhonesque | 0 |
PokeAPI/pokeapi | api | 566 | MissingNo. Pokemon 0 |
Pokemon #0: MissingNo.
I know it may not be the point, but including MissingNo. as the glitched Pokémon would be a nice little detail.
| closed | 2021-01-28T19:17:59Z | 2021-02-18T15:21:36Z | https://github.com/PokeAPI/pokeapi/issues/566 | [] | NathanBorchelt | 4 |
schemathesis/schemathesis | pytest | 2,159 | [BUG] curl code samples omit non-printable characters | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
I had a FastAPI endpoint which accidentally leaked the header definition for `X-Forwarded-For` into the OpenAPI docs and that caused a Schemathesis run to fail, emitting a curl sample like this:
```sh
curl -X GET -H 'x-forwarded-for: 0' http://localhost:8000/search/latest-magazines
```
That worked without error when I tried to reproduce, but I knew it was failing based on my service's logs. I added `--code-sample-style=python` and the cause became clear:
```python
requests.get('http://localhost:8000/search/latest-magazines', headers={'x-forwarded-for': '0\x1f'})
```
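A quick stdlib check shows why the curl sample looked complete: printing the value hides the control character, and only `repr` reveals it (pure Python, no Schemathesis involved):

```python
s = "0\x1f"             # the header value from the Python sample above
print(s)                # looks like just "0" -- the \x1f is invisible
print(repr(s))          # '0\x1f'  -- repr makes it visible
print(len(s))           # 2
print(s.isprintable())  # False
```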
### To Reproduce
1. Start a run against a service which will trigger an error on the presence of an unprintable character (in my case, the ASGI runner itself triggered the error so it wasn't something my application knows about when generating the OpenAPI docs but that's obviously not a prerequisite).
Please include a minimal API schema causing this issue:
```json
{
"openapi": "3.1.0",
"paths": {
"/search/latest-magazines": {
"get": {
"summary": "Get a list of the latest magazine issues",
"operationId": "getLatestMagazines",
"parameters": [
{
"name": "x-forwarded-for",
"in": "header",
"required": false,
"schema": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"title": "X-Forwarded-For"
}
}
],
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/SearchResponse"
}
}
}
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ValidationError"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"SearchResponse": {},
"ValidationError": {}
}
}
}
```
### Expected behavior
Ideally the curl examples would include the escaped values, but this introduces shell-specific behaviour since you'd need to use something like `curl -H "X-Forwarded-For: $(printf '\x1f')" …`, so it might be more effective to simply alert the user with a text message that the curl sample is incomplete. In my case, the fact that it printed without a warning and with what appears to be a complete string was somewhat confusing
### Environment
```
- OS: macOS
- Python version: 3.12
- Schemathesis version: 3.27.1
- Spec version: OpenAPI 3.1.0
```
| open | 2024-05-07T17:46:44Z | 2024-05-09T19:13:49Z | https://github.com/schemathesis/schemathesis/issues/2159 | [
"Type: Bug",
"Status: Needs Triage"
] | acdha | 3 |
dgtlmoon/changedetection.io | web-scraping | 1,827 | [feature] add audit log | we need an audit log
- date/time, exception if any, code, result, time, was change detected
this would probably help in the future for debugging/adding new plugins/methods too | open | 2023-09-29T23:00:41Z | 2023-10-27T08:25:33Z | https://github.com/dgtlmoon/changedetection.io/issues/1827 | [
"enhancement"
] | dgtlmoon | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,722 | unable to run react app in react-rest-api | ### Environment
Flask-Appbuilder version: 3.3.3
npm version: 6.14.15
pip freeze output:
apispec==3.3.0
attrs==19.1.0
Babel==2.6.0
backcall==0.2.0
chardet==4.0.0
click==8.0.1
colorama==0.4.1
decorator==5.1.0
defusedxml==0.5.0
dnspython==1.16.0
email-validator==1.0.5
et-xmlfile==1.1.0
Flask==1.1.1
Flask-AppBuilder==3.3.3
Flask-Babel==1.0.0
Flask-Excel==0.0.7
Flask-JWT-Extended==3.18.0
Flask-Login==0.4.1
Flask-OpenID==1.3.0
Flask-SQLAlchemy==2.4.0
Flask-WTF==0.14.2
idna==2.9
importlib-metadata==4.8.1
ipython==7.28.0
itsdangerous==1.1.0
jedi==0.18.0
Jinja2==2.10.1
jsonschema==3.0.1
lml==0.1.0
MarkupSafe==1.1.1
marshmallow==3.5.1
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.0
matplotlib-inline==0.1.3
numpy==1.21.2
openpyxl==3.0.9
pandas==1.3.3
parso==0.8.2
pickleshare==0.7.5
Pillow==8.3.2
prison==0.2.1
prompt-toolkit==3.0.20
pyexcel==0.6.7
pyexcel-io==0.6.4
pyexcel-webio==0.1.4
Pygments==2.10.0
PyJWT==1.7.1
pyrsistent==0.14.11
python-dateutil==2.8.0
python3-openid==3.1.0
pytz==2018.9
PyYAML==5.1
six==1.12.0
SQLAlchemy==1.3.1
SQLAlchemy-Utils==0.33.9
texttable==1.6.4
traitlets==5.1.0
typing-extensions==3.10.0.2
wcwidth==0.2.5
Werkzeug==0.15.5
WTForms==2.2.1
zipp==3.6.0
### Describe the expected results
The react app should start successfully.
### Describe the actual results
After running:
- npm install
- npm start
```
npm start

> react-fab@1.0.0 start D:\Flask-AppBuilder-master\examples\react-rest-api\app\static
> npm start

There might be a problem with the project dependency tree.
It is likely not a bug in Create React App, but something you need to fix locally.
The react-scripts package provided by Create React App requires a dependency:
"babel-eslint": "9.0.0"
```
### Steps to reproduce
running below commands in react-rest-api\app\static
- npm install
- npm start
| closed | 2021-10-23T01:26:55Z | 2022-04-28T14:41:19Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1722 | [
"stale"
] | forestlzj | 1 |
holoviz/colorcet | plotly | 30 | Users Guide Palettes showing non-continuous artifacts | Several of the images in the user's guide are showing strange non-continuous colors within parts of the palette:


Let me know if what I'm saying isn't visually apparent. | closed | 2019-04-18T20:41:02Z | 2019-04-22T15:03:39Z | https://github.com/holoviz/colorcet/issues/30 | [] | flutefreak7 | 3 |
miguelgrinberg/Flask-Migrate | flask | 41 | alembic stuck on database upgrade | I'm running a database migration, and Alembic complains with
`alembic.util.CommandError: Target database is not up to date.`
So I then run a database upgrade:
`python manage.py db upgrade`
Here is the output:
`INFO [alembic.migration] Context impl PostgresqlImpl.`
`INFO [alembic.migration] Will assume transactional DDL.`
`INFO [alembic.migration] Running upgrade 604497a5e5 -> 431e33e6fce, empty message`
The issue is that Alembic never finishes the upgrade (it never exits back to the terminal) and appears stuck on the third step. I have tried several times, even leaving it for several minutes, but it never completes the upgrade.
I have tried running this locally and on the server and it doesn't work (it's not an issue with the connection).
| closed | 2015-01-26T09:06:21Z | 2020-06-17T18:23:31Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/41 | [] | mattgathu | 12 |
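A common cause of the hang described above is another open database session holding a lock on a table the migration needs to `ALTER` (for example, an idle psql or application connection left inside a transaction). The query below is a hedged diagnostic sketch for PostgreSQL — it is generic, not part of Flask-Migrate or Alembic:

```python
# Hedged sketch: run this query (via psql or any client) while the upgrade
# hangs; sessions that are not idle, or that are "idle in transaction", may
# be holding locks that block the migration's DDL statements.
BLOCKING_SESSIONS_SQL = """
SELECT pid, state, left(query, 80) AS query
FROM pg_stat_activity
WHERE datname = current_database()
  AND pid <> pg_backend_pid();
"""

print(BLOCKING_SESSIONS_SQL)
```

If a suspicious session shows up, closing it (or its transaction) typically lets the stuck `db upgrade` proceed immediately.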
jina-ai/serve | machine-learning | 6,225 | The read operation timed out | **Describe the bug**
I am using Dify AI with Jina as the rerank model in Dify. Earlier it was working fine and I changed nothing; suddenly it stopped working and started giving me this error:
"message": "[jina] Bad Request Error, The read operation timed out",
I have added tokens as well, but it's still crashing.
**Environment**
**Screenshots**


| closed | 2025-01-15T10:52:29Z | 2025-01-15T11:08:09Z | https://github.com/jina-ai/serve/issues/6225 | [] | qadeerikram-art | 8 |
lepture/authlib | django | 168 | Support for prepared requests | **Is your feature request related to a problem? Please describe.**
I would like to fire [prepared requests](https://requests.kennethreitz.org/en/master/user/advanced/#prepared-requests) using an `OAuth2Session`, but I currently have to inject an access_token myself; here is an example using the OpenID Connect password flow:
```python
import requests
# import path per current Authlib; very old releases exposed authlib.client instead
from authlib.integrations.requests_client import OAuth2Session

# renamed from `requests` to avoid shadowing the requests module
reqs = [requests.Request('GET', 'https://example.org/api/item', params={'id': i}) for i in range(10)]
with OAuth2Session(client_id='my-client-id', client_secret='my-client-secret', scope=['openid', 'email']) as session:
    session.fetch_token(url='https://example.org/token', username='my-user', password='my-pass')
    for req in reqs:
        session.send(session.prepare_request(req))
```
Requests sent to the API fail with a 401 response status, unless I inject the `session.token_auth` into each request (even if the session itself is responsible for preparing and sending these requests)
**Describe the solution you'd like**
`OAuth2Session` could override the `requests.Session.prepare_request` method and inject the access token. I experimented with 2 different working solutions, but I do not know which one is preferable:
solution 1:
```python
class OAuth2Session(OAuth2Client, Session):
    # ...
    def prepare_request(self, request):
        # Solution 1 : set auth to request before preparing it
        if self.token and request.auth is None:
            request.auth = self.token_auth
        return super(OAuth2Session, self).prepare_request(request)
```
solution 2:
```python
class OAuth2Session(OAuth2Client, Session):
    # ...
    def prepare_request(self, request):
        prepared_request = super(OAuth2Session, self).prepare_request(request)
        # Solution 2 : prepare_auth on prepared_request when a token is there
        if self.token:
            prepared_request.prepare_auth(self.token_auth)
        return prepared_request
```
**Ready to contribute**
Let me know if this additional feature looks reasonable to you, I'll submit a PR
Thank you
| closed | 2019-11-15T22:53:25Z | 2020-05-23T05:37:33Z | https://github.com/lepture/authlib/issues/168 | [] | galak75 | 4 |
sinaptik-ai/pandas-ai | data-science | 1,469 | How to get the output of a large model instead of through the response | How to get the output of a large model instead of through the response | closed | 2024-12-11T02:28:59Z | 2024-12-13T15:38:19Z | https://github.com/sinaptik-ai/pandas-ai/issues/1469 | [] | lwdnxu | 8 |
TheAlgorithms/Python | python | 12,495 | Project Euler prohibits sharing answers to problems 101 and after | ### What would you like to share?
> I learned so much solving problem XXX, so is it okay to publish my solution elsewhere?
>
> It appears that you have answered your own question. There is nothing quite like that "Aha!" moment when you finally beat a problem which you have been working on for some time. It is often through the best of intentions in wishing to share our insights so that others can enjoy that moment too. Sadly, that will rarely be the case for your readers. Real learning is an active process and seeing how it is done is a long way from experiencing that epiphany of discovery. Please do not deny others what you have so richly valued yourself.
>
> However, the rule about sharing solutions outside of Project Euler does not apply to the first one-hundred problems, as long as any discussion clearly aims to instruct methods, not just provide answers, and does not directly threaten to undermine the enjoyment of solving later problems. Problems 1 to 100 provide a wealth of helpful introductory teaching material and if you are able to respect our requirements, then we give permission for those problems and their solutions to be discussed elsewhere.
https://projecteuler.net/about
Therefore, the code for problems 101 and after should be deleted.
### Additional information
_No response_ | open | 2025-01-03T02:26:08Z | 2025-01-03T02:26:08Z | https://github.com/TheAlgorithms/Python/issues/12495 | [
"awaiting triage"
] | hidesato-fujii | 0 |
microsoft/qlib | deep-learning | 1,075 | How to run HIST model by using alpha158 | ## ❓ Questions and Help
Can anyone give me an introduction? I want to use the HIST model on alpha158. | closed | 2022-04-25T12:53:36Z | 2022-08-10T21:01:59Z | https://github.com/microsoft/qlib/issues/1075 | [
"question",
"stale"
] | stockcoder | 4 |
sczhou/CodeFormer | pytorch | 326 | Face Inpainting is not working with custom masked datasets? | Face Inpainting works perfectly with the provided examples, but it is not working at all with custom-made datasets.
As mentioned in earlier comments, the Face Inpainting checkpoint has not been released. Is that still the case as of now?
| open | 2023-12-01T09:00:31Z | 2025-02-13T09:55:22Z | https://github.com/sczhou/CodeFormer/issues/326 | [] | Mr-Nobody-dey | 4 |
csurfer/pyheat | matplotlib | 3 | Feature request: Scrollable heatmap for longer scripts | Longer (hundreds of LOC) scripts will get squished by this. The solution would be breaking the heatmap into manageable chunks and introducing a GUI that would let you scroll through. Preferably with a minimap (a la Sublime Text) that shows where you are and the overall heatmap of the script. | closed | 2017-02-06T14:14:49Z | 2018-12-06T05:25:55Z | https://github.com/csurfer/pyheat/issues/3 | [
"enhancement"
] | dmitrii-ubskii | 4 |
httpie/cli | rest-api | 660 | Ignore stdin when STDIN is closed | I ran into the bug reported in #150 and subsequently worked around in https://github.com/jakubroztocil/httpie/commit/f7b703b4bf365e5ba930649f7ba29901477e62b6 but under different circumstances.
In my case, `http` was being invoked as part of a BsdMakefile script. `bmake` helpfully buffers IO when performing a parallel make (`make -jX`) so that lines from various rule outputs are not intermixed. httpie was detecting this as `stdin` and reporting an error about conflicting input streams (command line and STDIN).
Before looking up #150 or even Googling the issue, I correctly surmised that was the case, and attempted to work around it by explicitly closing `STDIN`:
```sh
http POST "https://neosmart.net/xxxx" authcode="xxxx" version="5.5.2" 0<&-
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,ssl3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: Request body (from stdin or a file) and request data (key=value) cannot be mixed.
```
Is it possible for httpie to distinguish between a closed and redirected stream? | closed | 2018-03-01T23:03:58Z | 2020-05-23T18:52:51Z | https://github.com/httpie/cli/issues/660 | [] | mqudsi | 3 |
flairNLP/flair | nlp | 3,097 | [Bug]: Multitask evaluation (and therefore training) fails on current master. | ### Describe the bug
The Multitask training fails when evaluating, as it now gets a `str` for the label_type, while still expecting a dictionary.
### To Reproduce
```python
from flair.embeddings import TransformerWordEmbeddings
from flair.trainers import ModelTrainer
from flair.models import SequenceTagger, RelationExtractor, MultitaskModel
corpus = create_corpus() # any corpus with multiple tasks annotated should work.
embeddings = TransformerWordEmbeddings(
    model="bert-base-cased",
)

ner_model = SequenceTagger(
    embeddings=embeddings,
    tag_dictionary=corpus.make_label_dictionary("ner", add_unk=False),
    tag_type="ner",
)

rel_model = RelationExtractor(
    embeddings=embeddings,
    label_dictionary=corpus.make_label_dictionary("rel", add_unk=True),
    entity_label_type="ner",
    label_type="rel",
)

model = MultitaskModel([ner_model, rel_model])

trainer = ModelTrainer(model, corpus)
trainer.train("training-path")
```
### Expected behaivor
A training should run trough without any issues and produce a trained model
### Logs and Stack traces
```stacktrace
File "/mnt/raid/bfuchs/.cache/pypoetry/virtualenvs/xDHSzyCG-py3.8/lib/python3.8/site-packages/clearml/binding/hydra_bind.py", line 173, in _patched_task_function
return task_function(a_config, *a_args, **a_kwargs)
File "/mnt/raid/bfuchs/train.py", line 203, in main
results = trainer.fine_tune(
File "/mnt/raid/bfuchs/.cache/pypoetry/virtualenvs/xDHSzyCG-py3.8/lib/python3.8/site-packages/flair/trainers/trainer.py", line 946, in fine_tune
return self.train(
File "/mnt/raid/bfuchs/.cache/pypoetry/virtualenvs/xDHSzyCG-py3.8/lib/python3.8/site-packages/flair/trainers/trainer.py", line 643, in train
dev_eval_result = self.model.evaluate(
File "/mnt/raid/bfuchs/.cache/pypoetry/virtualenvs/xDHSzyCG-py3.8/lib/python3.8/site-packages/flair/models/multitask_model.py", line 160, in evaluate
+ self.label_type.get(task_id)
```
### Screenshots
_No response_
### Additional Context
A simple hotfix is to go to the `evaluate` method in `multitask_model.py` and replace all occurrences of `gold_label_type[task_id]` with `self.tasks[task_id].label_type`. However, this might not be the cleanest solution, as it leads to the `gold_label_type` parameter being ignored.
### Environment
#### Versions:
##### Flair
0.11.3
##### Pytorch
1.12.0+cu113
##### Transformers
4.21.1
#### GPU
True | closed | 2023-02-10T13:45:48Z | 2023-02-14T09:56:12Z | https://github.com/flairNLP/flair/issues/3097 | [
"bug"
] | helpmefindaname | 0 |
lux-org/lux | jupyter | 49 | Lux Errors when `set_index` | ```
import pandas as pd
import lux

df = pd.read_csv("../../lux/data/state_timeseries.csv")
df["Date"] = pd.to_datetime(df["Date"])
df.set_index(["Date"])
```
This is happening because the executor expects a flat table and pre_aggregate is inferred as False for this table. | closed | 2020-07-28T03:49:22Z | 2021-01-09T12:13:45Z | https://github.com/lux-org/lux/issues/49 | [
"bug",
"priority"
] | dorisjlee | 2 |
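A minimal, standalone illustration of the flat-vs-indexed distinction behind the issue above (plain pandas, no Lux involved): `set_index` moves the column out of the flat table, and `reset_index` restores it — a possible workaround until indexed frames are handled.

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2020-03-01", "2020-03-02"]),
    "State": ["CA", "NY"],
    "Cases": [10, 20],
})

indexed = df.set_index(["Date"])   # "Date" is now the index, not a column
assert "Date" not in indexed.columns

flat = indexed.reset_index()       # back to the flat layout the executor expects
assert list(flat.columns) == ["Date", "State", "Cases"]
```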
JoeanAmier/XHS-Downloader | api | 211 | Additions to a note's downloaded content (e.g., live photos that were not downloaded the first time, then selected for a re-download afterwards) are judged as already downloaded and skipped | Second issue: I cannot delete download records; after entering the ID and downloading again, it still shows that a record exists | closed | 2025-01-02T13:43:55Z | 2025-01-04T14:01:52Z | https://github.com/JoeanAmier/XHS-Downloader/issues/211 | [
"功能异常(bug)"
] | peoplechinapower | 1 |
Teemu/pytest-sugar | pytest | 29 | Missing tests | closed | 2014-02-06T16:51:12Z | 2020-08-25T18:29:00Z | https://github.com/Teemu/pytest-sugar/issues/29 | [] | Teemu | 3 | |
flasgger/flasgger | flask | 427 | How to change UI language? | Hi!
How to change UI language?
I commented out:
```
<script src='lang/translator.js' type='text/javascript'></script>
<script src='lang/ru.js' type='text/javascript'></script>
```
in all `index.html` files in the flasgger package folders, but nothing happened. | open | 2020-08-21T08:21:44Z | 2020-08-21T08:21:44Z | https://github.com/flasgger/flasgger/issues/427 | [] | nxbx | 0 |
home-assistant/core | python | 141,044 | After reboot HA, Tado failed to setup: Login failed for unknown reason with status code 403 | ### The problem
After a reboot of HA, I suddenly got notified that the Tado integration failed to set up.
It showed a configure button to configure the fallback method, but this didn't change anything.
I tried to reconfigure by entering my password again; it failed with "unexpected error".
When I check the logs, I first find this:
> 2025-03-21 07:55:29.472 ERROR (MainThread) [homeassistant.util.logging] Exception in <function _register_repairs_platform at 0x7fd54ab43a60> when processing platform 'repairs': (<HomeAssistant RUNNING>, 'tado', <module 'homeassistant.components.tado.repairs' from '/usr/src/homeassistant/homeassistant/components/tado/repairs.py'>)
> Traceback (most recent call last):
> File "/usr/src/homeassistant/homeassistant/components/repairs/issue_handler.py", line 118, in _register_repairs_platform
> raise HomeAssistantError(f"Invalid repairs platform {platform}")
> homeassistant.exceptions.HomeAssistantError: Invalid repairs platform <module 'homeassistant.components.tado.repairs' from '/usr/src/homeassistant/homeassistant/components/tado/repairs.py'>
> s6-rc: info: service legacy-services: stopping
A bit further on, I find the following:
> 2025-03-21 07:58:15.190 ERROR (MainThread) [homeassistant.components.tado.config_flow] Unexpected exception
> Traceback (most recent call last):
> File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 131, in async_step_reconfigure
> await validate_input(self.hass, user_input)
> File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 52, in validate_input
> tado = await hass.async_add_executor_job(
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Tado, data[CONF_USERNAME], data[CONF_PASSWORD]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> )
> ^
> File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
> result = self.fn(*self.args, **self.kwargs)
> File "/usr/local/lib/python3.13/site-packages/PyTado/interface/interface.py", line 46, in __init__
> self._http = Http(
> ~~~~^
> username=username,
> ^^^^^^^^^^^^^^^^^^
> ...<2 lines>...
> debug=debug,
> ^^^^^^^^^^^^
> )
> ^
> File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 153, in __init__
> self._id, self._token_refresh = self._login()
> ~~~~~~~~~~~^^
> File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 333, in _login
> raise TadoException(
> f"Login failed for unknown reason with status code {response.status_code}"
> )
> PyTado.exceptions.TadoException: Login failed for unknown reason with status code 403
> 2025-03-21 07:58:55.684 ERROR (MainThread) [homeassistant.components.tado.config_flow] Unexpected exception
> Traceback (most recent call last):
> File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 131, in async_step_reconfigure
> await validate_input(self.hass, user_input)
> File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 52, in validate_input
> tado = await hass.async_add_executor_job(
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Tado, data[CONF_USERNAME], data[CONF_PASSWORD]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> )
> ^
> File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
> result = self.fn(*self.args, **self.kwargs)
> File "/usr/local/lib/python3.13/site-packages/PyTado/interface/interface.py", line 46, in __init__
> self._http = Http(
> ~~~~^
> username=username,
> ^^^^^^^^^^^^^^^^^^
> ...<2 lines>...
> debug=debug,
> ^^^^^^^^^^^^
> )
> ^
> File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 153, in __init__
> self._id, self._token_refresh = self._login()
> ~~~~~~~~~~~^^
> File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 333, in _login
> raise TadoException(
> f"Login failed for unknown reason with status code {response.status_code}"
> )
> PyTado.exceptions.TadoException: Login failed for unknown reason with status code 403
> 2025-03-21 07:59:02.994 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:00:23.117 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:01:43.644 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:01:49.793 ERROR (MainThread) [homeassistant.components.tado.config_flow] Unexpected exception
> Traceback (most recent call last):
> File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 131, in async_step_reconfigure
> await validate_input(self.hass, user_input)
> File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 52, in validate_input
> tado = await hass.async_add_executor_job(
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Tado, data[CONF_USERNAME], data[CONF_PASSWORD]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> )
> ^
> File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
> result = self.fn(*self.args, **self.kwargs)
> File "/usr/local/lib/python3.13/site-packages/PyTado/interface/interface.py", line 46, in __init__
> self._http = Http(
> ~~~~^
> username=username,
> ^^^^^^^^^^^^^^^^^^
> ...<2 lines>...
> debug=debug,
> ^^^^^^^^^^^^
> )
> ^
> File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 153, in __init__
> self._id, self._token_refresh = self._login()
> ~~~~~~~~~~~^^
> File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 333, in _login
> raise TadoException(
> f"Login failed for unknown reason with status code {response.status_code}"
> )
> PyTado.exceptions.TadoException: Login failed for unknown reason with status code 403
> 2025-03-21 08:03:03.977 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:04:24.316 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:05:44.813 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:07:05.072 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:08:25.370 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:09:45.577 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:09:58.427 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:10:03.572 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:10:14.034 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> 2025-03-21 08:10:19.377 ERROR (MainThread) [homeassistant.components.tado.config_flow] Unexpected exception
> Traceback (most recent call last):
> File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 131, in async_step_reconfigure
> await validate_input(self.hass, user_input)
> File "/usr/src/homeassistant/homeassistant/components/tado/config_flow.py", line 52, in validate_input
> tado = await hass.async_add_executor_job(
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Tado, data[CONF_USERNAME], data[CONF_PASSWORD]
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> )
> ^
> File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
> result = self.fn(*self.args, **self.kwargs)
> File "/usr/local/lib/python3.13/site-packages/PyTado/interface/interface.py", line 46, in __init__
> self._http = Http(
> ~~~~^
> username=username,
> ^^^^^^^^^^^^^^^^^^
> ...<2 lines>...
> debug=debug,
> ^^^^^^^^^^^^
> )
> ^
> File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 153, in __init__
> self._id, self._token_refresh = self._login()
> ~~~~~~~~~~~^^
> File "/usr/local/lib/python3.13/site-packages/PyTado/http.py", line 333, in _login
> raise TadoException(
> f"Login failed for unknown reason with status code {response.status_code}"
> )
> PyTado.exceptions.TadoException: Login failed for unknown reason with status code 403
> 2025-03-21 08:10:34.331 DEBUG (MainThread) [homeassistant.components.tado] Setting up Tado connection
> s6-rc: info: service legacy-services: stopping
> Found 3 non-daemonic threads.
With debug logging enabled, I can see that connection attempts keep being made but are not succeeding.
I can see on my firewall that there is a successful connection to IP 18.239.69.94 (URL auth.tado.com/).
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Tado
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/tado
### Diagnostics information
[home-assistant_tado_2025-03-21T08-00-54.053Z.log](https://github.com/user-attachments/files/19384593/home-assistant_tado_2025-03-21T08-00-54.053Z.log)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-21T08:02:39Z | 2025-03-22T10:36:10Z | https://github.com/home-assistant/core/issues/141044 | [
"integration: tado"
] | Kraganov | 2 |
AutoGPTQ/AutoGPTQ | nlp | 647 | Why doesn't AutoGPTQ quantize lm_head layer? | Is there a paper/article/blog post explaining such decision? Or is it just simply a feature that not being supported at the moment? | open | 2024-04-25T10:56:58Z | 2024-08-21T03:39:01Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/647 | [] | XeonKHJ | 6 |
flasgger/flasgger | rest-api | 595 | Unable to use import in yaml definitions with relative path | Hi,
I'm having trouble using the `import: "some.yaml"` function in yaml files. I'm using a setup where the API descriptions are in separate files and I use Blueprints to define the API itself. I'd like to remove a lot of redundancy in parameters by using `$ref` and it would be great to use the `import` option in the yaml files so I can easily reference the parameters, allowed values across the API.
After a lot of debugging, I concluded that there must be some issue with how `root_path` is used when reading the yaml files and attempting to parse and include the imported yamls.
I ended up changing this line to use `obj.root_path` whenever it exists and fall back to the `get_root_path` function if it does not. With this change, I can now successfully import yaml files using relative paths.
https://github.com/flasgger/flasgger/blob/3c16b776f4848813209f2704b18cba81762ac030/flasgger/utils.py#L637
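A self-contained sketch of the fallback just described (the helper name and call shape are assumptions for illustration; see the linked line for the real context):

```python
def resolve_root_path(obj, get_root_path):
    """Prefer an explicit root_path attribute; otherwise fall back."""
    root = getattr(obj, "root_path", None)
    return root if root is not None else get_root_path(obj)


class WithRoot:
    root_path = "/srv/app/apidocs"

class WithoutRoot:
    pass

assert resolve_root_path(WithRoot(), lambda o: "/fallback") == "/srv/app/apidocs"
assert resolve_root_path(WithoutRoot(), lambda o: "/fallback") == "/fallback"
```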
Do you think it makes sense to add this change to the code, or is there a well-known reason behind the current implementation? Alternatively, I might be missing some configuration that would solve this issue instantly...
Thanks a lot!
--Attila | open | 2023-09-14T12:15:35Z | 2023-09-14T12:15:35Z | https://github.com/flasgger/flasgger/issues/595 | [] | aokros | 0 |
pytest-dev/pytest-qt | pytest | 570 | Fatal Python error: Aborted | I ran into the following error when trying to run pytest with qtbot:
```shell
tests/test_core.py Fatal Python error: Aborted
Current thread 0x00007fa369115740 (most recent call first):
File "/home/username/project/venv/lib/python3.10/site-packages/pytestqt/plugin.py", line 76 in qapp
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/fixtures.py", line 898 in call_fixture_func
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/fixtures.py", line 1140 in pytest_fixture_setup
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/fixtures.py", line 1091 in execute
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/fixtures.py", line 617 in _get_active_fixturedef
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/fixtures.py", line 532 in getfixturevalue
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/fixtures.py", line 697 in _fillfixtures
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/python.py", line 1630 in setup
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/runner.py", line 514 in setup
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/runner.py", line 160 in pytest_runtest_setup
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/runner.py", line 242 in <lambda>
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/runner.py", line 341 in from_call
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/runner.py", line 241 in call_and_report
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/runner.py", line 126 in runtestprotocol
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/runner.py", line 113 in pytest_runtest_protocol
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/main.py", line 362 in pytest_runtestloop
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/main.py", line 337 in _main
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/main.py", line 283 in wrap_session
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/main.py", line 330 in pytest_cmdline_main
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/home/username/project/venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/config/__init__.py", line 175 in main
File "/home/username/project/venv/lib/python3.10/site-packages/_pytest/config/__init__.py", line 201 in console_main
File "/home/username/project/venv/bin/pytest", line 8 in <module>
Extension modules: PyQt5.QtCore, PyQt5.QtGui, PyQt5.QtWidgets, PyQt5.QtTest, numpy._core._multiarray_umath, numpy.linalg._umath_linalg, PyQt5.QtSvg, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing (total: 56)
Aborted (core dumped)
```
<del>It looks somewhat similar to #557, but </del>I'm running on linux / wayland using Python 3.10 in a venv with PyQt5 5.15.11 via [QtPy](https://github.com/spyder-ide/qtpy).
MWE:
```python
def test_case(qtbot):
pass
```
Any idea what might be causing this or how to troubleshoot further? | closed | 2024-10-05T15:43:29Z | 2024-10-05T17:56:16Z | https://github.com/pytest-dev/pytest-qt/issues/570 | [
"question :question:"
] | bimac | 2 |
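One troubleshooting step worth trying for the abort above (a hedged sketch, not a confirmed fix): on headless or Wayland Linux sessions, Qt can abort while the `qapp` fixture constructs the `QApplication`; forcing the offscreen platform plugin in `conftest.py` separates display/plugin problems from test logic.

```python
# conftest.py — a hedged troubleshooting sketch, not a confirmed fix.
# Must run before any Qt import so the platform plugin choice takes effect.
import os

os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")
```

If the abort disappears under `offscreen`, the crash is likely in the platform integration (Wayland plugin, missing display) rather than in pytest-qt itself.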
reloadware/reloadium | flask | 131 | Reloadium support for ARM64 | ## Describe the bug*
Reloading support for the ARM architecture
## Screenshots

I would like to get Reloadium support for the ARM architecture; looking forward to this being resolved ASAP.
"enhancement"
] | raaghulr | 15 |
ContextLab/hypertools | data-visualization | 200 | ImportError: cannot import name UMAP | Hi! I encountered a strange problem:
In [1]: import hypertools
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-48afb9e37bd3> in <module>()
----> 1 import hypertools
/root/Desktop/hypertools/hypertools/hypertools/__init__.py in <module>()
1 #!/usr/bin/env python
2 from .config import __version__
----> 3 from .plot.plot import plot
4 from .tools.load import load
5 from .tools.analyze import analyze
/root/Desktop/hypertools/hypertools/hypertools/plot/plot.py in <module>()
14 from .._shared.helpers import *
15 from .._shared.params import default_params
---> 16 from ..tools.analyze import analyze
17 from ..tools.cluster import cluster as clusterer
18 from ..tools.df2mat import df2mat
/root/Desktop/hypertools/hypertools/hypertools/tools/__init__.py in <module>()
1 #!/usr/bin/env python
2 from .align import align
----> 3 from .reduce import reduce
4 from .missing_inds import missing_inds
5 from .cluster import cluster
/root/Desktop/hypertools/hypertools/hypertools/tools/reduce.py in <module>()
7 from sklearn.decomposition import PCA, FastICA, IncrementalPCA, KernelPCA, FactorAnalysis, TruncatedSVD, SparsePCA, MiniBatchSparsePCA, DictionaryLearning, MiniBatchDictionaryLearning
8 from sklearn.manifold import TSNE, MDS, SpectralEmbedding, LocallyLinearEmbedding, Isomap
----> 9 from umap import UMAP
10 from ..tools.df2mat import df2mat
11 from .._shared.helpers import *
ImportError: cannot import name UMAP
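An editor's aside, not part of the original thread: `from umap import UMAP` failing with this message usually means the `umap-learn` distribution (which provides `umap.UMAP`) is missing or shadowed by a different `umap` package. A minimal guard that turns the failure into an actionable message — the helper name and fix are assumptions for illustration:

```python
# Sketch: guard the optional UMAP import so the failure points at the likely
# fix (assumption: the umap-learn package is what should provide umap.UMAP).
try:
    from umap import UMAP  # provided by the umap-learn distribution
    HAS_UMAP = True
except ImportError:
    UMAP, HAS_UMAP = None, False

def get_reducer(name: str):
    """Return the requested reducer class, or raise a clear ImportError."""
    if name.upper() == "UMAP":
        if not HAS_UMAP:
            raise ImportError(
                "UMAP is unavailable; install it with: pip install umap-learn"
            )
        return UMAP
    raise ValueError(f"unknown reducer: {name}")
```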
Ubuntu 16.04. What other information do you still need?
Thank you! | closed | 2018-04-29T07:10:07Z | 2020-09-30T11:54:20Z | https://github.com/ContextLab/hypertools/issues/200 | [] | RobinYang125 | 9 |
huggingface/datasets | numpy | 6,624 | How to download the laion-coco dataset | The laion coco dataset is not available now. How to download it
https://huggingface.co/datasets/laion/laion-coco | closed | 2024-01-28T03:56:05Z | 2024-02-06T09:43:31Z | https://github.com/huggingface/datasets/issues/6624 | [] | vanpersie32 | 1 |
dunossauro/fastapi-do-zero | pydantic | 282 | Update FastAPI to the latest version | closed | 2025-01-23T20:32:37Z | 2025-01-29T05:36:40Z | https://github.com/dunossauro/fastapi-do-zero/issues/282 | [] | dunossauro | 0 | |
ultralytics/yolov5 | pytorch | 12,540 | YOLOv5 7.0 steps to enable AMP mixed precision | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
### Description
Hello author, are there any steps to enable amp to train the model in YOLOv5-7.0
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2023-12-22T00:52:09Z | 2024-10-20T19:35:11Z | https://github.com/ultralytics/yolov5/issues/12540 | [
"enhancement",
"Stale"
] | yxl23 | 8 |
FlareSolverr/FlareSolverr | api | 478 | [mteamtp] (updating) The cookies provided by FlareSolverr are not valid | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-08-24T01:38:14Z | 2022-08-24T03:07:57Z | https://github.com/FlareSolverr/FlareSolverr/issues/478 | [
"invalid"
] | adamhzu | 1 |
tiangolo/uwsgi-nginx-flask-docker | flask | 267 | ARM64 support | Are there any intentions to support arm64/aarch64 Architectures, if not what are possible alternatives | closed | 2022-01-26T14:29:51Z | 2024-08-29T00:24:27Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/267 | [] | ibraheemalayan | 3 |
public-apis/public-apis | api | 4,193 | Invalid Sites | # Invalid Sites:
| API | Description | Auth | HTTPS | CORS |
|---|---|---|---|---|
| [Studio Ghibli](https://ghibliapi.herokuapp.com/) | Resources from Studio Ghibli films | No | Yes | Yes |
| [ColourLovers](http://www.colourlovers.com/api) | Get various patterns, palettes and images | No | No | Unknown |
| [xColors](https://x-colors.herokuapp.com/) | Generate & convert colors | No | Yes | Yes |
| [0x](https://0x.org/api) | API for querying token and pool stats across various liquidity pools | No | Yes | Yes |
| [Bitcambio](https://nova.bitcambio.com.br/api/v3/docs#a-public) | Get the list of all traded assets in the exchange | No | Yes | Unknown |
 | open | 2025-03-17T17:01:55Z | 2025-03-17T18:28:20Z | https://github.com/public-apis/public-apis/issues/4193 | [] | amandaguan-ag | 0 |
streamlit/streamlit | machine-learning | 10,880 | `st.dataframe` displays wrong indices for pivoted dataframe | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Under some conditions streamlit will display the wrong indices in pivoted / multi indexed dataframes.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10880)
```Python
import streamlit as st
import pandas as pd
df = pd.DataFrame(
{"Index": ["X", "Y", "Z"], "A": [1, 2, 3], "B": [6, 5, 4], "C": [9, 7, 8]}
)
df = df.set_index("Index")
st.dataframe(df)
st.dataframe(df.T.corr())
st.dataframe(df.T.corr().unstack())
print(df.T.corr().unstack())
```
### Steps To Reproduce
1. `streamlit run` the provided code.
2. Look at the result of the last `st.dataframe()` call.
### Expected Behavior
Inner index should be correct.
### Current Behavior
The provided code renders the following tables:

The first two tables are correct, while the last one displays a duplicate of the first index instead of the second one.
In comparison, this is the correct output from the `print()` statement:
```
Index Index
X X 1.000000
Y 0.999597
Z 0.888459
Y X 0.999597
Y 1.000000
Z 0.901127
Z X 0.888459
Y 0.901127
Z 1.000000
dtype: float64
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.2
- Python version: 3.12.9
- Operating System: Linux
- Browser: Google Chrome / Firefox
### Additional Information
The problem does not occur, when the default index is used.
```python
import streamlit as st
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": [6, 5, 4], "C": [9, 7, 8]})
st.dataframe(df.T.corr().unstack())
```
This renders the correct dataframe:

---
This issue is possibly related to https://github.com/streamlit/streamlit/issues/3696 (parsing column names and handling their types) | open | 2025-03-23T15:50:44Z | 2025-03-24T13:49:35Z | https://github.com/streamlit/streamlit/issues/10880 | [
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P3",
"feature:st.data_editor"
] | punsii2 | 2 |
modelscope/data-juicer | data-visualization | 449 | [Feat]: Enhance Unit Test Coverage for Python and CUDA Compatibility | ### Search before continuing 先搜索,再继续
- [X] I have searched the Data-Juicer issues and found no similar feature requests. 我已经搜索了 Data-Juicer 的 issue 列表但是没有发现类似的功能需求。
### Description 描述
To address potential compatibility issues with libraries like `vllm`, `torch`, `numpy`, etc., it's crucial to enhance our test coverage for specific Python and CUDA environments, at least including:
- Python 3.9 and 3.10
- CUDA 11.8 and 12.1
### Use case 使用场景
_No response_
### Additional 额外信息
_No response_
### Are you willing to submit a PR for this feature? 您是否乐意为此功能提交一个 PR?
- [x] Yes I'd like to help by submitting a PR! 是的!我愿意提供帮助并提交一个PR! | open | 2024-10-16T02:08:16Z | 2024-10-16T02:19:38Z | https://github.com/modelscope/data-juicer/issues/449 | [
"enhancement"
] | drcege | 0 |
tensorflow/tensor2tensor | deep-learning | 933 | tpu fails if eval_steps is too high | On TPU, if eval_steps is greater than dev_data_len/batch_size/8 we get an "Out of range: End of sequence" error. This does not happen on CPU/GPU. Perhaps we can fix the docs, and maybe also suggest the appropriate value when it is too high. | open | 2018-07-12T01:06:23Z | 2018-09-26T21:23:21Z | https://github.com/tensorflow/tensor2tensor/issues/933 | [] | eyaler | 1 |
peerchemist/finta | pandas | 71 | Why is the result different depending on the size of input dataframe? | <!-- Describe the issue -->
Thank you for the good library!
I want to ask a question that I am not sure if it is my fault or a library issue.
I calculated MACD with 100rows of 1 minute ohlcv historical data.
First, I calculated it with whole 100 rows, and then I calculated it with last 80 rows.
And I saw that result of MACD and SIGNAL of same row (ex. index 99) of each calculation is different.
For example,
```
df = pd.read_csv("my_ohlcv.csv") #size of my_ohlcv is 100rows
result1 = TA.MACD(df)
df = df.iloc[-80:]
result2 = TA.MACD(df)
print(result1['MACD'].iloc[-1], result2['MACD']iloc[-1])
```
I expected result1.iloc[-1] and result2.iloc[-1] would be same, but the value was a little bit different.
But when I calculated CCI with same way I did with MACD, this problem doesn't happened and 2 results was totally identical.
Why these things happens with MACD? Do I need bigger size of data for accurate MACD calculation? I also tested it with more than 200rows, but result was not identical either.
<!--- What behavior did you expect? -->
<!--- What was the actual behavior? -->
<!--- How reliably can you reproduce the issue, what are the steps to do so? -->
<!-- What version of finta are you using, on which platform? What version of Python and Pandas are you using?-->
I'm using python3.6.9
<!-- Any extra information that might be useful in the debugging process. -->
| closed | 2020-07-05T14:13:10Z | 2020-10-25T10:27:13Z | https://github.com/peerchemist/finta/issues/71 | [] | sukwoo1414 | 2 |
assafelovic/gpt-researcher | automation | 646 | TypeError in Config class initialization and ensure proper type handling and directory validation. Solution Provided! | Issue
The Config class initialization in the GPT Researcher project was encountering a TypeError due to the config_file parameter being passed as a WebSocket object instead of a string representing the file path. This caused the os.path.expanduser function to fail, as it expected a string, bytes, or os.PathLike object. Additionally, there were issues with the type of the similarity_threshold and potential directory validation for doc_path.
Solution
The issue can be resolved by adding a type check for the `config_file` parameter to ensure it is a string before calling `os.path.expanduser`. The `similarity_threshold` can be corrected to use `float` instead of `int`. Furthermore, directory validation can be added for the `doc_path` to ensure the directory exists. The `load_config_file` method can also be modified to handle cases where `self.config_file` is `None`, ensuring that the configuration is loaded correctly without causing type errors. These changes can ensure the proper initialization and functionality of the `Config` class.
I want to fix this bug; can I be assigned this issue? | closed | 2024-07-06T11:49:13Z | 2024-07-06T13:42:12Z | https://github.com/assafelovic/gpt-researcher/issues/646 | [] | ahmad-thewhiz | 1 |
youfou/wxpy | api | 225 | 怎么发图片,看教程没有成功过。 | <img width="486" alt="2017-11-09 3 20 10" src="https://user-images.githubusercontent.com/37678/32593087-83e117aa-c561-11e7-8b8c-8e1d1e33a6dd.png">
| open | 2017-11-09T07:20:40Z | 2017-11-09T07:20:40Z | https://github.com/youfou/wxpy/issues/225 | [] | xiaods | 0 |
huggingface/datasets | machine-learning | 6,465 | `load_dataset` uses out-of-date cache instead of re-downloading a changed dataset | ### Describe the bug
When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset
### Steps to reproduce the bug
Here is a minimal example script to
1. create an initial dataset and upload
2. download it so it is stored in cache
3. change the dataset and re-upload
4. redownload
```python
import time
from datasets import Dataset, DatasetDict, DownloadMode, load_dataset
username = "YOUR_USERNAME_HERE"
initial = Dataset.from_dict({"foo": [1, 2, 3]})
print(f"Intial {initial['foo']}")
initial_ds = DatasetDict({"train": initial})
initial_ds.push_to_hub("test")
time.sleep(1)
download = load_dataset(f"{username}/test", split="train")
changed = download.map(lambda x: {"foo": x["foo"] + 1})
print(f"Changed {changed['foo']}")
changed.push_to_hub("test")
time.sleep(1)
download_again = load_dataset(f"{username}/test", split="train")
print(f"Download Changed {download_again['foo']}")
# >>> gives the out-dated [1,2,3] when it should be changed [2,3,4]
```
The redownloaded dataset should be the changed dataset but it is actually the cached, initial dataset. Force-redownloading gives the correct dataset
```python
download_again_force = load_dataset(f"{username}/test", split="train", download_mode=DownloadMode.FORCE_REDOWNLOAD)
print(f"Force Download Changed {download_again_force['foo']}")
# >>> [2,3,4]
```
### Expected behavior
I assumed there should be some sort of hashing that should check for changes in the dataset and re-download if the hashes don't match
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.17
- Python version: 3.8.17
- `huggingface_hub` version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0 | open | 2023-12-02T21:35:17Z | 2024-08-20T08:32:11Z | https://github.com/huggingface/datasets/issues/6465 | [] | mnoukhov | 2 |
benbusby/whoogle-search | flask | 289 | [FEATURE] Let the iBangs !i, !v, and !n go to Whoogle's own image, video and news search | **Describe the feature you'd like to see added**
Right now the iBangs !i, !v, and !n goes to duckduckgo. This is not bad but i think it should be better to integrate them with whoogle instead. And besides, image, video and news search isn't available with duckduckgo when javascript is off. | closed | 2021-04-16T11:35:01Z | 2024-04-19T18:26:43Z | https://github.com/benbusby/whoogle-search/issues/289 | [
"enhancement"
] | alkarkhi | 3 |
tensorpack/tensorpack | tensorflow | 1,226 | Training FasterRCNN using scientific dataset |
I have a special dataset about extreme weather; each image has 16 channels and high resolution (768×1152), see https://extremeweatherdataset.github.io/.
My questions are:
1. Since there is no pre-trained model of the weather data, can I train the whole model including the base network?
2. The FasterRCNN code is implemented with COCO dataset. Just to make the code work, all I need to do is transform our dataset to COCO format? Do I need to modify the model because of the specificity of our dataset?
Thanks a lot if anyone can help.
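On question 2, a minimal sketch of the standard COCO detection annotation layout (an editor's illustration of the generic format, not code from this repo — the filename, category name, and box values are placeholders):

```python
import json

# Hypothetical single-image, single-box example of COCO's detection JSON.
x, y, w, h = 100.0, 200.0, 50.0, 40.0
coco = {
    "images": [
        {"id": 1, "file_name": "frame_0001.png", "height": 768, "width": 1152}
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [x, y, w, h],   # [top-left x, top-left y, width, height]
         "area": w * h, "iscrowd": 0}
    ],
    "categories": [{"id": 1, "name": "extreme_weather_event"}],
}
print(json.dumps(coco)[:60])
```

Note that the multi-channel images are a separate concern: COCO only describes boxes and categories, so the 16-channel arrays themselves can be stored however the data loader expects.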
| closed | 2019-06-05T08:15:07Z | 2019-06-18T08:38:58Z | https://github.com/tensorpack/tensorpack/issues/1226 | [
"examples"
] | jiangzihanict | 10 |
dpgaspar/Flask-AppBuilder | rest-api | 1,719 | Can't display list and direct_chart in class MultipleView(BaseView): | ```python
class ProceduresPatientsTimeChartView(DirectChartView):
    datamodel = SQLAInterface(ProceduresPatients)
    chart_type = "ColumnChart"
    direct_columns = {"Draw": ("Data", "id")}
    base_order = ("Data", "asc")


class MultipleViewsExp(MultipleView):
    views = [ProceduresPatientsTimeChartView]
```

This raises:

```
TypeError: 'NoneType' object is not iterable
```

The traceback stops at:

```python
value_columns = self.datamodel.get_values(lst, list(direct))
```
Tell me how to fix it, please. | closed | 2021-10-19T08:24:51Z | 2022-04-28T14:41:18Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1719 | [
"stale"
] | vash-sa | 1 |
biolab/orange3 | numpy | 6,447 | Widget search | <!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
Hello all,
A small quick win: I noticed the widgets are not alphabetically ordered. I understand this may be to put the most used ones first (or something like that). However, this means I lose time finding the widget I want to use.
**What's your proposed solution?**
So, what I propose is a "search" space over or at the beginning of the widget list.

**Are there any alternative solutions?**
No
Best regards,
Simon | closed | 2023-05-15T07:28:01Z | 2023-07-04T09:13:09Z | https://github.com/biolab/orange3/issues/6447 | [] | simonaubertbd | 6 |
mirumee/ariadne | api | 179 | Move documentation to separate repo and host it on gh-pages | Sphinx has served us well, but we fell its too limiting for what we have planned for Ariadne.
We've decided to migrate the site to the [Docusaurus](https://docusaurus.io) and keep it on separate repo. | closed | 2019-05-20T11:37:14Z | 2019-05-23T14:05:30Z | https://github.com/mirumee/ariadne/issues/179 | [
"docs"
] | rafalp | 0 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 178 | what is "old_face_label_folder". | I found it seem need a label? but how to generate the label is not mentioned in Readme and Jouranl paper.
Line 26 of `Face_Enhancement/data/face_dataset.py`:

```python
image_path = os.path.join(opt.dataroot, opt.old_face_folder)
label_path = os.path.join(opt.dataroot, opt.old_face_label_folder)
```
 | closed | 2021-06-17T11:58:07Z | 2021-07-05T04:53:53Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/178 | [] | geshihuazhong | 1 |
piskvorky/gensim | data-science | 3,096 | Segfault when training FastText model | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
Some hyper-parameter configurations for FastText seem to produce segfaults.
#### Steps/code/corpus to reproduce
Data & MRE available [here](https://github.com/TimotheeMickus/temp-git)
MRE:
```python
import datetime
import mmap
import multiprocessing
import gensim
class Restartable():
"""make generator 'restartable' for gensim"""
def __init__(self, g):
self.g = g
def __iter__(self):
print(datetime.datetime.now(), "iterating over corpus")
yield from self.g()
def get_sentences_in_file(filepath):
"""yield sentences in file"""
with open(filepath, "r+b") as istr:
with mmap.mmap(istr.fileno(), 0, access=mmap.ACCESS_READ) as mmfh:
yield from iter(mmfh.readline, b"")
def read_tokens():
"""yield tokens per sentence in directory"""
yield from map(lambda s: s.decode("utf-8").strip().split(), get_sentences_in_file("mini/corpus.txt"))
h_params = {
'alpha': 1.0,
'epochs': 11,
'max_n': 5,
'min_alpha': 0.00018265957307048248,
'min_count': 10,
'min_n': 3,
'negative': 18,
'ns_exponent': 3.972629499333897e-07,
'sample': 0.005188593539343843,
'window': 29
}
data = Restartable(read_tokens)
m = gensim.models.fasttext.FastText(vector_size=256, workers=multiprocessing.cpu_count(), **h_params)
m.build_vocab(corpus_iterable=data)
m.train(data, total_examples=m.corpus_count, epochs=m.epochs)
# segfault at the end of the first epoch
print(datetime.datetime.now(), "passed")
```
The above does not segfault with other configurations. Similar issues have been encountered with different hyper parameter configurations, the configuration in this MRE is just the first one I managed to consistently reproduce the segfault with.
#### Versions
```
>>> import platform; print(platform.platform())
Linux-5.4.0-70-generic-x86_64-with-glibc2.29
>>> import sys; print("Python", sys.version)
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.20.0
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.6.0
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.0.0beta
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
| open | 2021-03-30T15:27:11Z | 2021-04-01T22:13:43Z | https://github.com/piskvorky/gensim/issues/3096 | [
"bug",
"impact HIGH",
"reach MEDIUM"
] | TimotheeMickus | 9 |
Anjok07/ultimatevocalremovergui | pytorch | 1,720 | KeyError / Traceback Error / UVR.py & separate.py | I keep receiving the following error on every model I've tried so far:
```
Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
KeyError: "'All Stems'"
Traceback Error: "
File "UVR.py", line 6860, in process_start
File "separate.py", line 701, in seperate
"
Error Time Stamp [2025-01-28 19:18:35]
``` | closed | 2025-01-29T01:21:56Z | 2025-01-29T21:56:08Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1720 | [] | scadams | 1 |
tensorflow/tensor2tensor | deep-learning | 1,326 | got same translate result | ### Description
Hey, I exported the model using the command provided, but when I predict on a new query it always outputs the same thing. I tried to check a few things and found that `response = stub.Predict(request, timeout_secs)` (from serving_utils.py) always returns the same value. Any idea what's gone wrong here?
>> Hallo (my input is this)
The output is: In fact, the government does not want to protect itself or to do so. It is only a question, bigger than any other, that is necessary.
No matter what my input is, I always get the same result.
...
### Environment information
```
OS: <your answer here>
mesh-tensorflow==0.0.5
tensor2tensor==1.11.0
tensorboard==1.12.0
tensorflow==1.12.0
tensorflow-gpu==1.10.1
tensorflow-hub==0.1.1
tensorflow-metadata==0.9.0
tensorflow-probability==0.5.0
tensorflow-serving-api==1.12.0
tensorflow-tensorboard==1.5.1
$ pip freeze | grep tensor
# your output here
$ python -V
# your output here
```
Python 3.6.7 :: Anaconda, Inc.
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| open | 2018-12-24T09:12:26Z | 2019-06-25T09:21:19Z | https://github.com/tensorflow/tensor2tensor/issues/1326 | [] | xiaoxiong1988 | 1 |
ivy-llc/ivy | tensorflow | 28,509 | Fix Frontend Failing Test: numpy - math.paddle.conj | To-do List: https://github.com/unifyai/ivy/issues/27497 | closed | 2024-03-08T11:05:35Z | 2024-03-14T21:30:36Z | https://github.com/ivy-llc/ivy/issues/28509 | [
"Sub Task"
] | ZJay07 | 0 |
guohongze/adminset | django | 30 | The method for getting the CPU count needs to be fixed | install/client/adminset_agent.py
Change:

```python
cpu_cores = {"physical": psutil.cpu_count(logical=False) if psutil.cpu_count(logical=False) else 0, "logical": psutil.cpu_count()}
```
 | closed | 2017-11-25T05:36:45Z | 2018-02-17T11:46:21Z | https://github.com/guohongze/adminset/issues/30 | [] | fjibj | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,285 | Failed to build webrtcvad when installing a package | Hi, sorry to bother you, but I had an issue trying to install a package with pip. I don't know if it's an error on my end, but I haven't been able to solve it.
This is what I was trying to install and the error that appeared
```
pip install ffsubsync
Defaulting to user installation because normal site-packages is not writeable
Collecting ffsubsync
Using cached ffsubsync-0.4.25-py2.py3-none-any.whl (36 kB)
Collecting auditok==0.1.5 (from ffsubsync)
Using cached auditok-0.1.5-py3-none-any.whl
Requirement already satisfied: charset-normalizer in /usr/lib/python3.12/site-packages (from ffsubsync) (3.2.0)
Collecting faust-cchardet (from ffsubsync)
Obtaining dependency information for faust-cchardet from https://files.pythonhosted.org/packages/81/33/a705c39e89b7ca7564b90c1a4ab4d4c2c0534cde911191d87a89b87b6c60/faust_cchardet-2.1.19-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
Using cached faust_cchardet-2.1.19-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.3 kB)
Collecting ffmpeg-python (from ffsubsync)
Using cached ffmpeg_python-0.2.0-py3-none-any.whl (25 kB)
Collecting future>=0.18.2 (from ffsubsync)
Using cached future-0.18.3-py3-none-any.whl
Collecting numpy>=1.12.0 (from ffsubsync)
Obtaining dependency information for numpy>=1.12.0 from https://files.pythonhosted.org/packages/c4/c6/f971d43a272e574c21707c64f12730c390f2bfa6426185fbdf0265a63cbd/numpy-1.26.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
Using cached numpy-1.26.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting rich (from ffsubsync)
Obtaining dependency information for rich from https://files.pythonhosted.org/packages/be/be/1520178fa01eabe014b16e72a952b9f900631142ccd03dc36cf93e30c1ce/rich-13.7.0-py3-none-any.whl.metadata
Using cached rich-13.7.0-py3-none-any.whl.metadata (18 kB)
Requirement already satisfied: six in /usr/lib/python3.12/site-packages (from ffsubsync) (1.16.0)
Collecting srt>=3.0.0 (from ffsubsync)
Using cached srt-3.5.3-py3-none-any.whl
Collecting tqdm (from ffsubsync)
Obtaining dependency information for tqdm from https://files.pythonhosted.org/packages/00/e5/f12a80907d0884e6dff9c16d0c0114d81b8cd07dc3ae54c5e962cc83037e/tqdm-4.66.1-py3-none-any.whl.metadata
Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting typing-extensions (from ffsubsync)
Obtaining dependency information for typing-extensions from https://files.pythonhosted.org/packages/b7/f4/6a90020cd2d93349b442bfcb657d0dc91eee65491600b2cb1d388bc98e6b/typing_extensions-4.9.0-py3-none-any.whl.metadata
Using cached typing_extensions-4.9.0-py3-none-any.whl.metadata (3.0 kB)
Collecting webrtcvad (from ffsubsync)
Using cached webrtcvad-2.0.10.tar.gz (66 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting chardet (from ffsubsync)
Obtaining dependency information for chardet from https://files.pythonhosted.org/packages/38/6f/f5fbc992a329ee4e0f288c1fe0e2ad9485ed064cac731ed2fe47dcc38cbf/chardet-5.2.0-py3-none-any.whl.metadata
Using cached chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)
Collecting pysubs2>=1.2.0 (from ffsubsync)
Using cached pysubs2-1.6.1-py3-none-any.whl (35 kB)
Collecting markdown-it-py>=2.2.0 (from rich->ffsubsync)
Obtaining dependency information for markdown-it-py>=2.2.0 from https://files.pythonhosted.org/packages/42/d7/1ec15b46af6af88f19b8e5ffea08fa375d433c998b8a7639e76935c14f1f/markdown_it_py-3.0.0-py3-none-any.whl.metadata
Using cached markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB)
Collecting pygments<3.0.0,>=2.13.0 (from rich->ffsubsync)
Obtaining dependency information for pygments<3.0.0,>=2.13.0 from https://files.pythonhosted.org/packages/97/9c/372fef8377a6e340b1704768d20daaded98bf13282b5327beb2e2fe2c7ef/pygments-2.17.2-py3-none-any.whl.metadata
Using cached pygments-2.17.2-py3-none-any.whl.metadata (2.6 kB)
Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich->ffsubsync)
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Using cached numpy-1.26.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.0 MB)
Using cached chardet-5.2.0-py3-none-any.whl (199 kB)
Using cached faust_cchardet-2.1.19-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (317 kB)
Using cached rich-13.7.0-py3-none-any.whl (240 kB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
Using cached pygments-2.17.2-py3-none-any.whl (1.2 MB)
Building wheels for collected packages: webrtcvad
Building wheel for webrtcvad (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for webrtcvad (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-312
copying webrtcvad.py -> build/lib.linux-x86_64-cpython-312
running build_ext
building '_webrtcvad' extension
creating build/temp.linux-x86_64-cpython-312
creating build/temp.linux-x86_64-cpython-312/cbits
creating build/temp.linux-x86_64-cpython-312/cbits/webrtc
creating build/temp.linux-x86_64-cpython-312/cbits/webrtc/common_audio
creating build/temp.linux-x86_64-cpython-312/cbits/webrtc/common_audio/signal_processing
creating build/temp.linux-x86_64-cpython-312/cbits/webrtc/common_audio/vad
gcc -fno-strict-overflow -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -fexceptions -fcf-protection -fexceptions -fcf-protection -fexceptions -fcf-protection -fPIC -DWEBRTC_POSIX -Icbits -I/usr/include/python3.12 -c cbits/pywebrtcvad.c -o build/temp.linux-x86_64-cpython-312/cbits/pywebrtcvad.o
cbits/pywebrtcvad.c:1:10: fatal error: Python.h: No such file or directory
1 | #include <Python.h>
| ^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for webrtcvad
Failed to build webrtcvad
ERROR: Could not build wheels for webrtcvad, which is required to install pyproject.toml-based projects
```
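An editor's note on the log above: the decisive line is `fatal error: Python.h: No such file or directory` — building a C extension like webrtcvad needs the CPython development headers, which on Fedora ship in the `python3-devel` package (`sudo dnf install python3-devel`). A small diagnostic to confirm whether the headers are present for the interpreter pip is using:

```python
import os
import sysconfig

# Check whether this interpreter's C headers are installed; if Python.h is
# missing, compiling C extensions such as webrtcvad fails exactly as above.
include_dir = sysconfig.get_paths()["include"]
has_headers = os.path.isfile(os.path.join(include_dir, "Python.h"))
print(include_dir, "->", "headers found" if has_headers else "install python3-devel")
```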
OS: Fedora Linux (KDE Desktop environment) | closed | 2024-01-15T01:51:04Z | 2024-02-07T19:13:26Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1285 | [] | emanuelps2708 | 3 |
kornia/kornia | computer-vision | 2,496 | Documentation on SOLD2 config parameters | ## 📚 Documentation
SOLD2 line segment detection results vary drastically with respect to hyper-parameters/configs
```
default_cfg: Dict[str, Any] = {
'backbone_cfg': {'input_channel': 1, 'depth': 4, 'num_stacks': 2, 'num_blocks': 1, 'num_classes': 5},
'use_descriptor': True,
'grid_size': 8,
'keep_border_valid': True,
'detection_thresh': 0.0153846, # = 1/65: threshold of junction detection
'max_num_junctions': 500, # maximum number of junctions per image
'line_detector_cfg': {
'detect_thresh': 0.5,
'num_samples': 64,
'inlier_thresh': 0.99,
'use_candidate_suppression': True,
'nms_dist_tolerance': 3.0,
'use_heatmap_refinement': True,
'heatmap_refine_cfg': {
'mode': "local",
'ratio': 0.2,
'valid_thresh': 0.001,
'num_blocks': 20,
'overlap_ratio': 0.5,
},
'use_junction_refinement': True,
'junction_refine_cfg': {'num_perturbs': 9, 'perturb_interval': 0.25},
},
'line_matcher_cfg': {
'cross_check': True,
'num_samples': 5,
'min_dist_pts': 8,
'top_k_candidates': 10,
'grid_size': 4,
},
}
```
Can we have descriptions of what these mean, possibly with some examples?
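One practical pattern while the parameters are undocumented (an editor's sketch — the helper below is plain Python, not part of kornia's API): deep-merge a small dict of overrides into `default_cfg` instead of retyping the whole nested structure.

```python
import copy

def override(cfg: dict, updates: dict) -> dict:
    """Return a deep copy of cfg with nested keys from updates applied."""
    out = copy.deepcopy(cfg)
    for key, val in updates.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = override(out[key], val)
        else:
            out[key] = val
    return out

# Small excerpt of default_cfg above, just for illustration
default_cfg = {
    "max_num_junctions": 500,
    "line_detector_cfg": {"detect_thresh": 0.5, "num_samples": 64},
}
cfg = override(default_cfg, {"line_detector_cfg": {"detect_thresh": 0.7}})
# cfg keeps num_samples=64, and default_cfg itself is left untouched
```

This makes it easy to experiment with one threshold at a time while keeping the rest of the defaults intact.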
| open | 2023-08-02T21:12:11Z | 2023-10-10T22:52:09Z | https://github.com/kornia/kornia/issues/2496 | [
"docs :books:",
"code heatlh :pill:"
] | ogencoglu | 1 |
Nemo2011/bilibili-api | api | 687 | [BUG] Cookie refresh throws an error | **Python version:** 3.11.5
**Module version:** latest dev branch
**Environment:** Windows
<!-- Be sure to provide the module version and make sure it is the latest -->
---
执行刷新函数的时候,抛出了异常
```
Traceback (most recent call last):
File "e:\Project\bili\bili.py", line 90, in <module>
check_login()
File "e:\Project\bili\bili.py", line 43, in check_login
sync(credential.refresh())
File "C:\Users\Asaka\.conda\envs\bili\Lib\site-packages\bilibili_api_dev-17.0.0-py3.11.egg\bilibili_api\utils\sync.py", line 33, in sync
return loop.run_until_complete(coroutine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Asaka\.conda\envs\bili\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\Asaka\.conda\envs\bili\Lib\site-packages\bilibili_api_dev-17.0.0-py3.11.egg\bilibili_api\credential.py", line 51, in refresh
new_cred: Credential = await refresh_cookies(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Asaka\.conda\envs\bili\Lib\site-packages\bilibili_api_dev-17.0.0-py3.11.egg\bilibili_api\credential.py", line 144, in refresh_cookies
refresh_csrf = await get_refresh_csrf(credential)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Asaka\.conda\envs\bili\Lib\site-packages\bilibili_api_dev-17.0.0-py3.11.egg\bilibili_api\credential.py", line 122, in get_refresh_csrf
raise Exception("correspondPath 过期或错误。")
Exception: correspondPath 过期或错误。
```
Has Bilibili updated its API? | closed | 2024-02-20T01:38:35Z | 2024-03-15T14:21:07Z | https://github.com/Nemo2011/bilibili-api/issues/687 | [
"bug",
"need debug info"
] | gongdananyou | 13 |
amidaware/tacticalrmm | django | 1,092 | When first loading the dashboard, no agents are shown in the list | **Server Info (please complete the following information):**
- OS: Ubuntu 20.04
- Browser: Google Chrome
- RMM Version (as shown in top left of web UI): v.0.13.1
**Installation Method:**
- [ ] Standard
- [x] Docker
**Agent Info (please complete the following information):**
- Agent version (as shown in the 'Summary' tab of the agent from web UI): Agent v2.0.3
- Agent OS: Windows 10 Pro, 64 bit v20H2
**Describe the bug**
When first loading the dashboard, no agents are shown in the list. When clicking on "Workstations", "Servers", "Mixed", "All Clients" or a client, this list refreshes correctly.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'rmm.DOMAIN.EXTENSION'
2. See that no agents are shown
**Expected behavior**
A correct list of agents being displayed.

_State: closed · Created: 2022-04-24T11:00:02Z · Updated: 2022-04-25T00:49:34Z · Labels: bug, fixed · Author: JoachimVeulemans · Comments: 5 · URL: https://github.com/amidaware/tacticalrmm/issues/1092_
## KevinMusgrave/pytorch-metric-learning #45: Make calculate_accuracies class based

The `calculate_accuracies` module should be converted to class form so that users can extend it, override functions, add their own accuracy metrics, etc.

_State: closed · Created: 2020-04-11T09:31:06Z · Updated: 2020-04-12T07:39:08Z · Labels: enhancement · Author: KevinMusgrave · Comments: 1 · URL: https://github.com/KevinMusgrave/pytorch-metric-learning/issues/45_
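For illustration, a class-based design along the lines the issue asks for could look like the sketch below. All names here (`AccuracyCalculator`, the `calculate_` method-name prefix, `get_accuracy`) are hypothetical and only demonstrate the extension pattern, not the library's actual API:

```python
class AccuracyCalculator:
    """Base class: metrics are discovered by the ``calculate_`` method-name prefix."""

    METHOD_PREFIX = "calculate_"

    def metric_names(self):
        # Every method named calculate_<name> is treated as a metric.
        return [
            attr[len(self.METHOD_PREFIX):]
            for attr in dir(self)
            if attr.startswith(self.METHOD_PREFIX)
        ]

    def get_accuracy(self, query_labels, retrieved_labels):
        # Run all discovered metrics and return {metric_name: value}.
        return {
            name: getattr(self, self.METHOD_PREFIX + name)(query_labels, retrieved_labels)
            for name in self.metric_names()
        }

    def calculate_precision_at_1(self, query_labels, retrieved_labels):
        # Fraction of queries whose top retrieved item carries the query's label.
        hits = sum(q == r[0] for q, r in zip(query_labels, retrieved_labels))
        return hits / len(query_labels)


class MyCalculator(AccuracyCalculator):
    # Users extend the base class; new calculate_* methods are picked up automatically.
    def calculate_mean_retrieved(self, query_labels, retrieved_labels):
        return sum(len(r) for r in retrieved_labels) / len(retrieved_labels)


print(MyCalculator().get_accuracy([0, 1], [[0, 2], [2, 1]]))
# → {'mean_retrieved': 2.0, 'precision_at_1': 0.5}
```

With this pattern, adding a custom metric is just a matter of defining another `calculate_<name>` method in a subclass; `get_accuracy` finds it via `dir()` without any registration step.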
## cupy/cupy #8155: Incomplete type when compiling a `wmma::fragment` type with jitify

I'm not sure whether this is an issue with CuPy, but I am running into another problem. It works when `jitify=False`, but as written I just get an incomplete-type error. Compiling just the kernel code below with NVCC does work, even though IntelliSense is confused when the file is loaded in Visual Studio.
```py
import cupy as cp
kernel_code = r'''
#include <mma.h>
using namespace nvcuda;
extern "C" __global__ void mykernel() {
wmma::fragment<wmma::accumulator, 16, 16, 8, float> v16;
return ;
}
'''
module = cp.RawModule(code=kernel_code, backend='nvrtc', jitify=True, enable_cooperative_groups=True)
module.get_function('mykernel')((1,),(1,),())
```
```
PS D:\Users\Marko\Source\Repos\The Spiral Language\Spiral Compilation Tests> d:; cd 'd:\Users\Marko\Source\Repos\The Spiral Language\Spiral Compilation Tests'; & 'C:\Users\mrakg\AppData\Local\Programs\Python\Python311\python.exe' 'c:\Users\mrakg\.vscode\extensions\ms-python.python-2023.22.1\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher' '53548' '--' 'D:\Users\Marko\Source\Repos\The Spiral Language\Spiral Compilation Tests\cuda_experiments\tensor1\bug_mma.py'
---------------------------------------------------
--- JIT compile log for C:\Users\mrakg\AppData\Local\Temp\tmpu7gc6bb7\2983982998ae8febc0583032d61861b2579068a4.cubin.cu ---
---------------------------------------------------
C:\Users\mrakg\AppData\Local\Temp\tmpu7gc6bb7\2983982998ae8febc0583032d61861b2579068a4.cubin.cu(7): error: incomplete type is not allowed
wmma::fragment<wmma::accumulator, 16, 16, 8, float> v16;
^
1 error detected in the compilation of "C:\Users\mrakg\AppData\Local\Temp\tmpu7gc6bb7\2983982998ae8febc0583032d61861b2579068a4.cubin.cu".
---------------------------------------------------
Traceback (most recent call last):
File "C:\Users\mrakg\AppData\Local\Programs\Python\Python311\Lib\site-packages\cupy\cuda\compiler.py", line 250, in _jitify_prep
name, options, headers, include_names = jitify.jitify(source, options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "cupy\cuda\jitify.pyx", line 240, in cupy.cuda.jitify.jitify
File "cupy\cuda\jitify.pyx", line 264, in cupy.cuda.jitify.jitify
RuntimeError: Runtime compilation failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Users\Marko\Source\Repos\The Spiral Language\Spiral Compilation Tests\cuda_experiments\tensor1\bug_mma.py", line 12, in <module>
module.get_function('mykernel')((1,),(1,),())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "cupy\_core\raw.pyx", line 487, in cupy._core.raw.RawModule.get_function
File "cupy\_core\raw.pyx", line 100, in cupy._core.raw.RawKernel.kernel.__get__
File "cupy\_core\raw.pyx", line 117, in cupy._core.raw.RawKernel._kernel
File "cupy\_util.pyx", line 64, in cupy._util.memoize.decorator.ret
File "cupy\_core\raw.pyx", line 538, in cupy._core.raw._get_raw_module
File "cupy\_core\core.pyx", line 2236, in cupy._core.core.compile_with_cache
File "cupy\_core\core.pyx", line 2254, in cupy._core.core.compile_with_cache
File "C:\Users\mrakg\AppData\Local\Programs\Python\Python311\Lib\site-packages\cupy\cuda\compiler.py", line 484, in _compile_module_with_cache
return _compile_with_cache_cuda(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mrakg\AppData\Local\Programs\Python\Python311\Lib\site-packages\cupy\cuda\compiler.py", line 562, in _compile_with_cache_cuda
ptx, mapping = compile_using_nvrtc(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mrakg\AppData\Local\Programs\Python\Python311\Lib\site-packages\cupy\cuda\compiler.py", line 319, in compile_using_nvrtc
return _compile(source, options, cu_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mrakg\AppData\Local\Programs\Python\Python311\Lib\site-packages\cupy\cuda\compiler.py", line 284, in _compile
options, headers, include_names = _jitify_prep(
^^^^^^^^^^^^^
File "C:\Users\mrakg\AppData\Local\Programs\Python\Python311\Lib\site-packages\cupy\cuda\compiler.py", line 257, in _jitify_prep
raise JitifyException(str(cex)) from e
cupy.cuda.compiler.JitifyException: Runtime compilation failed
```
_Originally posted by @mrakgr in https://github.com/cupy/cupy/issues/8146#issuecomment-1918986468_
_State: closed · Created: 2024-02-02T21:00:20Z · Updated: 2024-02-05T05:09:28Z · Labels: cat:bug · Author: mrakgr · Comments: 1 · URL: https://github.com/cupy/cupy/issues/8155_
## ymcui/Chinese-LLaMA-Alpaca #342: mdtex2html: no such package

*Tip: put an x inside the [ ] to tick a box. Delete this line when asking a question, and keep only the options that apply.*

### Describe the problem in detail

*Please describe the problem you encountered as specifically as possible, **including the commands you ran when necessary**. This will help us locate the issue more quickly.*

### Screenshots or logs

*Please provide a text log or screenshots so that we can better understand the details of the problem.*

### Required checklist (for the first three items, keep only the ones you are asking about)

- [ ] **Base model**: LLaMA / Alpaca / LLaMA-Plus / Alpaca-Plus
- [ ] **Operating system**: Windows / MacOS / Linux
- [ ] **Issue category**: download / model conversion and merging / model training and fine-tuning / model inference (🤗 transformers) / model quantization and deployment (llama.cpp, text-generation-webui, LlamaChat) / output quality / other
- [ ] (Required) Since the related dependencies are updated frequently, please make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [ ] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues, and did not find a similar problem or solution
- [ ] (Required) Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); it is also recommended to look for a solution in the corresponding project

_State: closed · Created: 2023-05-16T04:09:01Z · Updated: 2023-05-17T11:33:46Z · Labels: (none) · Author: SunYHY · Comments: 1 · URL: https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/342_