| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ray-project/ray | python | 51,270 | [core][gpu-objects] IPC communication for processes on the same GPU | ### Description
We chatted with @sven1977. Some use cases in RLlib involve both aggregator actors (producer) and learner actors (consumer) being on the same GPU. However, currently we need to write the tensors to the object store and read them back to the same GPU.
Support veRL's colocated Ray actor tasks.
### Use case
_No response_ | open | 2025-03-11T21:35:36Z | 2025-03-20T06:46:54Z | https://github.com/ray-project/ray/issues/51270 | [
"enhancement",
"P0",
"core",
"gpu-objects"
] | kevin85421 | 0 |
gunthercox/ChatterBot | machine-learning | 2,039 | error: no module named 'en' | A "no module named 'en'" error is raised by the following code:

```python
from chatterbot import ChatBot

bot = ChatBot(
    'Friday',
    storage_adapter='chatterbot.storage.SQLStorageAdapter',  # storage backend
    logic_adapters=[
        'chatterbot.logic.MathematicalEvaluation',
        'chatterbot.logic.TimeLogicAdapter',
        'chatterbot.logic.BestMatch'],
    database_uri='sqlite:///database.db')

print('Ask something!!')

while True:
    try:
        user_input = input()
        bot_response = bot.get_response(user_input)
        print(bot_response)
    except (KeyboardInterrupt, EOFError, SystemExit):
        break
```
| closed | 2020-09-03T14:29:31Z | 2025-02-26T12:04:45Z | https://github.com/gunthercox/ChatterBot/issues/2039 | [
"answered"
] | Anushka290 | 9 |
DistrictDataLabs/yellowbrick | scikit-learn | 856 | PosTag does not sort xticklabels in frequency mode | **Describe the bug**
I noticed a strange behavior while working with the PosTag visualizer. In frequency mode it sorts the bars, but the x tick labels remain in the initial order.

**To Reproduce**
```python
corpus = load_corpus('data/hobbies')
docs = corpus.data
labels = corpus.target
tagged_stanzas = [nltk.pos_tag(nltk.word_tokenize(sent)) for sent in docs]
tag = [tagged_stanzas]
_, (ax1,ax2) = plt.subplots(1,2)
viz = PosTagVisualizer(ax=ax1)
viz.fit(tag)
viz.poof()
viz.ax.grid(False)
oz = PosTagVisualizer(frequency=True, ax=ax2)
oz.fit(tag)
oz.poof()
oz.ax.grid(False)
```
where `load_corpus` is the function from the yellowbrick corpus documentation [section](https://www.scikit-yb.org/en/latest/api/text/corpus.html)
| closed | 2019-05-21T21:06:30Z | 2019-06-11T23:55:21Z | https://github.com/DistrictDataLabs/yellowbrick/issues/856 | [
"type: bug"
] | naresh-bachwani | 5 |
ultralytics/ultralytics | machine-learning | 18,897 | Excuse me, how can I solve the problem that the confidence level is only 0.1 after switching to the ONNX model? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug

### Environment
[2025/01/26 15:52:38] ppocr DEBUG: Namespace(alpha=1.0, alphacolor=(255, 255, 255), benchmark=False, beta=1.0, binarize=False, cls_batch_num=6, cls_image_shape='3, 48, 192', cls_model_dir='./weights/ocr/ch_ppocr_mobile_v2.0_cls_infer/', cls_thresh=0.9, cpu_threads=10, crop_res_save_dir='./output', det=True, det_algorithm='DB', det_box_type='quad', det_db_box_thresh=0.6, det_db_score_mode='fast', det_db_thresh=0.3, det_db_unclip_ratio=1.5, det_east_cover_thresh=0.1, det_east_nms_thresh=0.2, det_east_score_thresh=0.8, det_limit_side_len=960, det_limit_type='max', det_model_dir='/home/tony/.paddleocr/whl/det/en/en_PP-OCRv3_det_infer', det_pse_box_thresh=0.85, det_pse_min_area=16, det_pse_scale=1, det_pse_thresh=0, det_sast_nms_thresh=0.2, det_sast_score_thresh=0.5, draw_img_save_dir='./inference_results', drop_score=0.5, e2e_algorithm='PGNet', e2e_char_dict_path='./ppocr/utils/ic15_dict.txt', e2e_limit_side_len=768, e2e_limit_type='max', e2e_model_dir=None, e2e_pgnet_mode='fast', e2e_pgnet_score_thresh=0.5, e2e_pgnet_valid_set='totaltext', enable_mkldnn=False, formula=False, formula_algorithm='LaTeXOCR', formula_batch_num=1, formula_char_dict_path=None, formula_model_dir=None, fourier_degree=5, gpu_id=0, gpu_mem=500, help='==SUPPRESS==', image_dir=None, image_orientation=False, invert=False, ir_optim=True, kie_algorithm='LayoutXLM', label_list=['0', '180'], lang='en', layout=True, layout_dict_path=None, layout_model_dir=None, layout_nms_threshold=0.5, layout_score_threshold=0.5, max_batch_size=10, max_text_length=25, merge_no_span_structure=True, min_subgraph_size=15, mode='structure', ocr=True, ocr_order_method=None, ocr_version='PP-OCRv4', output='./output', page_num=0, precision='fp32', process_id=0, re_model_dir=None, rec=True, rec_algorithm='SVTR_LCNet', rec_batch_num=6, rec_char_dict_path='./weights/ocr/ppocr_keys_v1_fhhx.txt', rec_image_inverse=True, rec_image_shape='3, 48, 320', rec_model_dir='./weights/ocr/0510/', recovery=False, recovery_to_markdown=False, 
return_word_box=False, save_crop_res=False, save_log_path='./log_output/', savefile=False, scales=[8, 16, 32], ser_dict_path='../train_data/XFUND/class_list_xfun.txt', ser_model_dir=None, show_log=True, sr_batch_num=1, sr_image_shape='3, 32, 128', sr_model_dir=None, structure_version='PP-StructureV2', table=True, table_algorithm='TableAttn', table_char_dict_path=None, table_max_len=488, table_model_dir=None, total_process_num=1, type='ocr', use_angle_cls=False, use_dilation=False, use_gpu=True, use_mlu=False, use_mp=False, use_npu=False, use_onnx=False, use_pdf2docx_api=False, use_pdserving=False, use_space_char=True, use_tensorrt=False, use_visual_backbone=True, use_xpu=False, vis_font_path='./doc/fonts/simfang.ttf', warmup=False)
[2025/01/26 15:52:38] ppocr WARNING: The first GPU is used for inference by default, GPU ID: 0
[2025/01/26 15:52:39] ppocr WARNING: The first GPU is used for inference by default, GPU ID: 0
Ultralytics 8.3.66 🚀 Python-3.8.8 torch-1.13.1+cu117 CUDA:0 (NVIDIA GeForce RTX 3090, 24132MiB)
### Minimal Reproducible Example
```python
import cv2
import math
import copy
import torch
import time
import os
import onnxruntime as ort
from paddleocr import PaddleOCR
import concurrent.futures

# Convert the YOLO model to an ONNX model
def export_to_onnx(weights):
    from ultralytics import YOLO
    model = YOLO(weights)
    try:
        model.export(format='onnx')
        print("ONNX model export succeeded.")
    except Exception as e:
        print(f"ONNX model export failed: {e}")

class YOLO_det:
    def __init__(self, weights, imgsz=640, conf_thres=0.1, iou_thres=0.25, max_det=1000):  # raise the confidence threshold
        if torch.cuda.is_available() and torch.cuda.device_count() > 0:
            providers = [
                ('CUDAExecutionProvider', {
                    'device_id': 0,
                    'arena_extend_strategy': 'kNextPowerOfTwo',
                    'gpu_mem_limit': 4 * 1024 * 1024 * 1024,  # 4 GB memory limit
                    'cudnn_conv_algo_search': 'EXHAUSTIVE',
                    'do_copy_in_default_stream': True,
                })
            ]
        else:
            providers = ['CPUExecutionProvider']
        onnx_weights = weights.replace('.pt', '.onnx')
        if not os.path.exists(onnx_weights):
            export_to_onnx(weights)
        self.session = ort.InferenceSession(onnx_weights, providers=providers)
        self.imgsz = imgsz
        self.conf = conf_thres
        self.iou = iou_thres
        self.max_det = max_det

    def preprocess(self, img):
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (self.imgsz, self.imgsz))
        img = img.transpose(2, 0, 1)
        img = img[None]
        img = img.astype('float32') / 255.0
        return img

    def detect(self, img):
        input_name = self.session.get_inputs()[0].name
        input_img = self.preprocess(img)
        try:
            outputs = self.session.run(None, {input_name: input_img})
            print(f"Inference output shapes: {[o.shape for o in outputs]}")  # print output shapes
            print(f"Partial inference output: {outputs[0][0, :5, :]}")  # print part of the output
        except Exception as e:
            print(f"Exception during inference: {e}")
            return []
        # Assume there is only one output array; parse according to the actual situation
        output = outputs[0]
        boxes = []
        confidences = []
        # Adjust the parsing logic to match the actual output format
        if output.ndim == 3:
            num_detections = output.shape[1]
            for i in range(num_detections):
                # Assume the first 4 columns are the bounding box, the 5th is the confidence
                box = output[0, i, :4]
                conf = output[0, i, 4]
                boxes.append(box)
                confidences.append(conf)
        else:
            print(f"Unsupported output shape: {output.shape}")
            return []
        return_list = []
        for box, conf in zip(boxes, confidences):
            if conf > self.conf:
                xyxy = box
                x1 = math.ceil(xyxy[0])
                y1 = math.ceil(xyxy[1])
                x2 = math.ceil(xyxy[2])
                y2 = math.ceil(xyxy[1])
                x3 = math.ceil(xyxy[2])
                y3 = math.ceil(xyxy[3])
                x4 = math.ceil(xyxy[0])
                y4 = math.ceil(xyxy[3])
                return_list.append([[x1, y1], [x2, y2], [x3, y3], [x4, y4]])
        if not return_list:
            print("No targets detected.")
        return return_list

def sorted_boxes(dt_boxes):
    sorted_boxes = sorted(dt_boxes, key=lambda x: (x[0][1], x[0][0]))
    _boxes = [[[int(num) for num in sub_list] for sub_list in main_list]
              for main_list in sorted_boxes]
    for i in range(len(dt_boxes) - 1):
        for j in range(i, -1, -1):
            if abs(_boxes[j + 1][0][1] - _boxes[j][0][1]) < 10 and (
                    _boxes[j + 1][0][0] < _boxes[j][0][0]
            ):
                tmp = _boxes[j]
                _boxes[j] = _boxes[j + 1]
                _boxes[j + 1] = tmp
            else:
                break
    return _boxes

def _4point2xyxy(points):
    list_out_xyxy = []
    for point in points:
        x_coords, y_coords = zip(*point)
        min_x, max_x = min(x_coords), max(x_coords)
        min_y, max_y = min(y_coords), max(y_coords)
        rectangle = [int(min_x), int(min_y), int(max_x), int(max_y)]
        list_out_xyxy.append(rectangle)
    return list_out_xyxy

def process_image(index, image_path, ocr_rec, yolo_det):
    start = time.time()  # record start time
    try:
        det_img = cv2.imread(image_path)
        if det_img is None:
            print(f"Unable to read image: {image_path}")
            return
    except Exception as e:
        print(f"Error reading image {image_path}: {e}")
        return
    out_yolo_det = yolo_det.detect(det_img)
    if not out_yolo_det:
        print(f"No targets detected in image {image_path}.")
    out_yolo_det = sorted_boxes(out_yolo_det)
    list_ocr_det_bbox_xyxy = _4point2xyxy(out_yolo_det)
    show_img = copy.deepcopy(det_img)
    for i in range(len(list_ocr_det_bbox_xyxy)):
        # xyxy coordinates
        x1, y1, x2, y2 = list_ocr_det_bbox_xyxy[i]
        # Check that the crop region is valid
        if x2 > x1 and y2 > y1:
            # Crop the small text image
            ocr_rec_det_img = det_img[y1:y2, x1:x2]
            one_ocr_rec_out = ocr_rec.ocr(ocr_rec_det_img, det=False, cls=False)
            print(one_ocr_rec_out)
            # Draw the bbox
            show_img = cv2.rectangle(show_img, (x1, y1), (x2, y2), (0, 0, 255), 1)
            # Set font, size, color and line thickness
            font = cv2.FONT_HERSHEY_SIMPLEX
            # Draw the index text
            show_img = cv2.putText(show_img, str(i), (x1, y1 + 20), font, 0.8, (0, 255, 0), 2)
    # Make sure the output folder exists
    output_folder = 'output_onnx'
    if not os.path.exists(output_folder):
        try:
            os.makedirs(output_folder)
            print(f"Created output folder: {output_folder}")
        except OSError as e:
            print(f"Error creating output folder: {e}")
            return
    # Save the image
    output_path = os.path.join(output_folder, f'out_image{index + 1}.jpg')
    if cv2.imwrite(output_path, show_img):
        print(f"Image saved to: {output_path}")
    else:
        print(f"Unable to save image to: {output_path}; check file permissions and the path.")
    end = time.time()  # record end time
    elapsed = end - start  # time spent processing this image
    print(f"Image {image_path} took {elapsed:.2f} seconds to process")
    print("\n", "=" * 200, "\n")

if __name__ == "__main__":
    start_time = time.time()
    # Text recognition weights
    rec_model_dir = './weights/ocr/0510/'
    # Text dictionary
    rec_char_dict_path = './weights/ocr/ppocr_keys_v1_fhhx.txt'
    # Orientation classifier
    cls_model_dir = './weights/ocr/ch_ppocr_mobile_v2.0_cls_infer/'
    # yolo_det weights path
    weighs = './weights/best.pt'
    # Load text recognition weights...
    ocr_rec = PaddleOCR(lang='en', rec_model_dir=rec_model_dir, rec_char_dict_path=rec_char_dict_path,
                        cls_model_dir=cls_model_dir, use_gpu=True)  # explicitly use the GPU
    # Load yolo_det weights...
    yolo_det = YOLO_det(weighs, imgsz=640)
    # Collect all image paths under the input_images folder
    input_folder = 'input_images'
    if not os.path.exists(input_folder):
        print(f"Input folder {input_folder} does not exist; check the path.")
    else:
        image_files = [os.path.join(input_folder, f) for f in os.listdir(input_folder)
                       if f.endswith(('.png', '.jpg', '.jpeg'))]
        if not image_files:
            print(f"No valid image files found in {input_folder}; check the folder contents.")
        else:
            # Process images in parallel with a thread pool of 10 workers
            with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
                futures = []
                for index, image_path in enumerate(image_files):
                    future = executor.submit(process_image, index, image_path, ocr_rec, yolo_det)
                    futures.append(future)
                # Wait for all tasks to finish
                concurrent.futures.wait(futures)
    end_time = time.time()
    elapsed_time = end_time - start_time
    print(f"Total runtime: {elapsed_time:.2f} seconds")
    # Any wrap-up work can go here, e.g. releasing resources, collecting
    # statistics, or writing logs. Below is a simple example that appends
    # the total runtime of this run to a log file.
    log_file_path = "run_log.txt"
    try:
        with open(log_file_path, "a") as log_file:
            log_file.write(f"Run started at {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())}, took {elapsed_time:.2f} seconds.\n")
    except Exception as e:
        print(f"Error writing to log file: {e}")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-01-26T07:53:16Z | 2025-01-26T08:01:46Z | https://github.com/ultralytics/ultralytics/issues/18897 | [
"bug",
"exports"
] | CanhaoL | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 235 | NTXentLoss with sequences | Hi,
First, thanks a lot for this awesome contribution!
I was wondering whether and how one could use NTXentLoss for sequential data tasks, such as ASR or NLP. Say I'm using a Transformer and my data is a 3D tensor with shape (n_tokens, batch_size, model_dim). Is it possible to use NTXentLoss in this case? I guess one straightforward way would be to call NTXentLoss for each token separately and then just sum up these losses, but I'm not sure that'd be the most efficient and accurate way (I'm pretty new to most of this stuff). Anyway, any advice would be highly appreciated. Thanks again! | closed | 2020-11-20T07:49:46Z | 2020-11-25T01:51:46Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/235 | [
"Frequently Asked Questions",
"question"
] | asafbenj | 2 |
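For the question above, one commonly suggested pattern (a sketch with hypothetical helper names, not an official pytorch-metric-learning recipe) is to flatten the token and batch axes into a single sample axis and assign labels so that matching positions across views share a label; the loss then runs in one call instead of once per token. In real code this is just `embeddings.reshape(n_tokens * batch_size, dim)`; the pure-Python stand-in below shows the indexing:

```python
# Pure-Python stand-in for the tensor reshape; with torch this would be
# flat = embeddings.reshape(n_tokens * batch_size, dim).
def flatten_for_pair_loss(embeddings):
    """embeddings: nested list shaped (n_tokens, batch_size, dim)."""
    flat, labels = [], []
    for token_slice in embeddings:
        for sample_idx, vec in enumerate(token_slice):
            flat.append(vec)
            # Rows that share a label are treated as positives by the
            # loss; this toy version labels by sample index only.
            labels.append(sample_idx)
    return flat, labels

# 2 tokens, batch of 3, dim 2
emb = [[[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]],
       [[0.7, 0.8], [0.9, 1.0], [1.1, 1.2]]]
flat, labels = flatten_for_pair_loss(emb)
print(len(flat), labels)  # 6 [0, 1, 2, 0, 1, 2]
```

Whether token position should also enter the labels depends on the task; the point is only that a single flattened call lets the loss see all positives and negatives at once instead of summing per-token losses.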
davidsandberg/facenet | tensorflow | 724 | TypeError: reduce_max() got an unexpected keyword argument 'keepdims' | python ~/face_paper/facenet-master/src/align/align_dataset_mtcnn.py \
~/face_paper/facenet-master/datasets/lfw/raw \
~/face_paper/facenet-master/datasets/lfw/lfw_mtcnnpy_160 \
--image_size 160 \
--margin 32 \
--random_order \
--gpu_memory_fraction 0.25 \
How to resolve this problem? | open | 2018-04-25T11:30:41Z | 2018-11-25T14:06:27Z | https://github.com/davidsandberg/facenet/issues/724 | [] | liuajian | 7 |
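The `TypeError` in the title typically appears when newer facenet code that passes `keepdims=...` runs on an older TensorFlow, where the argument was still spelled `keep_dims` (the rename happened around TF 1.5); upgrading TensorFlow to a version matching the repo's requirements is usually the simplest fix. As an illustration only (the `legacy_reduce_max` below is a stand-in, not the TF API), a keyword-rename fallback looks like:

```python
# Sketch of a keyword-rename compatibility shim. `legacy_reduce_max`
# stands in for an older function that only accepts `keep_dims`.
def call_with_keepdims_fallback(fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs)
    except TypeError:
        if "keepdims" in kwargs:
            kwargs["keep_dims"] = kwargs.pop("keepdims")  # old spelling
            return fn(*args, **kwargs)
        raise

def legacy_reduce_max(values, keep_dims=False):
    m = max(values)
    return [m] if keep_dims else m

print(call_with_keepdims_fallback(legacy_reduce_max, [1, 5, 3], keepdims=True))  # [5]
```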
CTFd/CTFd | flask | 2,406 | Naming Challenge Hints |
Idea: Allowing the naming of hints so that players will know what hints they are unlocking, especially when there are multiple hints and points are needed to unlock them. | open | 2023-10-01T16:47:45Z | 2023-10-01T16:47:45Z | https://github.com/CTFd/CTFd/issues/2406 | [] | ehlkeh | 0 |
ranaroussi/yfinance | pandas | 2,022 | INCORRECT MARKET DATA FOR NSE SEGMENT | There are discrepancy in market data for NSE and coverage is also limited .
It would be helpful it this correction and coverage are increased .
| closed | 2024-08-10T19:27:08Z | 2024-08-10T19:35:27Z | https://github.com/ranaroussi/yfinance/issues/2022 | [] | chirag111222 | 0 |
airtai/faststream | asyncio | 2,034 | refactor: remove RabbitQueue & RabbitExchange hashes | These classes are using to cache real connection objects https://github.com/airtai/faststream/blob/0.6.0/faststream/rabbit/helpers/declarer.py#L15-L16
So, we should use hash to be sure, that user call declarer for the same object
https://github.com/airtai/faststream/blob/0.6.0/faststream/rabbit/schemas/queue.py#L57
Thus, we should add a set of unit tests to prevent incorrect collisions:
https://github.com/airtai/faststream/blob/0.6.0/tests/brokers/rabbit/test_schemas.py
| open | 2025-01-11T16:13:27Z | 2025-01-13T18:29:38Z | https://github.com/airtai/faststream/issues/2034 | [
"enhancement",
"RabbitMQ"
] | Lancetnik | 3 |
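To illustrate the collision risk the issue above wants tests for (the class below is a minimal stand-in, not the actual faststream `RabbitQueue`): if `__hash__`/`__eq__` consider only the name, two queues with the same name but different options collapse into one cache entry:

```python
class Queue:
    """Minimal stand-in for a queue schema cached by hash."""
    def __init__(self, name, durable=False):
        self.name = name
        self.durable = durable

    # Name-only identity: the collision source the tests should cover.
    def __hash__(self):
        return hash(self.name)

    def __eq__(self, other):
        return isinstance(other, Queue) and self.name == other.name

cache = {}
cache[Queue("q", durable=True)] = "declared-durable"
# Same name, different options: silently reuses the wrong declaration.
print(cache[Queue("q", durable=False)])  # declared-durable
```

Including the relevant options in the identity (or asserting on them in unit tests) is what prevents this kind of silent reuse.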
lucidrains/vit-pytorch | computer-vision | 261 | Multi-head attention part on ViT | Can you confirm that the current implementation of the multi-head attention is the same as the original paper?
From this repo (vit.py, lines 55 and 56):

```python
qkv = self.to_qkv(x).chunk(3, dim = -1)
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h = self.heads), qkv)
```

This seems to split q, k, v into multiple smaller features (in test.py, separating the original 1024-D embedding into 16 features of 64-D each).
However, in the actual paper, instead of dividing the 1024 features, processing them, and then combining them, the 1024 features are fed into n multi-head attention projections and the results are then concatenated.
Can you confirm that the implemented multi-head attention is the same as in the actual paper?
| closed | 2023-03-21T15:10:33Z | 2023-03-21T15:13:47Z | https://github.com/lucidrains/vit-pytorch/issues/261 | [] | andreYoo | 0 |
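For reference, the two formulations in the question above are mathematically identical: a single large projection whose output is rearranged into heads equals running a smaller projection per head and concatenating, because the big weight matrix is just the per-head matrices stacked. A toy numeric check (pure Python, not from the repo; dim=4, heads=2, head_dim=2):

```python
# Projecting with one big weight matrix and splitting the output per head
# gives the same result as one smaller weight matrix per head.
def matvec(W, x):  # W: rows x cols, x: length-cols vector
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

x = [1.0, 2.0, 3.0, 4.0]
W_big = [[1, 2, 0, 1],   # rows 0-1 act as head 0's projection
         [0, 1, 1, 0],
         [2, 0, 1, 3],   # rows 2-3 act as head 1's projection
         [1, 1, 1, 1]]

big_out = matvec(W_big, x)
split = [big_out[0:2], big_out[2:4]]   # the "rearrange" into heads

per_head = [matvec(W_big[0:2], x), matvec(W_big[2:4], x)]
print(split == per_head)  # True
```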
pallets/quart | asyncio | 312 | Cannot load `QUART_` prefixed environment variables |
## Bug reproduction
I want to load `QUART_` prefixed environment variables to `app.config`, following the [doc](https://pgjones.gitlab.io/quart/how_to_guides/configuration.html).
```python
from quart import Quart
import os
os.environ["QUART_FOO"] = "bar"
app = Quart(__name__)
app.config.from_prefixed_env()
assert app.config["FOO"] == "bar"
@app.route('/')
async def hello():
return app.config["FOO"]
if __name__ == '__main__':
app.run()
```
Run the app
```shell
pip install quart
python main.py
```
Got an error
```text
Traceback (most recent call last):
File "/Users/jichengzhi/Documents/GitHub/bug-quart/main.py", line 10, in <module>
assert app.config["FOO"] == "bar"
~~~~~~~~~~^^^^^^^
KeyError: 'FOO'
```
## Expected behavior
According to the [doc](https://pgjones.gitlab.io/quart/how_to_guides/configuration.html), `app.config["FOO"]` should be `"bar"` because by default all `QUART_` prefixed env vars will be loaded. In fact, if you change the prefix to `FLASK_`, the app will run without error.
```python
from quart import Quart
import os
os.environ["FLASK_FOO"] = "bar"
app = Quart(__name__)
app.config.from_prefixed_env()
assert app.config["FOO"] == "bar"
```
This is because the `config` attribute is instantiated in `flask.sansio.app.__init__()` by calling [`self.make_config(instance_relative_config)`](https://github.com/pallets/flask/blob/c2f65dd1cfff0672b902fd5b30815f0b4137214c/src/flask/sansio/app.py#L499):
```python
def make_config(self, instance_relative: bool = False) -> Config:
# ignore details
return self.config_class(root_path, defaults)
```
where [`config_class`](https://github.com/pallets/flask/blob/c2f65dd1cfff0672b902fd5b30815f0b4137214c/src/flask/sansio/app.py#L196) is type `flask.config.Config`
```python
#: The class that is used for the ``config`` attribute of this app.
#: Defaults to :class:`~flask.Config`.
#:
#: Example use cases for a custom class:
#:
#: 1. Default values for certain config options.
#: 2. Access to config values through attributes in addition to keys.
#:
#: .. versionadded:: 0.11
config_class = Config
```
Environment:
- Python version: 3.11.5
- Quart version: 0.19.4
| closed | 2024-01-05T13:51:45Z | 2024-04-01T17:13:00Z | https://github.com/pallets/quart/issues/312 | [] | jichengzhi | 1 |
aleju/imgaug | machine-learning | 681 | Worrying discrepancy between PIL Resize and Imgaug Resize | I am resizing a 1920x1080 image to be 1333x750 pixels using bilinear interpolation. On this simple task, PIL Resize and Imgaug Resize (master) shows very worrying differences.
```
import numpy as np
from PIL import Image
import imgaug.augmenters as iaa
img_fpath = "img.png"
with Image.open(img_fpath) as f:
in_image = f.convert('RGB')
img_np = np.asarray(in_image)
pil_image = Image.fromarray(img_np)
pil_image = pil_image.resize((1333, 750), Image.BILINEAR)
image = np.asarray(pil_image)
aug = iaa.Resize({"height": 750, "width": 1333}, interpolation="linear")
img_augmented = aug(image=img_np)
print("img, ", np.mean(img_np))
print("pil, ", np.mean(image))
print("iaa, ",np.mean(img_augmented))
```
The results I get back are:
img, 96.09632989326131
pil, 96.1052009669084
iaa, 95.98408402100524
where the PIL and imgaug resizes are clearly different, and the PIL one seems to maintain the average color values of the original more accurately.
It's not clear to me why they should perform differently when they both use bilinear interpolation on the same data (I could actually see a difference on a downstream detection task with a model originally trained on the PIL resizing). The image used here is the test image "img.png" from https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/tree/master/test | open | 2020-05-28T14:45:50Z | 2020-05-29T11:36:44Z | https://github.com/aleju/imgaug/issues/681 | [] | rmcavoy | 2 |
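A likely explanation for the discrepancy above (my reading, not confirmed in the thread): when shrinking, PIL's `resize` applies an antialiasing filter whose support widens with the downscale factor, while OpenCV's `INTER_LINEAR` (which imgaug uses) always samples a 2x2 neighborhood, so on large downscales it skips source pixels. A 1-D toy comparison of the two behaviours:

```python
# 1-D toy: downscale a length-8 signal to length 2 with a 2-tap
# interpolator vs an area/antialiased average.
src = [0, 100, 0, 0, 0, 100, 0, 0]

def two_tap(src, out_len):          # cv2-style INTER_LINEAR behaviour
    scale = len(src) / out_len
    out = []
    for i in range(out_len):
        x = (i + 0.5) * scale - 0.5  # map output pixel center to source
        lo = max(0, min(len(src) - 2, int(x)))
        t = x - lo
        out.append(src[lo] * (1 - t) + src[lo + 1] * t)
    return out

def area_average(src, out_len):     # antialiased behaviour (PIL-like)
    step = len(src) // out_len
    return [sum(src[i * step:(i + 1) * step]) / step for i in range(out_len)]

print(two_tap(src, 2), area_average(src, 2))  # [50.0, 50.0] [25.0, 25.0]
```

The two-tap version only ever looks at two neighbors per output pixel, which is why its means can drift from the source image's mean on large downscales.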
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 743 | Program continues to use GPU when --cpu is True | after installing all the necessary dependencies and the requirements.txt, I did `python demo_cli.py --cpu` in hopes that it would do processing on my cpu instead. But the program continued to use the gpu regardless of the `--cpu` argument.

| closed | 2021-04-23T01:37:37Z | 2021-05-18T04:19:08Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/743 | [] | jmath3912 | 3 |
plotly/dash-table | dash | 639 | [Feature Request] Limit max characters | How do I limit the maximum number of characters for cells in a column for dash-table? I tried using the Format method, but it appears to only work with numbers.
Thanks,
Vivek | open | 2019-11-12T16:13:50Z | 2019-11-12T16:14:05Z | https://github.com/plotly/dash-table/issues/639 | [] | vivekvs1 | 0 |
dask/dask | numpy | 11,726 | ⚠️ Upstream CI failed ⚠️ | [Workflow Run URL](https://github.com/dask/dask/actions/runs/13200030606)
<details><summary>Python 3.12 Test Summary</summary>
```
dask/dataframe/dask_expr/tests/test_collection.py::test_warn_annotations: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted.
Emitted warnings: [].
```
</details>
| closed | 2025-02-07T07:05:31Z | 2025-02-10T12:32:51Z | https://github.com/dask/dask/issues/11726 | [
"upstream"
] | github-actions[bot] | 0 |
aleju/imgaug | machine-learning | 204 | assertion error | I am trying to figure out why I get this error but I am a little stuck.
the augmentation code:
```python
flip_j = lambda keypoints_on_images, random_state, parents, hooks: flip_symmetric_keypoints(
keypoints_on_images)
noop = lambda images, random_state, parents, hooks: images
seq = iaa.SomeOf(2, [
iaa.Sometimes(0.4, iaa.Scale(iap.Uniform(0.5,1.0))),
iaa.Sometimes(0.6, iaa.CropAndPad(percent=(-0.25, 0.25), pad_mode=["edge"], keep_size=False)),
iaa.Sometimes(0.2,iaa.Sequential([iaa.Fliplr(1), iaa.Lambda(noop, flip_j)])),
iaa.Sometimes(0.4, iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 50))),
iaa.Sometimes(0.1, iaa.GaussianBlur(sigma=(0, 3.0)))
])
seq_det = seq.to_deterministic()
```
I think there must be a few images with a resolution for which some combination of the augmentations doesn't work, but I can't figure out a way to find them because I can't print debug output inside the Keras training loop.
my output:
Epoch 1/50
788/3150 [======>.......................] - ETA: 13:52 - loss: 0.0609 - 0_conv_1x1_parts_loss: 0.0345 - 1_conv_1x1_parts_loss: 0.0264 - 0_conv_1x1_parts_acc: 0.0766 - 1_conv_1x1_parts_acc: 0.0833Traceback (most recent call last):
File "train.py", line 62, in <module>
batch_size=args.batch_size)
File "../net/hourglass.py", line 65, in trainMPII2
epochs=epochs, callbacks=xcallbacks)
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/keras/engine/training.py", line 2212, in fit_generator
generator_output = next(output_generator)
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/keras/utils/data_utils.py", line 779, in get
six.reraise(value.__class__, value, value.__traceback__)
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/six.py", line 686, in reraise
raise value
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/keras/utils/data_utils.py", line 644, in _data_generator_task
generator_output = next(self._generator)
File "../data_gen/mpII_datagen2.py", line 122, in generator
image_aug = seq_det.augment_image(image)
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/meta.py", line 323, in augment_image
return self.augment_images([image], hooks=hooks)[0]
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/meta.py", line 431, in augment_images
hooks=hooks
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/meta.py", line 1762, in _augment_images
hooks=hooks
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/meta.py", line 431, in augment_images
hooks=hooks
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/meta.py", line 1979, in _augment_images
hooks=hooks
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/meta.py", line 431, in augment_images
hooks=hooks
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/meta.py", line 1522, in _augment_images
hooks=hooks
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/meta.py", line 431, in augment_images
hooks=hooks
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/size.py", line 611, in _augment_images
crop_top, crop_right, crop_bottom, crop_left, pad_top, pad_right, pad_bottom, pad_left, pad_mode, pad_cval = self._draw_samples_image(seed, height, width)
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/augmenters/size.py", line 727, in _draw_samples_image
ia.do_assert(regain_bottom <= crop_bottom)
File "/media/ssddata/jstaley/miniconda3/envs/py35/lib/python3.5/site-packages/imgaug/imgaug.py", line 678, in do_assert
raise AssertionError(str(message))
AssertionError: Assertion failed. | open | 2018-11-10T14:23:50Z | 2020-06-19T08:50:48Z | https://github.com/aleju/imgaug/issues/204 | [] | MetaDev | 4 |
taverntesting/tavern | pytest | 565 | cannot define an empty value in test | Hi, I get tavern.util.exceptions.BadSchemaError: Error at yaml:28 - column 41 - cannot define an empty value in test - either give it a value or explicitly set it to None.
This is the test:
```yaml
- name: test context was created
request:
url: "http://localhost:81/api/context?name={test_context_exists_name:s}"
method: GET
response:
strict: False
status_code: 200
json:
apis:
- name: {test_api_exists_name:s}
version: {test_api_exists_version:s}
```
Line 28 is the `apis:` line. It is a dictionary whose value is a list of dictionaries.
I use tavern 1.2.2 | closed | 2020-06-29T07:06:56Z | 2020-08-26T11:18:26Z | https://github.com/taverntesting/tavern/issues/565 | [] | AlbertoBarcessat | 1 |
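A likely fix for the error above (my reading of the schema error, not confirmed in the thread): unquoted `{test_api_exists_name:s}` is valid YAML flow-mapping syntax and parses as a mapping whose single key `test_api_exists_name:s` has no value, which is exactly the "empty value" the error reports. Quoting the templated values makes them plain strings for Tavern's formatter:

```yaml
json:
  apis:
    - name: "{test_api_exists_name:s}"
      version: "{test_api_exists_version:s}"
```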
JaidedAI/EasyOCR | pytorch | 863 | Recognize and write on the top | Hi @rkcosmos, how can we recognize a complete word and write it above the word? I want to save the image directly after recognition; I don't want to view it through matplotlib.

| open | 2022-09-26T06:55:06Z | 2022-09-27T08:11:07Z | https://github.com/JaidedAI/EasyOCR/issues/863 | [] | khawar-islam | 0 |
jschneier/django-storages | django | 1,141 | Is there a types stub for this library? | I am getting warnings from mypy, and I wondered if there was a types stub for this lib? I couldn't find it. | closed | 2022-06-05T19:28:53Z | 2023-08-26T21:18:37Z | https://github.com/jschneier/django-storages/issues/1141 | [] | cammil | 1 |
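Regarding the question above: until stubs ship, a common interim workaround is to silence mypy's missing-import warnings for just this package (standard mypy configuration, not specific to django-storages):

```ini
# mypy.ini (or the [tool.mypy] overrides equivalent in pyproject.toml)
[mypy-storages.*]
ignore_missing_imports = True
```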
torchbox/wagtail-grapple | graphql | 45 | Exception: model attribute exists but is not a field | I'm not exactly sure what happened because I don't recall seeing this behavior previously.
I have a model like this:
```python
class BlogIndexPage(Page):
intro = RichTextField(blank=True)
content_panels = Page.content_panels + [FieldPanel("intro", classname="full")]
@property
def blogpages(self):
return self.get_children().live().order_by("-first_published_at")
graphql_fields = [GraphQLString("intro")]
```
which gives me an error like:
```
Traceback (most recent call last):
File "python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "django/utils/autoreload.py", line 54, in wrapper
fn(*args, **kwargs)
File "django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "django/utils/autoreload.py", line 77, in raise_last_exception
raise _exception[1]
File "django/core/management/__init__.py", line 337, in execute
autoreload.check_errors(django.setup)()
File "django/utils/autoreload.py", line 54, in wrapper
fn(*args, **kwargs)
File "django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "django/apps/registry.py", line 122, in populate
app_config.ready()
File "grapple/apps.py", line 16, in ready
load_type_fields()
File "grapple/actions.py", line 263, in load_type_fields
node = type(type_name, (base_type,), type_meta)
File "graphene/utils/subclass_with_meta.py", line 52, in __init_subclass__
super_class.__init_subclass_with_meta__(**options)
File "graphene_django/types.py", line 177, in __init_subclass_with_meta__
construct_fields(model, registry, fields, exclude, convert_choices_to_enum),
File "graphene_django/types.py", line 46, in construct_fields
raise Exception(
Exception: "approved_schedule" exists on model <class 'blog.models.BlogIndexPage'> but it's not a field.
```
I am currently working around this by manually including all properties of a wagtail `Page`, but that doesn't seem quite correct based on the documentation:
```python
page_fields = [
GraphQLString(f)
for f in [
"page_ptr",
"approved_schedule",
"blogpages",
"default_preview_mode",
"full_url",
"pk",
"preview_modes",
"status_string",
"url",
]
]
class BlogIndexPage(Page):
intro = RichTextField(blank=True)
content_panels = Page.content_panels + [FieldPanel("intro", classname="full")]
@property
def blogpages(self):
return self.get_children().live().order_by("-first_published_at")
graphql_fields = page_fields + [GraphQLString("intro")]
```
Versions:
```
Django==2.2.9
graphene==2.1.8
graphene-django==2.8.0
graphql-core==2.2.1
wagtail==2.7.1
wagtail-grapple==0.4.8
```
| closed | 2020-01-12T03:08:00Z | 2020-01-24T15:49:25Z | https://github.com/torchbox/wagtail-grapple/issues/45 | [] | indirectlylit | 2 |
TheAlgorithms/Python | python | 11,702 | Add AES Algorithm | ### Feature description
Implement AES 128 Algorithm | closed | 2024-10-03T13:25:31Z | 2024-10-04T09:16:59Z | https://github.com/TheAlgorithms/Python/issues/11702 | [
"enhancement"
] | unniznd | 1 |
ultralytics/ultralytics | python | 19,831 | different augmentation and confidence for each label | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I would like to know if there is an option to define different augmentations / different confidence threshold for every class.
Thank you,
Roi
### Additional
_No response_ | open | 2025-03-23T10:06:06Z | 2025-03-23T21:32:35Z | https://github.com/ultralytics/ultralytics/issues/19831 | [
"question"
] | guetaro | 2 |
HIT-SCIR/ltp | nlp | 191 | Accessing the local service URL from a browser | http://192.168.1.107:12345/ltp?s=提问的也越来越多,但是好的问题却凤毛麟角&t=all&x=n
The URL above cannot be used in a browser. Please let me know the correct URL format.
Thanks! | closed | 2016-11-16T15:11:16Z | 2016-11-17T05:01:17Z | https://github.com/HIT-SCIR/ltp/issues/191 | [] | lifeng1989 | 1 |
iperov/DeepFaceLab | machine-learning | 564 | Avatar model - extract unaligned faces - faces tilted sideways | THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
*run 5) data_dst extract unaligned faces S3FD best GPU (avatar only) to train an avatar model.*
## Actual behavior
*Running 5) data_dst extract unaligned faces outputs misaligned faces. All faces are tilted sideways.*
## Steps to reproduce
*Run 5) data_dst extract unaligned faces with the latest commit.*
## Other relevant information
- **Used prebuilt Windows Version (Cuda 26.12)**
Has some1 an older version where it still functions ?
Thanks in advance for any help !
Greetings. | closed | 2020-01-19T23:33:54Z | 2020-01-28T21:57:52Z | https://github.com/iperov/DeepFaceLab/issues/564 | [] | BostonCs1820 | 0 |
jupyter/nbviewer | jupyter | 932 | 404 : Not Found error | The following URL is not displaying a render for my notebook. Can someone help?
https://nbviewer.jupyter.org/github/UWTMGIS/Capstone_S20/blob/06db74b36ada54aa286068e071dd68422fcad517/VanMechelen/2019_Stanely_Cup_Finals.ipynb
Remote HTTP 404: Not Found ({"message":"Not Found","documentation_url":"https://developer.github.com/v3/git/trees/#get-a-tree"})
| open | 2020-05-20T20:05:05Z | 2024-02-27T21:49:17Z | https://github.com/jupyter/nbviewer/issues/932 | [] | vanmeciv | 3 |
marshmallow-code/apispec | rest-api | 1 | [RFC] Pluggable API documentation generator | Now that smore has many of the lower-level functions for converting marshmallow `Schema` and webargs `Args` to swagger definitions, next step is to implement a system for generating full API docs.
Ideas for initial iteration:
- Based on Swagger 2.0 spec. This will allow us to leverage the latest Swagger-UI
- Pluggable. The documentation generator will work with any web framework, with or without webargs, etc. etc. Plugins provide helpers for generating metadata.
- Easy way to serve swagger docs. Possibly part of the Flask plugin.
Ideas for the future:
- Generate swagger-based docs from docstrings.
- Sphinx extension?
## Proof of concept
I wrote up a simple proof-of-concept in this gist: https://gist.github.com/sloria/dc1b2d2e43fbcea866ae
## Prior art
- [django-rest-swagger](https://github.com/marcgibbons/django-rest-swagger)
- [flask-restful-swagger](https://github.com/rantav/flask-restful-swagger)
- [flask-restplus](https://github.com/noirbizarre/flask-restplus)
- [cornice](http://cornice.readthedocs.org/en/latest/sphinx.html) (not swagger-based, but may provide ideas for sphinx extension)
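The pluggable-helper idea above can be sketched in a few lines. This is purely illustrative: the class and method names here (`APISpec`, `register_plugin`, `add_path`, `docstring_helper`) are stand-ins I made up, not the eventual smore/apispec API.

```python
# Illustrative sketch of a pluggable doc generator: plugins are plain callables
# that inspect a view and contribute metadata for its path entry.
class APISpec:
    def __init__(self, title):
        self.title = title
        self.paths = {}
        self.helpers = []

    def register_plugin(self, helper):
        # Each plugin contributes framework-specific introspection.
        self.helpers.append(helper)

    def add_path(self, path, view):
        meta = {}
        for helper in self.helpers:
            meta.update(helper(view))
        self.paths[path] = meta

def docstring_helper(view):
    return {"description": (view.__doc__ or "").strip()}

spec = APISpec(title="Pet Store")
spec.register_plugin(docstring_helper)

def get_pet():
    """Return a pet."""

spec.add_path("/pet", get_pet)
print(spec.paths["/pet"]["description"])  # Return a pet.
```

A webargs or marshmallow plugin would then be just another helper that extracts `Args`/`Schema` metadata from the view.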
| closed | 2014-12-25T21:00:33Z | 2015-12-04T04:37:26Z | https://github.com/marshmallow-code/apispec/issues/1 | [
"feedback welcome"
] | sloria | 5 |
kizniche/Mycodo | automation | 1,180 | Can't Open Dependencies Page From The Menu | Hi,
I'm new to the community, so I may not have done the best job with this but here goes.
### Describe the problem/bug:
After updating to the newest version of Mycodo (8.13.9), I tried to navigate to the dependencies page from the menu. After clicking the link in the menu, the page loads for a considerable amount of time, to eventually bring me to an Error 500 (Internal Server Error) page.
Here's the entire traceback below:
> Error (Full Traceback):
>
> Traceback (most recent call last):
> File "/var/mycodo-root/env/lib/python3.9/site-packages/flask/app.py", line 2077, in wsgi_app
> response = self.full_dispatch_request()
> File "/var/mycodo-root/env/lib/python3.9/site-packages/flask/app.py", line 1525, in full_dispatch_request
> rv = self.handle_user_exception(e)
> File "/var/mycodo-root/env/lib/python3.9/site-packages/flask_restx/api.py", line 672, in error_router
> return original_handler(e)
> File "/var/mycodo-root/env/lib/python3.9/site-packages/flask/app.py", line 1523, in full_dispatch_request
> rv = self.dispatch_request()
> File "/var/mycodo-root/env/lib/python3.9/site-packages/flask/app.py", line 1509, in dispatch_request
> return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
> File "/var/mycodo-root/env/lib/python3.9/site-packages/flask_login/utils.py", line 277, in decorated_view
> return current_app.ensure_sync(func)(*args, **kwargs)
> File "/home/srprototype/Mycodo/mycodo/mycodo_flask/routes_admin.py", line 383, in admin_dependencies
> if each_dep not in unmet_list:
> TypeError: unhashable type: 'list'
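For context, the `TypeError` in that traceback is what CPython raises when the left side of `in` is a list and the right side is a set or dict, because membership testing there requires hashing. A minimal repro (the variable contents here are stand-ins, not Mycodo's actual data):

```python
# `each_dep not in unmet_list` hashes each_dep when unmet_list is a set,
# and lists are unhashable, which matches the traceback above.
unmet_list = {("dependency-a",), ("dependency-b",)}  # hypothetical set of entries
each_dep = ["dependency-a", "1.0"]                   # a list is unhashable

try:
    each_dep not in unmet_list
except TypeError as exc:
    print(exc)  # unhashable type: 'list'
```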
### Versions:
_Version: 8.13.9
Database: b354722c9b8b
Model: Raspberry Pi 4 Model B Rev 1.4
Release:
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
Firmware:
b''_
### Reproducibility
1) From any page, click the settings menu icon (gear icon) in the top right corner of the page.
2) Select the dependencies link from the drop down menu.
### Expected Behaviour
When clicking on the dependencies link, the dependencies page should appear.
**I'm open to any feedback on my bug reporting.
Thanks, have a good day folks.** | closed | 2022-04-21T23:54:21Z | 2022-05-20T02:37:01Z | https://github.com/kizniche/Mycodo/issues/1180 | [
"bug",
"Fixed and Committed"
] | dcgris | 8 |
collerek/ormar | fastapi | 365 | `ValidationError` is not thrown out correctly with `get_pydantic` method | **Description**
Following the [documentation](https://collerek.github.io/ormar/fastapi/requests/#generate-pydantic-model-from-ormarmodel), I noticed that the `ValidationError` is not surfaced to the client correctly if models generated with `get_pydantic` are used in FastAPI requests.
**Example:**
```py
class EnumExample(str, enum.Enum):
A = 'A'
B = 'B'
C = 'C'
class ModelExample(ormar.Model):
class Meta(ormar.ModelMeta):
database = database
metadata = metadata
tablename = "examples"
id: int = ormar.Integer(primary_key=True)
str_field: str = ormar.String(min_length=5, max_length=10, nullable=False)
enum_field: str = ormar.String(max_length=1, nullable=False, choices=list(EnumExample))
@pydantic.validator('str_field')
def validate_str_field(cls, v):
if ' ' not in v:
raise ValueError('must contain a space')
return v
ModelExampleCreate = ModelExample.get_pydantic(exclude={'id'})
@app.post("/examples/", response_model=ModelExample)
async def create_example(example: ModelExampleCreate):
return await ModelExample(**example.dict()).save()
```
**Result:**
Client receives an `Internal Server Error`, the `ValidationError` is only output in the error log.
```
File "/home/vscode/.local/lib/python3.9/site-packages/ormar/models/newbasemodel.py", line 143, in __init__
raise validation_error
pydantic.error_wrappers.ValidationError: 1 validation error for ModelExample
__root__
enum_field: 'D' not in allowed choices set: ['A', 'B', 'C'] (type=value_error)
```
```
File "/home/vscode/.local/lib/python3.9/site-packages/ormar/models/newbasemodel.py", line 143, in __init__
raise validation_error
pydantic.error_wrappers.ValidationError: 1 validation error for ModelExample
str_field
must contain a space (type=value_error)
```
**Expected result:**
Client receives the `ValidationError`.
**Note:**
Everything goes as expected with the original model:
```py
@app.post("/examples/", response_model=ModelExample)
async def create_example(example: ModelExample):
return await example.save()
```
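Until this is fixed, one hedged workaround (not from the ormar docs) is to catch the validation error inside the route and re-raise it as a client error. The pattern, stripped down to plain pydantic so it is self-contained:

```python
from pydantic import BaseModel, ValidationError

class DemoCreate(BaseModel):
    str_field: str  # required, so an empty payload fails validation

def create(payload: dict):
    try:
        return {"status": 201, "obj": DemoCreate(**payload)}
    except ValidationError as exc:
        # In the actual FastAPI route one could instead raise
        # HTTPException(status_code=422, detail=exc.errors()) here.
        return {"status": 422, "errors": exc.errors()}

result = create({})      # missing required field triggers validation
print(result["status"])  # 422
```

Applied to the example above, the `ModelExample(**example.dict())` call would sit inside the `try` block.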
**Versions:**
- `ormar` 0.10.20
- `pydantic` 1.8.2
- `fastapi` 0.68.2 | closed | 2021-10-06T10:04:33Z | 2021-10-15T08:55:47Z | https://github.com/collerek/ormar/issues/365 | [
"bug"
] | derzinn | 10 |
ultralytics/ultralytics | deep-learning | 18,687 | YOLOv8 detection head intuitive feature specialization (e.g., small/medium/large object focus) | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I have repeatedly read and observed that in the case of YOLOv3, the detection heads focus on small medium and large object detection respectively. I don't believe (and have not observed) this to be true for YOLOv8, and I am wondering if there is any sort of equivalent or analogous intuitive semantic feature specialization for its detection heads.
For example, the following depicts the input image with the bounding box whose features correspond to the first, second, and third head respectively for YOLOv3.
<img width="380" alt="Image" src="https://github.com/user-attachments/assets/e051c349-a472-45fe-b144-80670e6bac0b" />
It's clear that the first/second/third head correspond to small/medium/large objects. It is not the case for YOLOv8:
<img width="378" alt="Image" src="https://github.com/user-attachments/assets/f9d48ed4-d559-41e0-a21f-6f237d607ac4" />
I am working with extracted activation maps from the YOLOv8 detection heads and it would be helpful if there was a sort of intuitive grouping between them as there is in YOLOv3, just wondering if such a grouping exists (even if it is not small/medium/large objects as it is in YOLOv3).
Further, what mechanism in the YOLOv3 architecture is responsible for this explicit specialization?
### Additional
_No response_ | open | 2025-01-14T19:59:12Z | 2025-01-15T18:15:15Z | https://github.com/ultralytics/ultralytics/issues/18687 | [
"question",
"detect"
] | leethologica | 4 |
CPJKU/madmom | numpy | 283 | TransitionModel returns wrong number of states if state is unreachable | In this example, the last state is not reachable:
```python
>>> A = np.array([[.5, .5, 0.], [.5, .5, 0.], [.5, .5, 0.]])
>>> frm, to = A.nonzero()
>>> tm = TransitionModel.from_dense(to, frm, A[frm, to])
>>> print tm.num_states
2
```
Expected output would be '3'.
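A sketch of the `states.max() + 1` suggestion (an assumption on my part, not yet in madmom): derive the state count from the indices that actually appear, so a state with no incoming transitions is still counted.

```python
import numpy as np

A = np.array([[.5, .5, 0.], [.5, .5, 0.], [.5, .5, 0.]])
frm, to = A.nonzero()
# State 2 appears only in `frm` (no incoming transitions), but still counts.
num_states = max(frm.max(), to.max()) + 1
print(num_states)  # 3
```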
This is because `num_states` in `TransitionModel` relies on the length of `self.pointers`. A possible solution might be to use `states.max() + 1`. | closed | 2017-05-16T06:38:40Z | 2017-05-17T08:57:13Z | https://github.com/CPJKU/madmom/issues/283 | [] | fdlm | 1 |
Evil0ctal/Douyin_TikTok_Download_API | api | 321 | It seems nothing can be parsed anymore | It keeps showing:
ServerChan has received the link you entered! (◍•ᴗ•◍)
Working hard on it, please wait a moment... | closed | 2024-02-05T03:08:38Z | 2024-03-25T22:30:46Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/321 | [] | shabbyme | 5 |
InstaPy/InstaPy | automation | 6,101 | Error when I try to follow people. | Code:
```python
UserHashTag = 'user'
Session.follow_likers(UserHashTag, photos_grab_amount=1, follow_likers_per_photo=20, randomize=True, sleep_delay=10, interact=False)
```

Error:

```
Error occured while retrieving data.
b'Message: The element reference of <main class="SCxLW uzKWK CsONw"> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed\n'
```
Not sure why this is happening; I would appreciate any help! | closed | 2021-03-02T00:59:47Z | 2021-07-21T04:18:48Z | https://github.com/InstaPy/InstaPy/issues/6101 | [
"wontfix"
] | 123automator | 2 |
statsmodels/statsmodels | data-science | 8,922 | How does lowess handle larger gaps in data? | I have been using the lowess smoother to calculate trends for time series data for a while now but until now my data was always without gaps.
I now have to work with data that has quite large gaps in time, and from reading the documentation and the actual implementation of the lowess smoother I couldn't really work out how it handles missing data. My data is sampled every minute, with gaps on the order of 2 hours, i.e. ~120 samples.
The produced trend is obviously not correct and the console output confirms that this somehow causes a problem since I get either a `RuntimeWarning: divide by zero encountered in divide` or `RuntimeWarning: invalid value encountered in divide`.
Any frac value over 0.03 results in a trend that reaches 1e29 while the data is closer to 0.4. Does anyone know how exactly gaps are handled or if I have to divide my data into chunks without gaps?
EDIT: Nevermind, I just found out there was something wrong on my end. | closed | 2023-06-21T06:13:33Z | 2023-10-27T09:57:24Z | https://github.com/statsmodels/statsmodels/issues/8922 | [] | arianmustafa | 0 |
kochlisGit/ProphitBet-Soccer-Bets-Predictor | seaborn | 6 | ERROR LEAGUE | I have a problem when I run the file with Visual Studio Code: the ProphitBet application opens correctly, but when I try to create a league from the league menu, a blank window opens. When I go back to Visual Studio Code, a file error about the league appears:
PS C:\Users\Liamine> & C:/Users/Liamine/AppData/Local/Microsoft/WindowsApps/python3.9.exe c:/Users/Liamine/Downloads/ProphitBet-Soccer-Bets-Predictor-main/main.py
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1892, in __call__
return self.func(*args)
File "c:\Users\Liamine\Downloads\ProphitBet-Soccer-Bets-Predictor-main\gui\main\application.py", line 159, in _create_league
self._open_league_name, self._open_league, self._matches_df = CreateLeagueDialog(
File "c:\Users\Liamine\Downloads\ProphitBet-Soccer-Bets-Predictor-main\gui\dialogs\league.py", line 16, in __init__
self._all_leagues = league_repository.get_all_available_leagues()
File "c:\Users\Liamine\Downloads\ProphitBet-Soccer-Bets-Predictor-main\database\repositories\league.py", line 20, in get_all_available_leagues
with open(file=self._available_leagues_filepath, mode='r', encoding='utf=8') as csvfile:
FileNotFoundError: [Errno 2] No such file or directory: 'database/storage/leagues/available_leagues.csv'
Can you help me?
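For what it's worth, the traceback shows `open()` being called with the relative path `database/storage/leagues/available_leagues.csv`, and relative paths are resolved against the current working directory, not the script's location. Launching `main.py` from `C:\Users\Liamine` therefore cannot find the repo's data files. A minimal illustration:

```python
from pathlib import Path

rel = Path("database/storage/leagues/available_leagues.csv")
# open(rel) resolves this against os.getcwd(), not the location of main.py.
print(rel.is_absolute())  # False
# Workarounds: `cd` into the repo folder before running main.py, or (a sketch)
# build the path from the script location, e.g.
#   Path(__file__).resolve().parent / "database/storage/leagues/available_leagues.csv"
```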
| open | 2023-01-29T00:49:11Z | 2023-03-01T11:24:08Z | https://github.com/kochlisGit/ProphitBet-Soccer-Bets-Predictor/issues/6 | [] | Papito8z | 19 |
tflearn/tflearn | data-science | 1,029 | tflearn\utils.py throws thread exception during fit() even though all data is in numpy array | Here is my data loading function:
```
def data_loader(image_folder_path, csv_path):
    (data, target) = load_csv(data_csv_path, target_column=-1, columns_to_ignore=[1], has_header=True)
    data = np.array(data)  # array of float values stored as str() types received
    data = data.astype(dtype=np.float32)  # convert all values to float
    target = np.array(target)
    target = target.astype(dtype=np.int)
    images = []
    for i in range(1, TOTAL_NO_OF_IMGS + 1):  # since images are named 1, 2, 3, ... etc.
        images_temp = cv2.imread(image_folder_path + str(i) + '.jpeg', cv2.IMREAD_GRAYSCALE)[IMG_CROP_HEIGHT - 1:IMG_MAX_HEIGHT]
        images += [images_temp]
        cv2.imshow("Loaded_images", images[i - 1])  # just to verify that images are loaded correctly
        cv2.waitKey(1)
    images = np.array(images)
    cv2.destroyAllWindows()
    print(len(images[0]))
    print(images[0])
    return (images, data, target)
```
you can see i have converted all the data to numpy array, yet the following error occurs:
```
Training samples: 234
Validation samples: 26
--
Exception in thread Thread-9:
Traceback (most recent call last):
File "F:\Anaconda\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "F:\Anaconda\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "F:\Anaconda\lib\site-packages\tflearn\data_flow.py", line 187, in fill_feed_dict_queue
data = self.retrieve_data(batch_ids)
File "F:\Anaconda\lib\site-packages\tflearn\data_flow.py", line 222, in retrieve_data
utils.slice_array(self.feed_dict[key], batch_ids)
File "F:\Anaconda\lib\site-packages\tflearn\utils.py", line 187, in slice_array
return X[start]
IndexError: index 250 is out of bounds for axis 0 with size 1
```
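The "size 1" on axis 0 in that error suggests one of the arrays handed to `fit()` (or its validation split) has a first dimension of 1 instead of the sample count, for example an array accidentally wrapped in an extra list. A quick hedged check, using stand-in shapes since the real ones aren't shown:

```python
import numpy as np

# All inputs passed to fit() should share the same first dimension (n_samples).
images = np.zeros((260, 64, 64), dtype=np.float32)
data = np.zeros((260, 5), dtype=np.float32)
target = np.zeros((260,), dtype=np.int64)

lengths = {name: len(arr) for name, arr in
           [("images", images), ("data", data), ("target", target)]}
print(lengths)  # {'images': 260, 'data': 260, 'target': 260}
assert len(set(lengths.values())) == 1, "mismatched sample counts"
```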
The value in the index error changes with every execution. I have used
```
import tensorflow as tf
tf.reset_default_graph()
```
at the beginning too, and just to be safe I'm restarting the IPython kernel before each execution.
I'll post my whole code if you want, but it's a bit lengthy, so I'm only giving the snippets.
What is happening here, and how can I solve it? | open | 2018-03-22T16:54:36Z | 2018-03-22T16:54:36Z | https://github.com/tflearn/tflearn/issues/1029 | [] | adarsh9975 | 0 |
custom-components/pyscript | jupyter | 423 | HA Blocked - Detected blocking call to sleep inside the event loop | Sometimes HA becomes unresponsive until I manually restart it, and I find many lines like the following in the log:
> Logger: homeassistant.util.async_
> Source: util/async_.py:180
> First occurred: 19:24:49 (241 occurrences)
> Last logged: 19:28:58
>
> Detected blocking call to sleep inside the event loop. This is causing stability issues. Please report issue to the custom integration author for pyscript doing blocking calls at custom_components/pyscript/eval.py, line 1906: return func(*args, **kwargs)
>
Is there something I can do to debug the issue? | closed | 2023-01-03T18:30:03Z | 2023-02-26T06:04:59Z | https://github.com/custom-components/pyscript/issues/423 | [] | marcoCasamento | 3 |
MycroftAI/mycroft-core | nlp | 2,730 | Failed to find intent. | I am running the latest stable version of Mycroft.
If I start with `debug`, it shows the error "Failed to find intent."
but if I start with the CLI it works fine. | closed | 2020-10-23T02:14:20Z | 2020-10-24T16:05:00Z | https://github.com/MycroftAI/mycroft-core/issues/2730 | [] | weathon | 5 |
waditu/tushare | pandas | 1,513 | The documentation for index constituents and weights is wrong | https://tushare.pro/document/2?doc_id=96
The `trade_date` input parameter in the official documentation returns no data; it has to be changed to `tradedate` to get results. | open | 2021-02-08T10:16:54Z | 2021-02-08T10:16:54Z | https://github.com/waditu/tushare/issues/1513 | [] | lzwcaptain | 0 |
kennethreitz/responder | flask | 71 | POST data in CBV? | Hi all!
I make a `POST` request with some data.
How can I get the POST data in a CBV?
```
@api.route("/test")
class GreetingResource:
def on_request(self, req, resp):
resp.text = "hello, world!"
resp.headers.update({'X-Life': '42'})
resp.status_code = api.status_codes.HTTP_416
```
As i see `req.content` and `req.media()` is coroutines. But i can't use `async` here, because in `responder/api.py` we have
```
try:
getattr(view, "on_request")(req, resp)
except AttributeError:
pass
# Then on_get.
method = req.method.lower()
try:
getattr(view, f"on_{method}")(req, resp)
except AttributeError:
pass
```
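A minimal model of the mismatch, outside responder entirely (the names below are stand-ins, not responder's API): `req.media()` returns a coroutine, so a plain `def on_request` dispatched synchronously can never await it, while a coroutine view can.

```python
import asyncio

async def media():
    return {"name": "kabir"}  # stand-in for the parsed POST body

async def on_request():
    data = await media()  # only possible once the view itself is a coroutine
    return data

print(asyncio.run(on_request()))  # {'name': 'kabir'}
```

So one direction for the fix would be for the dispatcher to detect coroutine view methods and await them.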
Any suggestions?
Maybe add some options for `responder/api.py` with `async` for getting `POST` data? | closed | 2018-10-17T10:09:10Z | 2018-10-17T11:12:54Z | https://github.com/kennethreitz/responder/issues/71 | [] | Ranc58 | 3 |
coqui-ai/TTS | python | 2,494 | Reporting a vulnerability | Hello!
I hope you are doing well!
We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called **Private vulnerability reporting**, which enables security research to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.
Can you enable it, so that we can report it?
Thanks in advance!
PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository | closed | 2023-04-10T11:04:25Z | 2023-05-12T13:53:40Z | https://github.com/coqui-ai/TTS/issues/2494 | [
"wontfix"
] | igibek | 4 |
gevent/gevent | asyncio | 2,084 | gevent with c-ares resolver parses /etc/services on every request | * gevent version: tested on 24.2.1 and 24.11.1 from pypi
* Python version: cPython 3.12.7 compiled from source python.org
* Operating System: Debian bookworm and RHEL9
### Description:
When migrating our application container from Debian to RHEL9 we found a 2x latency regression on highly concurrent workloads (e.g. our replication max latency went from 20 sec to 40 sec).
After some profiling we found that the RHEL9 image was spending 10x the time on ares resolver `__getaddrinfo` calls. Strace showed every call to `getaddrinfo` was leading to a full read+parse of system files like /etc/services, which turns out to be +700kb on stock RHEL9 vs 70kb on Debian. Reducing the size of that file eliminated the large performance gap.
While we have been able to work around the performance issue I think its probably worth for gevent devs to take a look. Some notes:
1. I was not able to understand why every `getaddrinfo` call leads to reading those system files. c-ares (presumably) reads those files on init, from what I understand gevent only initializes the resolver once, here https://github.com/gevent/gevent/blob/24.11.1/src/gevent/resolver/cares.pyx#L406
2. All calls to gevent's patched `socket.create_connection` lead to c-ares `getaddrinfo` calls, even when using ipv4 addresses and port numbers instead of domain and/or service names. there might be room for an optimized path here.
3. Even though reducing the size of the system files improved the performance, there are some more gains left on the table by avoiding the extra syscalls. A quick review of c-ares resolver docs/code gave me the impression it should be the default (only read those files when they change). So not sure whats going on.
3.1 EDIT: Looking closer at the strace, I see it does an fstat on nsswitch.conf (probably to check last-modified), while /etc/services gets a full read, so this might actually be a gap in c-ares not remembering the last read of /etc/services. But then again, why does it have to read that file at all if I am giving a port number?
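Regarding note 2, a numeric fast path can be sketched with the standard resolver hints (this is an assumption about a possible optimization, not current gevent behavior): for numeric host and port literals, `AI_NUMERICHOST` and `AI_NUMERICSERV` tell the resolver to skip name and service lookups entirely, which would also avoid touching /etc/services.

```python
import socket

# No DNS and no /etc/services lookup is needed for numeric literals when
# these flags are set; the call fails instead of resolving if they are names.
infos = socket.getaddrinfo(
    "10.210.56.39", "8080",
    type=socket.SOCK_STREAM,
    flags=socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
)
print(infos[0][4])  # ('10.210.56.39', 8080)
```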
### What I've run:
```python
# under: strace python3
import gevent.monkey; gevent.monkey.patch_all() # noqa
import socket
socket.create_connection(('10.210.56.39', 8080))
# strace above is from this second call
socket.create_connection(('10.210.56.39', 8080))
```
see full reads of system files when doing the second socket call, seems like c-ares reinitializes on every call.
```
rt_sigaction(SIGWINCH, {sa_handler=0x7fc447fd5080, sa_mask=[], sa_flags=SA_RESTORER|SA_ONSTACK, sa_restorer=0x7fc4483fa6f0}, {sa_handler=0x7fc447fa7280, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fc4483fa6f0}, 8) = 0
newfstatat(AT_FDCWD, "/etc/nsswitch.conf", {st_mode=S_IFREG|0644, st_size=256, ...}, 0) = 0
openat(AT_FDCWD, "/etc/services", O_RDONLY|O_CLOEXEC) = 6
fstat(6, {st_mode=S_IFREG|0644, st_size=68, ...}) = 0
lseek(6, 0, SEEK_SET) = 0
read(6, "domain 53/tcp\ndomain 5"..., 4096) = 68
read(6, "", 4096) = 0
close(6) = 0
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 6
ioctl(6, FIONBIO, [1]) = 0
getsockopt(6, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
connect(6, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("10.210.56.39")}, 16) = -1 EINPROGRESS (Operation now in progress)
getpid() = 14914
epoll_ctl(3, EPOLL_CTL_ADD, 6, {events=EPOLLOUT, data={u32=6, u64=8589934598}}) = 0
epoll_wait(3, [{events=EPOLLOUT, data={u32=6, u64=8589934598}}], 64, 1500001070) = 1
getsockopt(6, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
connect(6, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("10.210.56.39")}, 16) = 0
```
Note: these samples below are from our app performing thousands of concurrent https requests to IPv4 addresses.
RHEL9: getaddrinfo dominates time spent on cpu

Debian: getaddrinfo impact has reduced ~10x, and code is now spending most of cpu time on tls, as I would expect.

| open | 2024-12-09T10:52:47Z | 2024-12-16T01:04:54Z | https://github.com/gevent/gevent/issues/2084 | [] | glic3rinu | 1 |
PokeAPI/pokeapi | graphql | 1,108 | Ability by effects | Hi, was just wondering if it's possible to get a list of abilities by effect, similar to https://bulbapedia.bulbagarden.net/wiki/Category:Abilities_by_effect | closed | 2024-06-16T15:33:05Z | 2024-06-18T02:50:37Z | https://github.com/PokeAPI/pokeapi/issues/1108 | [] | blevy115 | 3 |
coqui-ai/TTS | deep-learning | 2,842 | Special character like ö, ä, ü not spoken [Bug] | ### Describe the bug
The special characters are not converted correctly to spoken text.
from TTS.api import TTS
def read_file_to_string(file_path):
try:
with open(file_path, 'r', encoding='utf-8') as file:
content = file.read()
return content
except FileNotFoundError:
print("Datei nicht gefunden.")
return ""
except Exception as e:
print("Fehler beim Lesen der Datei:", e)
return ""
file_content = read_file_to_string("text.txt")
print(file_content)
api = TTS(model_name="tts_models/de/thorsten/tacotron2-DCA", gpu=False)
api.tts_to_file(file_content, file_path="output.wav", encoding='utf-8')
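One possible cause to rule out (an assumption on my part, not a confirmed TTS bug): an ASCII-only text-cleaning step before synthesis would silently drop umlaut characters, which matches "correct UTF-8 text in, umlauts missing in audio". A quick way to see what such a cleaner would do:

```python
text = "schön grün"
ascii_only = text.encode("ascii", errors="ignore").decode("ascii")
print(ascii_only)  # 'schn grn'  (ö and ü vanish entirely)
```

If that is the cause, the fix would be in the model's configured text cleaner or phonemizer rather than in the file reading, which already looks correct.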
The string file_content is in correct utf-8 format.
### To Reproduce
Run the code and check the output.wav.
### Expected behavior
Correct speaking with ö, ä, ü
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cpu",
"TTS": "0.14.3",
"numpy": "1.21.6"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "AMD64 Family 25 Model 97 Stepping 2, AuthenticAMD",
"python": "3.8.17",
"version": "10.0.22621"
}
}
```
### Additional context
_No response_ | closed | 2023-08-06T09:21:44Z | 2024-08-09T08:25:35Z | https://github.com/coqui-ai/TTS/issues/2842 | [
"bug"
] | frixos25 | 9 |
microsoft/nni | machine-learning | 5,690 | ConnectionClosedError: sent 1011 (unexpected error) keepalive ping timeout; no close frame received | **Describe the issue**: I am running multiple NNI experiments on my university's server at the same time (7 experiments, each using one GPU, for 7 days). Every experiment failed at about the same time with the same error. Any idea what might have caused this?
[2023-10-03 10:33:22] [31mERROR: Strategy failed to execute.[0m
[2023-10-03 10:35:40] [31mERROR: Failed to receive command. Retry in 0s[0m
Traceback (most recent call last):
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 959, in transfer_data
message = await self.read_message()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1029, in read_message
frame = await self.read_data_frame(max_size=self.max_size)
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1104, in read_data_frame
frame = await self.read_frame(max_size)
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 1161, in read_frame
frame = await Frame.read(
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/framing.py", line 68, in read
data = await reader(2)
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/streams.py", line 723, in readexactly
await self._wait_for_data('readexactly')
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/streams.py", line 517, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/channel.py", line 99, in _receive_command
command = conn.receive()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/connection.py", line 103, in receive
msg = _wait(self._ws.recv())
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/home/lmarreiros/miniconda3/lib/python3.9/concurrent/futures/_base.py", line 446, in result
return self.__get_result()
File "/home/lmarreiros/miniconda3/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 568, in recv
await self.ensure_open()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/protocol.py", line 944, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1011 (unexpected error) keepalive ping timeout; no close frame received
[2023-10-03 10:36:15] [32mStopping experiment, please wait...[0m
[2023-10-03 10:36:17] [31mERROR: Failed to receive command. Retry in 1s[0m
Traceback (most recent call last):
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/channel.py", line 98, in _receive_command
conn = self._ensure_conn()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/home/lmarreiros/miniconda3/lib/python3.9/concurrent/futures/_base.py", line 446, in result
return self.__get_result()
File "/home/lmarreiros/miniconda3/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/client.py", line 655, in __await_impl_timeout__
return await self.__await_impl__()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/client.py", line 659, in __await_impl__
_transport, _protocol = await self._create_connection()
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/base_events.py", line 1026, in create_connection
infos = await self._ensure_resolved(
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/base_events.py", line 1405, in _ensure_resolved
return await loop.getaddrinfo(host, port, family=family, type=type,
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/base_events.py", line 861, in getaddrinfo
return await self.run_in_executor(
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/base_events.py", line 819, in run_in_executor
executor.submit(func, *args), loop=self)
File "/home/lmarreiros/miniconda3/lib/python3.9/concurrent/futures/thread.py", line 169, in submit
raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
[2023-10-03 10:36:49] [31mERROR: Failed to receive command. Retry in 2s[0m
Traceback (most recent call last):
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/channel.py", line 98, in _receive_command
conn = self._ensure_conn()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/home/lmarreiros/miniconda3/lib/python3.9/concurrent/futures/_base.py", line 446, in result
return self.__get_result()
File "/home/lmarreiros/miniconda3/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/client.py", line 655, in __await_impl_timeout__
return await self.__await_impl__()
File "/home/lmarreiros/.cache/pypoetry/virtualenvs/omnia-local-AEBrPPsi-py3.9/lib/python3.9/site-packages/websockets/legacy/client.py", line 659, in __await_impl__
_transport, _protocol = await self._create_connection()
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/base_events.py", line 1026, in create_connection
infos = await self._ensure_resolved(
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/base_events.py", line 1405, in _ensure_resolved
return await loop.getaddrinfo(host, port, family=family, type=type,
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/base_events.py", line 861, in getaddrinfo
return await self.run_in_executor(
File "/home/lmarreiros/miniconda3/lib/python3.9/asyncio/base_events.py", line 819, in run_in_executor
executor.submit(func, *args), loop=self)
File "/home/lmarreiros/miniconda3/lib/python3.9/concurrent/futures/thread.py", line 169, in submit
raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
[2023-10-03 10:36:56] Checkpoint saved to /home/lmarreiros/omnia-nas/omnia/examples/drug_synergy/nni/expr_dgi_drugs_ECFP4/MultiInputModel/3n9pl067/checkpoint.
[2023-10-03 10:37:00] ERROR: Failed to receive command. Retry in 3s
[2023-10-03 10:37:13] ERROR: Failed to receive command. Retry in 4s
[2023-10-03 10:37:25] WARNING: Failed to receive command. Last retry
[2023-10-03 10:37:40] Experiment stopped
**Environment**:
- NNI version: 3.0rc1
- Training service (local|remote|pai|aml|etc): local
- Client OS:
- Server OS (for remote mode only): CentOS Stream 9
- Python version: 3.9.13
- PyTorch/TensorFlow version: 1.13.0
- Is conda/virtualenv/venv used?: pypoetry
- Is running in Docker?: no
**Log message**:
- nnimanager.log: [nnimanager.log](https://github.com/microsoft/nni/files/12791949/nnimanager.log)
- dispatcher.log: [experiment.log](https://github.com/microsoft/nni/files/12791950/experiment.log)
| open | 2023-10-03T11:22:35Z | 2023-10-03T11:22:35Z | https://github.com/microsoft/nni/issues/5690 | [] | sw33zy | 0 |
ydataai/ydata-profiling | data-science | 1,018 | issue with visions application in pandas_profiling __version__ = "2.6.0" in Python 3.9.7 | ### Current Behaviour
I have uninstalled visions 0.7.5 and installed 0.7.4 for pandas_profiling, but even then I get the same error:
it fails with `ModuleNotFoundError: No module named 'visions.application'` when run from Jupyter.
I have Python 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32,
and pandas_profiling __version__ = "2.6.0".
### Expected Behaviour
NA
### Data Description
NA
### Code that reproduces the bug
```Python
import pandas as pd # pip install pandas openpyxl
from pandas_profiling import ProfileReport # pip install pandas-profiling
# Read CSV File
# importing the data
df=pd.read_csv(r'C:\Users\myname\Downloads\test.csv')
# Create Pandas Profiling Report
profile = ProfileReport(df, title="Pandas Profiling Report")
#profile.to_file('test.html')
```
### pandas-profiling version
__version__ = "2.6.0"
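
For what it's worth, since the error shows that pandas-profiling 2.x imports `visions.application` at import time, a dependency-free probe (making no assumption about which packages are installed) can confirm whether that module layout is present:

```python
import importlib.util

def has_module(name: str) -> bool:
    """True if `name` is importable, without actually importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # the parent package itself is missing
        return False

# pandas-profiling 2.6.0 expects `visions.application`, which both visions
# versions tried in this report (0.7.4 and 0.7.5) appear to lack
print("visions.application importable:", has_module("visions.application"))
```

If this prints `False`, the installed visions release simply does not ship that module, independent of anything pandas-profiling does.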
### Dependencies
```Text
NA
```
### OS
WINDOWS10
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2022-08-05T18:25:21Z | 2022-08-24T00:34:47Z | https://github.com/ydataai/ydata-profiling/issues/1018 | [
"needs-triage"
] | bi2017dg | 1 |
huggingface/transformers | pytorch | 36,550 | size mismatch for lm_head when fintune QWEN2.5 | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.49.0
- Platform: Linux-6.6.0-72.0.0.64.oe2403.x86_64-x86_64-with-glibc2.38
- Python version: 3.10.16
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.3
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA L40
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I fine-tune Qwen2.5 using the following code:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from peft import LoraConfig
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
dataset = load_dataset("trl-lib/Capybara", split="train")
dataset = dataset.select(range(500))
MODEL_ID = 'Qwen/Qwen2.5-0.5B'
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
target_modules="all-linear",
modules_to_save=["lm_head", "embed_token"],
task_type="CAUSAL_LM",
)
args = SFTConfig(
output_dir="Qwen2.5-0.5B-SFT-Capybara", # directory to save and repository id
num_train_epochs=1, # number of training epochs
per_device_train_batch_size=4, # batch size per device during training
gradient_accumulation_steps=4, # number of steps before performing a backward/update pass
gradient_checkpointing=True, # use gradient checkpointing to save memory
optim="adamw_torch_fused", # use fused adamw optimizer
logging_steps=10, # log every 10 steps
save_strategy="epoch", # save checkpoint every epoch
bf16=True, # use bfloat16 precision
tf32=True, # use tf32 precision
learning_rate=2e-4, # learning rate, based on QLoRA paper
max_grad_norm=0.3, # max gradient norm based on QLoRA paper
warmup_ratio=0.03, # warmup ratio based on QLoRA paper
lr_scheduler_type="constant", # use constant learning rate scheduler
push_to_hub=False, # push model to hub
# report_to="tensorboard", # report metrics to tensorboard
)
trainer = SFTTrainer(
MODEL_ID,
train_dataset=dataset,
args=args,
peft_config=peft_config
)
trainer.train()
print('end')
```
and I use the following code for inference:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"
# peft_model_id = args.output_dir
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
peft_model_id,
device_map="auto",
torch_dtype=torch.float16
)
prompt = "3的5倍是多少"
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=200
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
print(1)
```
An error occurs when loading the model with AutoPeftModelForCausalLM:
```
Sliding Window Attention is enabled but not implemented for `sdpa`; unexpected results may be encountered.
Traceback (most recent call last):
File "/home/chenjq/.pycharm_helpers/pydev/pydevd.py", line 1500, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/chenjq/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/chenjq/pythonWork/nlp/test14.py", line 11, in <module>
model = AutoPeftModelForCausalLM.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/auto.py", line 130, in from_pretrained
return cls._target_peft_class.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 581, in from_pretrained
load_result = model.load_adapter(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 1239, in load_adapter
load_result = set_peft_model_state_dict(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 451, in set_peft_model_state_dict
load_result = model.load_state_dict(peft_model_state_dict, strict=False)
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
size mismatch for base_model.model.lm_head.modules_to_save.default.weight: copying a param with shape torch.Size([151936, 896]) from checkpoint, the shape in current model is torch.Size([151665, 896]).
Process finished with exit code 1
```
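
The traceback pinpoints a single tensor whose shapes differ: 151936 rows in the checkpoint versus 151665 in the freshly loaded model, i.e. the vocabulary/embedding size changed between training and inference (one common cause, not confirmed here, is token embeddings being resized on only one side). A dependency-free helper for spotting such keys when a `load_state_dict` error is less explicit:

```python
def shape_mismatches(checkpoint_shapes, model_shapes):
    """Keys present on both sides whose recorded tensor shapes disagree."""
    return {
        key: (checkpoint_shapes[key], model_shapes[key])
        for key in checkpoint_shapes.keys() & model_shapes.keys()
        if checkpoint_shapes[key] != model_shapes[key]
    }

# the report above boils down to exactly one such entry
diff = shape_mismatches(
    {"base_model.model.lm_head.modules_to_save.default.weight": (151936, 896)},
    {"base_model.model.lm_head.modules_to_save.default.weight": (151665, 896)},
)
print(diff)
```

With real models, the two maps would come from `{k: tuple(v.shape) for k, v in state_dict.items()}` on each side; the key name and shapes here are taken from the traceback.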
### Expected behavior
I expect the model to predict normally. | closed | 2025-03-05T03:54:51Z | 2025-03-10T02:50:17Z | https://github.com/huggingface/transformers/issues/36550 | [
"bug"
] | minmie | 8 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 96 | Douyin profile download does not support image galleries | Calling the API, I found that videos can be downloaded, but image galleries (图集) cannot. | closed | 2022-11-03T07:39:29Z | 2024-04-23T05:03:28Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/96 | [
"BUG",
"enhancement",
"help wanted"
] | liuliuzx | 7 |
lepture/authlib | flask | 444 | Confusing behavior with OAuth2Session and state not being checked | **Describe the bug**
In the [documentation](https://docs.authlib.org/en/latest/client/oauth2.html#fetch-token) for how to use the OAuth2Session client, it says that by supplying state when instantiating the object, the state will be checked when making the `fetch_token` request. In addition, the [docstring](https://github.com/lepture/authlib/blob/v1.0.0/authlib/oauth2/rfc6749/parameters.py#L131) for `parse_authorization_code_response` says that state is a required parameter when state is present in the client authorization request, but the [code](https://github.com/lepture/authlib/blob/v1.0.0/authlib/oauth2/rfc6749/parameters.py#L154) doesn't enforce that. Instead, it skips the check for state unless the user explicitly passes the state kwarg into the call to `fetch_token`. This leads to misleading behavior, where state is not actually checked.
**Error Stacks**
None
**To Reproduce**
We know there is a Flask OAuth client, and our example below doesn't use it, but uses Flask to create an easy, reproducible example. In our real app, we are using OAuth2Session client and not using Flask.
```python
import flask
import authlib.integrations.requests_client
app = flask.Flask(__name__)
@app.route('/')
def index():
client = _client()
    uri, _ = client.create_authorization_url(
        'https://github.com/login/oauth/authorize',
        # pass redirect_uri by keyword: the second positional argument of
        # create_authorization_url is `state`, not the redirect URI
        redirect_uri='<your server ip address>:8000/auth-github-authorized',
    )
return flask.redirect(uri)
@app.route('/auth-github-authorized')
def auth_github_authorized():
# FIXME: Supplying state here doesn't make a difference. It isn't checked.
client = _client(state='a totally made up state')
client.fetch_token(authorization_response=flask.request.url)
raise AssertionError('Should not have gotten here. State is invalid.')
def _client(state=None):
return authlib.integrations.requests_client.OAuth2Session(
'<your-github-oauth-key>',
'<your-github-oauth-secret>',
scope='user:email',
state=state,
token_endpoint='https://github.com/login/oauth/access_token',
)
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8000)
```
**Expected behavior**
authlib.oauth2.rfc6749.errors.MismatchingStateException should be raised.
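
To see why the exception never fires, here is a stdlib-only sketch of the parsing behavior described above (this is an illustration, not authlib's actual implementation): the state comparison only runs when the caller explicitly supplies `state`, so constructing the session with `state=...` but calling `fetch_token` without it never triggers the check.

```python
from urllib.parse import parse_qsl, urlparse

def parse_authorization_code_response(uri, state=None):
    # simplified: state is only validated when explicitly supplied
    params = dict(parse_qsl(urlparse(uri).query))
    if "code" not in params:
        raise ValueError("missing_code")
    if state is not None and params.get("state") != state:
        raise ValueError("mismatching_state")
    return params

# no `state=` passed: a forged state sails through (the reported behavior)
resp = parse_authorization_code_response("https://app/cb?code=abc&state=forged")
print(resp["state"])  # prints 'forged' with no exception raised
```

Under these versions, a workaround consistent with this behavior would be to pass the expected state into `fetch_token` explicitly as well.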
**Environment:**
- OS: Fedora 32
- Python Version: 3.7.2
- Authlib Version: 1.0.0
| closed | 2022-03-22T16:34:48Z | 2022-07-02T19:31:51Z | https://github.com/lepture/authlib/issues/444 | [
"bug"
] | rorour | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 544 | Please install ffmpeg or add the '--no_mp3_support' option to proceed without support for mp3 files. | I typed python .\demo_toolbox.py in the cmd.
After that, I get this error message:
"Arguments:
datasets_root: None
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
low_mem: False
seed: None
no_mp3_support: False
Librosa will be unable to open mp3 files if additional software is not installed.
Please install ffmpeg or add the '--no_mp3_support' option to proceed without support for mp3 files."
Please guide me!
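
Two ways forward, both hedged since I haven't tested this setup: install ffmpeg and make sure it is on PATH (librosa needs it to open mp3 files, per the message above), or start the toolbox with `python demo_toolbox.py --no_mp3_support` as the message suggests. A quick pre-flight check:

```python
import shutil

def mp3_supported() -> bool:
    # librosa (via audioread) typically delegates mp3 decoding to ffmpeg,
    # so ffmpeg must be discoverable on PATH
    return shutil.which("ffmpeg") is not None

if not mp3_supported():
    print("ffmpeg not found - install it, or run with --no_mp3_support")
```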
| closed | 2020-10-05T10:45:38Z | 2020-10-05T20:26:37Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/544 | [] | varungolusupudi | 2 |
graphql-python/graphene-django | graphql | 781 | Filtering not working correctly with 2.6.0 | Related to #750. Appreciate the fix for this problem!!
### Problem
When using filter_fields I get an error about using wrong types which started appearing in 2.4.0.
`Variable "userEmail" of type "String" used in position expecting type "ID".` The error does not occur with graphene-django 2.3.2
### Context
- using django-filter 2.2.0
- django 2.4.0
- graphene-django 2.6.0
**model.py**
```
class Membership(TimeStampedModel):
user = models.ForeignKey(User, on_delete=models.CASCADE)
tenant = models.ForeignKey(Tenant, on_delete=models.CASCADE)
class User(TimeStampedModel, AbstractBaseUser, PermissionsMixin):
email = EmailField(unique=True, verbose_name=_('email'))
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = []
objects = UserManager()
```
**Schema.py**
```
class MembershipNode(DjangoObjectType):
class Meta:
model = Membership
filter_fields = {
'id': ['exact'],
'user__email': ['exact'],
}
interfaces = (MembershipNodeInterface,)
```
**Query:**
```
QUERY_MEMBERSHIPS = '''
query memberships($tenant: String!, $userEmail: String) {
memberships(tenant: $tenant, user_Email: $userEmail) {
edges {
node {
id
isFitter
isMonitor
isAdmin
isStaff
}
}
}
}
'''
```
**Result:**
`Variable "userEmail" of type "String" used in position expecting type "ID".`
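
Until the regression is fixed, one stopgap (assuming the generated schema really does type the `user_Email` argument as `ID`, as the error indicates) is to declare the variable to match:

```graphql
query memberships($tenant: String!, $userEmail: ID) {
  memberships(tenant: $tenant, user_Email: $userEmail) {
    edges {
      node {
        id
      }
    }
  }
}
```

Pinning graphene-django back to 2.3.2, as noted above, also avoids the problem.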
### Solution
Should be related to #750. Might be a special case due to the `email` being the identifying field of the `User`
> I am confident it is related to this PR: https://github.com/graphql-python/graphene-django/pull/682/files . In graphene_django/filter/utils.py the way how to retrieve the Type of a field was changed.
Keep on rocking :)
| closed | 2019-09-23T01:22:42Z | 2019-11-28T19:28:41Z | https://github.com/graphql-python/graphene-django/issues/781 | [] | lassesteffen | 10 |
ultralytics/yolov5 | machine-learning | 13,404 | problem with int8 quantization of tensorrt for models trained with adam optimizer | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Export
### Bug
Hello
When I use the Adam optimizer to train a .pt model, then convert it to ONNX and then to a TensorRT engine, the output confidence is wrong during testing; but when I train the model with the SGD optimizer and perform the same steps, the engine's output confidence is normal. What could be the reason?
When using SGD, the int8 engine's normal output confidence is 0.92, but when using Adam the output confidence is only 0.14.
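
One plausible explanation (not verified for this model) is that the two optimizers land on weight/activation distributions with different dynamic ranges, and int8 calibration is sensitive to outliers: a single large value inflates the quantization scale and crushes the resolution of everything else. A dependency-free toy illustration of symmetric int8 quantization:

```python
def quantize_dequantize(values, num_bits=8):
    # symmetric per-tensor quantization: scale chosen from the max magnitude
    qmax = 2 ** (num_bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) * scale for v in values]

small = [0.01, 0.02, 0.03, 0.9]     # tight dynamic range
outlier = [0.01, 0.02, 0.03, 12.0]  # one outlier inflates the scale

err_small = abs(quantize_dequantize(small)[0] - 0.01)
err_outlier = abs(quantize_dequantize(outlier)[0] - 0.01)
print(err_small, err_outlier)  # the outlier makes the small value unrepresentable
```

Mitigations seen in practice include richer calibration data or per-channel quantization, but that goes beyond what this report shows.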
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | open | 2024-11-08T08:51:16Z | 2024-11-08T22:17:27Z | https://github.com/ultralytics/yolov5/issues/13404 | [
"bug",
"exports"
] | skynn1128 | 2 |
autogluon/autogluon | computer-vision | 3,924 | Request: Implement Feature Importance Explainability for Time-Series Module | ### Summary:
The AutoGluon time-series module has proven to be a powerful tool for forecasting tasks. However, one area that could significantly enhance its utility is the inclusion of feature importance explainability, both for features learned globally during training and for features included as covariates, akin to what is currently available in the AutoGluon tabular module. This feature would greatly aid in understanding model decisions, facilitating a more intuitive analysis and improvement of models by highlighting which features contribute most to predictions.
### Detail:
The tabular module in AutoGluon offers an insightful feature importance mechanism that helps users understand the impact of each feature on the model's predictions. This is not only crucial for model interpretation but also for improving model performance by focusing on the most influential features. Implementing a similar feature for the time-series module would provide users with a comprehensive tool for time-series forecasting that is not only powerful but also interpretable.
- Model Transparency: Provides clear insights into how and why predictions are made, increasing trust in the model.
- Feature Engineering: Identifies which features are most valuable, guiding users on where to focus their feature engineering efforts.
- Model Improvement: Helps in diagnosing model performance issues by highlighting features that are less important or potentially noisy.
### Suggested Implementation:
It would be extremely helpful for the time-series module to incorporate a feature importance mechanism. This could potentially leverage some modified version of existing frameworks like SHAP (SHapley Additive exPlanations) or permutation importance, similar to the approach used in the tabular module.
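
For reference, the permutation-importance idea mentioned here is simple enough to sketch without any dependencies (this is a generic illustration, not AutoGluon's API): shuffle one column, remeasure the metric, and call the drop that column's importance.

```python
import random

def permutation_importance(predict, X, y, metric, column, n_repeats=5, seed=0):
    """Average drop in `metric` after shuffling one feature column."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]
        values = [row[column] for row in shuffled]
        rng.shuffle(values)
        for row, v in zip(shuffled, values):
            row[column] = v
        drops.append(baseline - metric(y, [predict(row) for row in shuffled]))
    return sum(drops) / len(drops)

# toy "model" that only looks at feature 0
predict = lambda row: row[0]
neg_mse = lambda y_true, y_pred: -sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
X = [[float(i), float(i % 3)] for i in range(20)]
y = [row[0] for row in X]
print(permutation_importance(predict, X, y, neg_mse, column=0))  # large
print(permutation_importance(predict, X, y, neg_mse, column=1))  # ~0.0
```

For time series specifically, the shuffle would likely need to respect temporal blocks, which is one of the design questions this request raises.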
The addition of feature importance explainability to the AutoGluon time-series module would be a valuable enhancement, making the module not only a powerful forecasting tool but also an interpretable and transparent one. It would align with the growing need for explainable AI in critical applications and facilitate a deeper understanding and trust in AI-driven forecasting models.
Thank you for considering this feature request. I believe it would make a significant contribution to the AutoGluon toolkit and its user community. | closed | 2024-02-15T16:00:13Z | 2024-04-09T16:41:52Z | https://github.com/autogluon/autogluon/issues/3924 | [
"enhancement",
"module: timeseries"
] | kristinakupf | 3 |
cvat-ai/cvat | computer-vision | 8,712 | Notifications can make it hard to download exported annotations | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Export a task
2. Go to Requests tab
3. Try to download

If there are lots of notifications (e.g. from errors, or just from a number of exports), you either have to refresh the page or close all of the notifications.
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | open | 2024-11-15T14:32:23Z | 2025-02-06T08:11:57Z | https://github.com/cvat-ai/cvat/issues/8712 | [
"ui/ux"
] | zhiltsov-max | 9 |
microsoft/hummingbird | scikit-learn | 474 | ONNX and PyTorch model from RandomForestClassifier have different prediction results | When I try to convert a RandomForestClassifier model to both ONNX and PyTorch formats, the prediction results from the two converted models fail to match; here is the example code:
```python
import numpy as np
import onnxruntime as ort
import torch
from hummingbird.ml import convert
from onnxconverter_common import FloatTensorType
from onnxmltools import convert_sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
X_bc, y_bc = load_breast_cancer(return_X_y=True)
nrows = 15000
X_bc: np.ndarray = X_bc[0:nrows]
y_bc: np.ndarray = y_bc[0:nrows]
if __name__ == "__main__":
sklearn_model = RandomForestClassifier(n_estimators=10, max_depth=10)
sklearn_model.fit(X_bc, y_bc)
sample_input = torch.rand(100, X_bc.shape[1], dtype=torch.float32)
sklearn_model_predict = sklearn_model.predict(sample_input.numpy())
onnx_ml_model = convert_sklearn(
sklearn_model, initial_types=[("input", FloatTensorType([sample_input.shape[0], sample_input.shape[1]]))],
target_opset=11
)
session = ort.InferenceSession(onnx_ml_model.SerializeToString())
output_names = [session.get_outputs()[i].name for i in range(len(session.get_outputs()))]
inputs = {session.get_inputs()[0].name: sample_input.numpy()}
onnx_ml_model_pred = session.run(output_names, inputs)[0].flatten()
pt_model = convert(sklearn_model, "torch", X_bc)
pt_model_pred = pt_model.predict(sample_input)
np.testing.assert_allclose(onnx_ml_model_pred, pt_model_pred, rtol=1e-5, atol=0)
```
and here is the output
```
AssertionError:
Not equal to tolerance rtol=1e-05, atol=0
Mismatched elements: 9 / 100 (9%)
Max absolute difference: 1
Max relative difference: 1.
x: array([1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1,
1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,
1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,...
y: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,...
``` | closed | 2021-03-27T03:02:10Z | 2021-03-30T20:52:02Z | https://github.com/microsoft/hummingbird/issues/474 | [] | univerone | 8 |
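
Label disagreements like the 9% above are commonly explained (though not verified here) by samples whose class probability sits near the 0.5 decision boundary: the two backends accumulate floating-point sums in different orders, and a tiny probability difference flips the argmax. A toy illustration, unrelated to hummingbird's internals:

```python
# two backends that agree on class probabilities to within ~1e-6 can still
# disagree on the hard label whenever a probability sits near the 0.5 cut
p_onnx, p_torch = 0.5000004, 0.4999996

label_onnx = int(p_onnx >= 0.5)
label_torch = int(p_torch >= 0.5)

assert abs(p_onnx - p_torch) < 1e-5
print(label_onnx, label_torch)  # 1 0
```

Comparing probability outputs with a tolerance, rather than hard labels, usually distinguishes this benign case from a real conversion bug.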
JaidedAI/EasyOCR | machine-learning | 379 | HTTPError: HTTP Error 403: Forbidden | In [1]: import easyocr
...: reader = easyocr.Reader(['ch_sim','en'])
CUDA not available - defaulting to CPU. Note: This module is much faster with a GPU.
Downloading detection model, please wait
.....
HTTPError: HTTP Error 403: Forbidden | closed | 2021-02-22T02:31:59Z | 2022-03-02T09:24:33Z | https://github.com/JaidedAI/EasyOCR/issues/379 | [] | liuke0002 | 3 |
ultrafunkamsterdam/undetected-chromedriver | automation | 972 | driver.quit() and/or driver.close() causing urllib3 and logging Warnings/Errors | Don't know how, but if I use either `driver.close()` or `driver.quit()`, it causes urllib3 WARNINGS and ALL my logging goes to the terminal (stdout) instead of going only to the log file.
My code is something like this:
```
# (selenium/webdriver_manager imports elided in the original report)
import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("TEST")
logger.setLevel(logging.INFO)
handler = TimedRotatingFileHandler("TEST.log", when="midnight", interval=1, encoding='utf-8')
handler.suffix = "%Y-%m-%d"
logger.addHandler(handler)
def run_test():
logger.info("START")
try:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument('--disable-application-cache')
chrome_options.add_argument("--disable-setuid-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument("--start-maximized")
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)
driver.delete_all_cookies()
driver.get("https://anysite")
except Exception as e:
print("Exception", e)
finally:
try:
driver.close()
except Exception as e:
print("close", e)
try:
driver.quit()
except Exception as e:
print("quit", e)
# more code
logger.info("FINISH")
def start():
run_test()
logger.info("FINISH START")
```
It only happens when I close/quit the driver. Otherwise, the logging keeps writing to the file I configured.
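
For what it's worth, this symptom usually means something (often a dependency, e.g. during driver shutdown) has attached a handler to the *root* logger, to which named loggers propagate by default. The standard mitigation, sketched here with in-memory streams instead of files, is to set `propagate = False` on the named logger:

```python
import io
import logging

root_stream = io.StringIO()
logging.basicConfig(stream=root_stream, level=logging.INFO)  # what a library side effect can do

file_stream = io.StringIO()  # stands in for the TimedRotatingFileHandler
logger = logging.getLogger("TEST")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(file_stream))
logger.propagate = False  # stop records from also reaching the root handler

logger.info("only in the file handler")
print("root saw it:", "only in the file handler" in root_stream.getvalue())  # root saw it: False
```

Whether undetected-chromedriver itself configures the root logger is an assumption worth verifying in your environment.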
Some logs:
```
WARNING:urllib3.connectionpool:Connection pool is full, discarding connection: localhost
WARNING:urllib3.connectionpool:Connection pool is full, discarding connection: localhost
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000123FE4B5188>: Failed to establish a new connection: [WinError 10061] Nenhuma conexão pôde ser feita porque a máquina de destino as recusou ativamente')': /session/26ce881f83f3183033dde36a660d9261/se/log
WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000123FE497788>: Failed to establish a new connection: [WinError 10061] Nenhuma conexão pôde ser feita porque a máquina de destino as recusou ativamente')': /session/26ce881f83f3183033dde36a660d9261/se/log
Then my info logs goes wrongly to the terminal:
INFO:TEST:[Thread-13][04/01/2023 20:10:36.332082] - Some Info
INFO:TEST:[Thread-13][04/01/2023 20:10:36.336100] - Some Info
INFO:TEST:[Thread-13][04/01/2023 20:10:36.336100] - Some Info
INFO:TEST:[Thread-13][04/01/2023 20:10:36.336100] - Some Info
```
I don't know if it is the urllib3 WARNING that's causing logging.info to go to the terminal/stdout, or something else. All I know is that if I use quit/close, this logging-to-stdout problem happens. | closed | 2023-01-04T23:32:45Z | 2023-01-22T05:32:08Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/972 | [] | ggnetoo | 5 |
ipython/ipython | jupyter | 14,516 | Tab completion on path with space not working MacOS | I would like to autocomplete a path that has a space in one of the sub-directories. The issue is that after it reaches the directory with the space, tab completion doesn't work anymore.
I just migrated from Ubuntu with an older version of Python (3.7) and IPython; I am pretty sure I did not encounter this issue before.
It would be very helpful to get this working; is there any workaround for this?
```sh
darren@Darrens-MacBook-Pro ~ % ipython --version
8.26.0
darren@Darrens-MacBook-Pro ~ % python --version
Python 3.12.4
```
https://github.com/user-attachments/assets/3504e0a1-9583-4eca-8cc8-efd32d0cea77
| open | 2024-09-13T02:13:09Z | 2025-02-13T20:31:58Z | https://github.com/ipython/ipython/issues/14516 | [
"bug",
"tab-completion"
] | darrencl | 7 |
thtrieu/darkflow | tensorflow | 825 | ImportError: /content/darkflow/darkflow/cython_utils/cy_yolo_findboxes.cpython-36m-x86_64-linux-gnu.so: undefined symbol: PyFPE_jbuf | Running on colab gave me this bug
ImportError: /content/darkflow/darkflow/cython_utils/cy_yolo_findboxes.cpython-36m-x86_64-linux-gnu.so: undefined symbol: PyFPE_jbuf
though it didn't happen on my local machine | closed | 2018-06-28T10:53:07Z | 2018-07-17T17:35:55Z | https://github.com/thtrieu/darkflow/issues/825 | [] | jibinmathew69 | 2 |
deepspeedai/DeepSpeed | pytorch | 6,720 | [BUG] RuntimeError: CUDA error: no kernel image is available for execution on the device | Hi,
I ran an example code:
```
import os
import deepspeed
import torch
from transformers import pipeline
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B',
device=local_rank)
generator.model = deepspeed.init_inference(generator.model,
tensor_parallel={"tp_size": world_size},
dtype=torch.float,
replace_with_kernel_inject=True)
string = generator("DeepSpeed is", do_sample=True, min_length=50)
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
print(string)
```
And found this error:
```
Traceback (most recent call last):
File "/home/aisg/peerat/imp/test.py", line 13, in <module>
generator.model = deepspeed.init_inference(generator.model,
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/__init__.py", line 364, in init_inference
engine = InferenceEngine(model, config=ds_inference_config)
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 156, in __init__
self._apply_injection_policy(config)
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 413, in _apply_injection_policy
replace_transformer_layer(client_module, self.module, checkpoint, config, self.config)
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 393, in replace_transformer_layer
replaced_module = replace_module(model=model,
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 642, in replace_module
replaced_module, _ = _replace_module(model, policy, state_dict=sd)
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 702, in _replace_module
_, layer_id = _replace_module(child,
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 702, in _replace_module
_, layer_id = _replace_module(child,
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 678, in _replace_module
replaced_module = policies[child.__class__][0](child,
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 321, in replace_fn
new_module = replace_with_policy(child,
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 234, in replace_with_policy
_container.initialize_tensors()
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/containers/features/meta_tensor.py", line 26, in initialize_tensors
super().initialize_tensors(enable_training=enable_training)
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/containers/features/hybrid_engine.py", line 30, in initialize_tensors
super().initialize_tensors(enable_training=enable_training)
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/containers/base.py", line 142, in initialize_tensors
self.set_attention(*self.policy.attention(enable_training=enable_training))
File "/shared/miniconda3/envs/peerat_mllm/lib/python3.10/site-packages/deepspeed/module_inject/containers/gptneo.py", line 128, in attention
qkvw = Parameter(torch.cat((qw, kw, vw), dim=0), requires_grad=enable_training)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
Here is my pip list
torch 2.0.1
deepspeed 0.15.3
transformers 4.38.0
CUDA 12.2
Python 3.10.15
GPU 8 of A100 (80 GB)
OS Ubuntu 22.04.5 LTS
I tried re-installing deepspeed with `DS_BUILD_FUSED_ADAM=1 pip install deepspeed`, but I still get the same error.
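For context, "no kernel image is available" usually means the installed PyTorch/DeepSpeed binaries were not compiled for this GPU's compute capability. A minimal sketch of that check (the `compatible` helper is hypothetical; the guarded part only runs with a CUDA-enabled torch install):

```python
def compatible(arch_list, capability):
    """True if a compiled arch tag (e.g. 'sm_80') matches the device capability."""
    tag = "sm_%d%d" % (capability[0], capability[1])
    return any(tag in arch for arch in arch_list)

try:
    import torch
    if torch.cuda.is_available():
        caps = torch.cuda.get_device_capability(0)  # e.g. (8, 0) for A100
        archs = torch.cuda.get_arch_list()          # arch tags the torch build ships
        print(caps, archs, compatible(archs, caps))
except Exception:
    pass  # torch not installed / not usable here; the helper above still shows the idea
```

If the device capability is missing from the arch list, a torch or DeepSpeed build compiled for that architecture would be needed.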
Any suggestion?
Thank you. | closed | 2024-11-06T13:40:12Z | 2024-11-09T04:33:05Z | https://github.com/deepspeedai/DeepSpeed/issues/6720 | [
"bug",
"inference"
] | mrpeerat | 4 |
plotly/dash-bio | dash | 66 | Volcano two different data set has a strange behavior | I created a self contained demo of the reduced problem at https://dash-gallery.plotly.host/dash-volcano-bug-app/ (the repo is at https://dash-gallery.plotly.host/GIT/dash-volcano-bug-app).
When you use the dropdown to switch between Set2 and Set3 (which are the same), it is fine. The problem arises when you select Set1 and then again Set2 or Set3: a whole bunch of data points are not rendered, but you can see that they are there with the hover info... you can get the whole dataset to be displayed properly by setting the Threshold to 7 and then back to its initial value of 4, and then everything works properly until one selects Set1 and then Set2 again...
I have checked that the data sets that were sent to the `figure` prop of the `dcc.Graph` were the one I expected and they were. For some reason they do not get rendered. | closed | 2018-12-02T14:43:27Z | 2021-05-04T20:27:45Z | https://github.com/plotly/dash-bio/issues/66 | [
"bug",
"App QA"
] | Bachibouzouk | 7 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 735 | Better support for enum | I'd like to be able to use an sqlachemy Enum column, and have the name stored to DB while the value is shown in the UI. I've tried the naïve approach:
```
class PageType(enum.Enum):
html = 'HTML page'
raw = 'Raw text'
class Page(db.Model):
[...]
page_type = Column(db.Enum(PageType, name='page_type'))
class PageAdmin(sqla.ModelView):
[...]
admin.add_view(PageAdmin(Page, db.session))
```
This works, but shows "Page Type" with the options {html,raw} instead of what I'd like, {HTML page,Raw text}
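The split being asked for, name in the DB and value in the UI, can be sketched independent of flask-admin (plain Python; `to_db`, `from_db`, and `display` are hypothetical helper names, not library API):

```python
import enum

class PageType(enum.Enum):
    html = 'HTML page'
    raw = 'Raw text'

def to_db(member):
    """What should be persisted: the stable enum name."""
    return member.name

def from_db(name):
    """Rebuild the enum member from the stored name."""
    return PageType[name]

def display(member):
    """What the admin UI should render: the human-readable value."""
    return member.value

# round trip: stored as 'html', shown as 'HTML page'
assert display(from_db(to_db(PageType.html))) == 'HTML page'
```

Wiring this into SQLAlchemy would mean doing the same mapping in a custom column type; the sketch only pins down the intended round trip.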
Doing e.g. `page_type = Column(db.Enum(*[e.value for e in PageType], name='page_type'))` works, but will save the full value in the db instead of the enum name, and looks ugly. | closed | 2019-05-11T18:58:12Z | 2020-12-05T20:37:08Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/735 | [] | xim | 2 |
piskvorky/gensim | machine-learning | 3,217 | Get travis-ci.com working with this repo | @piskvorky Could you please go through the steps described in the tutorial below? Only the project owner can do it, unfortunately.
https://docs.travis-ci.com/user/tutorial/#to-get-started-with-travis-ci-using-github
We need TravisCI to build for certain platforms that github actions does not support yet (e.g. aarm64). | closed | 2021-08-18T12:20:44Z | 2021-08-19T03:34:07Z | https://github.com/piskvorky/gensim/issues/3217 | [
"housekeeping"
] | mpenkov | 5 |
d2l-ai/d2l-en | data-science | 2,546 | Notebooks are not working on Colab | Trying to run the very first cell (in any notebook):
`!pip install d2l==1.0.0-beta0`
I get the following error:
```
Collecting d2l==1.0.0-beta0
Using cached d2l-1.0.0b0-py3-none-any.whl (141 kB)
Collecting jupyter (from d2l==1.0.0-beta0)
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from d2l==1.0.0-beta0) (1.23.5)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from d2l==1.0.0-beta0) (3.7.1)
Requirement already satisfied: matplotlib-inline in /usr/local/lib/python3.10/dist-packages (from d2l==1.0.0-beta0) (0.1.6)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from d2l==1.0.0-beta0) (2.31.0)
Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from d2l==1.0.0-beta0) (1.5.3)
Collecting gym==0.21.0 (from d2l==1.0.0-beta0)
Using cached gym-0.21.0.tar.gz (1.5 MB)
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (setup.py) ... error
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
It's not possible to run notebooks right now. | closed | 2023-08-16T15:41:05Z | 2023-08-28T08:32:14Z | https://github.com/d2l-ai/d2l-en/issues/2546 | [] | lithuak | 3 |
apache/airflow | automation | 47,941 | rendered_task_instance_fields stores op_args as a string in Airflow 3 instead of a list as in Airflow 2 | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Let's say have the below task
```
@task
def pusher1(dict1):
return dict1
t1 = pusher1(["hello_world", '{{ macros.uuid.UUID("01234567891011121314151617181920") }}'])
```
Now in AF3 it's stored as a string:

and in AF2 it was a list:

### What you think should happen instead?
_No response_
### How to reproduce
Use the below task and check the table:
```
@task
def pusher1(dict1):
return dict1
t1 = pusher1(["hello_world", '{{ macros.uuid.UUID("01234567891011121314151617181920") }}'])
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-19T04:12:37Z | 2025-03-19T05:51:58Z | https://github.com/apache/airflow/issues/47941 | [
"kind:bug",
"priority:medium",
"area:core",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 2 |
sunscrapers/djoser | rest-api | 602 | Inactive_account message not used in TokenCreateSerializer | "inactive_account" is set to settings.CONSTANTS.messages.INACTIVE_ACCOUNT_ERROR but never used in `TokenCreateSerializer`, so if the user can't log in because his account has not been activated he'll get the `invalid_credentials` error.
That's the `validate` method from TokenCreateSerializer
```
def validate(self, attrs):
    password = attrs.get("password")
    params = {settings.LOGIN_FIELD: attrs.get(settings.LOGIN_FIELD)}
    self.user = authenticate(**params, password=password)
    if not self.user:
        self.user = User.objects.filter(**params).first()
        if self.user and not self.user.check_password(password):
            self.fail("invalid_credentials")
    if self.user and self.user.is_active:
        return attrs
    self.fail("invalid_credentials")
``` | open | 2021-03-14T19:10:17Z | 2021-04-07T09:52:27Z | https://github.com/sunscrapers/djoser/issues/602 | [] | Frohus | 2 |
roboflow/supervision | tensorflow | 1,367 | Add tracking for KeyPoints | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
Right now there is no way to track objects that have keypoints associated with them, because the tracker does not have a way to track keypoints. This feature would be able to track objects that keypoints are associated with even if there are multiple options. Ideally it would simply use the existing ByteTrack module to track the objects' bounding boxes and then keep the keypoints associated with that tracked object. Note this is different than tracking each individual keypoint, which would require an entirely different tracker.
### Use case
This is important for many different applications where tracking keypoints through a video can provide some important information. For example, in sports science, if two players are playing basketball and you want to analyze their movement, you would need to track the keypoints of the two players separately.
### Additional
I see several ways this could be implemented.
**Option 1: Add keypoints to Detections and change all the `from_model()` functions**
Add keypoints as another possible attribute to the `Detections` object, similar to the `mask` attribute. This would most likely involve adding keypoints to `Detections` and adding `from_mediapipe()` to the `Detections` class and modifying all of the other `from_model()` functions to support KeyPoints. Then the tracker could be used as normal on these detections objects.
**Option 2: Add keypoints to Detections after a detections object has been created**
The same as option 1, but instead of modifying all of the `from_model()` functions, make it so that the keypoints attribute is None unless the keypoints object were added to the existing Detections object. This would require the indices of the keypoints to exactly match the associated detection boxes. This could work with models that don't output bounding boxes by creating the boxes from the keypoints. Then the tracker could be used as normal.
**Option 3: Add bounding boxes and object confidence scores to the `KeyPoints` class**
We could add bounding boxes and object confidence scores to the `KeyPoints` class in the same way as `Detections`. For the ultralytics pose models this would be easy as they are included as outputs. For the other models this could be implemented by creating a bounding box from the keypoints of each object, and confidence scores as an average of the keypoints confidence values. Then the `KeyPoints` object could simply be sent into the object tracker. It would require a small amount of modification to the tracker, but would be relatively simple on the whole. It would be redundant to have `KeyPoints` and `Detections` have some of the same information.
**Option 4: Do this hacky thing**
I don't like this option because it is ugly and inefficient and is slightly confusing, but it works right now without any changes.
```
import numpy as np
import supervision as sv

# assumes `model` (an Ultralytics pose model), `frame`, and `byte_tracker` are already defined
results = model(frame, imgsz=1280, verbose=False)[0]
pre_track_detections = sv.Detections.from_ultralytics(results)
keypoints = sv.KeyPoints.from_ultralytics(results)
post_track_detections = byte_tracker.update_with_detections(pre_track_detections)
pre_track_bounding_boxes = pre_track_detections.xyxy
post_track_bounding_boxes = post_track_detections.xyxy
ious = sv.tracker.byte_tracker.matching.box_iou_batch(pre_track_bounding_boxes, post_track_bounding_boxes)
iou_costs = 1 - ious
matches, _, _ = sv.tracker.byte_tracker.matching.linear_assignment(iou_costs, 0.5)
post_track_keypoints = sv.KeyPoints.empty()
post_track_keypoints.xy = np.empty((len(post_track_detections), keypoints.xy.shape[1], 2), dtype=np.float32)
post_track_keypoints.class_id = np.empty((len(post_track_detections), keypoints.xy.shape[1]), dtype=np.float32)
post_track_keypoints.confidence = np.empty((len(post_track_detections), keypoints.xy.shape[1]), dtype=np.float32)
post_track_keypoints.data = keypoints.data
for i_detection, i_track in matches:
post_track_keypoints.xy[i_track] = keypoints.xy[i_detection]
post_track_keypoints.class_id[i_track] = keypoints.class_id[i_detection]
post_track_keypoints.confidence[i_track] = keypoints.confidence[i_detection]
```
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | open | 2024-07-16T20:07:44Z | 2024-11-06T20:03:49Z | https://github.com/roboflow/supervision/issues/1367 | [
"enhancement"
] | rolson24 | 5 |
DistrictDataLabs/yellowbrick | matplotlib | 666 | Add a UMAPVisualizer for text data | After seeing Rebecca speak at PyDataNY I promised her a text/UMAPVisualizer as a drop in replacement for the current text/TSNEVisualizer currently in Yellowbrick. | closed | 2018-12-07T15:34:26Z | 2018-12-28T22:08:23Z | https://github.com/DistrictDataLabs/yellowbrick/issues/666 | [
"type: feature"
] | jc-healy | 4 |
onnx/onnx | scikit-learn | 5,869 | Cannot install on windows 10 with pip - `test_data_set_0` folder is missing | # Bug Report
### Is the issue related to model conversion?
<!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. -->
### Describe the bug
<!-- Please describe the bug clearly and concisely -->
When trying to install it via `pip install onnx`, I get the following error:
```
ERROR: Could not install packages due to an OSError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\hrger\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python310\\site-packages\\onnx\\backend\\test\\data\\node\\test_averagepool_3d_dilations_large_count_include_pad_is_0_ceil_mode_is_False\\test_data_set_0'
```
upon `cd`ing to `C:\\Users\\hrger\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python310\\site-packages\\onnx\\backend\\test\\data\\node\\test_averagepool_3d_dilations_large_count_include_pad_is_0_ceil_mode_is_False`, it has only one file `model.onnx`:
```
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 19/01/2024 20:05 303 model.onnx
```
### System information
<!--
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*):
- ONNX version (*e.g. 1.13*):
- Python version:
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version:
- Visual Studio version (if applicable):-->
- OS Platform and Distribution: WIndows 10 Professional 22H2
- ONNX version: 1.15.0
- Python version: 3.10.11
### Reproduction instructions
<!--
- Describe the code to reproduce the behavior.
```
import onnx
model = onnx.load('model.onnx')
...
```
- Attach the ONNX model to the issue (where applicable)-->
`pip install onnx`
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
It should install successfully.
### Notes
<!-- Any additional information -->
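The failing path is hundreds of characters deep under site-packages, so this looks like the classic Windows MAX_PATH limit rather than a genuinely missing file. One commonly suggested workaround (an assumption, not verified for this report) is enabling Win32 long paths, e.g. via a `.reg` fragment, then reinstalling:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"LongPathsEnabled"=dword:00000001
```

After applying this (a reboot may be needed), `pip install onnx` would be retried; if the install still fails the same way, the long-path guess is wrong.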
| closed | 2024-01-19T20:11:58Z | 2024-01-25T14:20:05Z | https://github.com/onnx/onnx/issues/5869 | [
"bug"
] | Grsz | 3 |
scikit-learn/scikit-learn | data-science | 30,546 | ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (6, 33810) + inhomogeneous part. | Hello Scikit-learn team,
I am encountering an issue while running inference with a VotingClassifier model using the `voting="hard"` argument. I found that this issue may be related to [NEP 34](https://numpy.org/neps/nep-0034-infer-dtype-is-object.html)'s restriction of `dtype=object` in numpy, and the suggested solution is downgrading to numpy `1.23.1`. However, that doesn't work in my case due to dependency conflicts with pandas and other packages. I'd appreciate it if you could analyze this issue and provide an update when possible.
```
Traceback (most recent call last):
File "/home/mtoan65/Documents/Sentiment_Analysis/training.py", line 135, in <module>
ensemble_model, trained_models, model_results, ensemble_results = main(sparse=False)
^^^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/training.py", line 127, in main
trained_ensemble, ensemble_results = train_ensemble_model(
^^^^^^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/training.py", line 89, in train_ensemble_model
ensemble_results, trained_ensemble = train_and_evaluate_ensemble(voting_clf, X_train, X_test, y_train, y_test)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/training/ensemble_trainer.py", line 33, in train_and_evaluate_ensemble
y_pred_ensemble = voting_clf.predict(X_test)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/.venv/lib/python3.11/site-packages/sklearn/ensemble/_voting.py", line 443, in predict
predictions = self._predict(X)
^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/.venv/lib/python3.11/site-packages/sklearn/ensemble/_voting.py", line 80, in _predict
return np.asarray([est.predict(X) for est in self.estimators_]).T
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (6, 33810) + inhomogeneous part.
```
### Steps/Code to Reproduce
```
try:
main_logger.info("Training ensemble")
voting_clf.fit(X_train, y_train)
main_logger.info("Evaluating ensemble")
y_pred_ensemble = voting_clf.predict(X_test)
results = classification_report(y_test, y_pred_ensemble, output_dict=True)
main_logger.info(f"Ensemble Results:\n{classification_report(y_test, y_pred_ensemble)}")
return results, voting_clf
except Exception as e:
main_logger.error(f"Error in ensemble training: {str(e)}")
raise
```
### Expected Results
```Finish training```
### Actual Results
```
Traceback (most recent call last):
File "/home/mtoan65/Documents/Sentiment_Analysis/training.py", line 135, in <module>
ensemble_model, trained_models, model_results, ensemble_results = main(sparse=False)
^^^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/training.py", line 127, in main
trained_ensemble, ensemble_results = train_ensemble_model(
^^^^^^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/training.py", line 89, in train_ensemble_model
ensemble_results, trained_ensemble = train_and_evaluate_ensemble(voting_clf, X_train, X_test, y_train, y_test)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/training/ensemble_trainer.py", line 33, in train_and_evaluate_ensemble
y_pred_ensemble = voting_clf.predict(X_test)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/.venv/lib/python3.11/site-packages/sklearn/ensemble/_voting.py", line 443, in predict
predictions = self._predict(X)
^^^^^^^^^^^^^^^^
File "/home/mtoan65/Documents/Sentiment_Analysis/.venv/lib/python3.11/site-packages/sklearn/ensemble/_voting.py", line 80, in _predict
return np.asarray([est.predict(X) for est in self.estimators_]).T
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (6, 33810) + inhomogeneous part.
```
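For context on the failure: the line `np.asarray([est.predict(X) for est in self.estimators_])` can only stack per-estimator predictions if every estimator returns the same shape, one label per sample. A plain-Python sketch of what hard voting expects (hypothetical helper, not scikit-learn code):

```python
from collections import Counter

def hard_vote(per_estimator_labels):
    """Majority vote across estimators; each inner list is one estimator's
    predictions, and all of them must have the same length (one label per sample)."""
    n = len(per_estimator_labels[0])
    if any(len(p) != n for p in per_estimator_labels):
        raise ValueError("estimators returned differently shaped predictions")
    return [Counter(column).most_common(1)[0][0]
            for column in zip(*per_estimator_labels)]

print(hard_vote([[0, 1, 1], [0, 0, 1], [1, 1, 1]]))  # [0, 1, 1]
```

If one of the six estimators returns, say, per-class probabilities or a multilabel matrix instead of flat labels, exactly this kind of inhomogeneous-shape error appears.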
### Versions
```shell
1.5.2
```
| open | 2024-12-27T13:47:54Z | 2024-12-27T13:53:18Z | https://github.com/scikit-learn/scikit-learn/issues/30546 | [
"Bug",
"Needs Info"
] | mtoan65 | 1 |
apify/crawlee-python | web-scraping | 460 | Implement max crawl depth | - Implement "max crawl depth" / "crawling depth limit"
- See https://github.com/apify/crawlee-python/discussions/441
- The depth information should be stored in the `Request` (`user_data` -> `crawlee_data`) | closed | 2024-08-26T07:11:01Z | 2024-11-04T10:38:56Z | https://github.com/apify/crawlee-python/issues/460 | [
"enhancement",
"t-tooling",
"hacktoberfest"
] | vdusek | 2 |
Lightning-AI/pytorch-lightning | deep-learning | 20,464 | A gracefull design to introduce third-party models as tool for validation | ### Description & Motivation
python3.10.12 + pytorch_lightning 2.4.0
I need a gracefull design to introduce third-party pretrained models for use during the validation steps. so that there is no such Error reported:
```
RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step. If this is intentional, you must enable the detection of unused parameters in DDP, ....
```
### Pitch
I am training a model which need other third-party pretrained model during validation. example:
the third party model:
```
class PretrainedPicGen(torch.nn.Module):
    def __init__(self, pretrained_path):
        super().__init__()  # needed for torch.nn.Module subclasses
        self.backbone = load_checkpoint(pretrained_path)
def forward(self, to_validate):
return self.backbone(to_validate)
```
And the lightning project I am training:
```
class MyModel(pl.LightningModule):
def __init__(self, my_param, third_party_pretrained_path):
....
self.pretrained_pic_gen = PretrainedPicGen(third_party_pretrained_path)
        self.validation_outputs = []
....
def validation_step(self, batch, *args, **kwargs):
validation_output = self.sample(....)
self.validation_outputs.append({"vali_out": validation_output})
    def on_validation_epoch_end(self):  # Here we use the third-party model for post-processing the validation outputs
outputs = self.validation_outputs
for i, output in enumerate(outputs):
visible_output = self.pretrained_pic_gen(output)
self.logger.experiment.add_image(f"validate/{i}", visible_output, self.global_step)
```
and the config file yaml:
```
model:
class_path: myproject.MyModel
init_args:
my_param: 1234
third_party_pretrained_path: /path/to/third_party_pretrained
```
But when I run the training, the error mentioned before is reported:
```
RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step. If this is intentional, you must enable the detection of unused parameters in DDP, ....
```
And I think configuring `strategy=ddp_find_unused_parameters_true` may not be a good solution. Is there any graceful design here? For example, supporting extra parameters in the `on_validation_epoch_end` callback and providing a graceful third-party initialization in the config file.
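Parameters that already have `requires_grad=False` when DDP wraps the module are typically excluded from its unused-parameter bookkeeping, so freezing the validation-only model in `__init__` often avoids the error without `ddp_find_unused_parameters_true`. A self-contained sketch with stand-in classes (`Param` and `Stub` mimic torch parameters/modules; real code would call `freeze` on the `torch.nn.Module`):

```python
def freeze(module):
    """Disable gradients for every parameter so DDP never waits on them."""
    for p in module.parameters():
        p.requires_grad = False
    module.eval()  # also pin batch-norm/dropout to inference behavior
    return module

# stand-ins so the sketch runs without torch installed
class Param:
    def __init__(self):
        self.requires_grad = True

class Stub:
    def __init__(self):
        self._params = [Param(), Param()]
        self.training = True
    def parameters(self):
        return self._params
    def eval(self):
        self.training = False
        return self

m = freeze(Stub())
print([p.requires_grad for p in m.parameters()], m.training)  # [False, False] False
```

In the example above this would be `self.pretrained_pic_gen = freeze(PretrainedPicGen(third_party_pretrained_path))`, under the assumption that the third-party model never needs gradients.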
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @tchaton @justusschock @awaelchli | open | 2024-12-04T12:14:44Z | 2024-12-05T13:54:34Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20464 | [
"feature",
"design"
] | JohnHerry | 1 |
pandas-dev/pandas | data-science | 60,580 | BUG: when I assign value of 1-dim np.array holding single instance, it results in 0-dim array instance | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
arr = np.array(['one'], dtype="object")
df = pd.DataFrame({'col1': [None]}, index=[100])
df.at[100, 'col1'] = arr
```
### Issue Description
When I assign a value that is a 1-dim np.array holding a single element, it results in a 0-dim array instance:
>>> df.at[100, 'col1']
array('one', dtype=object)
### Expected Behavior
>>> df.at[100, 'col1']
array(['one'], dtype=object)
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.7
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.2.0
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.28.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.6.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : 5.3.0
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| closed | 2024-12-16T14:32:17Z | 2024-12-16T17:29:11Z | https://github.com/pandas-dev/pandas/issues/60580 | [
"Bug",
"Needs Triage",
"Nested Data"
] | kcerniauskas3 | 3 |
cobrateam/splinter | automation | 614 | update the docs about http error handling | in 0.8 the http error handling was removed from splinter.
the docs about it should be updated or removed: https://github.com/cobrateam/splinter/blob/master/docs/http-status-code-and-exception.rst | closed | 2018-05-28T14:08:57Z | 2018-08-19T00:41:13Z | https://github.com/cobrateam/splinter/issues/614 | [
"Docs",
"easy",
"good first issue"
] | andrewsmedina | 0 |
sinaptik-ai/pandas-ai | pandas | 989 | Ollama API with pandasai always gets Incorrect Answers or Errors Occurring | ### System Info
OS version: ubuntu 16.04
Python version: 3.9
pandasai version: 1.5.19
### 🐛 Describe the bug
I’m having an issue with the OLLAMA API in pandasai. It never seems to provide the correct answer. Could anyone in the community help me understand why this is happening and how I can fix it? If your setup is working correctly, could you please share how you set it up and what model you are using? Thank you.
Here's the minimal code example:
```
llm = Ollama(model="mistral", base_url='url')
dataframe = pd.read_sql('SELECT * FROM data', conn)
conn.close()
df = Agent(dataframe,
config={"llm": llm,
"save_charts_path": OUTPUT_GPAPH_FOLDER,
"save_charts": True,
"enable_cache": False,
"custom_prompts": {
"correct_error": MyCorrectErrorPrompt(),
},
"response_parser": MyResponseParser
})
question_prompt = "prompt"
question = f"{question_prompt}{prompt}"
answer = df.chat(question)
```
I've tried pandasai version 2.0 too, but it looks like it's still the same. | closed | 2024-03-04T01:52:59Z | 2024-03-07T18:59:39Z | https://github.com/sinaptik-ai/pandas-ai/issues/989 | [] | octadion | 1 |
PokeAPI/pokeapi | api | 1,218 | Missing Sentret Cry | <!--
Thanks for contributing to the PokéAPI project. To make sure we're effective, please check the following:
- Make sure your issue hasn't already been submitted on the issues tab. (It has search functionality!)
- If your issue is one of outdated API data, please note that we get our data from [veekun](https://github.com/veekun/pokedex/). If they are not up to date either, please look for or create an issue there. Otherwise, feel free to create an issue here.
- Provide a clear description of the issue.
- Provide a clear description of the steps to reproduce.
- Provide a clear description of the expected behavior.
Thank you!
-->
The latest cry for Sentret is a blank audio file. There is no noise that is in the audio file.
Steps to Reproduce:
1. Go to https://raw.githubusercontent.com/PokeAPI/cries/main/cries/pokemon/latest/161.ogg
2. Play the downloaded file
| open | 2025-03-05T15:59:10Z | 2025-03-06T20:04:34Z | https://github.com/PokeAPI/pokeapi/issues/1218 | [] | Eavoo | 4 |
JaidedAI/EasyOCR | machine-learning | 734 | ERROR WHEN INSTALL opencv-python-headless | Terminal shows this error when I update to the latest version of opencv-python-headless (4.5.5.64):
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
easyocr 1.4.2 requires opencv-python-headless<=4.5.4.60, but you have opencv-python-headless 4.5.5.64 which is incompatible | closed | 2022-05-22T19:36:44Z | 2022-06-20T11:10:12Z | https://github.com/JaidedAI/EasyOCR/issues/734 | [] | VERISBABY | 1 |
NVlabs/neuralangelo | computer-vision | 72 | Extracting output as pointcloud | Hello,
Thanks for the awesome work!
Is there a way to extract the resulting surface as a point cloud? | closed | 2023-08-24T05:08:36Z | 2023-08-24T07:49:29Z | https://github.com/NVlabs/neuralangelo/issues/72 | [] | Mehi44 | 1 |
home-assistant/core | asyncio | 140,451 | Shelly pro 3EM neutral current does not have a device entity | ### The problem
I saw #88999 and it looks like it should be done/working now; however, I don't get an entity for neutral current.
### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
shelly
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/shelly
### Diagnostics information
[home-assistant_shelly_2025-03-12T13-17-19.210Z.log](https://github.com/user-attachments/files/19210568/home-assistant_shelly_2025-03-12T13-17-19.210Z.log)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
note that 'n_current' is returning values.
```
### Additional information
_No response_ | open | 2025-03-12T13:28:54Z | 2025-03-12T19:04:55Z | https://github.com/home-assistant/core/issues/140451 | [
"integration: shelly"
] | speakxj7 | 4 |
littlecodersh/ItChat | api | 618 | Why do messages sent to friends via itchat also end up in my own message queue? | ```Python
s = []
@itchat.msg_register(TEXT, isFriendChat=True, isGroupChat=True, isMpChat=True)
def store_msg(msg):
s.append(msg)
return 'I received: ' + msg.text
itchat.auto_login(True)
itchat.run(blockThread=False)
```
I recorded the messages I receive in a global variable `s`, and then found that messages sent out later with itchat.send also show up in `s`. Why is that? | closed | 2018-03-26T08:04:44Z | 2018-04-11T11:21:43Z | https://github.com/littlecodersh/ItChat/issues/618 | [] | 1049451037 | 3
davidsandberg/facenet | computer-vision | 1,147 | Why can the embedding be split into anchor, positive, and negative? | I can't understand why the embedding can be split into anchor, positive, and negative.
I know the embedding is from the network, but I want to know the structure of the data set.
Thanks. | open | 2020-03-31T08:35:28Z | 2022-08-05T01:57:01Z | https://github.com/davidsandberg/facenet/issues/1147 | [] | JasonChenhx | 1 |
SYSTRAN/faster-whisper | deep-learning | 1,021 | audio_split example | Hey guys, right now I'm splitting my audio into channels using ffmpeg and numpy; after that I send them to `BatchedInferencePipeline.Transcribe` for transcription.
But I was looking at the `transcribe.py` class and found a method named `audio_split`. Does it do the same process of separating audio into channels? I can't find any documentation or usage of it. Also, I didn't get why segments should be passed as a parameter, since segments are generated after the transcription process.
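For context on the ffmpeg/numpy approach above, here is a minimal sketch of deinterleaving a decoded PCM buffer into per-channel arrays (assuming 16-bit interleaved samples, e.g. raw `ffmpeg -f s16le -ac 2` output; the function name is illustrative, not part of faster-whisper):

```python
import numpy as np

def split_channels(interleaved: np.ndarray, n_channels: int = 2):
    """Deinterleave a flat PCM buffer into per-channel arrays."""
    # Drop any trailing partial frame, then reshape to (frames, channels);
    # each column is then one channel.
    frames = len(interleaved) // n_channels
    stereo = interleaved[: frames * n_channels].reshape(frames, n_channels)
    return [stereo[:, c] for c in range(n_channels)]

# Interleaved stereo samples [L0, R0, L1, R1, ...]:
samples = np.array([0, 100, 1, 101, 2, 102], dtype=np.int16)
left, right = split_channels(samples)
```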
| closed | 2024-09-24T14:07:40Z | 2024-10-30T13:57:47Z | https://github.com/SYSTRAN/faster-whisper/issues/1021 | [] | Evilmaax | 2 |
NVIDIA/pix2pixHD | computer-vision | 215 | RuntimeError: Expected object of scalar type Byte but got scalar type Bool for argument #2 'other' in call to _th_or | 
How can I fix it? | open | 2020-08-31T07:51:39Z | 2020-09-02T06:54:55Z | https://github.com/NVIDIA/pix2pixHD/issues/215 | [] | yeomja99 | 1 |
pyeve/eve | flask | 944 | Is there a way to avoid count operation with pagination enabled? | In MongoDB, a count operation on a query with a filter is very, very slow even with an index on the filtered fields. I would like to be able to disable the "_meta.total" calculation but still keep the pagination enabled.
I know I won't be able to calculate the total number of pages, but I prefer that over having to wait ~10s per request just because the count operation takes ~9.870s. : (
Is there any workaround to accomplish this?? | closed | 2016-12-01T23:20:59Z | 2016-12-19T02:19:51Z | https://github.com/pyeve/eve/issues/944 | [
"enhancement",
"wip"
] | dvddarias | 10 |
recommenders-team/recommenders | deep-learning | 2,012 | [FEATURE] Alternative to scrapbook to execute notebooks programmatically for tests | ### Description
Scrapbook is not being developed anymore, and it doesn't support Python 3.10 (See https://github.com/recommenders-team/recommenders/pull/1988#issuecomment-1712425248)
### Expected behavior with the suggested feature
### Other Comments
| closed | 2023-10-08T09:49:53Z | 2023-12-23T08:11:01Z | https://github.com/recommenders-team/recommenders/issues/2012 | [
"enhancement"
] | miguelgfierro | 5 |
JaidedAI/EasyOCR | machine-learning | 702 | module 'cv2' has no attribute 'imdecode' | I can't use easyocr to read text in an image: img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
AttributeError: module 'cv2' has no attribute 'imdecode' | open | 2022-04-07T12:56:38Z | 2022-05-14T03:27:04Z | https://github.com/JaidedAI/EasyOCR/issues/702 | [] | yourstar9 | 4 |
aio-libs/aiopg | sqlalchemy | 86 | Cannot make db -> python value conversion working with custom SA columns | The definition:
``` python
import logging
import sqlalchemy.types as types
from enum import Enum as PythonEnum
log = logging.getLogger(__name__)
class PythonMappedEnum(types.TypeDecorator):
""" Implements mapping between Postgres' Enums and Python Enums.
"""
impl = types.Enum
def __init__(self, python_enum_type: PythonEnum, **kwargs):
self.python_enum_type = python_enum_type
self.kwargs = kwargs
enum_args = [x.value for x in python_enum_type]
super(PythonMappedEnum, self).__init__(*enum_args, **self.kwargs)
def process_bind_param(self, value: PythonEnum, dialect):
""" Convert to postgres value
"""
return value.value
def process_result_value(self, value: str, dialect):
""" Convert to python value
"""
log.debug("=====================")
log.debug("Called")
for __, case in self.python_enum_type.__members__.items():
if case.value == value:
return case
raise TypeError("Cannot map Enum value '{}' to Python's {}".format(
value, self.python_enum_type
))
def copy(self):
return PythonMappedEnum(self.python_enum_type, **self.kwargs)
```
The calling code (abstract):
``` python
result = yield from SAPoolConnection.execute(SATable.select().limit(1))
data = yield from result.fetchone()
```
When `data` is processed, the value that corresponds to the custom Enum field is of type `str`, because the `process_result_value()` never gets called. But for insert statements, `process_bind_param()` is called as expected.
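For contrast, in plain synchronous SQLAlchemy the result hook does fire. A minimal self-contained round-trip (SQLite in-memory, with an illustrative `Color` enum; this uses the modern SQLAlchemy API rather than the 2015-era one above):

```python
import enum

import sqlalchemy as sa

class Color(enum.Enum):
    RED = "red"
    BLUE = "blue"

class ColorType(sa.types.TypeDecorator):
    """Store a Python Enum as its string value."""
    impl = sa.types.String
    cache_ok = True

    def process_bind_param(self, value, dialect):
        return value.value  # Python -> DB

    def process_result_value(self, value, dialect):
        return Color(value)  # DB -> Python

metadata = sa.MetaData()
things = sa.Table("things", metadata, sa.Column("color", ColorType()))

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(things.insert().values(color=Color.RED))
    row = conn.execute(sa.select(things.c.color)).fetchone()
# Here row[0] comes back as a Color enum, i.e. process_result_value ran --
# which is exactly what does not happen in the aiopg path described above.
```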
| closed | 2015-11-08T19:19:03Z | 2016-11-25T22:36:46Z | https://github.com/aio-libs/aiopg/issues/86 | [] | avanov | 10 |
lanpa/tensorboardX | numpy | 668 | draw NaN with triangle | Currently, when there are NaN or Inf values, they are drawn as 0.
In TensorBoard, NaN or Inf are drawn as triangles [Link](https://github.com/tensorflow/tensorboard/pull/4461)
I wish to help but don't know where to start.
Thanks, Roni | open | 2022-06-14T10:20:51Z | 2022-06-14T10:20:51Z | https://github.com/lanpa/tensorboardX/issues/668 | [] | ronigober | 0 |
lazyprogrammer/machine_learning_examples | data-science | 81 | rl/monte_carlo.py - "iterative_policy_evaluation" doesn't exist! | "iterative_policy_evaluation" in the mentioned file must be changed to "iterative_policy_evaluation_deterministic" (or probabilistic). | closed | 2021-09-24T11:57:25Z | 2022-04-04T20:42:52Z | https://github.com/lazyprogrammer/machine_learning_examples/issues/81 | [] | MJamshidnejad | 1 |
xorbitsai/xorbits | numpy | 731 | BUG: too many open files | ### Describe the bug
I'm processing a very large file (25 GB, each line a string of up to 100,000 characters). I'm using the dedup function, and it gives this error.
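Not a fix for the underlying issue, but a common process-level workaround while debugging "too many open files" is to raise the soft file-descriptor limit toward the hard limit (Unix-only; the function name here is illustrative):

```python
import resource

def raise_fd_limit() -> int:
    """Raise this process's soft open-file limit to the hard limit.

    'Too many open files' is the OS-level RLIMIT_NOFILE limit; bumping the
    soft limit buys headroom while an fd leak is investigated -- it does
    not fix a leak.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft < hard:
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]

new_soft = raise_fd_limit()
```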
### To Reproduce
To help us to reproduce this bug, please provide information below:
1. Your Python version: 3.10
2. The version of Xorbits you use: 0.6.3
3. Versions of crucial packages, such as numpy, scipy and pandas: numpy 1.26.0, scipy 1.11.3, pandas 2.1.1
4. Full stack of the error.
5. Minimized code to reproduce the error.
### Expected behavior
A clear and concise description of what you expected to happen.
### Additional context
Add any other context about the problem here.
| open | 2023-10-03T10:29:49Z | 2024-12-16T01:52:35Z | https://github.com/xorbitsai/xorbits/issues/731 | [
"bug"
] | charliedream1 | 6 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,137 | RAM | I'm running into a problem:
at the start it runs quickly, but past 70% it runs slowly.
Why is that?
My RAM is 88 GB (8+16+32+32) and my GPU is a 3070 Ti.
The memory limit has been reached, and I can only use up to 12 GB of memory.



| open | 2025-01-08T13:38:02Z | 2025-01-10T23:39:17Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1137 | [] | chais-xp | 1 |
InstaPy/InstaPy | automation | 6,003 | Implementation of follow strategy | Is it possible to follow one person and unfollow one instead of running through the entire follow loop?
Right now my script will `follow by likers` and then unfollow users from 4 days ago. It’ll do +100 and then -100. So my following count is jumping up and down by 100.
Anyway to make it +1, -1?
I don’t want to restart the gathering of names by the `follow by likers` function since it starts at the top and I don’t want that since they’re usually bots at the top. So setting `amount` to 1 and repeating 100 times is not an option.
Possible solution: somehow export `follow by likers amount=100` names to a list. Then run `follow by list amount=1` 100 times with an unfollow script in the middle. | open | 2021-01-03T22:43:00Z | 2021-07-21T03:19:16Z | https://github.com/InstaPy/InstaPy/issues/6003 | [
"wontfix"
] | Ardy000 | 4 |
apify/crawlee-python | web-scraping | 350 | Reconsider crawler inheritance | Currently, we have the following inheritance chains:
- `BasicCrawler` -> `HttpCrawler`
- `BasicCrawler` -> `BeautifulSoupCrawler`
- `BasicCrawler` -> `PlaywrightCrawler`
- `BasicCrawler` -> `ParselCrawler` (#348 )
This is an intentional difference from the JS version, where
- `BrowserCrawler` is a common ancestor of `PlaywrightCrawler` and `PuppeteerCrawler`
- this is not relevant in Python ecosystem - we won't implement anything similar to Playwright anytime soon
- `CheerioCrawler` and `JSDomCrawler` inherit from `HttpCrawler`
- this is the important difference
- We decided to do this differently to avoid inheritance chains, which make it harder to track down the code that is actually being executed. The cost is a bit of code duplication.
- In the Python version, we also have the HttpClient abstraction and most of the http-handling logic is contained there
We might want to reconsider this because
- New HTML parsers are being added as we speak
- This might make the code duplication too costly to maintain
- For #249, we would like to have a "parse the current HTML" helper that works with all supported HTML parsers, not just beautifulsoup, for instance
The possible ways out are
1. Leave it as it is now
2. Parametrize `HttpCrawler` with an HTML parser
- this would make `BeautifulSoupCrawler` and `ParselCrawler` very thin - they would just pass the right `HttpClient` and `HtmlParser` to `HttpCrawler`
- we may want to consider moving the `send_request` context helper from `BasicCrawlingContext` to `HttpCrawlingContext`
3. Remove `HttpCrawler` altogether and pull its functionality into `BasicCrawler` | closed | 2024-07-23T21:59:34Z | 2024-12-09T09:51:47Z | https://github.com/apify/crawlee-python/issues/350 | [
"t-tooling",
"debt",
"v0.5"
] | janbuchar | 5 |
Avaiga/taipy | data-visualization | 1,685 | [BUG] Investigate Azure issue | ### What would you like to share or ask?
From a user feedback:
We’re having some odd issues with Taipy App deployment. The Taipy App uses the Taipy framework and has an external connection (i.e., Azure Cosmos).
1. Create WebApp and Deploy Taipy App using Azure CLI
a. Create WebApp resource and Deploy Taipy App ‘taipyapp2-DEV’ using the command ‘az webapp up’.
b. Results: OK. The deployment succeeds and the webapp runs without error.
2. Deploying a Taipy App using Azure CLI to a pre-created WebApp resource.
a. Deploy to ‘taipyapp-DEV’. (Note this is the WebApp I asked you to create yesterday. I assume the WebApp was created via Azure Portal)
b. The Azure CLI command ‘az web app up’ (the same as 1) is used to deploy, and we specify the name of the WebApp to deploy to.
c. Results: Fails during deployment because resource not found. Error states that the WebApp resource cannot be found using Azure CLI ‘az webapp up’ command. It is odd because I can list WebApp via the ‘az webapp list’ command.
3. Deploying a Taipy App using Azure CLI to a pre-created WebApp
a. Deploy to ‘webapp-DEV’. Note this was created a long time ago. I assume the WebApp was created via Azure Portal
b. Azure CLI command ‘az webapp up’ (same as 1) is used to deploy and we specify the name of the WebApp to deploy to.
c. Results: Fails during deployment with a build failure.
4. Deploying a Taipy App using DevOps pipeline to a pre-created WebApp
a. Deploy to ‘webapp-DEV’. Note this was created a long time ago and the deployment uses the build and release pipelines that you set up for us.
b. Results: Build / Deploy succeeds but App throw ‘Monkey Patch Error’ (the one I showed you before). This is an odd error because the Deployment using 1 above uses the exact same code, requirements.txt file, etc. so the only difference is the deployment method and the way the WebApp was created. Likely we need to look at the build and deploy script too.
So, we think it’s a combination of two issues:
- There is something different about the App created via ‘az webapp up’ command and the ones created separately. On the surface, I didn’t see any major differences.
- There is some adjustment needed for the build and/or deploy script to match what ‘az webapp up’ is doing.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2024-08-20T10:39:43Z | 2025-02-07T13:33:25Z | https://github.com/Avaiga/taipy/issues/1685 | [
"🖧 Devops",
"💥Malfunction",
"🆘 Help wanted",
"🟧 Priority: High"
] | FlorianJacta | 0 |
gradio-app/gradio | deep-learning | 10,609 | install_gradio.bat Fails with "pip_required is not installed" Due to Incorrect Subroutine Handling in helpers.bat | ### Describe the bug
Running script like `scripts\install_gradio.bat` on Windows throws an error:
```
ERROR: Value for default option cannot be empty.
Type "WHERE /?" for usage.
is not installed on the computer...
The system cannot find the batch label specified - pip_required
```
This is because the scripts load `scripts\helpers.bat` and attempt to run subroutines from it, but this is [not possible in `batch`](https://stackoverflow.com/questions/30168091/call-a-subroutine-in-a-batch-from-another-batch-file) (as opposed to `sh`).
To fix this, the `scripts\helpers.bat` script needs to run subroutines from a parameter passed to it, and every script using a subroutine from it should use `call scripts\helpers.bat [subroutine_name]`.
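For illustration, the dispatch pattern described above could look like the following sketch (not the actual PR; the `pip_required` body is a guess at what the helper checks):

```bat
@echo off
rem scripts\helpers.bat -- jump to the subroutine named by the first
rem argument, then propagate its exit code back to the caller.
call :%~1
exit /b %errorlevel%

:pip_required
where pip >nul 2>nul || (
    echo pip is not installed on the computer...
    exit /b 1
)
exit /b 0
```

A script such as `scripts\install_gradio.bat` would then invoke it with `call scripts\helpers.bat pip_required` instead of `call :pip_required`, which batch cannot resolve across files.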
I will add a PR for this issue.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
Run `scripts\install_gradio.bat` on Windows after a fresh clone of the repository.
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.16.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.7
ffmpy: 0.5.0
gradio-client==1.7.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.28.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.2
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.3
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.28.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
I can work around it | closed | 2025-02-17T15:47:41Z | 2025-02-18T01:11:28Z | https://github.com/gradio-app/gradio/issues/10609 | [
"bug"
] | BilHim | 0 |
deeppavlov/DeepPavlov | tensorflow | 964 | Regarding Spelling Error model | Thanks for the amazing toolkit :) Can you please share your views on the questions below:
1. How is the **correct_prior** & **incorrect_prior** calculation done in the error model?
2. How do we incorporate the "**count**" with an incorrect-correct pair, e.g. if the training data is in the form (intended_word, observed_word, count)?
3. Is there any other way we can combine the LM score & EM score in the LM beam search method?
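On question 3, one generic option (not specific to DeepPavlov's implementation; the function, weights, and candidate scores below are illustrative) is a log-linear interpolation of the two scores when ranking beam candidates:

```python
import math

def rescore_beam(candidates, alpha=0.5, beam_width=2):
    """Rank correction candidates by a log-linear mix of language-model
    and error-model log-probabilities, keeping the top `beam_width`."""
    scored = [
        (alpha * lm + (1.0 - alpha) * em, word)
        for word, lm, em in candidates
    ]
    scored.sort(reverse=True)  # highest combined log-prob first
    return [word for _, word in scored[:beam_width]]

# (word, lm_log_prob, em_log_prob) -- made-up numbers for illustration:
cands = [("there", math.log(0.6), math.log(0.2)),
         ("their", math.log(0.3), math.log(0.7)),
         ("thier", math.log(0.1), math.log(0.1))]
best = rescore_beam(cands)
```

The weight `alpha` trades off trust in the language model against trust in the error model and is typically tuned on held-out data.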
Thanks a lot !!
| closed | 2019-08-09T11:21:11Z | 2020-05-11T06:53:39Z | https://github.com/deeppavlov/DeepPavlov/issues/964 | [] | smilenrhyme | 26 |
pytorch/pytorch | machine-learning | 149,824 | flex_attention raises error at compile | ### 🐛 Describe the bug
I'm trying to accelerate WindowAttention with flex_attention.
However, when the window size equals 8, it raises an error when compiling.
Please refer to this [code](https://github.com/dslisleedh/ESC/blob/main/scripts/compare_attn.py)
```bash
python compare_attn.py --h 64 --w 64 --window_size 16 --attn_func flex # This works
python compare_attn.py --h 64 --w 64 --window_size 8 --attn_func flex # Raises Error !!!
```
The second line raises an error following:
```bash
Traceback (most recent call last):
File "/home/leedh97/ESC/scripts/compare_attn.py", line 149, in <module>
model(x) # Make sure CUDNN to find proper algorithms, especially for convolutions.
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/leedh97/ESC/scripts/compare_attn.py", line 105, in forward
out = self.attn_func(q, k, v, score_mod=self.get_rpe)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 203, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1496, in run_node
result = super().run_node(n)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1143, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1133, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 409, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/kernel/flex_attention.py", line 1096, in flex_attention
return create_flex_decoding_kernel(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/kernel/flex_decoding.py", line 425, in create_flex_decoding_kernel
kernel_options.setdefault("SPLIT_KV", get_split_k(B, Hkv, seq_len_kv))
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/kernel/flex_decoding.py", line 303, in get_split_k
split_k = max(split_k, 1)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/sympy/core/relational.py", line 516, in __bool__
raise TypeError("cannot determine truth value of Relational")
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: TypeError: cannot determine truth value of Relational
target: flex_attention
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg1_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 4, s0, 16], stride=[64*s0, 16*s0, 16, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg3_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 4, s0, 16], stride=[64*s0, 16*s0, 16, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='arg5_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 4, s0, 16], stride=[64*s0, 16*s0, 16, 1]))
))
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (1, 1, TensorBox(StorageBox(
ComputedBuffer(name='buf4', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x14b295723a30>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf5', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x14b2957405e0>, ranges=[1, 1, 1, 1]))
)), None, None, TensorBox(StorageBox(
Pointwise(
'cuda',
torch.int32,
def inner_fn(index):
_, _, _ = index
tmp0 = ops.load(buf0, 0)
tmp1 = ops.to_dtype(tmp0, torch.int64, src_dtype=torch.int32)
tmp2 = ops.to_dtype(tmp1, torch.int32, src_dtype=torch.int64)
return tmp2
,
ranges=[1, 1, 1],
origin_node=convert_element_type,
origins=OrderedSet([sum_1, convert_element_type])
)
)), TensorBox(StorageBox(
Pointwise(
'cuda',
torch.int32,
def inner_fn(index):
_, _, _, _ = index
tmp0 = ops.index_expr(0, dtype=torch.int16)
tmp1 = ops.to_dtype(tmp0, torch.int64, src_dtype=torch.int16)
tmp2 = ops.to_dtype(tmp1, torch.int32, src_dtype=torch.int64)
return tmp2
,
ranges=[1, 1, 1, 1],
origin_node=convert_element_type_1,
origins=OrderedSet([sort, convert_element_type_1])
)
)), None, None, 1073741824, 1073741824, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.25
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'OUTPUT_LOGSUMEXP': False}
args[7]: (s5, TensorBox(StorageBox(
InputBuffer(name='arg6_1', layout=FixedLayout('cuda:0', torch.float32, size=[4, 225], stride=[225, 1]))
)))
args[8]: ()
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
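The final `TypeError` comes from `get_split_k` calling Python's built-in `max` on a symbolic size. A standalone sympy reproduction of that failure mode, independent of PyTorch (the symbol name is illustrative):

```python
import sympy

# A symbolic sequence length, like the s0/s5 sizes in the graph above.
s = sympy.Symbol("s", positive=True)

# Python's built-in max() needs a concrete True/False from the comparison,
# which an undetermined symbolic Relational cannot provide:
try:
    max(s, 1)
    msg = None
except TypeError as exc:
    msg = str(exc)  # "cannot determine truth value of Relational"

# sympy.Max stays symbolic and only resolves once the symbol is known:
expr = sympy.Max(s, 1)
value = expr.subs(s, 5)
```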
Interestingly, when the input size is small, the window size 8 works and 16 fails to compile.
```bash
python compare_attn.py --h 16 --w 16 --window_size 8 --attn_func flex # This works
python compare_attn.py --h 16 --w 16 --window_size 16 --attn_func flex # Raises Error !!!
```
Error:
```bash
Traceback (most recent call last):
File "/home/leedh97/ESC/scripts/compare_attn.py", line 150, in <module>
model(x) # Make sure CUDNN to find proper algorithms, especially for convolutions.
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/leedh97/ESC/scripts/compare_attn.py", line 106, in forward
out = self.attn_func(q, k, v, score_mod=self.get_rpe)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 203, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1496, in run_node
result = super().run_node(n)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1143, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1133, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 409, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/leedh97/.conda/envs/esc/lib/python3.10/site-packages/torch/_inductor/kernel/flex_attention.py", line 1155, in flex_attention
assert q_strides[-1] == 1, "Query must be contiguous in the last dimension"
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: AssertionError: Query must be contiguous in the last dimension
target: flex_attention
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg1_1', layout=FixedLayout('cuda:0', torch.float32, size=[1, 4, s1, 16], stride=[64*s1, 16*s1, 1, s1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg3_1', layout=FixedLayout('cuda:0', torch.float32, size=[1, 4, s1, 16], stride=[64*s1, 16*s1, 1, s1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='arg5_1', layout=FixedLayout('cuda:0', torch.float32, size=[1, 4, s1, 16], stride=[64*s1, 16*s1, 1, s1]))
))
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (1, 1, TensorBox(StorageBox(
ComputedBuffer(name='buf2', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x1534afd63d90>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf3', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x1534afd84940>, ranges=[1, 1, 1, 1]))
)), None, None, TensorBox(StorageBox(
ComputedBuffer(name='buf4', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x1534afd62b90>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf5', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x1534afd855a0>, ranges=[1, 1, 1, 1]))
)), None, None, 1073741824, 1073741824, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.25
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'OUTPUT_LOGSUMEXP': False}
args[7]: (s5, TensorBox(StorageBox(
InputBuffer(name='arg6_1', layout=FixedLayout('cuda:0', torch.float32, size=[4, 961], stride=[961, 1]))
)))
args[8]: ()
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
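For what it's worth, the reported q/k/v strides `[64*s1, 16*s1, 1, s1]` have stride 1 on the sequence dimension rather than on the last (head) dimension, which is exactly the layout a `transpose(-1, -2)` of a contiguous tensor produces. A pure-Python sketch of the check that fails (the `[1, 4, 16, s1]` origin shape and the value of `s1` are assumptions inferred from the error above):

```python
# Sketch of the contiguity check behind "Query must be contiguous in the
# last dimension". Assumption: q/k/v came from a transpose of a
# contiguous [1, 4, 16, s1] tensor, which is what the reported strides
# [64*s1, 16*s1, 1, s1] suggest.

def row_major_strides(shape):
    """Element strides of a C-contiguous (row-major) tensor."""
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

def last_dim_contiguous(strides):
    # This is the condition flex_attention's inductor lowering asserts.
    return strides[-1] == 1

s1 = 961  # hypothetical sequence length (961 appears in the RPE buffer above)
contig = row_major_strides([1, 4, 16, s1])              # [64*s1, 16*s1, s1, 1]
swapped = [contig[0], contig[1], contig[3], contig[2]]  # transpose(-1, -2)

print(last_dim_contiguous(contig))   # True
print(last_dim_contiguous(swapped))  # False -> the LoweringException
```

If the tensors here really are transposed views, calling `.contiguous()` on q, k and v just before `self.attn_func(q, k, v, score_mod=self.get_rpe)` should restore `stride(-1) == 1` (untested against this exact model); the `suppress_errors` fallback mentioned in the message trades the compiled kernel for eager mode instead.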
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Fedora release 36 (Thirty Six) (x86_64)
GCC version: (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
Clang version: 14.0.5 (Fedora 14.0.5-2.fc36)
CMake version: version 3.22.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.6.77_TGMv2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6526Y
CPU family: 6
Model: 207
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 49%
CPU max MHz: 2801.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 64 MiB (32 instances)
L3 cache: 75 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 1.24.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu | open | 2025-03-23T08:55:51Z | 2025-03-24T18:46:43Z | https://github.com/pytorch/pytorch/issues/149824 | [
"oncall: pt2"
] | dslisleedh | 0 |
marcomusy/vedo | numpy | 839 | Colored PLY file gets seemingly random colors applied when using k3d backend. | When I display this .ply file using the VTK backend it works fine, i.e. solid-colored meshes. When I use the k3d backend I get multi-colored meshes.

Here is the notebook code:
```
from vedo import Mesh, Plotter, settings
settings.default_backend = 'k3d'
msh = Mesh("debug.ply").subdivide()
plt = Plotter(bg='black')
plt.show(msh)
```
Here is the mesh; I renamed the extension because GitHub wouldn't let me upload a .ply file.
[debug.txt](https://github.com/marcomusy/vedo/files/11059955/debug.txt)
An additional detail: msh.print() shows the correct coloring. | open | 2023-03-24T08:34:40Z | 2023-03-24T19:33:08Z | https://github.com/marcomusy/vedo/issues/839 | [
"long-term"
] | odinsbane | 1 |
marcomusy/vedo | numpy | 838 | .getCellArray('labels') | After loading the mesh, when I try to use the attribute getCellArray('labels') it gives me the following error:
AttributeError Traceback (most recent call last)
/tmp/ipykernel_101534/645426849.py in <module>
----> 1 mesh.getCellArray("labels")
AttributeError: 'Mesh' object has no attribute 'getCellArray'
I want to know whether this attribute was deprecated at some point.
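Not an authoritative answer, but in case it helps: newer vedo releases appear to expose named arrays through dict-like `pointdata`/`celldata` attributes instead of the old `get*Array` methods (an assumption worth checking against your installed version). A small version-agnostic accessor:

```python
def get_cell_array(mesh, name):
    """Fetch a named cell array from either the old or the new vedo API."""
    if hasattr(mesh, "getCellArray"):   # older vedo releases
        return mesh.getCellArray(name)
    return mesh.celldata[name]          # newer releases: dict-like accessor

# Hypothetical usage (assumes vedo is installed and the file exists):
#   from vedo import Mesh
#   labels = get_cell_array(Mesh("labeled.ply"), "labels")
```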
| closed | 2023-03-23T15:50:57Z | 2023-03-24T19:33:32Z | https://github.com/marcomusy/vedo/issues/838 | [] | giuliarubiu | 3 |
deepset-ai/haystack | machine-learning | 8,734 | Google Vertex ChatGenerator - support for Tool | This might also be a good opportunity for refactoring.
We should investigate whether it makes sense to use the [new Google Gen AI SDK](https://cloud.google.com/vertex-ai/generative-ai/docs/sdks/overview), which provides a unified interface to Gemini 2.0 through both the Gemini Developer API and the Gemini API on Vertex AI.
Related GoogleAI issue: #8735
```[tasklist]
### Tasks
- [x] Code + release
- [x] update https://github.com/deepset-ai/haystack-integrations
- [x] update docs (in review)
- [x] update cookbook (in review)
- [x] update blog (in review)
``` | closed | 2025-01-16T14:10:18Z | 2025-01-31T11:56:58Z | https://github.com/deepset-ai/haystack/issues/8734 | [
"P1"
] | anakin87 | 0 |
X-PLUG/MobileAgent | automation | 33 | GroundingDINO error: BoxAnnotator.annotate() got an unexpected keyword argument 'labels' | Python 3.10 environment. Has anyone else run into this error? | closed | 2024-07-16T02:14:48Z | 2024-07-16T02:32:22Z | https://github.com/X-PLUG/MobileAgent/issues/33 | [] | zqxuturbo | 2 |