repo_name stringlengths 9 75 | topic stringclasses 30
values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2
values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
kynan/nbstripout | jupyter | 88 | pip install fails | Here's what happened when I tried the pip install on my py3.6 on my macOS 10.12.6:
```bash
pip install --upgrade nbstripout (py36)
Requirement already up-to-date: nbstripout in /Users/klay6683/miniconda3/envs/py36/lib/python3.6/site-packages (0.3.3)
Requirement already satisfied, skipping upgrade: nbformat in /Users/klay6683/miniconda3/envs/py36/lib/python3.6/site-packages (from nbstripout) (4.4.0)
Requirement already satisfied, skipping upgrade: ipython_genutils in /Users/klay6683/miniconda3/envs/py36/lib/python3.6/site-packages (from nbformat->nbstripout) (0.2.0)
Requirement already satisfied, skipping upgrade: traitlets>=4.1 in /Users/klay6683/miniconda3/envs/py36/lib/python3.6/site-packages (from nbformat->nbstripout) (4.3.2)
Requirement already satisfied, skipping upgrade: jsonschema!=2.5.0,>=2.4 in /Users/klay6683/miniconda3/envs/py36/lib/python3.6/site-packages (from nbformat->nbstripout) (2.6.0)
Requirement already satisfied, skipping upgrade: jupyter_core in /Users/klay6683/miniconda3/envs/py36/lib/python3.6/site-packages (from nbformat->nbstripout) (4.4.0)
Requirement already satisfied, skipping upgrade: six in /Users/klay6683/miniconda3/envs/py36/lib/python3.6/site-packages (from traitlets>=4.1->nbformat->nbstripout) (1.11.0)
Requirement already satisfied, skipping upgrade: decorator in /Users/klay6683/miniconda3/envs/py36/lib/python3.6/site-packages (from traitlets>=4.1->nbformat->nbstripout) (4.3.0)
/Users/klay6683/miniconda3/envs/stable/bin/python /Users/klay6683/Dropbox/src/nbstripout/nbstripout.py: /Users/klay6683/miniconda3/envs/stable/bin/python: No such file or directory
error: external filter '/Users/klay6683/miniconda3/envs/stable/bin/python /Users/klay6683/Dropbox/src/nbstripout/nbstripout.py' failed 127
error: external filter '/Users/klay6683/miniconda3/envs/stable/bin/python /Users/klay6683/Dropbox/src/nbstripout/nbstripout.py' failed
fatal: docs/examples.ipynb: clean filter 'nbstripout' failed
``` | closed | 2018-10-05T20:16:41Z | 2018-10-16T18:31:36Z | https://github.com/kynan/nbstripout/issues/88 | [
"resolution:invalid"
] | michaelaye | 2 |
thp/urlwatch | automation | 3 | BeautifulSoup usage? | Hello,
First, thank you for your great product. I created a set of RPMs on [AXIVO repository](https://www.axivo.com/packages/setup), available for CentOS 6 (and soon for CentOS 7 with BeautifulSoup4):
```
$ yum --enablerepo=axivo list | egrep 'urlwatch|futures|beautifulsoup'
python-beautifulsoup.noarch 3.2.1-1.el6 @axivo
python-futures.noarch 2.1.6-1.el6 @axivo
urlwatch.noarch 1.17-1.el6 @axivo
```
Can you please give an example how to use BeautifulSoup in hooks.py, to filter a specific URL? For example, on your site, I want to check for latest update, using the `class="filename"` tag. I apologize for my lack of Python knowledge, I see several lines into `hooks.py` and I'm not sure if I should remove them all and use only the BeautifulSoup lines you posted on your site. Google returned no information, so I hope you post a detailed example here.
Thank you for your help.
| closed | 2014-08-21T23:32:42Z | 2014-10-27T08:06:56Z | https://github.com/thp/urlwatch/issues/3 | [] | ghost | 3 |
davidteather/TikTok-Api | api | 1,031 | Empty response | Someone experiencing the same problem with video api? Can't get info of video becuase api is giving empty page
Traceback (most recent call last):
File "E:\tiktok\s.py", line 94, in <module>
data = video.info()
File "E:\tiktok\venv\lib\site-packages\TikTokApi\api\video.py", line 74, in info
return self.info_full(**kwargs)["itemInfo"]["itemStruct"]
File "E:\tiktok\venv\lib\site-packages\TikTokApi\api\video.py", line 95, in info_full
return self.parent.get_data(path, **kwargs)
File "E:\tiktok\venv\lib\site-packages\TikTokApi\tiktok.py", line 384, in get_data
raise EmptyResponseException(0, None,
TikTokApi.exceptions.EmptyResponseException: 0 -> Empty response from Tiktok to https://m.tiktok.com/api/item/detail/?aid=1988 | closed | 2023-07-18T15:51:59Z | 2023-08-08T21:39:53Z | https://github.com/davidteather/TikTok-Api/issues/1031 | [] | dortesy | 6 |
microsoft/unilm | nlp | 1,096 | error when fine-tuning VLMO | **Describe**
Model I am using (UniLM, MiniLM, LayoutLM ...): VLMO

error happens when I fine-tuned pre-trained VLMO model on COCO retrieval task... | closed | 2023-05-17T13:27:26Z | 2023-06-14T11:51:02Z | https://github.com/microsoft/unilm/issues/1096 | [] | det-tu | 5 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,060 | Quality of generated audio | When using a recording from the LibriSpeech downloaded dataset, a good ratio of the generated audio pieces sound good and accurate. However, whenever I record some audio and use that, no matter who the speaker is, all the generated audio pieces sound the same. Is there any way I can fix this, or am I not understanding how to use this tool correctly? I've seen others on YouTube using the tool the same way I am and the resulting audio clips sound far better than my own, | open | 2022-05-01T20:45:45Z | 2022-10-21T17:38:13Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1060 | [] | aryanpanpalia | 3 |
Kanaries/pygwalker | plotly | 589 | How to use pygwalker in fastapi? | I used the following code to read the file and send it to pygwalker, but the returned result rendering has no effect and cannot produce dynamic rendering results like flask.
```html index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<div>
<h2>PyGwalker Flask App</h2>
</div>
<div>
<form action="/upload" method="post" enctype="multipart/form-data">
<input type="file" name="datafile">
<input type="submit" value="Submit">
</form>
</div>
<div>
{{ html_str | safe}}
</div>
</body>
</html>
```
```python
from fastapi import Request, APIRouter, HTTPException, Form, File, UploadFile
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse
from pydantic import BaseModel, Field
import pandas as pd
import shutil
import pygwalker as pyg
router = APIRouter()
router.mount("/static", StaticFiles(directory="static"), name="static")
templates = Jinja2Templates(directory="templates")
@router.get("/", response_class=HTMLResponse)
async def index(request:Request):
return templates.TemplateResponse('index.html',{"request":request})
# return HTMLResponse(content='index.html', status_code=200)
@router.post("/upload", response_class=HTMLResponse)
def upload_csv(request: Request, datafile: UploadFile = File(...)):
with open(f"static/{datafile.filename}", "wb") as buffer:
shutil.copyfileobj(datafile.file, buffer)
df = pd.read_csv(f"static/{datafile.filename}", encoding="utf-8")
html_str = pyg.walk(df, html_string=True)
return templates.TemplateResponse('index.html',{"request":request,"html_str":html_str})
``` | closed | 2024-07-05T02:34:44Z | 2024-07-15T12:50:05Z | https://github.com/Kanaries/pygwalker/issues/589 | [] | soundmemories | 1 |
quokkaproject/quokka | flask | 399 | Example of making forms using Custom Values | Is there an example to make a new form that contain fields based on existing Post custom values?
For example:
A post have Price custom value, I want to make a form that has Price as one of the fields.
| closed | 2016-10-09T17:04:19Z | 2018-02-06T13:45:53Z | https://github.com/quokkaproject/quokka/issues/399 | [] | ncmonger | 0 |
koxudaxi/datamodel-code-generator | fastapi | 2,346 | When I using as module calling generate method and pass in disable_timestamp=True, the timestamp will still be displayed | ```py
from datamodel_code_generator import (
DataModelType, # 注意:DataModelType 不在 __all__ 中定义,可能在未来的版本中发生变化
InputFileType,
PythonVersion,
generate,
)
generate(
json_str,
input_file_type=InputFileType.Json, # JSON类型
output=model_file, # 输出文件路径
output_model_type=output_model_type, # 输出模型类型
# 这边datamodel_code_generator有bug(https://github.com/koxudaxi/datamodel-code-generator/issues/1977)
# 将allow_extra_fields设置为True, 目的让生成的 Pydantic 模型包含所有字段
allow_extra_fields=self.extra == "allow", # 是否禁止额外字段
target_python_version=PythonVersion.PY_311, # 目标Python版本
class_name=main_class_name, # Pydantic模型类名(根类)
use_schema_description=True, # 使用Schema描述
use_field_description=True, # 使用字段描述
field_constraints=True, # 使用字段约束
snake_case_field=True, # 使用蛇形命名法
use_standard_collections=True, # 使用标准集合类型
use_union_operator=True, # 使用 | 而不是 Union
use_double_quotes=True, # 使用双引号
disable_timestamp=True, # 生成的文件头不显示timestamp
enable_version_header=True,
)
```
```python
# generated by datamodel-codegen:
# filename: <stdin>
# timestamp: 2025-03-15T05:19:22+00:00
from __future__ import annotations
from pydantic import BaseModel, Field
class Geo(BaseModel):
lat: str
lng: str
class Address(BaseModel):
street: str
suite: str
city: str
zipcode: str
geo: Geo
class Company(BaseModel):
name: str
catch_phrase: str = Field(..., alias="catchPhrase")
bs: str
class GetUserWithNormalUser(BaseModel):
id: int
name: str
username: str
email: str
address: Address
phone: str
website: str
company: Company
```
| open | 2025-03-15T05:22:21Z | 2025-03-15T05:22:21Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2346 | [] | Airpy | 0 |
recommenders-team/recommenders | machine-learning | 1,807 | [ASK] Running evaluation on test set for DKN | ### Description
I've have been using [DKN Deep Dive](https://github.com/microsoft/recommenders/blob/aeb6b0b12e177b3eaf55bb7ab2b747549a541394/examples/02_model_content_based_filtering/dkn_deep_dive.ipynb), but I wanted to add evaluation on the test data. When I run `model.run_eval(test_file)` I am getting an error. I believe this is because the test data has users that are not present in the training data.
I am using the small MIND dataset. But I split my training, validation and test data a little differently.
Training is all of the small training data but the last day.
Validation is the last day of the small training data
Test is all of the small validation data.
<img width="1197" alt="Screen Shot 2022-08-04 at 1 09 19 PM" src="https://user-images.githubusercontent.com/110063544/182911207-d9a93dd1-1c73-4727-aabf-f24a7381c749.png">
Is this the expected behavior?
### Other Comments
| open | 2022-08-04T17:16:03Z | 2022-08-04T17:16:03Z | https://github.com/recommenders-team/recommenders/issues/1807 | [
"help wanted"
] | Bhammin | 0 |
deepspeedai/DeepSpeed | pytorch | 7,117 | safe_get_full_grad & safe_set_full_grad | deepspeed 0.15.3
zero 3 is used
For "safe_get_full_grad", does it return the same gradient values on each process/rank?
As for "safe_set_full_grad", should it be called on all the processes/ranks? or just one of them is enough?
If it's the former one, users will need to ensure gradient values to be set on each process/rank are the same?
Also, which float type should be used for "safe_set_full_grad"? any way to check this? | open | 2025-03-09T10:10:19Z | 2025-03-21T22:12:20Z | https://github.com/deepspeedai/DeepSpeed/issues/7117 | [] | ProjectDisR | 3 |
apache/airflow | machine-learning | 47,718 | Support AssetRef when binding to AssetAlias | ### Body
A followup to #47677. This will need some changes to the `outlet_events` exchange format.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | open | 2025-03-13T09:51:07Z | 2025-03-13T09:53:27Z | https://github.com/apache/airflow/issues/47718 | [
"kind:meta",
"area:datasets"
] | uranusjr | 0 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,497 | Make ORM private api more explicitly so | Related to the discussion in https://github.com/sqlalchemy/sqlalchemy/discussions/10474
Rename function to have underscore in them / rename private modules to have an udnerstore.
This in an attempt to discourage usage of private ORM api by 3-rd party library.
If such api is required then a clear feature request should be done to see if it can be accommodated using alternative public api or if the creation of such public api can be done | closed | 2023-10-18T15:31:27Z | 2024-11-19T12:50:47Z | https://github.com/sqlalchemy/sqlalchemy/issues/10497 | [
"orm"
] | CaselIT | 9 |
huggingface/transformers | machine-learning | 36,410 | Conflicting Keras 3 mitigations | ### System Info
- `transformers` version: 4.49.0
- Platform: Linux-6.13.4-zen1-1-zen-x86_64-with-glibc2.41
- Python version: 3.13.2
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.2
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0 (False)
- Tensorflow version (GPU?): 2.18.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
I was attempting to create a BART pipeline but it failed with the errors:
```
Traceback (most recent call last):
File "/usr/lib/python3.13/site-packages/transformers/activations_tf.py", line 22, in <module>
import tf_keras as keras
ModuleNotFoundError: No module named 'tf_keras'
```
```
File "/usr/lib/python3.13/site-packages/transformers/models/bart/modeling_tf_bart.py", line 25, in <module>
from ...activations_tf import get_tf_activation
File "/usr/lib/python3.13/site-packages/transformers/activations_tf.py", line 27, in <module>
raise ValueError(
...<3 lines>...
)
ValueError: Your currently installed version of Keras is Keras 3, but this is not yet supported in Transformers. Please install the backwards-compatible tf-keras package with `pip install tf-keras`.
```
```
File "/usr/lib/python3.13/site-packages/transformers/utils/import_utils.py", line 1865, in _get_module
raise RuntimeError(
...<2 lines>...
) from e
RuntimeError: Failed to import transformers.models.bart.modeling_tf_bart because of the following error (look up to see its traceback):
Your currently installed version of Keras is Keras 3, but this is not yet supported in Transformers. Please install the backwards-compatible tf-keras package with `pip install tf-keras`.
```
Now, I did install the relevant package and it fixed it and that's fine. But, the problem I wished to highlight is that there are two different methods of dealing with Keras 3 implemented: in PRs #28588 and #29598.
And theoretically, setting the environment variable `TF_USE_LEGACY_KERAS=1` should force tensorflow to use Keras 2 only and fix the issue without needing the `tf_keras` package. Unless I'm misunderstanding something.
If so, then the `try: import tf_keras` block should be inside the `elif os.environ["TF_USE_LEGACY_KERAS"] != "1":` block, I think, shouldn't it? Or a refactor of the whole thing.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behaviour (hopefully):
1. Set `export TF_USE_LEGACY_KERAS=1`
2. Don't have `tf_keras` installed.
3. Try to set up a BART pipeline like so:
```
from transformers import pipeline
# 1. Set up the Hugging Face summarization pipeline using BART model
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
```
or in any other suitable manner.
I understand my Step 2 fixes the problem, but it's not the problem I'm reporting, rather the approach to mitigating usage of Keras 3.
### Expected behavior
I expect that just setting the environment variable would force the use of Keras 2, and `tf_keras` not be needed. | closed | 2025-02-26T05:40:32Z | 2025-02-26T14:38:42Z | https://github.com/huggingface/transformers/issues/36410 | [
"bug"
] | mistersmee | 2 |
huggingface/datasets | numpy | 7,299 | Efficient Image Augmentation in Hugging Face Datasets | ### Describe the bug
I'm using the Hugging Face datasets library to load images in batch and would like to apply a torchvision transform to solve the inconsistent image sizes in the dataset and apply some on the fly image augmentation. I can just think about using the collate_fn, but seems quite inefficient.
I'm new to the Hugging Face datasets library, I didn't find nothing in the documentation or the issues here on github.
Is there an existing way to add image transformations directly to the dataset loading pipeline?
### Steps to reproduce the bug
from datasets import load_dataset
from torch.utils.data import DataLoader
```python
def collate_fn(batch):
images = [item['image'] for item in batch]
texts = [item['text'] for item in batch]
return {
'images': images,
'texts': texts
}
dataset = load_dataset("Yuki20/pokemon_caption", split="train")
dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)
# Output shows varying image sizes:
# [(1280, 1280), (431, 431), (789, 789), (769, 769)]
```
### Expected behavior
I'm looking for a way to resize images on-the-fly when loading the dataset, similar to PyTorch's Dataset.__getitem__ functionality. This would be more efficient than handling resizing in the collate_fn.
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| open | 2024-11-26T16:50:32Z | 2024-11-26T16:53:53Z | https://github.com/huggingface/datasets/issues/7299 | [] | fabiozappo | 0 |
thtrieu/darkflow | tensorflow | 659 | How boxes in annotation are mapped into output vector | Say I have only 1 class A to detect and data set with a few jpg images.
Each image contains a bounding box(s) with the class A.
Should I somehow provide negative samples where there is no class A?
If so, how the annotation should look like in this case?
Or darkflow gets those negative samples outside the boxes with the object in the data set images?
| open | 2018-03-23T09:03:03Z | 2018-03-23T09:03:03Z | https://github.com/thtrieu/darkflow/issues/659 | [] | vfateev | 0 |
ploomber/ploomber | jupyter | 175 | Make it easy to add new tasks | We should have a way to quickly create new tasks by providing templates.
Proposed API:
```sh
ploomber add python/sql
```
Asks for name and then creates the file.
Python:
```python
# ---
# jupyter:
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + tags=["parameters"]
upstream = None
product = None
# +
# your code here...
````
SQL:
```sql
/*
*/
DROP TABLE IF EXISTS {{product}};
CREATE TABLE AS {{product}}
SELECT * FROM {{upstream['some_task']}};
```
Note: try to locate `pipeline.yaml` to guide more specific directions (e.g. if extract_upstream=True), then the code should contain upstream dependencies. If thee is no `pipeline.yaml`, just create a generic template. Ask at the end if the new task should be added to `pipeline.yaml`.
Note: apart from the CLI, this could also be a jupyter frontend extension
| closed | 2020-07-07T04:34:04Z | 2020-07-08T06:18:25Z | https://github.com/ploomber/ploomber/issues/175 | [] | edublancas | 0 |
davidsandberg/facenet | computer-vision | 829 | Large loss with pretrained_model trained on CASIA-WebFace_align | I run:
python src/train_softmax.py --data_dir /home/han/facenet/CASIA-WebFace_align --image_size 160 __embedding_size 512 /home/han/facenet/src/pretrained_models/20180408-102900/model-2018408-102900.ckpt-90
Theoretically, the accuracy of pretrained model trained 90 epoch should be nearly 0, but when I trained on the pretrained model on david's dataset-CASIA-WebFace, the loss is 1.198,
and when i add all parameters except validation parameters followed https://github.com/davidsandberg/facenet/wiki/Classifier-training-of-inception-resnet-v1:
python src/train_softmax.py --logs_base_dir /home/han2/logs/facenet/ --models_base_dir /home/han2/models/facenet/ --data_dir /home/han/facenet/CASIA-WebFace_align --image_size 160 --model_def models.inception_resnet_v1 --optimizer ADAM --learning_rate -1 --max_nrof_epochs 150 --keep_probability 0.8 --random_crop --random_flip --use_fixed_image_standardization --learning_rate_schedule_file data/learning_rate_schedule_classifier_casia.txt --weight_decay 5e-4 --embedding_size 512 --pretrained_model /home/han2/facenet/src/pretrained_models/20180408-102900/model-20180408-102900.ckpt-90
the loss is 2.775 which is very large...
Has anybody else tried this??? I couldn't figure out the reason, is the code wrong???
| open | 2018-07-31T04:44:08Z | 2018-07-31T04:50:43Z | https://github.com/davidsandberg/facenet/issues/829 | [] | Victoria2333 | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 644 | Is it possible to seperate the backing vocal to a channel? | How to remove backing vocal? | open | 2023-07-04T05:59:26Z | 2023-07-04T05:59:26Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/644 | [] | zhouhao27 | 0 |
dadadel/pyment | numpy | 35 | Problem to install on RHEL-7 (with python 2.7) | ```
matej@mitmanek: pyment (master)$ python setup.py develop --user
running develop
running egg_info
writing Pyment.egg-info/PKG-INFO
writing top-level names to Pyment.egg-info/top_level.txt
writing dependency_links to Pyment.egg-info/dependency_links.txt
writing entry points to Pyment.egg-info/entry_points.txt
reading manifest file 'Pyment.egg-info/SOURCES.txt'
writing manifest file 'Pyment.egg-info/SOURCES.txt'
running build_ext
Creating /home/matej/.local/lib/python2.7/site-packages/Pyment.egg-link (link to .)
Adding Pyment 0.3.2.dev0 to easy-install.pth file
Installing pyment script to /home/matej/.local/bin
Installed /home/matej/archiv/knihovna/repos/pyment
Traceback (most recent call last):
File "setup.py", line 15, in <module>
'pyment = pyment.pymentapp:main'
File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
dist.run_commands()
File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/site-packages/setuptools/command/develop.py", line 27, in run
self.install_for_development()
File "/usr/lib/python2.7/site-packages/setuptools/command/develop.py", line 129, in install_for_development
self.process_distribution(None, self.dist, not self.no_deps)
File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 701, in process_distribution
distreq.project_name, distreq.specs, requirement.extras
TypeError: __init__() takes exactly 2 arguments (4 given)
matej@mitmanek: pyment (master)$
``` | closed | 2017-04-20T08:55:59Z | 2021-03-08T14:51:11Z | https://github.com/dadadel/pyment/issues/35 | [] | mcepl | 2 |
fastapi/fastapi | python | 12,246 | OpenAPI servers not being returned according how the docs say they should be | ### Discussed in https://github.com/fastapi/fastapi/discussions/12226
<div type='discussions-op-text'>
<sup>Originally posted by **mzealey** September 19, 2024</sup>
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from fastapi import FastAPI
app = FastAPI()
# you can add a test endpoint here or not - same bug either way
```
### Description
```
$ curl localhost:8$ curl localhost:8000/openapi.json
{"openapi":"3.1.0","info":{"title":"FastAPI","version":"0.1.0"},"paths":{}}
```
According to the documentation of the `servers` parameter in FastAPI:
> If the servers list is not provided, or is an empty list, the default value would be a dict with a url value of /.
(assuming that `root_path_in_servers = True` (the default))
Clearly this is not happening.
### Operating System
Linux
### Operating System Details
_No response_
### FastAPI Version
0.110.3 (but according to github code seems to be in latest also)
### Pydantic Version
2.5.3
### Python Version
Python 3.10.12
### Additional Context
_No response_</div> | open | 2024-09-22T10:29:30Z | 2024-09-22T16:10:30Z | https://github.com/fastapi/fastapi/issues/12246 | [
"question"
] | Kludex | 3 |
521xueweihan/HelloGitHub | python | 2,818 | 【开源自荐】shares: vscode股票看盘插件 | ## 推荐项目
<!-- 这里是 HelloGitHub 月刊推荐项目的入口,欢迎自荐和推荐开源项目,唯一要求:请按照下面的提示介绍项目。-->
<!-- 点击上方 “Preview” 立刻查看提交的内容 -->
<!--仅收录 GitHub 上的开源项目,请填写 GitHub 的项目地址-->
- 项目地址:https://github.com/xxjwxc/shares
<!--请从中选择(C、C#、C++、CSS、Go、Java、JS、Kotlin、Objective-C、PHP、Python、Ruby、Rust、Swift、其它、书籍、机器学习)-->
- 类别:(Go,vue)
<!--请用 20 个左右的字描述它是做什么的,类似文章标题让人一目了然 -->
- 项目标题:vscode股票看盘插件
<!--这是个什么项目、能用来干什么、有什么特点或解决了什么痛点,适用于什么场景、能够让初学者学到什么。长度 32-256 字符-->
- 项目描述:
- 大家都在说,中国的牛市来了。中国的房地产要企稳了。
- 模仿leek-fund(韭菜盒子)做的一套A股看盘工具。程序员划水必备神器。
- 项目三套客户端源码:小程序端,h5公众号端,vscode插件端。
<!--令人眼前一亮的点是什么?类比同类型项目有什么特点!-->
- 亮点:
- 利用Chatgpt,coze自研金融大模型
- 全链路人工智能量化
- 行业板块分析等
- echarts各种看盘图表实现与模板参考
- 日频级别的股东数,公募持仓
- 资金流向(沪深港通资金流向、北向资金、南向资金)
- 个股诊断
- 黄金上穿,优选日等
- 示例代码:(可选)
- 截图:


- 后续更新计划:
- 继续迭代优化
| open | 2024-09-29T02:49:33Z | 2024-10-23T01:53:54Z | https://github.com/521xueweihan/HelloGitHub/issues/2818 | [] | xxjwxc | 1 |
paperless-ngx/paperless-ngx | django | 7,470 | [BUG] User cannot login after upgrade | ### Description
User cannot login after Upgrade from 2.11.0 to 2.11.4
However, Admin can login.
### Steps to reproduce
Load Login page.
login as regular user.
### Webserver logs
```bash
{"headers":{"normalizedNames":{},"lazyUpdate":null},"status":403,"statusText":"OK","url":"https://docs.my.domain/api/ui_settings/","ok":false,"name":"HttpErrorResponse","message":"Http failure response for https://docs.my.domain/api/ui_settings/: 403 OK","error":{"detail":"Sie sind nicht berechtigt diese Aktion durchzuführen."}}
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.4
### Host OS
Ubuntu 22.04
### Installation method
Bare metal
### System status
```json
{
"pngx_version": "2.11.4",
"server_os": "Linux-6.8.12-1-pve-x86_64-with-glibc2.31",
"install_type": "bare-metal",
"storage": {
"total": 21474836480,
"available": 21346254848
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0001_initial_squashed_0009_mailrule_assign_tags",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://localhost:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-07-04T00:00:01.886143+02:00",
"index_error": null,
"classifier_status": "WARNING",
"classifier_last_trained": null,
"classifier_error": "Classifier file does not exist (yet). Re-training may be pending."
}
}
```
### Browser
Edge
### Configuration changes
none
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-14T19:27:30Z | 2024-09-14T03:04:25Z | https://github.com/paperless-ngx/paperless-ngx/issues/7470 | [
"not a bug"
] | manuelkamp | 3 |
littlecodersh/ItChat | api | 65 | 如何设置屏蔽手机端发送的消息 | ``` Python
@itchat.msg_register('Text', isGroupChat=False)
def auto_reply(msg):
try:
print itchat.get_friends(userName=msg['FromUserName'])['NickName']
print itchat.get_friends(userName=msg['ToUserName'])['NickName']
print msg['Content']
except:
pass
```
结果,只输出自己的NickName。。。。
| closed | 2016-08-15T16:14:06Z | 2016-08-16T11:46:48Z | https://github.com/littlecodersh/ItChat/issues/65 | [
"bug"
] | hadwinfu | 2 |
pyg-team/pytorch_geometric | deep-learning | 8,936 | Cannot install both torch-scatter and torch-sparse on PyTorch 2.2 | ### 🐛 Describe the bug
When attempting to import the `torch_geometric` package in my jupyter notebook, I encounter the warnings below:
```
[/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_geometric/typing.py:72](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_geometric/typing.py:72): UserWarning: An issue occurred while importing 'torch-scatter'. Disabling its usage. Stacktrace: dlopen(/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_scatter/_version_cpu.so, 0x0006): Symbol not found: __ZN3c1017RegisterOperatorsD1Ev
Referenced from: <CB013B2E-1F24-3179-9DB7-2827CCE30A6C> [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_scatter/_version_cpu.so](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_scatter/_version_cpu.so)
Expected in: <44DEDA27-4DE9-3D4A-8EDE-5AA72081319F> [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/lib/libtorch_cpu.dylib](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/lib/libtorch_cpu.dylib)
warnings.warn(f"An issue occurred while importing 'torch-scatter'. "
[/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_geometric/typing.py:110](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_geometric/typing.py:110): UserWarning: An issue occurred while importing 'torch-sparse'. Disabling its usage. Stacktrace: dlopen(/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_sparse/_version_cpu.so, 0x0006): Symbol not found: __ZN3c1017RegisterOperatorsD1Ev
Referenced from: <DB3BD544-3EC2-35A8-8706-C8B62DEB4F13> [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_sparse/_version_cpu.so](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch_sparse/_version_cpu.so)
Expected in: <44DEDA27-4DE9-3D4A-8EDE-5AA72081319F> [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/lib/libtorch_cpu.dylib](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/lib/libtorch_cpu.dylib)
warnings.warn(f"An issue occurred while importing 'torch-sparse'. "
```
This arises from the following code (previously I could use it normally without this warning):
```py
### inside site_packages/torch_geometric/typing.py ###
try:
import torch_scatter # noqa
WITH_TORCH_SCATTER = True
except Exception as e:
if not isinstance(e, ImportError): # pragma: no cover
warnings.warn(f"An issue occurred while importing 'torch-scatter'. "
f"Disabling its usage. Stacktrace: {e}")
torch_scatter = object
WITH_TORCH_SCATTER = False
```
### HERE
So I tried removing and installing those packages again. I'm on macOS so don't have CUDA and I've got
> "ERROR: Could not build wheels for torch-scatter, which is required to install pyproject.toml-based projects"
when trying to reinstall `torch-scatter` package:
```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.2.0+${cpu}.html
```
as well as `torch-sparse`:
```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.2.0+${cpu}.html
```
Above all, there had never been an error like this before, so I tried to downgrade my PyTorch to version 2.1 `pip install torch==2.1.0`, and all the errors I wrote above disappeared. I therefore discovered that PyTorch 2.2 was the source of the issue.
_Hope this issue post is useful to the PyG team._
<img width="851" alt="Screenshot 2567-02-19 at 13 41 22" src="https://github.com/pyg-team/pytorch_geometric/assets/74043014/397e72c7-0d05-4fbc-a6dd-2984683ba1d4">
### Versions
Collecting environment information...
PyTorch version: 2.2.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.2.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.7 (v3.11.7:fa7a6f2303, Dec 4 2023, 15:22:56) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-14.2.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.0
[pip3] torch_geometric==2.5.0
[pip3] torch-scatter==2.1.2
[pip3] torch-sparse==0.6.18
[pip3] torchdata==0.7.1
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.3.1
[conda] No relevant packages | closed | 2024-02-19T06:45:46Z | 2024-02-22T15:34:08Z | https://github.com/pyg-team/pytorch_geometric/issues/8936 | [
"bug"
] | Panichito | 6 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 888 | The voice is way too different can anyone explain how to improve? | I was trying an audio from this clip https://youtu.be/ZZuWSkuqjPA downloaded and converted to wav.
There is a lot of krrrr krrr noise in the output.
I tried passing a 30sec, 5 min, 10min clips, and also cleaned the WAV files with noise reduction in audacity and gave them as input; still no difference | closed | 2021-11-08T20:35:40Z | 2021-11-08T22:59:17Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/888 | [] | 0xrushi | 1 |
pytest-dev/pytest-qt | pytest | 147 | Missing bits from pytest-qt 2.0 docs | @The-Compiler
> I'm a bit late, but I just realised check_params_cb and order aren't mentioned in the signal docs at all, and order isn't even in the changelog.
| closed | 2016-07-29T11:06:45Z | 2016-10-19T00:08:52Z | https://github.com/pytest-dev/pytest-qt/issues/147 | [] | nicoddemus | 3 |
fastapi/sqlmodel | sqlalchemy | 50 | Is dynamic schema supported like in SQLAlchemy? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
Example code from the blog:
Base = declarative_base()
class MyTableClass(Base):
__tablename__ = 'myTableName'
myFirstCol = Column(Integer, primary_key=True)
mySecondCol = Column(Integer, primary_key=True)
Base.metadata.create_table(engine)
attr_dict = {'__tablename__': 'myTableName',
'myFirstCol': Column(Integer, primary_key=True),
'mySecondCol': Column(Integer)}
```
### Description
I am looking if SQLModel supports dynamic schema like SQLAlchemy does. Example: https://sparrigan.github.io/sql/sqla/2016/01/03/dynamic-tables.html
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.0
### Additional Context
_No response_ | open | 2021-08-28T06:14:59Z | 2021-11-19T06:00:42Z | https://github.com/fastapi/sqlmodel/issues/50 | [
"question"
] | aghanti7 | 2 |
ploomber/ploomber | jupyter | 365 | HTML - Display youtube videos on a responsive table | The videos section in the docs embeds youtube videos one after another: https://ploomber.readthedocs.io/en/latest/videos.html
it'd be better to create a responsive table that displays 2-3 columns and collapses them on mobile, we're using the bootstrap framework so it should be simple to do it
source code: https://github.com/ploomber/ploomber/blob/master/doc/_templates/videos.html
contributing guide: https://github.com/ploomber/ploomber/blob/master/doc/CONTRIBUTING.md
| closed | 2021-10-15T16:05:54Z | 2021-10-18T23:19:45Z | https://github.com/ploomber/ploomber/issues/365 | [
"documentation",
"good first issue"
] | edublancas | 4 |
graphql-python/graphene-django | graphql | 570 | [question] How to use generated inner types? | For instance:
I have some django model:
```python
class Foo(models.Model):
field = models.CharField(max_length=255, choices=(('foo', 'Foo'), ('bar', 'Bar')))
```
Then I generate object type using `DjangoObjectType`:
```python
class FooNode(DjangoObjectType):
class Meta:
model = Foo
```
Graphene will create Enum type for that field named `FooField` (I can see it in docs in graphiql).
But how can I use this type inside my code? How can I extract it from `FooNode`? | closed | 2019-01-13T22:44:02Z | 2021-06-22T18:32:34Z | https://github.com/graphql-python/graphene-django/issues/570 | [] | vanyakosmos | 2 |
ray-project/ray | data-science | 51,506 | CI test windows://python/ray/tests:test_multi_tenancy is consistently_failing | CI test **windows://python/ray/tests:test_multi_tenancy** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_multi_tenancy-END
Managed by OSS Test Policy | closed | 2025-03-19T00:07:58Z | 2025-03-19T21:53:33Z | https://github.com/ray-project/ray/issues/51506 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
ivy-llc/ivy | pytorch | 28,219 | Fix Frontend Failing Test: torch - tensor.torch.Tensor.__gt__ | closed | 2024-02-07T22:37:13Z | 2024-02-25T10:38:42Z | https://github.com/ivy-llc/ivy/issues/28219 | [
"Sub Task"
] | jacksondm33 | 0 | |
deepset-ai/haystack | pytorch | 8,190 | 🧪 Tools: support for tools in OllamaChatGenerator | closed | 2024-08-09T14:40:08Z | 2024-10-02T13:35:41Z | https://github.com/deepset-ai/haystack/issues/8190 | [
"P2"
] | anakin87 | 0 | |
tatsu-lab/stanford_alpaca | deep-learning | 127 | epoch | closed | 2023-03-23T01:31:56Z | 2023-03-23T01:32:02Z | https://github.com/tatsu-lab/stanford_alpaca/issues/127 | [] | zhengzangw | 0 | |
google-research/bert | tensorflow | 1,149 | Question about fine-tuning BERT on domain-specific dataset | ### Hi.
### I'm asking <ins>do we need to do a random search for hyperparameters - learning rate, batch size, weight decay - when we fine-tune pretrained BERT on a domain-specific dataset?</ins> A common practice introduced in the paper is that we can set the learning rate to 2e-5, batch size to 32, 64. Does this mean there is no need to do hyperparameter tuning by ourselves?
### Through experiments, I see that on a 6k dataset for binary classification, all hyperparameters tend to make the model overfit on the dataset. But produce slightly different test performance. In this case, is there a need to do a random search?
### The paper says during pretraining, the learning rate was set to 1e-4 and weight decay was set to 0.01. Could you give some explanation of why setting a large weight decay? <ins>How to set weight decay when fine-tuning?</ins>
### Thanks a lot!
# Please feel free to leave a comment! | open | 2020-09-16T07:15:15Z | 2020-09-23T01:59:19Z | https://github.com/google-research/bert/issues/1149 | [] | guoxuxu | 0 |
vimalloc/flask-jwt-extended | flask | 152 | Token becomes invalid after reload of service | Hi, I notice next issue:
If you serve flask app with uwsgi, and run your application with "service" command (create service file, put it to /etc/systemd/system), after service reload all tokens become invalid (e.g. for all requests with previously generated tokens returned response with 422 error).
If you need any examples(my configuration) - please, tell me. | closed | 2018-05-09T11:47:00Z | 2023-12-16T10:33:00Z | https://github.com/vimalloc/flask-jwt-extended/issues/152 | [] | mudrila | 5 |
FlareSolverr/FlareSolverr | api | 1,424 | [TorrentLeech] Error solving the challenge. Timeout after 60.0 seconds. | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Have you ACTUALLY checked all these?
YES
### Environment
```markdown
- FlareSolverr version: 3.3.21
- Last working FlareSolverr version: 3.3.21
- Operating system: Linux (Docker Container)
- Are you using Docker: Yes
- FlareSolverr User-Agent (see log traces or / endpoint): 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: NO
- Are you using a Proxy: NO (Not sure if nginx-proxy counts)
- Are you using Captcha Solver: NO
- If using captcha solver, which one:
- URL to test this issue: https://www.torrentleech.org/user/account/login/
```
### Description
Getting timeout as its not finding the challenge. Tried adding the language and time zone to the docker-compose.yml file with no luck. Checked existing 1218 PR but I saw it was already included into 3.3.20 so it should work. Tried with YGG and even google without luck.
### Logged Error Messages
```text
2024-12-19 20:48:23 INFO ReqId 139075663169280 172.18.0.1 POST http://localhost:8191/v1 200 OK
2024-12-19 20:49:14 INFO ReqId 139075652683520 Incoming request => POST /v1 body: {'cmd': 'request.get', 'url': 'https://www.ygg.re/auth/login', 'maxTimeout': 60000}
2024-12-19 20:49:14 DEBUG ReqId 139075652683520 Launching web browser...
2024-12-19 20:49:14 DEBUG ReqId 139075652683520 Started executable: `/app/chromedriver` in a child process with pid: 1152
2024-12-19 20:49:14 DEBUG ReqId 139075652683520 New instance of webdriver has been created to perform the request
2024-12-19 20:49:14 DEBUG ReqId 139075621226240 Navigating to... https://www.ygg.re/auth/login
2024-12-19 20:49:15 INFO ReqId 139075621226240 Challenge detected. Title found: Just a moment...
2024-12-19 20:49:15 DEBUG ReqId 139075621226240 Waiting for title (attempt 1): Just a moment...
2024-12-19 20:49:16 DEBUG ReqId 139075621226240 Timeout waiting for selector
2024-12-19 20:49:16 DEBUG ReqId 139075621226240 Try to find the Cloudflare verify checkbox...
2024-12-19 20:49:16 DEBUG ReqId 139075621226240 Cloudflare verify checkbox not found on the page.
2024-12-19 20:49:16 DEBUG ReqId 139075621226240 Try to find the Cloudflare 'Verify you are human' button...
2024-12-19 20:49:16 DEBUG ReqId 139075621226240 The Cloudflare 'Verify you are human' button not found on the page.
2024-12-19 20:49:18 DEBUG ReqId 139075621226240 Waiting for title (attempt 2): Just a moment...
2024-12-19 20:49:19 DEBUG ReqId 139075621226240 Timeout waiting for selector
2024-12-19 20:49:19 DEBUG ReqId 139075621226240 Try to find the Cloudflare verify checkbox...
2024-12-19 20:49:19 DEBUG ReqId 139075621226240 Cloudflare verify checkbox not found on the page.
2024-12-19 20:49:19 DEBUG ReqId 139075621226240 Try to find the Cloudflare 'Verify you are human' button...
2024-12-19 20:49:19 DEBUG ReqId 139075621226240 The Cloudflare 'Verify you are human' button not found on the page.
2024-12-19 20:49:21 DEBUG ReqId 139075621226240 Waiting for title (attempt 3): Just a moment...
```
### Screenshots
_No response_ | closed | 2024-12-20T01:49:40Z | 2024-12-22T00:57:18Z | https://github.com/FlareSolverr/FlareSolverr/issues/1424 | [
"duplicate"
] | fansollo22 | 11 |
open-mmlab/mmdetection | pytorch | 11,930 | get_flops in Co-DETR ERROR!!!!!!!!!!! | I just run the get_flops.py with config is co_dino_5scale_r50_lsj_8xb2_1x_coco.py
the work done, but the resule is error like this:

I print the state table like this:

so, why the flops is 0???? parameters can be caculated!!!!!!!!!!!! | open | 2024-08-29T14:32:24Z | 2024-09-14T08:32:11Z | https://github.com/open-mmlab/mmdetection/issues/11930 | [] | NewBeeMrz | 2 |
ageitgey/face_recognition | python | 985 | Fast Inferencing on Embedded board in real time Face Recognition using Tensor RT | * face_recognition version: 1.2.3
* Python version: 3.6.8
* Operating System: Ubuntu 18.04(Jetson Nano Optimized)
Hello Adam sir! First of all, let me tell you that face_recognition is an extremely Robust and Highly Accurate API till date for Facial Detection/ Recognition. KUDOS to your work !!!
I was wondering whether we can take the performance of real-time inference of Face Recognition even further using Nvidia's TensorRT on Nvidia's Embedded boards(Jetson Nano, AGX Xavier, etc.)???
If yes then how?? I have tried exploring Tensor RT with reference to this API but I am unable to get to a satisfactory output!
All help appreciated!
| open | 2019-11-23T07:13:54Z | 2022-05-20T01:42:07Z | https://github.com/ageitgey/face_recognition/issues/985 | [] | DixitIshan | 2 |
microsoft/unilm | nlp | 1,066 | The project of E5: How to fine-tune on the E5 model? | Dear author:
The following informations is about the paper and project of E5: [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
I want to fine-tune on my dataset. However, I do not know how to do it, there is no more information on the main site of GitHub: https://github.com/microsoft/unilm/tree/master/e5.
1. I don't know how to prepare my data. Same as what is the data structure should I design.
2. What the tools should I use?
Can you provide more information or train_script / finetune_script on the GitHub .
Thank you very much.
| closed | 2023-04-14T07:26:54Z | 2024-04-25T07:51:26Z | https://github.com/microsoft/unilm/issues/1066 | [] | EasyLuck | 14 |
BMW-InnovationLab/BMW-YOLOv4-Training-Automation | rest-api | 35 | No such file or directory | Hi I have the same issue as others in this issue history.
I have tried solution to set DOWNLOAD_ALL=1 in dockerfile but not works for me.
I have yolov4.weights in the right folder under config/darknet/yolov4_default_weights/
Any help? Thank you. Robert
<img width="1203" alt="image" src="https://user-images.githubusercontent.com/34579968/155623375-733a4c94-9184-4959-8229-eed873fc5c59.png"> | closed | 2022-02-24T23:19:27Z | 2022-12-14T09:40:23Z | https://github.com/BMW-InnovationLab/BMW-YOLOv4-Training-Automation/issues/35 | [
"enhancement",
"help wanted"
] | rsicak | 7 |
voila-dashboards/voila | jupyter | 692 | start Jupyter Notebook as Voila app from the OS Windows manager | Similarly to nbopen I would like to start a Jupyter Notebook from e.g. my Windows explorer with a double-click in voila. Has somebody seen something like that? Is it possible? | open | 2020-09-03T19:13:01Z | 2020-12-31T13:18:19Z | https://github.com/voila-dashboards/voila/issues/692 | [] | 1kastner | 4 |
hzwer/ECCV2022-RIFE | computer-vision | 309 | 如何对视频流中特定的某一段进行插帧 | open | 2023-04-21T01:39:13Z | 2023-04-21T02:30:01Z | https://github.com/hzwer/ECCV2022-RIFE/issues/309 | [] | HAL900000 | 0 | |
ansible/awx | django | 15,773 | job_wait times out when job ends with status error | ### Please confirm the following
- [x] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [x] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [x] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [x] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
When a AAP job ends with status error the module will still wait for event_processing_finished and this can take long time (hours) hence the job will wait until timeout is reached.
### AWX version
4.5.15
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [x] Collection
- [ ] CLI
- [ ] Other
### Installation method
N/A
### Modifications
no
### Ansible version
2.16
### Operating system
RHEL 9
### Web browser
_No response_
### Steps to reproduce
force a job to end with status error. ensure event processing time take longer than job_wait timeout setting.
- name: Configure Windows test VM
ansible.controller.job_launch:
job_template: "This Job will fail with status error"
register: Job
- name: Wait for job to fail with status error
ansible.controller.job_wait:
job_id: "{{ Job.id }}"
timeout: 7200
interval: 30
### Expected results
wait_job should not timeout it should report back that job failed sooner.
### Actual results
TASK [Wait for server to be configured] ****************************************
task path: /runner/project/automated_tests/tasks/windows_configure_vm.yml:24
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 7204.267789, "finished": null, "id": 352663, "msg": "Monitoring of Job - 352663 aborted due to timeout", "started": "2025-01-23T09:00:18.690410Z", "status": "running"}
### Additional information
_No response_ | closed | 2025-01-24T07:59:17Z | 2025-02-05T18:16:36Z | https://github.com/ansible/awx/issues/15773 | [
"type:bug",
"needs_triage",
"community"
] | mhallin2 | 1 |
home-assistant/core | python | 141,160 | Zone triggers should be based on state transitions rather than GPS location updates | ### The problem
The device_tracker may not always determine the state based on the GPS location, because GPS location is not always accurate.
I use a third-party APP for positioning. It determines whether I am at home by scanning nearby WiFi. For me, GPS only tells my family my approximate location.
However, today I found that my device_tracker and person entities had no records of leaving home, but the zone trigger was still triggered. After investigation, I found that it was caused by GPS positioning deviation.
This seems unreasonable because it trusts GPS positioning too much and it is difficult for users to correct this problem (even if I find that there is a GPS deviation, I have no ability to correct the latitude and longitude positioning, all I can do is correct the state).
---
The method to reproduce is very simple. Define an automation with a zone trigger.
```
alias: 离家
description: ""
triggers:
- trigger: zone
entity_id: person.nxy
zone: zone.home
event: leave
enabled: false
conditions: []
actions:
- action: light.turn_off
metadata: {}
data: {}
target:
entity_id: light.nxy_bedroom_xi_ding_deng
mode: single
```
Next, change the latitude and longitude of device_tracker in devtools to somewhere else, but do not change the state.
At this time, the zone trigger will be triggered.
---
My English is not good, so I use Google Translate. If the content is not clear enough, please tell me.
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
alias: 离家
description: ""
triggers:
- trigger: zone
entity_id: person.nxy
zone: zone.home
event: leave
enabled: false
conditions: []
actions:
- action: light.turn_off
metadata: {}
data: {}
target:
entity_id: light.nxy_bedroom_xi_ding_deng
mode: single
```
### Anything in the logs that might be useful for us?
```txt
this:
entity_id: automation.nxy_leaving
state: 'on'
attributes:
id: '1737546509257'
last_triggered: '2025-03-22T20:32:00.932080+00:00'
mode: single
current: 0
friendly_name: 离家
last_changed: '2025-03-21T02:01:57.797939+00:00'
last_reported: '2025-03-22T20:32:01.198223+00:00'
last_updated: '2025-03-22T20:32:01.198223+00:00'
context:
id: 01JPZRWBD34NH0W0GHA5FBKFCS
parent_id: 01JPZRWBD3AJ6CFMG41NZP5KYQ
user_id: null
trigger:
id: '0'
idx: '0'
alias: null
platform: zone
entity_id: person.nxy
from_state:
entity_id: person.nxy
state: home
attributes:
editable: true
id: nxy
device_trackers:
- device_tracker.nxy
latitude: <home>
longitude: <home>
gps_accuracy: 31
source: device_tracker.nxy
user_id: 140211cfb8e04740afb8b6fcb7aa0684
entity_picture: /api/image/serve/7a2ef1e7650fa9aa6d66f9ada96e4267/512x512
friendly_name: NXY
last_changed: '2025-03-22T14:26:24.124404+00:00'
last_reported: '2025-03-22T20:42:01.200256+00:00'
last_updated: '2025-03-22T20:42:01.200256+00:00'
context:
id: 01JPZSENKG0HXHFEYF0QS65KQW
parent_id: null
user_id: null
to_state:
entity_id: person.nxy
state: home
attributes:
editable: true
id: nxy
device_trackers:
- device_tracker.nxy
latitude: <not_home>
longitude: <not_home>
gps_accuracy: 8
source: device_tracker.nxy
user_id: 140211cfb8e04740afb8b6fcb7aa0684
entity_picture: /api/image/serve/7a2ef1e7650fa9aa6d66f9ada96e4267/512x512
friendly_name: NXY
last_changed: '2025-03-22T14:26:24.124404+00:00'
last_reported: '2025-03-22T20:57:00.880796+00:00'
last_updated: '2025-03-22T20:57:00.880796+00:00'
context:
id: 01JPZTA46GBR665HNNFPZ2DX20
parent_id: null
user_id: null
zone:
entity_id: zone.home
state: '2'
attributes:
latitude: <home>
longitude: <home>
radius: 20
passive: false
persons:
- person.nxy
editable: true
icon: mdi:home
friendly_name: <hide>
last_changed: '2025-03-22T14:28:32.080153+00:00'
last_reported: '2025-03-22T14:28:32.080153+00:00'
last_updated: '2025-03-22T14:28:32.080153+00:00'
context:
id: 01JPZ42SPG340H0F80ASGH9J98
parent_id: null
user_id: null
event: leave
description: person.nxy leaving <hide>
```
### Additional information
_No response_ | open | 2025-03-23T01:56:27Z | 2025-03-23T07:45:41Z | https://github.com/home-assistant/core/issues/141160 | [] | NXY666 | 0 |
roboflow/supervision | tensorflow | 1,182 | HaloAnnotator does not work | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
can you give me a complete code? when i use HaloAnnotator, nothing changes in the image, the detections.mask is None
```python
import cv2
import supervision as sv
from ultralytics import YOLO
filename = '../src/static/dog.png'
image = cv2.imread(filename)
model = YOLO('../models/yolov8x.pt')
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)
annotated_image = sv.HaloAnnotator().annotate(
scene=image.copy(), detections=detections)
sv.plot_image(annotated_image)
```
### Additional
_No response_ | closed | 2024-05-09T07:33:38Z | 2024-05-09T08:11:32Z | https://github.com/roboflow/supervision/issues/1182 | [
"question"
] | wilsonlv | 1 |
httpie/cli | api | 1,461 | Failing tests with responses ≥ 0.22.0 | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. `git clone https://github.com/httpie/httpie; cd httpie`
2. `pip install 'responses>=0.22.0' .[test]`
3. `pytest`
## Current result
A multitude of failures in `tests/test_encoding.py`, `tests/test_json.py`, etc. in the vein of https://hydra.nixos.org/build/202035507: `KeyError: 0` on httpie/models.py line 82.
## Expected result
A passing test suite.
---
## Additional information, screenshots, or code examples
I wrote some of this up in https://github.com/NixOS/nixpkgs/pull/205270#issuecomment-1361147904, but the problem is not NixOS-specific. The short version is that before https://github.com/getsentry/responses/pull/585, the reference to `httpie.models.HTTPResponse()._orig.raw._original_response.version` in [the implementation](https://github.com/httpie/httpie/blob/621042a0486ceb3afaf47a013c4f2eee4edc1a1d/httpie/models.py#L72) of `httpie.models.HTTPResponse.headers` found the then-extant `responses.OriginalResponseShim` object, which does not have a `version` attribute, and therefore successfully defaulted to 11, whereas now that that class has been removed it finds a `urllib3.HTTPResponse` object instead, which [defaults](https://urllib3.readthedocs.io/en/stable/reference/urllib3.response.html#urllib3.response.HTTPResponse) to `version`=0, and it’s not prepared to handle that.
Given the amount of groveling into internal data structures that goes on here (I don’t think `requests` even documents `Request.raw` as being a `urllib3.HTTPResponse` object), I’m not sure if this is a bug in the `httpie` test suite or a regression in `responses`, so I’m filing it here for you to decide.
For reference, the following change makes the tests pass for me:
```diff
diff --git a/httpie/models.py b/httpie/models.py
index d97b55e..a3ec6e7 100644
--- a/httpie/models.py
+++ b/httpie/models.py
@@ -77,6 +77,8 @@ class HTTPResponse(HTTPMessage):
else:
raw_version = raw.version
except AttributeError:
+ raw_version = 0
+ if not raw_version:
# Assume HTTP/1.1
raw_version = 11
version = {
``` | closed | 2022-12-25T18:36:06Z | 2023-01-15T16:58:59Z | https://github.com/httpie/cli/issues/1461 | [
"bug",
"new"
] | alexshpilkin | 1 |
howie6879/owllook | asyncio | 56 | 我又用debian9试着装,出现这个问题 | 我又用debian9试着装,
debian9上安装了宝塔面板,搭建了lnmp
在安装你的教程,安装,都蛮顺利的,到这里
(python36) root@ip-172-26-13-169:~/owllook# pipenv run pip install pip==18.0
Creating a virtualenv for this project…
Pipfile: /root/owllook/Pipfile
Using /root/anaconda3/envs/python36/bin/python3.6m (3.6.8) to create virtualenv…
⠦ Creating virtual environment...Using base prefix '/root/anaconda3/envs/python36'
New python executable in /root/.local/share/virtualenvs/owllook-kjRL-ddR/bin/python3.6m
Also creating executable in /root/.local/share/virtualenvs/owllook-kjRL-ddR/bin/python
Installing setuptools, pip, wheel...
done.
Running virtualenv with interpreter /root/anaconda3/envs/python36/bin/python3.6m
✔ Successfully created virtual environment!
Virtualenv location: /root/.local/share/virtualenvs/owllook-kjRL-ddR
Collecting pip==18.0
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
100% |████████████████████████████████| 1.3MB 19.0MB/s
Installing collected packages: pip
Found existing installation: pip 19.0
Uninstalling pip-19.0:
Successfully uninstalled pip-19.0
Successfully installed pip-18.0
(python36) root@ip-172-26-13-169:~/owllook# pipenv install
Installing dependencies from Pipfile.lock (39f5cc)…
🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 15/52
进行到15/52这里时,不动了,不知道是什么情况? | closed | 2019-01-23T02:33:11Z | 2019-01-23T03:36:17Z | https://github.com/howie6879/owllook/issues/56 | [] | aff2018 | 7 |
thtrieu/darkflow | tensorflow | 410 | Anyone knows Conv2d-kernel's format is 'HWCN' or 'WHCN' ? | I am trying to simplify tiny-yolo net.
Conv2d-kernel weights were collected, using [CheckpointReader.get_tensor(key)](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/inspect_checkpoint.py)
from official sample
and I got ndarray shaped like (3, 3, 128, 256), for example.
I want to know the exact meaning of the shape is HWCN or WHCN. | open | 2017-09-28T14:54:08Z | 2017-09-28T15:00:59Z | https://github.com/thtrieu/darkflow/issues/410 | [] | Serbipunk | 0 |
ydataai/ydata-profiling | jupyter | 1,197 | Does pandas-profiling work in Jupyter Notebooks on AWS? | Does pandas-profiling work in Jupyter Notebooks on AWS? I understand there are a lot of configuration differences that can lead to issues but whenever I try to produce a profiling report, I get the following errors when I run:
```
profile = ProfileReport(df, 'myreport')
profile.to_file('s3://myfolder/myreport.html')
```
```
Summarize dataset: 97%|█████████▋| 427/438 [01:14<00:01, 8.03it/s, Calculate auto correlation] /home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/multimethod/__init__.py:315: FutureWarning: In a future version, `df.iloc[:, i] = newvals` will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either `df[df.columns[i]] = newvals` or, if columns are non-unique, `df.isetitem(i, newvals)`
return func(*args, **kwargs)
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:4881: ConstantInputWarning: An input array is constant; the correlation coefficient is not defined.
warnings.warn(stats.ConstantInputWarning(warn_msg))
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/correlations.py:67: UserWarning: There was an attempt to calculate the auto correlation, but this failed.
To hide this warning, disable the calculation
(using `df.profile_report(correlations={"auto": {"calculate": False}})`
If this is problematic for your use case, please report this as an issue:
https://github.com/ydataai/pandas-profiling/issues
(include the error message: 'No data; `observed` has size 0.')
warnings.warn(
Summarize dataset: 98%|█████████▊| 428/438 [28:20<32:48, 196.80s/it, Calculate spearman correlation]/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/multimethod/__init__.py:315: FutureWarning: The default value of numeric_only in DataFrame.corr is deprecated. In a future version, it will default to False. Select only valid columns or specify the value of numeric_only to silence this warning.
return func(*args, **kwargs)
Summarize dataset: 98%|█████████▊| 430/438 [30:55<21:07, 158.47s/it, Calculate kendall correlation] /home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:5218: RuntimeWarning: overflow encountered in long_scalars
(2 * xtie * ytie) / m + x0 * y0 / (9 * m * (size - 2)))
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:5219: RuntimeWarning: invalid value encountered in sqrt
z = con_minus_dis / np.sqrt(var)
Summarize dataset: 99%|█████████▊| 432/438 [45:40<00:38, 6.34s/it, Calculate phi_k correlation]
---------------------------------------------------------------------------
_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/externals/loky/backend/queues.py", line 125, in _feed
obj_ = dumps(obj, reducers=reducers)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/externals/loky/backend/reduction.py", line 211, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/externals/loky/backend/reduction.py", line 204, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 632, in dump
return Pickler.dump(self, obj)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/_memmapping_reducer.py", line 446, in __call__
for dumped_filename in dump(a, filename):
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/numpy_pickle.py", line 553, in dump
NumpyPickler(f, protocol=protocol).dump(value)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/pickle.py", line 487, in dump
self.save(obj)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/numpy_pickle.py", line 352, in save
wrapper.write_array(obj, self)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/numpy_pickle.py", line 134, in write_array
pickler.file_handle.write(chunk.tobytes('C'))
OSError: [Errno 28] No space left on device
"""
The above exception was the direct cause of the following exception:
PicklingError Traceback (most recent call last)
<ipython-input-9-34649000e9e9> in <module>
1 profile = ProfileReport(df_perf_18, title="MyReport")
----> 2 profile.to_file(f"s3://sf-puas-prod-use1-pc/fire/research/home_telematics/adt/analysis/MyReport.html")
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in to_file(self, output_file, silent)
307 create_html_assets(self.config, output_file)
308
--> 309 data = self.to_html()
310
311 if output_file.suffix != ".html":
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in to_html(self)
418
419 """
--> 420 return self.html
421
422 def to_json(self) -> str:
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in html(self)
229 def html(self) -> str:
230 if self._html is None:
--> 231 self._html = self._render_html()
232 return self._html
233
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in _render_html(self)
337 from pandas_profiling.report.presentation.flavours import HTMLReport
338
--> 339 report = self.report
340
341 with tqdm(
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in report(self)
223 def report(self) -> Root:
224 if self._report is None:
--> 225 self._report = get_report_structure(self.config, self.description_set)
226 return self._report
227
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in description_set(self)
205 def description_set(self) -> Dict[str, Any]:
206 if self._description_set is None:
--> 207 self._description_set = describe_df(
208 self.config,
209 self.df,
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/describe.py in describe(config, df, summarizer, typeset, sample)
93 pbar.total += len(correlation_names)
94
---> 95 correlations = {
96 correlation_name: progress(
97 calculate_correlation, pbar, f"Calculate {correlation_name} correlation"
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/describe.py in <dictcomp>(.0)
94
95 correlations = {
---> 96 correlation_name: progress(
97 calculate_correlation, pbar, f"Calculate {correlation_name} correlation"
98 )(config, df, correlation_name, series_description)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/utils/progress_bar.py in inner(*args, **kwargs)
9 def inner(*args, **kwargs) -> Any:
10 bar.set_postfix_str(message)
---> 11 ret = fn(*args, **kwargs)
12 bar.update()
13 return ret
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/correlations.py in calculate_correlation(config, df, correlation_name, summary)
105 correlation = None
106 try:
--> 107 correlation = correlation_measures[correlation_name].compute(
108 config, df, summary
109 )
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/multimethod/__init__.py in __call__(self, *args, **kwargs)
313 func = self[tuple(func(arg) for func, arg in zip(self.type_checkers, args))]
314 try:
--> 315 return func(*args, **kwargs)
316 except TypeError as ex:
317 raise DispatchError(f"Function {func.__code__}") from ex
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/pandas/correlations_pandas.py in pandas_phik_compute(config, df, summary)
152 from phik import phik_matrix
153
--> 154 correlation = phik_matrix(df[selected_cols], interval_cols=list(intcols))
155
156 return correlation
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/phik/phik.py in phik_matrix(df, interval_cols, bins, quantile, noise_correction, dropna, drop_underflow, drop_overflow, verbose, njobs)
254 verbose=verbose,
255 )
--> 256 return phik_from_rebinned_df(
257 data_binned,
258 noise_correction,
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/phik/phik.py in phik_from_rebinned_df(data_binned, noise_correction, dropna, drop_underflow, drop_overflow, njobs)
164 ]
165 else:
--> 166 phik_list = Parallel(n_jobs=njobs)(
167 delayed(_calc_phik)(co, data_binned[list(co)], noise_correction)
168 for co in itertools.combinations_with_replacement(
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/parallel.py in __call__(self, iterable)
1096
1097 with self._backend.retrieval_context():
-> 1098 self.retrieve()
1099 # Make sure that we get a last message telling us we are done
1100 elapsed_time = time.time() - self._start_time
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/parallel.py in retrieve(self)
973 try:
974 if getattr(self._backend, 'supports_timeout', False):
--> 975 self._output.extend(job.get(timeout=self.timeout))
976 else:
977 self._output.extend(job.get())
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout)
565 AsyncResults.get from multiprocessing."""
566 try:
--> 567 return future.result(timeout=timeout)
568 except CfTimeoutError as e:
569 raise TimeoutError from e
~/SageMaker/.envs/mykernel/lib/python3.9/concurrent/futures/_base.py in result(self, timeout)
436 raise CancelledError()
437 elif self._state == FINISHED:
--> 438 return self.__get_result()
439
440 self._condition.wait(timeout)
~/SageMaker/.envs/mykernel/lib/python3.9/concurrent/futures/_base.py in __get_result(self)
388 if self._exception:
389 try:
--> 390 raise self._exception
391 finally:
392 # Break a reference cycle with the exception in self._exception
PicklingError: Could not pickle the task to send it to the workers.
```
I'm on the latest version of pandas-profiling (just installed it today). | open | 2022-12-05T18:01:37Z | 2022-12-20T12:12:24Z | https://github.com/ydataai/ydata-profiling/issues/1197 | [
"question/discussion ❓",
"information requested ❔"
] | JohnTravolski | 3 |
aio-libs/aiohttp | asyncio | 9,878 | Interacting with aiohttp.request results in error (version 3.11.0) | ### Describe the bug
We're using the aiohttp.request coroutine, however since updating to v3.11.0 we see the following error:
```python
TypeError: RequestInfo.__new__() missing 1 required positional argument: 'real_url'
```
### To Reproduce
1. Make a call using aiohttp.request, e.g.,
```python
aiohttp.request(method, url, data=body, headers=headers, auth=None, **kwargs)
```
### Expected behavior
No error would be raised
### Logs/tracebacks
```python-traceback
TypeError: RequestInfo.__new__() missing 1 required positional argument: 'real_url'
```
```
### Python Version
```console
$ python --version
Python 3.12.7
```
### aiohttp Version
```console
$ python -m pip show aiohttp
3.11.0
```
### multidict Version
```console
$ python -m pip show multidict
6.1.0
```
### propcache Version
```console
$ python -m pip show propcache
0.2.0
```
### yarl Version
```console
$ python -m pip show yarl
1.17.1
```
### OS
Mac OS X, Alpine container
### Related component
Client
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct | closed | 2024-11-14T16:48:41Z | 2024-11-14T18:18:24Z | https://github.com/aio-libs/aiohttp/issues/9878 | [
"bug"
] | brent-cybrid | 4 |
roboflow/supervision | computer-vision | 809 | No module name 'supervision', installed supervision==0.18.0 and imported supervision as sv | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
Traceback (most recent call last):
File "/home/shivakrishnakarnati/Documents/Programming/ROS/ros_supervision_obj/install/object_det/lib/object_det/object_det_node", line 33, in <module>
sys.exit(load_entry_point('object-det==0.0.0', 'console_scripts', 'object_det_node')())
File "/home/shivakrishnakarnati/Documents/Programming/ROS/ros_supervision_obj/install/object_det/lib/object_det/object_det_node", line 25, in importlib_load_entry_point
return next(matches).load()
File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 171, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/shivakrishnakarnati/Documents/Programming/ROS/ros_supervision_obj/install/object_det/lib/python3.10/site-packages/object_det/object_det_node.py", line 19, in <module>
import supervision as sv
ModuleNotFoundError: No module named 'supervision'
[ros2run]: Process exited with failure 1
### Environment
- supervision 0.18.0
### Minimal Reproducible Example
import supervision as sv
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-01-29T14:29:43Z | 2024-01-29T15:21:54Z | https://github.com/roboflow/supervision/issues/809 | [
"bug"
] | shivakarnati | 7 |
schemathesis/schemathesis | pytest | 2,136 | [BUG] Parametrization incompatibility with falcon ASGI apps | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
We've recently upgraded from schemathesis 3.19.5, and our tests that involve parametrized schemas from pytest fixtures have begun to hang indefinitely when using `case.call_asgi`. I noticed that the `call_asgi` method was just recently deprecated, so I switched the tests to use just `case.call` instead, and they no longer hang but instead error out due to calling `run_wsgi_app` with an ASGI app. The issue is present for both OpenAPI and GraphQL schemas.
Digging in to the issue a little bit, I discovered that these issues only present themselves when using falcon as the ASGI framework. When using FastAPI instead, for example, everything works just fine. I've been unable to dig any further and determine whether this is a bug in falcon, so my apologies if this is not the right venue for this bug, but it seems to have something to do with the way the starlette testclient interacts with the falcon app. Specifically, the hanging seems to occur in the testclient's `__exit__` method, where it's waiting for everything to shutdown.
### To Reproduce
The following test code should demonstrate both the failing test with the new `case.call` method, and the hanging test with the old `case.call_asgi` method. In addition, there are tests using the exact same code/schema via a fastapi app that will pass:
```python
import falcon.asgi
import fastapi
import pytest
import schemathesis
async def get_schema():
schema = {
"openapi": "3.0.2",
"info": {"description": ""},
"paths": {
"/info": {
"get": {
"summary": "",
"parameters": [],
"responses": {
"200": {"description": ""}
},
},
},
},
}
return schema
@pytest.fixture
def falcon_app_schema():
class InfoResource:
async def on_get(self, req, resp):
resp.media = await get_schema()
falcon_app = falcon.asgi.App()
falcon_app.add_route("/info", InfoResource())
return schemathesis.from_asgi("/info", falcon_app)
@pytest.fixture
def fastapi_app_schema():
fastapi_app = fastapi.FastAPI()
fastapi_app.get("/info")(get_schema)
return schemathesis.from_asgi("/info", fastapi_app)
fastapi_schema = schemathesis.from_pytest_fixture("fastapi_app_schema")
falcon_schema = schemathesis.from_pytest_fixture("falcon_app_schema")
@fastapi_schema.parametrize()
def test_fastapi_call(case):
response = case.call()
case.validate_response(response)
@fastapi_schema.parametrize()
def test_fastapi_call_asgi(case):
response = case.call_asgi()
case.validate_response(response)
@falcon_schema.parametrize()
def test_falcon_call(case):
response = case.call()
case.validate_response(response)
@falcon_schema.parametrize()
def test_falcon_call_asgi(case):
response = case.call_asgi()
case.validate_response(response)
```
Please include a minimal API schema causing this issue:
```yaml
{'openapi': '3.0.2', 'info': {'description': ''}, 'paths': {'/info': {'get': {'summary': '', 'parameters': [], 'responses': {'200': {'description': ''}}}}}}
```
### Expected behavior
The result of the above tests should be independent of the ASGI framework used.
### Environment
```
- OS: Linux
- Python version: 3.9.15
- Schemathesis version: 3.27.0
- Spec version: Open API 3.0.2
```
### Additional context
Here are the versions of the libraries directly involved with running the tests that I've been using to reproduce the issue:
```
falcon 3.1.3
fastapi 0.110.1
pytest 8.1.1
pytest-subtests 0.7.0
schemathesis 3.27.0
```
| closed | 2024-04-16T20:33:12Z | 2024-04-29T19:48:47Z | https://github.com/schemathesis/schemathesis/issues/2136 | [
"Priority: High",
"Type: Bug",
"Python: ASGI"
] | BrandonWiebe | 5 |
huggingface/datasets | tensorflow | 7,061 | Custom Dataset | Still Raise Error while handling errors in _generate_examples | ### Describe the bug
I follow this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in custom dataset. I am writing a dataset script which read jsonl files and i need to handle errors and continue reading files without raising exception and exit the execution.
```
def _generate_examples(self, filepaths):
errors=[]
id_ = 0
for filepath in filepaths:
try:
with open(filepath, 'r') as f:
for line in f:
json_obj = json.loads(line)
yield id_, json_obj
id_ += 1
except Exception as exc:
logger.error(f"error occur at filepath: {filepath}")
errors.append(error)
```
seems the logger.error is printed but still exception is raised the the run is exit.
```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl
Traceback (most recent call last):
File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
json_obj = json.loads(line)
File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)
Generating train split: 0 examples [00:06, ? examples/s]>
RemoteTraceback:
"""
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize
raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in
_write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
"""
The above exception was the direct cause of the following exception:
│ │
│ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. │
│ py:1377 in <listcomp> │
│ │
│ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │
│ 1375 │ │ │ │ │ break │
│ 1376 │ │ # we get the result in case there's an error to raise │
│ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │
│ 1378 │
│ │
│ ╭──────────────────────────────── locals ─────────────────────────────────╮ │
│ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │
│ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ ╰─────────────────────────────────────────────────────────────────────────╯ │
│ │
│ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │
│ in get │
│ │
│ 768 │ │ if self._success: │
│ 769 │ │ │ return self._value │
│ 770 │ │ else: │
│ ❱ 771 │ │ │ raise self._value │
│ 772 │ │
│ 773 │ def _set(self, i, obj): │
│ 774 │ │ self._success, self._value = obj │
│ │
│ ╭────────────────────────────── locals ──────────────────────────────╮ │
│ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ │ timeout = None │ │
│ ╰────────────────────────────────────────────────────────────────────╯ │
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
same as above
### Expected behavior
should handle error and continue reading remaining files
### Environment info
python 3.9 | open | 2024-07-22T21:18:12Z | 2024-09-09T14:48:07Z | https://github.com/huggingface/datasets/issues/7061 | [] | hahmad2008 | 0 |
matplotlib/matplotlib | data-visualization | 29,732 | [Bug]: Unit tests: MacOS 14 failures: gi-invoke-error-quark | ### Bug summary
During a recent GitHub Actions workflow for unrelated pull request #29721, the following error appeared:
```
E gi._error.GError: gi-invoke-error-quark: Could not locate g_option_error_quark: dlopen(libglib-2.0.0.dylib, 0x0009): tried: 'libglib-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OSlibglib-2.0.0.dylib' (no such file), '/usr/lib/libglib-2.0.0.dylib' (no such file, not in dyld cache), 'libglib-2.0.0.dylib' (no such file) (1)
```
### Code for reproduction
This occurred for Python3.12 and Python3.13 on MacOS 14:
* https://github.com/matplotlib/matplotlib/actions/runs/13788406404/job/38561789129
* https://github.com/matplotlib/matplotlib/actions/runs/13788406404/job/38561789705
### Actual outcome
See the error in the main description and linked build logs.
### Expected outcome
The tests should pass as expected (the error appears to be a ~~transient~~ failure to locate/load a dynamic library file).
### Additional information
N/A
### Operating system
MacOS 14
### Matplotlib Version
86fd11f6997c653fcf9e20f56418c7fc92ddd638
### Matplotlib Backend
N/A
### Python version
_No response_
### Jupyter version
N/A
### Installation
None
Edit: remove code-block markup surrounding non-code details
Edit: the failure appears to occur fairly reliably, so remove the description of it as transient | open | 2025-03-11T14:10:28Z | 2025-03-19T11:37:08Z | https://github.com/matplotlib/matplotlib/issues/29732 | [] | jayaddison | 10 |
ray-project/ray | data-science | 51,188 | [<Ray component: Data] Async map_batches return empty result when execution_options.preserve_order = True | ### What happened + What you expected to happen
### The bug
With async generator, `ray.data.map_batches` will return empty result if we set `ray.data.DataContext.execution_options.preserve_order = True`.
### Expected behavior
The result should not be empty, and is expected to have the same amount of data as the input in my use case.
### Versions / Dependencies
Ray: 3.0.0.dev0
Python: 3.12.3
OS: Ubuntu 24.04.2 LTS
### Reproduction script
```
import asyncio
import ray
ray.init(num_cpus=10)
async def yield_output(i):
return {"input": [i], "output": [2**i]}
class AsyncActor:
def __init__(self):
pass
async def __call__(self, batch):
tasks = [asyncio.create_task(yield_output(i)) for i in batch["id"]]
for task in tasks:
yield await task
ctx = ray.data.DataContext.get_current()
ctx.execution_options.preserve_order = True
n = 10
ds = ray.data.range(n, override_num_blocks=2)
ds = ds.map(lambda x: x)
ds = ds.map_batches(AsyncActor, batch_size=1, concurrency=1, max_concurrency=2)
output = ds.take_all()
expected_output = [{"input": i, "output": 2**i} for i in range(n)]
assert len(output) == len(expected_output), (output, expected_output)
```
### Issue Severity
None | open | 2025-03-09T03:21:02Z | 2025-03-10T17:27:31Z | https://github.com/ray-project/ray/issues/51188 | [
"bug",
"triage",
"data"
] | Drice1999 | 0 |
apachecn/ailearning | python | 347 | 图像压缩 | 在使用SVD对图像压缩的时候,为什么最后32*32显示的全是数字“0”, | closed | 2018-04-11T14:19:55Z | 2018-04-15T06:25:51Z | https://github.com/apachecn/ailearning/issues/347 | [] | cjr0106 | 4 |
QuivrHQ/quivr | api | 3,055 | Display all knowledge in a knowledge management system tab | For Integrations, each integration tool (Gdrive, Notion, etc) is a folder, and each time, each account is a subfolder. | closed | 2024-08-22T14:37:29Z | 2024-10-23T08:04:01Z | https://github.com/QuivrHQ/quivr/issues/3055 | [
"Feature"
] | linear[bot] | 1 |
microsoft/MMdnn | tensorflow | 491 | Failed to convert tf1.4 model to caffe mode,TypeError | Platform (like ubuntu 16.04/win10):
Ubuntu14.04
Python version:
2.7
Source framework with version (like Tensorflow 1.4.1 with GPU):
TensorFlow1.4 gpu
Destination framework with version (like CNTK 2.3 with GPU):
caffe gpu
Pre-trained model path (webpath or webdisk path):
https://github.com/Jackiq/TF-1.4-MODEL
Running scripts:
$ mmconvert -sf tensorflow -in model.ckpt.meta -iw model.ckpt -df caffe -om tf_to_caffe.proto --dump_tag SERVING
Running detail:
$ mmconvert -sf tensorflow -in model.ckpt.meta -iw model.ckpt -df caffe -om tf_to_caffe.proto --dump_tag SERVING
Parse file [model.ckpt.meta] with binary format successfully.
Tensorflow model file [model.ckpt.meta] loaded successfully.
Tensorflow checkpoint file [model.ckpt] loaded successfully. [301] variables loaded.
Traceback (most recent call last):
File "/usr/local/bin/mmconvert", line 11, in <module>
load_entry_point('mmdnn==0.2.3', 'console_scripts', 'mmconvert')()
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
ret = convertToIR._convert(ir_args)
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 66, in _convert
parser = TensorflowParser(args.network, args.weights, args.dstNodeName)
File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py", line 248, in __init__
dest_nodes, transforms)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/tools/graph_transforms/__init__.py", line 46, in TransformGraph
outputs_string = compat.as_bytes(",".join(outputs))
TypeError
| open | 2018-11-08T10:16:57Z | 2018-11-08T11:56:23Z | https://github.com/microsoft/MMdnn/issues/491 | [] | Jackiq | 1 |
ray-project/ray | pytorch | 50,961 | [Feedback] Feedback for ray + uv | Hello everyone! As of [Ray 2.43.0](https://github.com/ray-project/ray/releases/tag/ray-2.43.0), we have launched a new integration with `uv run` that we are super excited to share with you all. This will serve as the main Github issue to track any issues or feedback that ya'll might have while using this.
Please share any success stories, configs, or just cool discoveries that you might have while running uv + Ray! We are excited to hear from you.
To read more about uv + Ray, check out [our new blog post here](https://www.anyscale.com/blog/uv-ray-pain-free-python-dependencies-in-clusters). | open | 2025-02-27T21:33:22Z | 2025-03-21T17:54:50Z | https://github.com/ray-project/ray/issues/50961 | [] | cszhu | 17 |
jupyter/nbgrader | jupyter | 1,357 | [ERROR] No notebooks were matched by autograded/* | <!--
Thanks for helping to improve nbgrader!
If you are submitting a bug report or looking for support, please use the below
template so we can efficiently solve the problem.
If you are requesting a new feature, feel free to remove irrelevant pieces of
the issue template.
-->
### macOS catalina 10.15.6
### `0.6.1`
### `jupyterhub --version` (if used with JupyterHub)
### `6.1.1`
###
I am trying to setup nbgrader for being able to grade students notebook assignments. I have my directory with the source folder contains the assignment folder (source/lab/lab.ipynb) with the associated target notebook (with the solutions). Then according to my nbgrader_config:
`c.ZipCollectApp.archive_directory = 'archive'
c.ZipCollectApp.collect_directory_structure = '{downloaded}/{assignment_id}/{collect_step}'
c.ZipCollectApp.collector_plugin = 'nbgrader.plugins.zipcollect.FileNameCollectorPlugin'
c.ZipCollectApp.downloaded_directory = 'downloaded'
c.ZipCollectApp.extracted_directory = 'extracted'`
I add the student's notebooks according to the config file to the following path /downloaded/lab/extracted/test.ipynb
Then I run the command `nbgrader zip_collect lab --force` which created the submitted folder then a folder submitted/test/lab/lab.ipynb I guess as it should.
When I go to the FormGrader though I can see the submission, however, there is a generated feedback with the log out error: `[ERROR] No notebooks were matched by ... ../autograded/*/lab`
Why it did not work? Is it due to paths?
| open | 2020-08-14T08:48:32Z | 2020-08-14T16:48:50Z | https://github.com/jupyter/nbgrader/issues/1357 | [] | ghost | 0 |
robotframework/robotframework | automation | 5,248 | [CLI error] No command line arguments are accepted after the Argumentfile | Robot framework versions: 6.1.1 and 7.1.1
Python version: 3.10.x
OS: Windows (not tested on any other OS)
When the "robot" command is called from command line interface and an argument file is provided through "-A" or "--argumentfile", any arguments passed **after** the argumentfile cause an error (including another --argumentfile).
For example, consider an argument file "args.robot" which calls a suite file "suiteA.robot". "suiteA.robot" has 2 test cases - "Test 1" and "Test 2". The following command line example calls explain how to reproduce it.
# Causes error
robot -A args.robot -t "Test 1"
# No error
robot -t "Test 1" -A args.robot
This violates the description given in the "Using argument file" section of the user guide (https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#using-argument-files):
"... When an argument file is used with other arguments, its contents are placed into the original list of arguments to the same place where the argument file option was. This means that options in argument files can override options before it, and its options can be overridden by options after it. It is possible to use --argumentfile option multiple times or even recursively:
...
robot --argumentfile default_options.txt --name Example my_tests.robot
robot -A first.txt -A second.txt -A third.txt tests.robot"
| closed | 2024-10-28T00:32:08Z | 2024-10-29T11:10:02Z | https://github.com/robotframework/robotframework/issues/5248 | [] | Mushahar | 1 |
opengeos/streamlit-geospatial | streamlit | 58 | Link is broken | The streamlit app doesn't load | closed | 2022-07-18T20:23:09Z | 2022-07-19T14:30:46Z | https://github.com/opengeos/streamlit-geospatial/issues/58 | [] | osbm | 2 |
keras-team/keras | deep-learning | 20,763 | Use Multi GPU(Mirrored Strategy) training with XLA and AMP(Mixed Precision) | Hi I've encountered some issues while trying to perform **multi-GPU training with XLA** (Accelerated Linear Algebra), **and AMP** (Automatic Mixed Precision).
I'm reaching out to understand if it's possible to use **multi-GPU training with XLA and AMP** together.
If so, I'd like guidance on which versions of tensorflow and keras should I use or how to modify my code to make this work.
**Background:**
In earlier versions of TensorFlow (prior to 2.11), we were able to successfully train models using multiple GPUs with both XLA and AMP enabled. However, with versions beyond tensorflow 2.11 versions, I've not been able to run Training with multi gpu+xla+amp.
**Issues Encountered with Different Versions:**
I use **tf-keras=2.15** for all these tests,
**1. tensorflow=2.17.1/2.16.2 and keras=3.8.0:**
**Error Message**
``
RuntimeError: Exception encountered when calling Cond.call() merge_call called while defining a new graph or a tf.function. This can often happen if the function fn passed to strategy.run() contains a nested @tf.function, and the nested @tf.function contains a synchronization point, such as aggregating gradients (e.g, optimizer.apply_gradients), or if the function fn uses a control flow statement which contains a synchronization point in the body. Such behaviors are not yet supported. Instead, please avoid nested tf.functions or control flow statements that may potentially cross a synchronization boundary, for example, wrap the fn passed to strategy.run or the entire strategy.run inside a tf.function or move the control flow out of fn. If you are subclassing a tf.keras.Model, please avoid decorating overridden methods test_step and train_step in tf.function``
**2. tensorflow=2.17.1/2.16.2 and keras=3.5.0:**
**Issue:** `Training gets stuck after few epochs and not progress.`
**3. tensorflow=2.17.1/2.16.2 and keras=3.0.5**
**Error Message:**
`UnimplementedError: We failed to lift variable creations out of this tf.function, so this tf.function cannot be run on XLA. A possible workaround is to move variable creation outside of the XLA compiled function.`
**4. tensorflow=2.18.0 Also gives similar error with keras versions 3.0,3.5 and 3.6**
**5. Using `TF_USE_LEGACY_KERAS=1`**
Training gets stuck after some time similarly as with keras>3. I have tried this with various tensorflow versions but got same training stuck.
**Code Snippet:**
Here's a simplified version of the code I'm using. This example is adapted from the [Keras documentation on distributed training](https://keras.io/guides/distributed_training_with_tensorflow/):
```
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import keras
def get_compiled_model():
# Make a simple 2-layer densely-connected neural network.
keras.mixed_precision.set_global_policy("mixed_float16")
inputs = keras.Input(shape=(784,))
x = keras.layers.Dense(256, activation="relu")(inputs)
x = keras.layers.Dense(256, activation="relu")(x)
outputs = keras.layers.Dense(10)(x)
model = keras.Model(inputs, outputs)
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-5),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
return model
def get_dataset():
batch_size = 32
num_val_samples = 10000
# Return the MNIST dataset in the form of a [`tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are Numpy arrays)
x_train = x_train.reshape(-1, 784).astype("float32") / 255
x_test = x_test.reshape(-1, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve num_val_samples samples for validation
x_val = x_train[-num_val_samples:]
y_val = y_train[-num_val_samples:]
x_train = x_train[:-num_val_samples]
y_train = y_train[:-num_val_samples]
return (
tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size),
tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(batch_size),
tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size),
)
# Create a MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()
print("Number of devices: {}".format(strategy.num_replicas_in_sync))
# Open a strategy scope.
with strategy.scope():
# Everything that creates variables should be under the strategy scope.
# In general this is only model construction & `compile()`.
model = get_compiled_model()
# Train the model on all available devices.
train_dataset, val_dataset, test_dataset = get_dataset()
model.fit(train_dataset, epochs=20, validation_data=val_dataset)
# Test the model on all available devices.
model.evaluate(test_dataset)
```
Can Someone Suggest what changes I should make in code or which version of keras and tensorflow to use to make training with **multi gpu+xla+amp** work?
or is it not possible train using **multi GPU+XLA+AMP**? | open | 2025-01-15T09:09:15Z | 2025-01-24T04:46:42Z | https://github.com/keras-team/keras/issues/20763 | [
"type:Bug"
] | keshusharmamrt | 4 |
HumanSignal/labelImg | deep-learning | 558 | Find unlabeled images | <!--
I've labeled over 5000 image in a folder but the were around 11 images that i messed. Is there a way to quickly find unlabeled images amongst all the labeled images?
-->
- **OS: Mac**
- **PyQt version:**
| open | 2020-02-25T22:25:16Z | 2020-04-06T19:12:38Z | https://github.com/HumanSignal/labelImg/issues/558 | [] | rojackson13 | 2 |
ageitgey/face_recognition | machine-learning | 1,122 | Getting error while installing sparse module | * face_recognition version:
* Python version:2.7
* Operating System: macOS Catalina version 10.15.3
### Description
I am trying to run face_recognition on my system. Installing sparse Module getting error.
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Defaulting to user installation because normal site-packages is not writeable
Collecting sparse
Using cached sparse-0.6.0-py2.py3-none-any.whl (47 kB)
Requirement already satisfied: numpy>=1.13 in /Users/varungupta/Library/Python/2.7/lib/python/site-packages (from sparse) (1.16.6)
Requirement already satisfied: scipy>=0.19 in /Users/varungupta/Library/Python/2.7/lib/python/site-packages (from sparse) (1.2.2)
Collecting numba>=0.39
Using cached numba-0.47.0-cp27-cp27m-macosx_10_9_x86_64.whl (2.0 MB)
Requirement already satisfied: funcsigs; python_version < "3.3" in /Users/varungupta/Library/Python/2.7/lib/python/site-packages (from numba>=0.39->sparse) (1.0.2)
Collecting llvmlite>=0.31.0dev0
Using cached llvmlite-0.32.0.tar.gz (103 kB)
Requirement already satisfied: enum34; python_version < "3.4" in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from numba>=0.39->sparse) (1.1.6)
Requirement already satisfied: singledispatch; python_version < "3.4" in /Users/varungupta/Library/Python/2.7/lib/python/site-packages (from numba>=0.39->sparse) (3.4.0.3)
Requirement already satisfied: setuptools in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from numba>=0.39->sparse) (41.0.1)
Requirement already satisfied: six in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from singledispatch; python_version < "3.4"->numba>=0.39->sparse) (1.12.0)
Building wheels for collected packages: llvmlite
Building wheel for llvmlite (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/setup.py'"'"'; __file__='"'"'/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-wheel-Enug0e
cwd: /private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/
Complete output (11 lines):
running bdist_wheel
/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/ffi/build.py
LLVM version... Traceback (most recent call last):
File "/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/ffi/build.py", line 168, in <module>
main()
File "/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/ffi/build.py", line 162, in main
main_posix('osx', '.dylib')
File "/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/ffi/build.py", line 109, in main_posix
"to the path for llvm-config" % (llvm_config,))
RuntimeError: /usr/bin/llvm-config-6.0 failed executing, please point LLVM_CONFIG to the path for llvm-config
error: command '/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for llvmlite
Running setup.py clean for llvmlite
Failed to build llvmlite
Installing collected packages: llvmlite, numba, sparse
Running setup.py install for llvmlite ... error
ERROR: Command errored out with exit status 1:
command: /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/setup.py'"'"'; __file__='"'"'/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-record-tz529G/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /Users/varungupta/Library/Python/2.7/include/python2.7/llvmlite
cwd: /private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/
Complete output (14 lines):
running install
running build
got version from file /private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/llvmlite/_version.py {'version': '0.32.0', 'full': '26059d238f4ba23dff74703dd27168591d889edd'}
running build_ext
/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/ffi/build.py
LLVM version... Traceback (most recent call last):
File "/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/ffi/build.py", line 168, in <module>
main()
File "/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/ffi/build.py", line 162, in main
main_posix('osx', '.dylib')
File "/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/ffi/build.py", line 109, in main_posix
"to the path for llvm-config" % (llvm_config,))
RuntimeError: /usr/bin/llvm-config-6.0 failed executing, please point LLVM_CONFIG to the path for llvm-config
error: command '/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/setup.py'"'"'; __file__='"'"'/private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-install-lsxU6M/llvmlite/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/mn/rgnylh413g96j9x5vjs4rtx80000gn/T/pip-record-tz529G/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /Users/varungupta/Library/Python/2.7/include/python2.7/llvmlite Check the logs for full command output.
Also I clone this repo on my system . Do i need to install every module one by one ? is there any command to install all module alltogether ? I am new to python. | open | 2020-04-23T16:42:40Z | 2020-05-02T20:01:49Z | https://github.com/ageitgey/face_recognition/issues/1122 | [] | harshitKyal | 0 |
huggingface/datasets | pandas | 6,887 | FAISS load to None | ### Describe the bug
I've use FAISS with Datasets and save to FAISS.
Then load to save FAISS then no error, then ds to None
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Steps to reproduce the bug
# 1.
```python
ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64)
ds_with_embeddings.add_faiss_index(column='embeddings')
ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss')
```
# 2.
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Expected behavior
Add column in Datasets.
### Environment info
Google Colab, SageMaker Notebook | open | 2024-05-09T02:43:50Z | 2024-05-16T20:44:23Z | https://github.com/huggingface/datasets/issues/6887 | [] | brainer3220 | 1 |
marcomusy/vedo | numpy | 825 | Create a mesh in the y-axis direction with partial mesh. | Hi, I looked at the mesh generation method I wanted.

Can I use this library to make a part of the mesh full? Like the image above.
Or is it using AI?
Or Is there a way to create the mesh above?
I attach the sample file.
[sample.zip](https://github.com/marcomusy/vedo/files/10893568/sample.zip)
| closed | 2023-03-06T01:19:45Z | 2023-03-10T00:59:18Z | https://github.com/marcomusy/vedo/issues/825 | [] | ack9437 | 3 |
JoeanAmier/TikTokDownloader | api | 412 | 下载视频后没有声音 | **问题描述**
下载视频后没有声音
**重现步骤**
重现该问题的步骤:
Steps to reproduce the behavior:
1. 选择终端交互
2. 批量下载链接作品(抖音)
3. 下载地址:https://v.douyin.com/i5rhkCP8
**预期结果**
正常播放视频应该有声音 | closed | 2025-02-28T13:02:00Z | 2025-03-01T00:27:59Z | https://github.com/JoeanAmier/TikTokDownloader/issues/412 | [] | shiyigit312 | 3 |
yt-dlp/yt-dlp | python | 12,484 | Faster Way to Check for Manually Created Subtitles | ### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
### Please make sure the question is worded well enough to be understood
I'm trying to crawl YouTube channels to find videos with manually created subtitles in a specific language.
Right now, I'm using the following command to get the full video info:
```
yt-dlp --skip-download --dump-json VIDEO_ID
```
Then, I manually check whether my language code appears in the subtitles list of the returned JSON object.
The issue is that this method is inefficient because the API is slow, and the JSON response contains a lot of unnecessary data. At this stage, **I only need to check whether the video has manually created subtitles or not**—if it does, I will fetch the necessary data using other APIs.
Is there a faster API or method to quickly determine if a given video has manual subtitles in a specific language?
Thanks in advance! | closed | 2025-02-26T07:40:16Z | 2025-02-26T16:23:52Z | https://github.com/yt-dlp/yt-dlp/issues/12484 | [
"question"
] | srezasm | 3 |
widgetti/solara | fastapi | 550 | How to disable dynamic rendering/event driven rendering for some components | Hello me again,
I'm continuing to build my application with Solara 😍😍😍, and some of the elements I want to chart takes quite some time to display due a lot of data points, around 30-60 seconds. (The charting library library also needs to be optimised more ). Having dynamic rendering is awesome feature 99% of the time but for some cases it would be nice to have the rendering and all the code present in the solara components to be triggered only on a event.
What I want to do is basically select all my options for the charting and then click on a button that triggers all compoenent rendering, code processing in the compoenent and chartsd display. The rendering should not occur dynamically, but only when the button is clicked.
Is there a way I could disable the automatic rendering and processing of the charts ?
From what I have looked in the docs and I haven't seen how i could achieve this. It might be dead simple but I haven't found anything yet.
Here is a small basic example of what
``` annotate python
import solara
from highcharts_core.chart import Chart
from highcharts_core.options.series.area import LineSeries
exponent = solara.reactive(1.2)
series_type = solara.reactive('line')
button_state =
def on_click_button_chart():
@solara.component
def Page():
with solara.Sidebar():
with solara.Card("Select Charting options"):
solara.Select('Type', value=series_type, values=['line', 'bar'])
solara.SliderFloat(label='exponent', value=exponent, min=-1, max=2)
solara.Button(label=f"Click to chart", on_click=on_click_button_chart)
#Could something here dictates how the dynamic redendering occurs ?
with solara.Card(f'Demo: exponent={exponent}'):
exp = exponent.value
my_chart = Chart(data = [[1, 1*exp], [2, 2*exp], [3, 3**exp]],
series_type = series_type.value)
my_chart.display()
```
Thanks again guys,
cheers BFAGIT | closed | 2024-03-11T13:03:40Z | 2024-03-24T10:30:34Z | https://github.com/widgetti/solara/issues/550 | [] | BFAGIT | 7 |
deezer/spleeter | tensorflow | 407 | Error message | I just installed Spleeter v2.5 and got this message.
Starting processing of all songs
Processing D:\~ STUDIO ONE\Backing Tracks\songs\Senza Fine\Spleeter\Monica Mancini - Senza Fine - Reference track.mp3
Traceback (most recent call last):
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\imp.py", line 242, in load_module
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\imp.py", line 342, in load_dynamic
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\runpy.py", line 193, in _run_module_as_main
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\runpy.py", line 85, in _run_code
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\spleeter\__main__.py", line 58, in <module>
entrypoint()
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\spleeter\__main__.py", line 36, in main
enable_logging()
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\spleeter\utils\logging.py", line 60, in enable_logging
tf_logger = get_tensorflow_logger()
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\spleeter\utils\logging.py", line 27, in get_tensorflow_logger
from tensorflow.compat.v1 import logging
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow\__init__.py", line 99, in <module>
from tensorflow_core import *
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\__init__.py", line 28, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
module = self._load()
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow\__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\__init__.py", line 127, in import_module
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\richa\AppData\Roaming\SpleeterGUI\python\Lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\imp.py", line 242, in load_module
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\imp.py", line 342, in load_dynamic
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Finished processing all songs
Run complete
I have no idea what it is
| closed | 2020-05-31T23:44:15Z | 2020-05-31T23:49:06Z | https://github.com/deezer/spleeter/issues/407 | [] | wsaxmm | 1 |
mars-project/mars | numpy | 2,769 | [BUG] Failed to create Mars DataFrame when mars object exists in a list | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Failed to create Mars DataFrame when mars object exists in a list.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
```
In [1]: import mars
In [2]: mars.new_session()
Web service started at http://0.0.0.0:24172
Out[2]: <mars.deploy.oscar.session.SyncSession at 0x7f8ada249370>
In [3]: import mars.dataframe as md
In [5]: s = md.Series([1, 2, 3])
In [6]: df2 = md.DataFrame({'a': [s.sum()]})
100%|█████████████████████████████████████| 100.0/100 [00:00<00:00, 1592.00it/s]
In [7]: df2
Out[7]: DataFrame <op=DataFrameDataSource, key=5a704fd6d6ab7aee6f31d874c2f11347>
In [12]: df2.execute()
0%| | 0/100 [00:00<?, ?it/s]Failed to run subtask 00lm3BBKMBsieIow3LtlHrwv on band numa-0
Traceback (most recent call last):
File "/Users/qinxuye/Workspace/mars/mars/services/scheduling/worker/execution.py", line 331, in internal_run_subtask
subtask_info.result = await self._retry_run_subtask(
File "/Users/qinxuye/Workspace/mars/mars/services/scheduling/worker/execution.py", line 420, in _retry_run_subtask
return await _retry_run(subtask, subtask_info, _run_subtask_once)
File "/Users/qinxuye/Workspace/mars/mars/services/scheduling/worker/execution.py", line 107, in _retry_run
raise ex
File "/Users/qinxuye/Workspace/mars/mars/services/scheduling/worker/execution.py", line 67, in _retry_run
return await target_async_func(*args)
File "/Users/qinxuye/Workspace/mars/mars/services/scheduling/worker/execution.py", line 373, in _run_subtask_once
return await asyncio.shield(aiotask)
File "/Users/qinxuye/Workspace/mars/mars/services/subtask/api.py", line 68, in run_subtask_in_slot
return await ref.run_subtask.options(profiling_context=profiling_context).send(
File "/Users/qinxuye/Workspace/mars/mars/oscar/backends/context.py", line 183, in send
future = await self._call(actor_ref.address, message, wait=False)
File "/Users/qinxuye/Workspace/mars/mars/oscar/backends/context.py", line 61, in _call
return await self._caller.call(
File "/Users/qinxuye/Workspace/mars/mars/oscar/backends/core.py", line 95, in call
await client.send(message)
File "/Users/qinxuye/Workspace/mars/mars/oscar/backends/communication/base.py", line 258, in send
return await self.channel.send(message)
File "/Users/qinxuye/Workspace/mars/mars/oscar/backends/communication/socket.py", line 73, in send
buffers = await serializer.run()
File "/Users/qinxuye/Workspace/mars/mars/serialization/aio.py", line 80, in run
return self._get_buffers()
File "/Users/qinxuye/Workspace/mars/mars/serialization/aio.py", line 37, in _get_buffers
headers, buffers = serialize(self._obj)
File "/Users/qinxuye/Workspace/mars/mars/serialization/core.py", line 363, in serialize
gen_result = gen_serializer.serialize(gen_to_serial, context)
File "/Users/qinxuye/Workspace/mars/mars/serialization/core.py", line 72, in wrapped
return func(self, obj, context)
File "/Users/qinxuye/Workspace/mars/mars/serialization/core.py", line 151, in serialize
return {}, pickle_buffers(obj)
File "/Users/qinxuye/Workspace/mars/mars/serialization/core.py", line 88, in pickle_buffers
buffers[0] = cloudpickle.dumps(
File "/Users/qinxuye/miniconda3/envs/mars3.8/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/Users/qinxuye/miniconda3/envs/mars3.8/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
TypeError: cannot pickle 'weakref' object
100%|█████████████████████████████████████| 100.0/100 [00:00<00:00, 3581.29it/s]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-fe40b754f95d> in <module>
----> 1 df2.execute()
~/Workspace/mars/mars/core/entity/tileables.py in execute(self, session, **kw)
460
461 def execute(self, session=None, **kw):
--> 462 result = self.data.execute(session=session, **kw)
463 if isinstance(result, TILEABLE_TYPE):
464 return self
~/Workspace/mars/mars/core/entity/executable.py in execute(self, session, **kw)
96
97 session = _get_session(self, session)
---> 98 return execute(self, session=session, **kw)
99
100 def _check_session(self, session: SessionType, action: str):
~/Workspace/mars/mars/deploy/oscar/session.py in execute(tileable, session, wait, new_session_kwargs, show_progress, progress_update_interval, *tileables, **kwargs)
1777 session = get_default_or_create(**(new_session_kwargs or dict()))
1778 session = _ensure_sync(session)
-> 1779 return session.execute(
1780 tileable,
1781 *tileables,
~/Workspace/mars/mars/deploy/oscar/session.py in execute(self, tileable, show_progress, *tileables, **kwargs)
1575 fut = asyncio.run_coroutine_threadsafe(coro, self._loop)
1576 try:
-> 1577 execution_info: ExecutionInfo = fut.result(
1578 timeout=self._isolated_session.timeout
1579 )
~/miniconda3/envs/mars3.8/lib/python3.8/concurrent/futures/_base.py in result(self, timeout)
437 raise CancelledError()
438 elif self._state == FINISHED:
--> 439 return self.__get_result()
440 else:
441 raise TimeoutError()
~/miniconda3/envs/mars3.8/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
386 def __get_result(self):
387 if self._exception:
--> 388 raise self._exception
389 else:
390 return self._result
~/Workspace/mars/mars/deploy/oscar/session.py in _execute(session, wait, show_progress, progress_update_interval, cancelled, *tileables, **kwargs)
1757 # set cancelled to avoid wait task leak
1758 cancelled.set()
-> 1759 await execution_info
1760 else:
1761 return execution_info
~/Workspace/mars/mars/deploy/oscar/session.py in wait()
100
101 async def wait():
--> 102 return await self._aio_task
103
104 self._future_local.future = fut = asyncio.run_coroutine_threadsafe(
~/Workspace/mars/mars/deploy/oscar/session.py in _run_in_background(self, tileables, task_id, progress, profiling)
905 )
906 if task_result.error:
--> 907 raise task_result.error.with_traceback(task_result.traceback)
908 if cancelled:
909 return
~/Workspace/mars/mars/services/scheduling/worker/execution.py in internal_run_subtask(self, subtask, band_name)
329
330 batch_quota_req = {(subtask.session_id, subtask.subtask_id): calc_size}
--> 331 subtask_info.result = await self._retry_run_subtask(
332 subtask, band_name, subtask_api, batch_quota_req
333 )
~/Workspace/mars/mars/services/scheduling/worker/execution.py in _retry_run_subtask(self, subtask, band_name, subtask_api, batch_quota_req)
418 # any exceptions occurred.
419 if subtask.retryable:
--> 420 return await _retry_run(subtask, subtask_info, _run_subtask_once)
421 else:
422 try:
~/Workspace/mars/mars/services/scheduling/worker/execution.py in _retry_run(subtask, subtask_info, target_async_func, *args)
105 )
106 else:
--> 107 raise ex
108
109
~/Workspace/mars/mars/services/scheduling/worker/execution.py in _retry_run(subtask, subtask_info, target_async_func, *args)
65 while True:
66 try:
---> 67 return await target_async_func(*args)
68 except (OSError, MarsError) as ex:
69 if subtask_info.num_retries < subtask_info.max_retries:
~/Workspace/mars/mars/services/scheduling/worker/execution.py in _run_subtask_once()
371 subtask_api.run_subtask_in_slot(band_name, slot_id, subtask)
372 )
--> 373 return await asyncio.shield(aiotask)
374 except asyncio.CancelledError as ex:
375 # make sure allocated slots are traced
~/Workspace/mars/mars/services/subtask/api.py in run_subtask_in_slot(self, band_name, slot_id, subtask)
66 ProfilingContext(task_id=subtask.task_id) if enable_profiling else None
67 )
---> 68 return await ref.run_subtask.options(profiling_context=profiling_context).send(
69 subtask
70 )
~/Workspace/mars/mars/oscar/backends/context.py in send(self, actor_ref, message, wait_response, profiling_context)
181 ):
182 detect_cycle_send(message, wait_response)
--> 183 future = await self._call(actor_ref.address, message, wait=False)
184 if wait_response:
185 result = await self._wait(future, actor_ref.address, message)
~/Workspace/mars/mars/oscar/backends/context.py in _call(self, address, message, wait)
59 self, address: str, message: _MessageBase, wait: bool = True
60 ) -> Union[ResultMessage, ErrorMessage, asyncio.Future]:
---> 61 return await self._caller.call(
62 Router.get_instance_or_empty(), address, message, wait=wait
63 )
~/Workspace/mars/mars/oscar/backends/core.py in call(self, router, dest_address, message, wait)
93 with Timer() as timer:
94 try:
---> 95 await client.send(message)
96 except ConnectionError:
97 try:
~/Workspace/mars/mars/oscar/backends/communication/base.py in send(self, message)
256 @implements(Channel.send)
257 async def send(self, message):
--> 258 return await self.channel.send(message)
259
260 @implements(Channel.recv)
~/Workspace/mars/mars/oscar/backends/communication/socket.py in send(self, message)
71 compress = self.compression or 0
72 serializer = AioSerializer(message, compress=compress)
---> 73 buffers = await serializer.run()
74
75 # write buffers
~/Workspace/mars/mars/serialization/aio.py in run(self)
78
79 async def run(self):
---> 80 return self._get_buffers()
81
82
~/Workspace/mars/mars/serialization/aio.py in _get_buffers(self)
35
36 def _get_buffers(self):
---> 37 headers, buffers = serialize(self._obj)
38
39 def _is_cuda_buffer(buf): # pragma: no cover
~/Workspace/mars/mars/serialization/core.py in serialize(obj, context)
361 gen_to_serial = gen.send(last_serial)
362 gen_serializer = _serial_dispatcher.get_handler(type(gen_to_serial))
--> 363 gen_result = gen_serializer.serialize(gen_to_serial, context)
364 if isinstance(gen_result, types.GeneratorType):
365 # when intermediate result still generator, push its contexts
~/Workspace/mars/mars/serialization/core.py in wrapped(self, obj, context)
70 else:
71 context[id(obj)] = obj
---> 72 return func(self, obj, context)
73
74 return wrapped
~/Workspace/mars/mars/serialization/core.py in serialize(self, obj, context)
149 @buffered
150 def serialize(self, obj, context: Dict):
--> 151 return {}, pickle_buffers(obj)
152
153 def deserialize(self, header: Dict, buffers: List, context: Dict):
~/Workspace/mars/mars/serialization/core.py in pickle_buffers(obj)
86 buffers.append(memoryview(x))
87
---> 88 buffers[0] = cloudpickle.dumps(
89 obj,
90 buffer_callback=buffer_cb,
~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py in dumps(obj, protocol, buffer_callback)
71 file, protocol=protocol, buffer_callback=buffer_callback
72 )
---> 73 cp.dump(obj)
74 return file.getvalue()
75
~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py in dump(self, obj)
561 def dump(self, obj):
562 try:
--> 563 return Pickler.dump(self, obj)
564 except RuntimeError as e:
565 if "recursion" in e.args[0]:
TypeError: cannot pickle 'weakref' object
```
| closed | 2022-03-01T08:09:12Z | 2022-03-02T10:32:47Z | https://github.com/mars-project/mars/issues/2769 | [
"type: bug",
"mod: dataframe",
"task: medium"
] | qinxuye | 0 |
quokkaproject/quokka | flask | 214 | start error ,cannot load library 'libcairo.so.2 ,Error importing flask-weasyprint! | # python manage.py run0
```
Error importing flask-weasyprint!
PDF support is temporarily disabled.
Manual dependencies may need to be installed.
See,
`http://weasyprint.org/docs/install/#by-platform`_
`https://github.com/Kozea/WeasyPrint/issues/79`_
cannot load library 'libcairo.so.2': libcairo.so.2: cannot open shared object file: No such file or directory
```
2015-06-16T05:57:21.702+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:40963 #9 (1 connection now open)
/data/quokka_venv/lib/python2.7/site-packages/flask_script/**init**.py:153: UserWarning: Options will be ignored.
warnings.warn("Options will be ignored.")
- Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
- Restarting with stat
Error importing flask-weasyprint!
PDF support is temporarily disabled.
Manual dependencies may need to be installed.
See,
`http://weasyprint.org/docs/install/#by-platform`_
`https://github.com/Kozea/WeasyPrint/issues/79`_
cannot load library 'libcairo.so.2': libcairo.so.2: cannot open shared object file: No such file or directory
2015-06-16T05:57:22.383+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:40964 #10 (2 connections now open)
/data/quokka_venv/lib/python2.7/site-packages/flask_script/**init**.py:153: UserWarning: Options will be ignored.
warnings.warn("Options will be ignored.")
| closed | 2015-06-16T06:21:36Z | 2015-07-16T02:56:10Z | https://github.com/quokkaproject/quokka/issues/214 | [] | netqyq | 1 |
miguelgrinberg/microblog | flask | 171 | I'm getting an error on Chapter 16 | Hello, I spent few hours but couldn't find any solution. Here is my problem:
When I used reindex method with Post model, I got this error:

Also I get this when I call query from Post or User model, I don't know what to do | closed | 2019-07-05T23:18:10Z | 2019-07-06T07:18:53Z | https://github.com/miguelgrinberg/microblog/issues/171 | [
"question"
] | alperiox | 2 |
CTFd/CTFd | flask | 2,099 | Theme template project | Vite supports a project template system. We need to create a theme template to create CTFd themes so people can create themes more easily.
https://vitejs.dev/guide/#community-templates | open | 2022-04-22T19:46:07Z | 2024-01-24T06:41:12Z | https://github.com/CTFd/CTFd/issues/2099 | [] | ColdHeat | 1 |
deeppavlov/DeepPavlov | nlp | 1,312 | TorchBertClassifier does not use token_type_ids | Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first.
Please enter all the information below, otherwise your issue may be closed without a warning.
**Issue**:
TorchBertClassifier does not use token_type_ids in [call](https://github.com/deepmipt/DeepPavlov/blob/master/deeppavlov/models/torch_bert/torch_bert_classifier.py#L139) and [train_on_batch](https://github.com/deepmipt/DeepPavlov/blob/master/deeppavlov/models/torch_bert/torch_bert_classifier.py#L107) methods
They should use token_types_ids from features.
It's not a problem in case of single segment, but it is a bug for classification of two text segments.
| closed | 2020-09-07T12:45:25Z | 2020-11-13T11:16:23Z | https://github.com/deeppavlov/DeepPavlov/issues/1312 | [
"bug"
] | yurakuratov | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 757 | Issue with Colab | Hi! I'm using CycleGAN in Google Colab. In some reason when training it doesn't show anything in Checkpoints folder - no folders and no files. I tried with different browsers and different accounts. The files themselves are there and model continues to train after stop with no problem. Anybody knows why that is?
And also any chance to visualize the progress in Colab? Visdom server seems not to work there and html file not displayed. | closed | 2019-09-05T17:38:02Z | 2019-09-30T21:23:15Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/757 | [] | muxgt | 8 |
Miserlou/Zappa | flask | 1,847 | Zappa deployment throwing 404 error | <!--- Provide a general summary of the issue in the Title above -->
## Context
I am using connexion for my Flask application. I am getting a 404 error \ while accessing teh endpoint. I have tried to dummy down the issue I am facing at https://github.com/ctippur/zappa-connexion-sample
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
I have had success deploying applications using connexion and zappa. I am trying to use newer versions of the libraries to deploy code. Looks like the app gets initialized but I get a 404 error on any endpoint.
I am able to test locally without any issues.
I also downloaded the lambda code and was able to run the application locally as well.
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
My python version is 3.6.3
Zappa version - 0.48.2
## Expected Behavior
<!--- Tell us what should happen -->
The '/' endpoint should return the current time.
## Actual Behavior
<!--- Tell us what happens instead -->
I get a 404 error.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
https://github.com/ctippur/zappa-connexion-sample/blob/master/README.md
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.48.2
* Operating System and Python version: MacOS High Sierra 10.13.2
* The output of `pip freeze`:https://github.com/ctippur/zappa-connexion-sample/blob/master/requirements.txt
* Link to your project (optional): https://github.com/ctippur/zappa-connexion-sample
* Your `zappa_settings.py`: https://github.com/ctippur/zappa-connexion-sample/blob/master/zappa_settings.json
| open | 2019-04-03T16:30:40Z | 2022-05-25T04:12:34Z | https://github.com/Miserlou/Zappa/issues/1847 | [] | ctippur | 6 |
nalepae/pandarallel | pandas | 49 | groupby and apply does not work | ```python
def cumulate_asset_scores(dataset):
dataset['counts'] = list(range(len(dataset)))
dataset['corrects'] = dataset['score'].cumsum()
return dataset
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=True, shm_size_mb=30000)
df = dataframe.groupby('user_id').parallel_apply(cumulate_asset_scores)
```
yields
```bash
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/usr/local/lib/python3.7/site-packages/multiprocess/pool.py", line 44, in mapstar
return list(map(*args))
File "/usr/local/lib/python3.7/site-packages/pathos/helpers/mp_helper.py", line 15, in <lambda>
func = lambda args: f(*args)
File "/usr/local/lib/python3.7/site-packages/pandarallel/dataframe_groupby.py", line 15, in worker
df = client.get(object_id)
File "pyarrow/_plasma.pyx", line 579, in pyarrow._plasma.PlasmaClient.get
File "pyarrow/_plasma.pyx", line 572, in pyarrow._plasma.PlasmaClient.get
File "pyarrow/serialization.pxi", line 470, in pyarrow.lib.deserialize
File "pyarrow/serialization.pxi", line 433, in pyarrow.lib.deserialize_from
File "pyarrow/serialization.pxi", line 275, in pyarrow.lib.SerializedPyObject.deserialize
File "pyarrow/serialization.pxi", line 183, in pyarrow.lib.SerializationContext._deserialize_callback
File "/usr/local/lib/python3.7/site-packages/pyarrow/serialization.py", line 175, in _deserialize_pandas_dataframe
return pdcompat.serialized_dict_to_dataframe(data)
File "/usr/local/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 640, in serialized_dict_to_dataframe
for block in data['blocks']]
File "/usr/local/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 640, in <listcomp>
for block in data['blocks']]
File "/usr/local/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 661, in _reconstruct_block
dtype = make_datetimetz(item['timezone'])
File "/usr/local/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 676, in make_datetimetz
return _pandas_api.datetimetz_type('ns', tz=tz)
TypeError: 'NoneType' object is not callable
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-12-6a762afbe0c2> in <module>
2
3 pandarallel.initialize(shm_size_mb=30000)
----> 4 df = dataframe.groupby('user_id').parallel_apply(cumulate_asset_scores)
5 #res = []
6 #for _, g in tqdm.tqdm(list(dataframe.groupby('user_id'))):
/usr/local/lib/python3.7/site-packages/pandarallel/utils.py in wrapper(*args, **kwargs)
78 """Please see the docstring of this method without `parallel`"""
79 try:
---> 80 return func(*args, **kwargs)
81
82 except _PlasmaStoreFull:
/usr/local/lib/python3.7/site-packages/pandarallel/dataframe_groupby.py in closure(df_grouped, func, *args, **kwargs)
36 with ProcessingPool(nb_workers) as pool:
37 result_workers = pool.map(
---> 38 DataFrameGroupBy.worker, workers_args)
39
40 if len(df_grouped.grouper.shape) == 1:
/usr/local/lib/python3.7/site-packages/pathos/multiprocessing.py in map(self, f, *args, **kwds)
135 AbstractWorkerPool._AbstractWorkerPool__map(self, f, *args, **kwds)
136 _pool = self._serve()
--> 137 return _pool.map(star(f), zip(*args)) # chunksize
138 map.__doc__ = AbstractWorkerPool.map.__doc__
139 def imap(self, f, *args, **kwds):
/usr/local/lib/python3.7/site-packages/multiprocess/pool.py in map(self, func, iterable, chunksize)
266 in a list that is returned.
267 '''
--> 268 return self._map_async(func, iterable, mapstar, chunksize).get()
269
270 def starmap(self, func, iterable, chunksize=None):
/usr/local/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
TypeError: 'NoneType' object is not callable
```
In a jupyter notebook | closed | 2019-10-08T12:21:01Z | 2020-01-29T12:15:04Z | https://github.com/nalepae/pandarallel/issues/49 | [] | JonasRSV | 2 |
python-restx/flask-restx | api | 506 | support for OpenAPI Version 3.0.3 - (Swagger) | **Question**
Are you willing to add support for open api spec version 3 guys ?
flask-restx seems to support only swagger spec 2.0 (seen in code here: https://github.com/python-restx/flask-restx/blob/4c748d00eccef675afbde457d43bca5062715a5c/flask_restx/swagger.py#L295)
| open | 2023-01-09T12:08:56Z | 2024-10-11T00:32:08Z | https://github.com/python-restx/flask-restx/issues/506 | [
"question"
] | nskley | 2 |
streamlit/streamlit | python | 10,160 | Expose OAuth errors during `st.login` | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
We're soon launching native authentication in Streamlit (see #8518). One thing we left out for now is handling errors that appear during the OAuth flow. It would be great to either handle them automatically (e.g. by showing a dialog or toast) or exposing them programmatically.
### Why?
These errors should be very rare in Streamlit because many errors are handled directly in the OAuth flow by the identity provider and [most possible errors that are propagated back to the app](https://www.oauth.com/oauth2-servers/server-side-apps/possible-errors/) are due to a) wrong configuration (which we usually catch before even initiating the OAuth flow), b) wrong implementation (which we control), or c) the server of the identity provider being down (which shouldn't happen often for the major providers).
But errors can still happen – the most prominent example we found during testing is when the user clicks "Cancel" on the consent screen shown when logging in for the first time. And there might be others we didn't think about yet.
### How?
Two possible ways:
1. Automatically show a dialog or toast with the error code and potentially error description and error URI. Note that OAuth recommends showing a custom error message to the user instead of showing the error code and error description directly. But I think in our case (where these errors are very rare), it might be fine to just show that and not require the developer to implement it themselves. We should probably have a parameter on `st.login` to disable this automatic notification in case the developer wants to handle the error themselves (see 2).
2. Expose the error details programmatically. One way would be to put it into `st.user` as keys `error`, `error_description` (optional), and `error_uri` (optional). In that case, we should automatically clear these items on the next rerun, otherwise it becomes very hard to only show the error when it happens. Another possible approach would be to have an `on_error` callback on `st.login`. But a) we'd need to pass the error details to this callback, which would make it work a bit differently than our callbacks (currently) work and b) it's a bit more cumbersome to work with this in practice because you often have to stick the error message into `st.session_state` if you want to show it somewhere within the app.
### Additional Context
_No response_ | open | 2025-01-10T23:32:58Z | 2025-01-10T23:33:46Z | https://github.com/streamlit/streamlit/issues/10160 | [
"type:enhancement",
"feature:st.user",
"feature:st.login"
] | jrieke | 1 |
allenai/allennlp | pytorch | 5,703 | Is AllenNLP biased towards BERT model type? | Hi there! I asked a [question](https://stackoverflow.com/questions/73310991/is-allennlp-biased-towards-bert) on stack overflow, as suggested, about a week ago. Just wanted to ping in here in case no one saw it yet. | closed | 2022-08-17T17:32:00Z | 2022-09-01T16:10:13Z | https://github.com/allenai/allennlp/issues/5703 | [
"question",
"stale"
] | pvcastro | 1 |
modelscope/modelscope | nlp | 960 | download_mode = "force_redownload" 未清理干净.arrow缓存 | MsDataset.load(...,download_mode="force_redownload",...
只能保证重新下载最新版附件,
但是解压后generate split的时候若以往有遗留的 .arrow 缓存它还是会去自动用的
若这个 .arrow 是旧附件解压生成的映射,新附件配旧映射会报错(报split内数据量不匹配啥的)
需要让 download_mode="force_redownload" 自动清理.arrow 缓存
调试数据集的时候会遇到这类问题,若更新数据集的zip压缩包或者jsonl附件后,测试调用时即使加了download_mode="force_redownload"也会出现这个问题。一般有数据量变化的同名附件会出现这个问题
问题版本 modelscope 1.15+,以往版本未尝试,另外这两个缓存文件夹未清C:/Users/xxx/.cache/modelscope, C:/Users/xxx/.cache/huggingface 貌似也会对使用download_mode="force_redownload"有影响(不是很确定,可能不是报错split内数据量不匹配,是另一个错误当时未记录),总之这个download_mode="force_redownload"该清的缓存都清理不干净,设计斟酌一下如何保证不影响其它数据集和子集的缓存的前提下redownload 仅对指定的这个数据集子集生效且清理干净后再重下
测试环境 win10 python 3.9-10
Thanks for your error report and we appreciate it a lot.
**Checklist**
* I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* I have searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
* What command or script did you run?
> A placeholder for the command.
* Did you make any modifications on the code or config? Did you understand what you have modified?
* What dataset did you use?
**Your Environments (__required__)**
* OS: `uname -a`
* CPU: `lscpu`
* Commit id (e.g. `a3ffc7d8`)
* You may add addition that may be helpful for locating the problem, such as
* How you installed PyTorch [e.g., pip, conda, source]
* Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
Please @ corresponding people according to your problem:
Model related: @wenmengzhou @tastelikefeet
Model hub related: @liuyhwangyh
Dataset releated: @wangxingjun778
Finetune related: @tastelikefeet @Jintao-Huang
Pipeline related: @Firmament-cyou @wenmengzhou
Contribute your model: @zzclynn
| closed | 2024-08-28T01:48:39Z | 2024-10-01T05:32:20Z | https://github.com/modelscope/modelscope/issues/960 | [] | monetjoe | 1 |
roboflow/supervision | tensorflow | 1,769 | Include additional file format in load_yolo_annotations | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi,
I wanted to load my yolo dataset which uses `.bmp` images via `sv.DetectionDataset.from_yolo()`. Currently only `["jpg", "jpeg", "png"]` are allowed in [load_yolo_annotations](https://github.com/roboflow/supervision/blob/develop/supervision/dataset/formats/yolo.py#L156). I just tested to include `.bmp` in the list and it seemed to work. Would it be possible to extend the available file formats?
Also I think yolo allows [even more image formats](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/utils.py#L38)
### Additional
_No response_ | closed | 2025-01-07T14:31:50Z | 2025-01-08T09:29:23Z | https://github.com/roboflow/supervision/issues/1769 | [
"question"
] | pirnerjonas | 5 |
deeppavlov/DeepPavlov | nlp | 1,358 | 👩💻📞 DeepPavlov Community Call #4 | > Update: [DeepPavlov Community Call #4 Recording](http://bit.ly/DPCommunityCall4_Video)
> Subscribe for future calls here (last Thursday of the month, 8am Pacific/7pm MSK):
[http://bit.ly/MonthlyDPCommunityCall2021](http://bit.ly/MonthlyDPCommunityCall2021)
Dear DeepPavlov community,
The online DeepPavlov Community Call is scheduled for December 24th! This is the last call this year, yet it is the first one to feature external speakers. We're super excited, and we can't wait to reconnect again!
As always, you are welcome to suggest topics and call in!
**DeepPavlov Community Call #4 (December 24)**
**We’ll hold the fourth one on December 24 at 8:00am Pacific (7pm MSK/4 or 5pm UTC depending on DST).**
**Add to your calendar: https://bit.ly/FourthDeepPavlovCommunityCall**
As you know, our passion lies within paving the future of AI Assistants through the concept of the Multiskill AI Assistants. There are just a few organizations in the world that specialize building Multiskill AI Assistants — Amazon (Alexa), Google (Google Assistant), Microsoft (Cortana with its Skills Kit for Enterprise), Baidu (DuerOS), Yandex (Alice) and a few other companies that build their own AI Assistants. Building such Multiskill AI Assistants is the challenge incomparable to simpler chatbots that are usually deployed inside of the organizations and on web sites, or individual skills designed for those Multiskill AI Assistants.
We firmly believe that while today’s custom AI assistants are relatively simple, a typical organization needs a Multiskill AI Assistant to properly respond to their user needs across a number of organizational functions.
With this Community Call, we are starting our brand new series on “Building Multiskill AI Assistants”.
Yandex’s AI Assistant is called Alice (Wikipedia). It has been uniquely designed to masterfully blend internal scenarios, its custom chit-chat technology known as Chatter (“Boltalka”), as well as third-party skills together to provide an integrated user experience to the consumers. Alice is widely used across mobile phones, smart speakers, and other smart devices in Russia, and it’s usage is growing steadily since its launch in October 2017.
Today it is an honor for us to have Mr. David Dale, Former Software Engineer of Yandex AI Assistant Alice, with us!
During this upcoming call you’ll learn from Mr. Dale the basics of how Yandex’s Alice works, as well as how it orchestrates these different components together.
You will have a chance to ask your questions to Mr. Dale, and we will use these questions to guide his second talk as part of our brand new series called “Building Multiskill AI Assistants”.
This call is open to all conversational AI enthusiasts regardless of their background and experience. Don’t miss a chance and join us!
**Agenda for the DeepPavlov Community Call #4:**
> 7:00pm – 7:10pm | Welcome and the overview of what’s new to DeepPavlov.
>
> 7:10pm – 7:20pm | DeepPavlov Library 0.14.0 Release Announcement by Daniel Kornev, CPO at DeepPavlov .
>
> 7:20pm – 7:50pm | Yandex Alice, Part I: Practical Experience of Building Multiskill AI Assistant for Russian Market by David Dale, Former Software Engineer @ Yandex Alice, currently Freelancer & Research Engineer at Skoltech.
>
> 7:50pm – 8:10pm | Form Filling with Go-Bot by Oleg Serikov, Software Engineer at DeepPavlov.
In case you’ve missed the **third one**, we’ve uploaded a record — [see the playlist](https://bit.ly/DPCommunityCall3_Video). Check it out!
**Updated Time For 2021**
Starting with January 2021, we are shifting our Community Calls to start 1 hour later to make sure those who also participate in our regular Seminars have a chance to participate in the Community Calls, and vice versa.
_Download a recurring calendar invite here (.ics):
[http://bit.ly/MonthlyDPCommunityCall2021](http://bit.ly/MonthlyDPCommunityCall2021)
Remember that our updated calls will start at 7pm MSK (8am Pacific DST) starting with January 28, 2021!_
**Interested?**
Please let us know and leave a comment with any topics or questions you’d like to hear about!
We can’t promise to cover everything but we’ll do our best next week or in a future call.
After calling in or watching, please do fill in the survey to let us know if you have any feedback on the format or content: [https://bit.ly/dpcallsurvey](https://bit.ly/dpcallsurvey )
See you!
The DeepPavlov team | closed | 2020-12-14T15:17:54Z | 2021-01-21T12:08:38Z | https://github.com/deeppavlov/DeepPavlov/issues/1358 | [
"discussion"
] | moryshka | 0 |
nonebot/nonebot2 | fastapi | 2,672 | Plugin: nonebot-plugin-sanyao | ### PyPI 项目名
nonebot-plugin-sanyao
### 插件 import 包名
nonebot_plugin_sanyao
### 标签
[{"label":"占卜","color":"#52e5ea"}]
### 插件配置项
_No response_ | closed | 2024-04-21T07:46:24Z | 2024-04-22T04:34:36Z | https://github.com/nonebot/nonebot2/issues/2672 | [
"Plugin"
] | afterow | 2 |
supabase/supabase-py | flask | 1,073 | Installing via Conda forgets h2 | # Bug report
<!--
⚠️ We receive a lot of bug reports which have already been solved or discussed. If you are looking for help, please try these first:
- Docs: https://docs.supabase.com
- Discussions: https://github.com/supabase/supabase/discussions
- Discord: https://discord.supabase.com
Before opening a bug report, please verify the following:
-->
- [X] I confirm this is a bug with Supabase, not with my own application.
- [X] I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
## Describe the bug
When installing supabase-py via Conda, the `h2` dependency is somehow forgotten so an error message occurs when using something that requires `httpx`.
## To Reproduce
1. `conda create -n test && conda activate test`
2. `conda install supabase`
3. `pip list`
4. h2 is missing
To compare, you can install `supabase` using pip in the same environment, 4 missing packages will be installed:
> Installing collected packages: hyperframe, hpack, h2, aiohttp
> Attempting uninstall: aiohttp
> Found existing installation: aiohttp 3.11.10
> Uninstalling aiohttp-3.11.10:
> Successfully uninstalled aiohttp-3.11.10
> Successfully installed aiohttp-3.11.13 h2-4.2.0 hpack-4.1.0 hyperframe-6.1.0
## Expected behavior
`h2` should be installed with supabase when using Conda.
## System information
- OS: Windows 10
- Version of supabase-py: 2.13.0
- Version of Python: 3.12
## Additional context
PS: Your issue template for bug report is asking for Node.js version, it should be Python.
| open | 2025-03-10T16:52:42Z | 2025-03-10T16:52:50Z | https://github.com/supabase/supabase-py/issues/1073 | [
"bug"
] | PierreMesure | 1 |
zappa/Zappa | django | 960 | Retained versions and alias | ## Context
When a `num_retained_versions` is set in the zappa_settings.py and an alias has been created manually, a boto3 error trigger.
## Expected Behavior
Versions referenced by alias should be skipped from the deletion process.
## Actual Behavior
The following error trigger and the `zappa update` fails :
> botocore.errorfactory.ResourceConflictException: An error occurred (ResourceConflictException) when calling the DeleteFunction operation: Unable to delete version because the following aliases reference it: [test]
## Possible Fix
Add a try/except to avoid the errror to trigger and making the command to fail.
## Steps to Reproduce
1. Set the `num_retained_versions` to something else than `null`
2. Create an alias from the AWS Console on the Lambda
3. Use `zappa update stage`
## Your Environment
* Zappa version used: 0.52.0
* Python version: Python 3.7
* Your `zappa_settings.json`:
```
"stage" : {
"app_function": "app.app",
"project_name": "stage",
"num_retained_versions": 5
}
``` | closed | 2021-04-07T09:25:57Z | 2024-04-13T19:37:02Z | https://github.com/zappa/Zappa/issues/960 | [
"no-activity",
"auto-closed"
] | Yaronn44 | 2 |
ipython/ipython | data-science | 13,970 | Auto suggestions cannot be completed when in the middle of multi-line input | <!-- This is the repository for IPython command line, if you can try to make sure this question/bug/feature belong here and not on one of the Jupyter repositories.
If it's a generic Python/Jupyter question, try other forums or discourse.jupyter.org.
If you are unsure, it's ok to post here, though, there are few maintainer so you might not get a fast response.
-->
Completing an ipython 8 [autosuggestion](https://ipython.readthedocs.io/en/stable/whatsnew/version8.html#autosuggestions) by pressing `ctrl-f` or `ctrl-e` works perfectly for code on the final line of multi-line input (the following two screenshots show before and after pressing `ctrl-f` or `ctrl-e`):
<img width="357" alt="Screenshot 2023-03-12 at 14 09 48" src="https://user-images.githubusercontent.com/19657652/224570634-fb856132-4b73-46b4-bdf0-576f9601633f.png">
<img width="363" alt="Screenshot 2023-03-12 at 14 10 00" src="https://user-images.githubusercontent.com/19657652/224570647-40ccf5c1-00d6-4d5e-b965-691ca12ffe51.png">
However pressing `ctrl-f` or `ctrl-e` fails when trying to complete the suggestion in the middle of multi-line input -- instead, the only thing that works is holding down right arrow until the suggestion is completed:
<img width="361" alt="Screenshot 2023-03-12 at 14 11 27" src="https://user-images.githubusercontent.com/19657652/224570743-37756db5-989a-46d4-8447-bb23649d7309.png">
Note this problem persists across different terminals (apple Terminal and iTerm2) and after disabling custom `bashrc` and `inputrc`. Is there any way suggestions can be completed with one keystroke in the middle of multi-line input? | closed | 2023-03-12T20:15:57Z | 2023-03-30T08:29:51Z | https://github.com/ipython/ipython/issues/13970 | [
"bug",
"autosuggestions"
] | lukelbd | 3 |
nvbn/thefuck | python | 1,449 | Having a hard time getting set up on Ubuntu 24.04 | The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
```
pblanton@ThreadRipper:~$ thefuck --version
Traceback (most recent call last):
File "/home/pblanton/.local/bin/thefuck", line 5, in <module>
from thefuck.entrypoints.main import main
File "/home/pblanton/.local/lib/python3.12/site-packages/thefuck/entrypoints/main.py", line 8, in <module>
from .. import logs # noqa: E402
^^^^^^^^^^^^^^^^^^^
File "/home/pblanton/.local/lib/python3.12/site-packages/thefuck/logs.py", line 8, in <module>
from .conf import settings
File "/home/pblanton/.local/lib/python3.12/site-packages/thefuck/conf.py", line 1, in <module>
from imp import load_source
ModuleNotFoundError: No module named 'imp'
```
Your system (Debian 7, ArchLinux, Windows, etc.):
Ubuntu 24.04
How to reproduce the bug:
install on Ubuntu 24.04 and try to run `thefuck --version`
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
Same error
| open | 2024-05-22T19:03:56Z | 2024-09-20T14:07:08Z | https://github.com/nvbn/thefuck/issues/1449 | [] | pblanton | 18 |
dynaconf/dynaconf | fastapi | 207 | [RFC] Allow python.module.path on INCLUDES_FOR_DYNACONF | Dynaconf right now can load includes, the includes should be a list of paths like `['/path/to/file.yaml', '/path/to/glob/*.toml']`
**Problem**
It should also allow includes by `['python.module.paths'..]` instead of only `glob-able` paths.
**Describe the solution you'd like**
I want to be able to use:
```py
INCLUDES_FOR_DYNACONF=['python_module.submodule.settings', ...]
```
**Additional context**
This is request by https://pulp.plan.io/issues/5290
The changes should go on lines:
https://github.com/rochacbruno/dynaconf/blob/master/dynaconf/base.py#L757-L799
And dynaconf already has ability to load Python modules on: https://github.com/rochacbruno/dynaconf/blob/master/dynaconf/loaders/py_loader.py
| closed | 2019-08-16T16:43:05Z | 2019-08-22T12:33:22Z | https://github.com/dynaconf/dynaconf/issues/207 | [
"in progress",
"Not a Bug",
"RFC"
] | rochacbruno | 0 |
allenai/allennlp | nlp | 5,024 | A google colab for the guide | **Is your feature request related to a problem? Please describe.**
(Continuing the discussion from here https://github.com/allenai/allennlp/issues/5017)
- While going through the guide, I felt that it would be a great addition if a google colab notebook can be created.
- As one begins to go through a new ML library / framework, it is very helpful to run the code side by side as going through the guide.
- A lot of people are in general more acquainted with Jupyter notebooks, and thus a simple Jupyter notebook can really onboard the new users very fast.
- We can also add content in the `markdown` cells, so one can learn while going through it!
**Describe the solution you'd like**
Creation of a Colab for the guide.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2021-02-26T16:51:47Z | 2021-02-26T18:38:55Z | https://github.com/allenai/allennlp/issues/5024 | [
"Contributions welcome",
"Feature request"
] | ekdnam | 4 |
dropbox/sqlalchemy-stubs | sqlalchemy | 238 | Column comment type | Running the SQLAlchemy [`inspect`](https://docs.sqlalchemy.org/en/14/core/inspection.html#sqlalchemy.inspect) method on an existing MySQL server on a table with a column with an empty comment field is returning `comment=None` so I guess `comment` to be `Optional[str]` but mypy output complains that it needs to be `str`. Actually I don't know why since the repo code shows:
https://github.com/dropbox/sqlalchemy-stubs/blob/80f89322c3a58c1a8c19588b17869c5f49a1e72b/sqlalchemy-stubs/sql/schema.pyi#L88
Here is a MWE:
````python
from sqlalchemy import Column
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class MyTable(Base):
id = Column('id', comment=None)
````
and output:
````console
test.py:10: error: No overload variant of "Column" matches argument types "str", "None"
test.py:10: note: Possible overload variants:
test.py:10: note: def [_T] Column(self, name: str, type_: Type[TypeEngine[_T]], *args: Any, autoincrement: Union[bool, str] = ..., default: Any = ..., doc: str = ..., key: str = ..., index: bool = ..., info: Mapping[str, Any] = ..., nullable: bool = ..., onupdate: Any = ..., primary_key: bool = ..., server_default: Any = ..., server_onupdate: Union[FetchedValue, FunctionElement[Any]] = ..., quote: Optional[bool] = ..., unique: bool = ..., system: bool = ..., comment: str = ...) -> Column[_T]
test.py:10: note: def [_T] Column(self, name: str, type_: TypeEngine[_T], *args: Any, autoincrement: Union[bool, str] = ..., default: Any = ..., doc: str = ..., key: str = ..., index: bool = ..., info: Mapping[str, Any] = ..., nullable: bool = ..., onupdate: Any = ..., primary_key: bool = ..., server_default: Any = ..., server_onupdate: Union[FetchedValue, FunctionElement[Any]] = ..., quote: Optional[bool] = ..., unique: bool = ..., system: bool = ..., comment: str = ...) -> Column[_T]
test.py:10: note: def [_T] Column(self, name: str, type_: ForeignKey, *args: Any, autoincrement: Union[bool, str] = ..., default: Any = ..., doc: str =
..., key: str = ..., index: bool = ..., info: Mapping[str, Any] = ..., nullable: bool = ..., onupdate: Any = ..., primary_key: bool = ..., server_default: Any = ..., server_onupdate: Union[FetchedValue, FunctionElement[Any]] = ..., quote: Optional[bool] = ..., unique: bool = ..., system: bool = ..., comment: str = ...) -> Column[_T]
.., key: str = ..., index: bool = ..., info: Mapping[str, Any] = ..., nullable: bool = ..., onupdate: Any = ..., primary_key: bool = ..., server_default: Any = ..., server_onupdate: Union[FetchedValue, FunctionElement[Any]] = ..., quote: Optional[bool] = ..., unique: bool = ..., system: bool = ..., comment: str = ...) -> Column[_T]
test.py:10: note: def [_T] Column(self, type_: TypeEngine[_T], *args: Any, autoincrement: Union[bool, str] = ..., default: Any = ..., doc: str = ..., key: str = ..., index: bool = ..., info: Mapping[str, Any] = ..., nullable: bool = ..., onupdate: Any = ..., primary_key: bool = ..., server_default: Any = ..., server_onupdate: Union[FetchedValue, FunctionElement[Any]] = ..., quote: Optional[bool] = ..., unique: bool = ..., system: bool = ..., comment: str = ...) -> Column[_T]
test.py:10: note: def [_T] Column(self, type_: ForeignKey, *args: Any, autoincrement: Union[bool, str] = ..., default: Any = ..., doc: str = ..., key: str = ..., index: bool = ..., info: Mapping[str, Any] = ..., nullable: bool = ..., onupdate: Any = ..., primary_key: bool = ..., server_default: Any = ..., server_onupdate: Union[FetchedValue, FunctionElement[Any]] = ..., quote: Optional[bool] = ..., unique: bool = ..., system: bool = ..., comment: str = ...) -> Column[_T]
Found 1 error in 1 file (checked 1 source file)
```` | open | 2022-02-10T11:39:33Z | 2022-02-10T12:49:03Z | https://github.com/dropbox/sqlalchemy-stubs/issues/238 | [] | sdfordham | 0 |
sqlalchemy/alembic | sqlalchemy | 706 | Newly created Oracle Indexes are trying to be added as new. | How do I debug Alembic to find out why it is trying to add existing newly-created indexes?
I have a table defined as such:
```
class EditEOB(BASE):
'''This is the Edit EOB cross reference table.'''
__tablename__ = 'edit_eob'
__table_args__ = (Index('eob_edit_idx', 'eob_rid', 'edit_rid', 'relationship_type', 'effective_date', unique=True),
Index('pk_edit_eob', 'edit_rid', 'eob_rid', 'relationship_type', 'effective_date', unique=True))
edit_rid = Column(Numeric(15, 0), ForeignKey('edit_base.edit_rid'), primary_key=True)
eob_rid = Column(Numeric(15, 0), ForeignKey('eob_base.eob_rid'), primary_key=True)
effective_date = Column(Date, primary_key=True)
expiration_date = Column(Date)
relationship_type = Column(String(3), primary_key=True)
entry_date = Column(DateTime)
entry_user_id = Column(String(15))
dlm = Column(DateTime)
ulm = Column(String(15))
```
The two indexes are already existing. From the debug:
```
DEBUG [sqlalchemy.engine.base.Engine] Row (u'EOB_EDIT_IDX', u'EOB_RID', u'NORMAL', u'UNIQUE', u'DISABLED', None)
DEBUG [sqlalchemy.engine.base.Engine] Row (u'EOB_EDIT_IDX', u'EDIT_RID', u'NORMAL', u'UNIQUE', u'DISABLED', None)
DEBUG [sqlalchemy.engine.base.Engine] Row (u'EOB_EDIT_IDX', u'RELATIONSHIP_TYPE', u'NORMAL', u'UNIQUE', u'DISABLED', None)
DEBUG [sqlalchemy.engine.base.Engine] Row (u'EOB_EDIT_IDX', u'EFFECTIVE_DATE', u'NORMAL', u'UNIQUE', u'DISABLED', None)
DEBUG [sqlalchemy.engine.base.Engine] Row (u'EOB_EDIT_IDX', u'EOB_RID', u'NORMAL', u'UNIQUE', u'DISABLED', None)
DEBUG [sqlalchemy.engine.base.Engine] Row (u'EOB_EDIT_IDX', u'EDIT_RID', u'NORMAL', u'UNIQUE', u'DISABLED', None)
DEBUG [sqlalchemy.engine.base.Engine] Row (u'EOB_EDIT_IDX', u'RELATIONSHIP_TYPE', u'NORMAL', u'UNIQUE', u'DISABLED', None)
DEBUG [sqlalchemy.engine.base.Engine] Row (u'EOB_EDIT_IDX', u'EFFECTIVE_DATE', u'NORMAL', u'UNIQUE', u'DISABLED', None)
INFO [alembic.autogenerate.compare] Detected added index 'eob_edit_idx' on '['eob_rid', 'edit_rid', 'relationship_type', 'effective_date']'
```
I don't know if this is showing that the index exists or not.
Using pysql and describing the table and index shows:
```
Name Type Null? Comments Indexes
----------------- ------------ ----- -------- -------------------------------
EDIT_RID NUMBER(22) N EOB_EDIT_IDX(2), PK_EDIT_EOB(1)
EOB_RID NUMBER(22) N EOB_EDIT_IDX(1), PK_EDIT_EOB(2)
EFFECTIVE_DATE DATE(7) N PK_EDIT_EOB(4), EOB_EDIT_IDX(4)
EXPIRATION_DATE DATE(7) Y
RELATIONSHIP_TYPE VARCHAR2(3) N EOB_EDIT_IDX(3), PK_EDIT_EOB(3)
ENTRY_DATE DATE(7) Y
ENTRY_USER_ID VARCHAR2(15) Y
DLM DATE(7) Y
ULM VARCHAR2(15) Y
base@BWCUAT desc eob_edit_idx
Property Value
----------------------- ----------------------------------------------------------------
TABLE_OWNER BASE
TABLE_NAME EDIT_EOB
INDEX_TYPE NORMAL
UNIQUENESS UNIQUE
COMPRESSION DISABLED
LEAF_BLOCKS 4
DISTINCT_KEYS 803
AVG_LEAF_BLOCKS_PER_KEY 1
Indexed Columns EOB_RID(1), EDIT_RID(2), RELATIONSHIP_TYPE(3), EFFECTIVE_DATE(4)
``` | closed | 2020-06-25T02:50:55Z | 2020-06-25T15:33:38Z | https://github.com/sqlalchemy/alembic/issues/706 | [
"bug",
"external SQLAlchemy issues",
"oracle"
] | davebyrnew | 14 |
jupyter-book/jupyter-book | jupyter | 1,554 | Add --builder singlehtml to docs? | Would a contribution to the docs that `--builder singlehtml` now exists and what it does be welcome? I spent a while looking for something like this and only discovered that it exists through @TomasBeuzen kindly pointing it out in the CHANGELOG. I am happy to put together a small PR if so. | closed | 2021-12-02T22:14:33Z | 2021-12-04T00:12:16Z | https://github.com/jupyter-book/jupyter-book/issues/1554 | [] | ttimbers | 1 |
marcomusy/vedo | numpy | 545 | did actor.scalar_colors disappear? | I have some old code that used actor.scalar_colors to specify the color for the vertices.
Workes fine in vedo 2021.0.3 but breaks in later versions. Has this been replaced by something else? | closed | 2021-11-23T14:54:32Z | 2021-11-23T15:53:19Z | https://github.com/marcomusy/vedo/issues/545 | [] | RubendeBruin | 4 |
pandas-dev/pandas | data-science | 61,123 | DOC: `read_excel` `nrows` parameter reads extra rows when tables are adjacent (no blank row) | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# Two tables, each header row + 3 data rows
file1 = "test1.xlsx" # blank row between the tables
file2 = "test2.xlsx" # no blank row between the tables
df1 = pd.read_excel(file1, header=0, nrows=4)
df2 = pd.read_excel(file2, header=0, nrows=4)
print(df1)
print(df2)
assert df1.shape == df2.shape
# df2 includes the header row of the following table
```
### Issue Description
Consider two Excel files with nearly identical data: two tables, each with a header row and 3 data rows. The only difference is that the first has a blank row between the tables and the second does not.
It seems that the blank line makes a difference, even when `nrows` is specified. I expect `nrows=4` to always parse 4 rows, yielding a data frame with a header and 3 data rows. Yet without a blank line, `read_excel` also includes the next row, which is the header for the next table.
[test1.xlsx](https://github.com/user-attachments/files/19254818/test1.xlsx)
[test2.xlsx](https://github.com/user-attachments/files/19254817/test2.xlsx)
### Expected Behavior
I expect `nrows=4` to always parse 4 rows regardless of context: a header and 3 data rows.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.0rc1
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : 8.12.3
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.3
lxml.etree : None
matplotlib : 3.8.4
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.2.0
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.2
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : 3.2.0
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
| closed | 2025-03-14T21:30:43Z | 2025-03-19T20:37:24Z | https://github.com/pandas-dev/pandas/issues/61123 | [
"Docs",
"IO Excel",
"good first issue"
] | robertutterback | 1 |
litestar-org/litestar | pydantic | 3,737 | Bug: NameError: name 'BigInteger' is not defined | ### Description
subclassing
```python
class BigIntAuditBase(CommonTableAttributes, BigIntPrimaryKey, AuditColumns, DeclarativeBase):
"""Base for declarative models with BigInt primary keys and audit columns."""
registry = orm_registry
```
cause the mentioned error
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
class User(BigIntAuditBase):
__tablename__ = "core.user_account"
__table_args__ = {"comment": "User accounts for application access"}
__pii_columns__ = {"name", "email", "avatar_url"}
email: Mapped[str] = mapped_column(unique=True, index=True, nullable=False)
name: Mapped[str | None] = mapped_column(nullable=True, default=None)
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
```bash
""
```
### Logs
```bash
(.venv) cbdiesse@MacStudodeKasor appsys % make migrations
ATTENTION: This operation will create a new database migration for any defined models changes.
Migration message: DB Migration 1024-09-14
Using Litestar app from env: 'app.asgi:app'
Loading environment configuration from .env
Traceback (most recent call last):
File "/Users/cbdiesse/work/kas3/appsys/.venv/bin/app", line 8, in <module>
sys.exit(run_cli())
^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/kaspy/app/__main__.py", line 26, in run_cli
run_litestar_cli()
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/__main__.py", line 6, in run_cli
litestar_group()
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/rich_click/rich_command.py", line 367, in __call__
return super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/rich_click/rich_command.py", line 151, in main
with self.make_context(prog_name, args, **extra) as ctx:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/cli/_utils.py", line 224, in make_context
self._prepare(ctx)
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/cli/_utils.py", line 206, in _prepare
env = ctx.obj = LitestarEnv.from_env(ctx.params.get("app_path"), ctx.params.get("app_dir"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/cli/_utils.py", line 112, in from_env
loaded_app = _load_app_from_path(app_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/cli/_utils.py", line 277, in _load_app_from_path
module = importlib.import_module(module_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/Users/cbdiesse/work/kas3/appsys/kaspy/app/asgi.py", line 52, in <module>
app = create_app()
^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/kaspy/app/asgi.py", line 33, in create_app
return Litestar(
^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/app.py", line 486, in __init__
self.register(route_handler)
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/app.py", line 687, in register
route_handler.on_registration(self)
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/handlers/http_handlers/base.py", line 561, in on_registration
super().on_registration(app)
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/handlers/base.py", line 538, in on_registration
self._validate_handler_function()
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/handlers/http_handlers/base.py", line 572, in _validate_handler_function
super()._validate_handler_function()
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/handlers/base.py", line 549, in _validate_handler_function
self.parsed_data_field is not None
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/handlers/base.py", line 241, in parsed_data_field
self._parsed_data_field = self.parsed_fn_signature.parameters.get("data")
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/handlers/base.py", line 226, in parsed_fn_signature
self._parsed_fn_signature = ParsedSignature.from_fn(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/utils/signature.py", line 215, in from_fn
fn_type_hints = get_fn_type_hints(fn, namespace=signature_namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/litestar/utils/signature.py", line 168, in get_fn_type_hints
hints = get_type_hints(fn_to_inspect, globalns=namespace, include_extras=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/cbdiesse/work/kas3/appsys/.venv/lib/python3.12/site-packages/typing_extensions.py", line 1230, in get_type_hints
hint = typing.get_type_hints(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/typing.py", line 2310, in get_type_hints
hints[name] = _eval_type(value, globalns, localns, type_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/typing.py", line 415, in _eval_type
return t._evaluate(globalns, localns, type_params, recursive_guard=recursive_guard)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.5/Frameworks/Python.framework/Versions/3.12/lib/python3.12/typing.py", line 947, in _evaluate
eval(self.__forward_code__, globalns, localns),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 1, in <module>
NameError: name 'BigInteger' is not defined
make: *** [migrations] Error 1
```
### Litestar Version
litestar 2.11.0
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-09-14T12:41:10Z | 2025-03-20T15:54:55Z | https://github.com/litestar-org/litestar/issues/3737 | [
"Bug :bug:"
] | cbdiesse | 3 |
slackapi/bolt-python | fastapi | 583 | Is it possible to handle OAuth flow entirely with websockets? | Our team has an app that currently uses Flask and the OAuth flow to allow our users to set up incoming webhooks in a self-service fashion. We are trying to rewrite the app to use Socket Mode. Using OAuth flow seems to require a redirect URL, but the redirect URL requires HTTP/S. We verified that the SocketModeHandler client's wss_uri is not accepted as the redirect URL. Is there a way to complete the OAuth flow without exposing some HTTP endpoint? It seems almost everything can be done except for passing the generated authorization code back to the socket mode app. If we had a way to get that code, it seems we would be able to use `app.client.oauth_v2_access` to finish the flow.
### Reproducible in:
#### The `slack_bolt` version
slack-bolt==1.6.1
slack-sdk==3.6.0
slackclient==2.5.0
#### Python runtime version
Python 3.9.9
#### OS info
ProductName: macOS
ProductVersion: 11.6.2
BuildVersion: 20G314
Darwin Kernel Version 20.6.0: Wed Nov 10 22:23:07 PST 2021; root:xnu-7195.141.14~1/RELEASE_X86_64
#### Steps to reproduce:
There is nothing to reproduce as this is a question about whether the above is possible. In all our searching in the docs and others' questions we have not found an answer.
### Expected result:
Bolt OAuth functionality works regardless of connection type.
### Actual result:
There appears to be no way to have the websocket-based Slack app receive the generated OAuth authorization code upon user authorization.
## Requirements
An answer to this question would be appreciated; a method/example for completing the OAuth flow with websocket connections would be even better.
| closed | 2022-01-31T21:26:46Z | 2024-04-27T00:21:13Z | https://github.com/slackapi/bolt-python/issues/583 | [
"enhancement"
] | nimjor | 11 |
ageitgey/face_recognition | python | 1,364 | return np.linalg.norm(face_encodings - face_to_compare, axis=1) | * face_recognition version:
* Python version: 3.9
* Operating System: Windows 10
### Description
i was trying this code by sentdex in my compiler but i was getting a error, sentdex in his code have assigned [0] in this line
encoding = face_recognition.face_encodings(image)[0]
while my system is giving error like 'list index out of range' when i copied code from sentdex. but when i tried to remove [0] and made the line like encoding = face_recognition.face_encodings(image) i now am getting another errors what should i do
### What I Did
import face_recognition
import os
import cv2
KNOWN_FACES_DIR = "C:/Users/Admin/Desktop/KnownFaces"
TOLERANCE = 0.5
FRAME_THICKNESS = 3
FONT_THICKNESS = 2
MODEL = "cnn"
video = cv2.VideoCapture(0)
print('Loading known faces...')
known_faces = []
known_names = []
for name in os.listdir(KNOWN_FACES_DIR):
for filename in os.listdir(f'{KNOWN_FACES_DIR}/{name}'):
image = face_recognition.load_image_file(f'{KNOWN_FACES_DIR}/{name}/{filename}')
encoding = face_recognition.face_encodings(image)[0]
known_faces.append(encoding)
known_names.append(name)
print('Processing unknown faces...')
while True:
ret, image = video.read()
locations = face_recognition.face_locations(image, model=MODEL)
encodings = face_recognition.face_encodings(image, locations)
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
print(f', found {len(encodings)} face(s)')
for face_encoding, face_location in zip(encodings, locations):
results = face_recognition.compare_faces(known_faces, face_encoding, TOLERANCE)
match = None
if True in results:
match = known_names[results.index(True)]
print(f' - {match} from {results}')
top_left = (face_location[3], face_location[0])
bottom_right = (face_location[1], face_location[2])
color = [0, 255, 0]
cv2.rectangle(image, top_left, bottom_right, color, FRAME_THICKNESS)
top_left = (face_location[3], face_location[2])
bottom_right = (face_location[1], face_location[2] + 22)
cv2.rectangle(image, top_left, bottom_right, color, cv2.FILLED)
cv2.putText(image, match, (face_location[3] + 10, face_location[2] + 15), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (200, 200, 200), FONT_THICKNESS)
cv2.imshow(video, image)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
cv2.destroyWindow(filename)
```
Traceback (most recent call last):
File "C:\Users\Admin\PycharmProjects\pythonProject1\venv\FT.py", line 18, in <module>
encoding = face_recognition.face_encodings(image)[0]
IndexError: list index out of range
[ WARN:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-sn_xpupm\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
this happend when i didnt remove [0]
[ WARN:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-sn_xpupm\opencv\modules\videoio\src\cap_msmf.cpp (1022) CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. Error: -2147483638
Traceback (most recent call last):
File "C:\Users\Admin\PycharmProjects\pythonProject1\venv\FT.py", line 27, in <module>
locations = face_recognition.face_locations(image, model=MODEL)
File "C:\Users\Admin\PycharmProjects\pythonProject1\venv\lib\site-packages\face_recognition\api.py", line 119, in face_locations
return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")]
File "C:\Users\Admin\PycharmProjects\pythonProject1\venv\lib\site-packages\face_recognition\api.py", line 103, in _raw_face_locations
return cnn_face_detector(img, number_of_times_to_upsample)
TypeError: __call__(): incompatible function arguments. The following argument types are supported:
1. (self: _dlib_pybind11.cnn_face_detection_model_v1, imgs: list, upsample_num_times: int=0, batch_size: int=128) -> std::vector<std::vector<dlib::mmod_rect,std::allocator<dlib::mmod_rect> >,std::allocator<std::vector<dlib::mmod_rect,std::allocator<dlib::mmod_rect> > > >
2. (self: _dlib_pybind11.cnn_face_detection_model_v1, img: array, upsample_num_times: int=0) -> std::vector<dlib::mmod_rect,std::allocator<dlib::mmod_rect> >
Invoked with: <_dlib_pybind11.cnn_face_detection_model_v1 object at 0x0000019063CB1BB0>, None, 1
Did you forget to `#include <pybind11/stl.h>`? Or <pybind11/complex.h>,
<pybind11/functional.h>, <pybind11/chrono.h>, etc. Some automatic
conversions are optional and require extra headers to be included
when compiling your pybind11 module.
[ WARN:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-sn_xpupm\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
this happend when i tried to remove [0]
```
| open | 2021-08-30T11:30:02Z | 2022-10-12T12:01:24Z | https://github.com/ageitgey/face_recognition/issues/1364 | [] | TanishqRajawat12 | 2 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.