| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
tatsu-lab/stanford_alpaca | deep-learning | 180 | Training field corpora | Can I train field-specific corpora based on the LLaMA/Alpaca model for use in that field? What should I do? Thanks | open | 2023-04-06T01:31:38Z | 2023-04-06T01:31:38Z | https://github.com/tatsu-lab/stanford_alpaca/issues/180 | [] | lisa563 | 0 |
plotly/dash | flask | 2,411 | [BUG] Changing `dcc.Checklist.options` triggers callbacks with inputs `dcc.Checklist.value` | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
```
dash 2.8.1
dash-bootstrap-components 1.3.1
dash-core-components 2.0.0
dash-daq 0.5.0
dash-html-components 2.0.0
dash-table 5.0.0
```
_Note that this also occurs with Dash 2.8.0, but not 2.7.0._
- if frontend related, tell us your Browser, Version and OS
- OS: Ubuntu 22.04
- Browser: Chrome
- Version: 109.0.5414.119
**Describe the bug**
When outputting to the `options` property of a `dcc.Checklist`, it triggers callbacks with inputs `dcc.Checklist.value`.
**Expected behavior**
Callbacks with input `dcc.Checklist.value` should only be triggered if that checklist's value is explicitly changed (either through callback or user input), _not_ if only the checklist options are changed.
**Minimal reproducible example**
I've tried to create a simple example app to show exactly what the issue is. I tried taking a screen recording though unfortunately Ubuntu is not helping out currently.
In essence, when running with Dash <= 2.7.0, _only_ selecting/deselecting a city should increment the counter below the checklist. With Dash >= 2.8.0, however, the counter is also incremented when typing in the search bar above the checklist (not desired behaviour)
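For reference, a minimal sketch of the setup described above (the names and layout are mine, not from the attached MRE):
```python
from dash import Dash, dcc, html, Input, Output, State

CITIES = ["Amsterdam", "Berlin", "Copenhagen"]

app = Dash(__name__)
app.layout = html.Div([
    dcc.Input(id="search", type="text", value=""),        # search bar
    dcc.Checklist(id="cities", options=CITIES, value=[]),
    html.Div(id="counter", children="0"),                 # change counter
])

@app.callback(Output("cities", "options"), Input("search", "value"))
def filter_options(query):
    # Only `options` changes here; the counter callback should not fire.
    return [c for c in CITIES if query.lower() in c.lower()]

@app.callback(
    Output("counter", "children"),
    Input("cities", "value"),
    State("counter", "children"),
    prevent_initial_call=True,
)
def count_value_changes(_value, current):
    # On Dash >= 2.8.0 this also fires when only the options update.
    return str(int(current) + 1)

if __name__ == "__main__":
    app.run()
```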
[DashBugMRE.zip](https://github.com/plotly/dash/files/10573514/DashBugMRE.zip)
| closed | 2023-02-02T21:19:49Z | 2023-02-22T20:27:43Z | https://github.com/plotly/dash/issues/2411 | [] | mclarty3 | 5 |
mckinsey/vizro | pydantic | 508 | Issue with dark/light mode and the graphics in mobile version | ### Description
When we change the light/dark mode in the mobile version, one of the graphics disappears, as you can see in the gif.

I saw this error while following this tutorial -> https://www.youtube.com/watch?v=wmQ6_GZ0zSk
### Expected behavior
We expect both graphics to remain on the screen.
### Which package?
vizro
### Package version
0.1.17
### Python version
3.12.2
### OS
Windows
### How to Reproduce
1. Install Python, Dash, Pandas, and Vizro with 'pip install'
2. Execute the app below with 'py _file_name_.py'
**Code:**
```
import vizro.plotly.express as px
from vizro import Vizro
import vizro.models as vm
df = px.data.iris()
print(df.head())
page = vm.Page(
title="My first dashboard",
components=[
# components consist of vm.Graph or vm.Table
vm.Graph(id="scatter_chart", figure=px.scatter(df, x="sepal_length", y="petal_width", color="species")),
vm.Graph(id="hist_chart", figure=px.histogram(df, x="sepal_width", color="species")),
],
controls=[
# controls consist of vm.Filter or vm.Parameter
# filter the dataframe (df) of the target graph (histogram), by column sepal_width, using the dropdown
vm.Filter(column="sepal_width", selector=vm.Dropdown(), targets=["hist_chart"]),
],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
### Output
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-05-31T23:11:06Z | 2024-06-03T15:25:32Z | https://github.com/mckinsey/vizro/issues/508 | [
"Bug Report :bug:"
] | RoniAlvesArt | 4 |
Anjok07/ultimatevocalremovergui | pytorch | 903 | power supplies, or user case can generate noise | closed | 2023-10-16T08:59:59Z | 2023-10-16T09:10:16Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/903 | [] | IngesebTN | 0 | |
tensorflow/datasets | numpy | 5,028 | [data request] <Kinetics 700> | * Name of dataset: Kinetics 700
* URL of dataset: <url>
* License of dataset: Google Inc. under a Creative Commons Attribution 4.0 International License.
* Short description of dataset and use case(s): A collection of large-scale, high-quality datasets of URL links of up to 650,000 video clips that cover 700 human action classes. The videos include human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands and hugging. Each action class has at least 400/600/700 video clips. Each clip is human annotated with a single action class and lasts around 10 seconds.
Folks who would also like to see this dataset in `tensorflow/datasets`, please thumbs-up so the developers can know which requests to prioritize.
And if you'd like to contribute the dataset (thank you!), see our [guide to adding a dataset](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md). | open | 2023-07-25T01:43:06Z | 2023-07-25T12:27:18Z | https://github.com/tensorflow/datasets/issues/5028 | [
"dataset request"
] | XinyangHan | 1 |
graphistry/pygraphistry | jupyter | 167 | [BUG] handling of bytes cols | Currently, a bytestring col in api=3 returns the following non-obvious error:
```
Exception: {'data': {'args': {'compression': None, 'dataset_id': 'ef83f4f0ecb442e082d0edfb974f5f95'}, 'error_message': 'cuDF failure at: /conda/conda-bld/libcudf_1591199195844/work/cpp/src/column/column_view.cpp:48: Compound (parent) columns cannot have data'}, 'error_code': 400, 'message': 'Failed to parse', 'success': False}
```
At a minimum, we should give a more friendly error. Even better, we may be able to detect & coerce to `str` in the backend. Finally, should probably check w/ cuDF on expected behavior.
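If coercion is the route taken, a minimal sketch of the idea (the helper name is mine, not PyGraphistry's API):
```python
import pandas as pd

def coerce_bytes_cols(df: pd.DataFrame) -> pd.DataFrame:
    """Decode any bytes values to str so the upload never hits cuDF's limits."""
    out = df.copy()
    for col in out.columns:
        if out[col].map(lambda v: isinstance(v, bytes)).any():
            out[col] = out[col].map(
                lambda v: v.decode("utf-8", "replace") if isinstance(v, bytes) else v
            )
    return out
```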
Originally brought up by Vinayaka | open | 2020-08-17T05:54:04Z | 2020-08-18T11:04:33Z | https://github.com/graphistry/pygraphistry/issues/167 | [
"bug"
] | lmeyerov | 1 |
Yorko/mlcourse.ai | matplotlib | 608 | Potentially incorrect statement about .map vs .replace in topic1_pandas_data_analysis.ipynb | In **Applying Functions to Cells, Columns and Rows** section of topic1_pandas_data_analysis.ipynb exercise when explaining how to replace values in column it is stated that `.replace` does the same thing as `.map` that, I think, is just partially correct.
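A quick toy illustration of the difference described below (the data is mine, not from the notebook):
```python
import pandas as pd

s = pd.Series(["yes", "no", "maybe"])
mapping = {"yes": True, "no": False}

s.map(mapping)      # keys missing from the dict become NaN:  True, False, NaN
s.replace(mapping)  # unmatched values pass through:          True, False, "maybe"
```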
While `.map`, applied to the dataframe, produces NaN values for keys not found in the mapping, `.replace` only updates values that match keys in the mapping. | closed | 2019-09-04T19:06:22Z | 2019-09-08T13:59:36Z | https://github.com/Yorko/mlcourse.ai/issues/608 | [] | andrei-khveras | 1 |
plotly/dash | dash | 3,131 | Remove server=app.server requirement for Gunicorn | Our usual recommendation for deployment is `gunicorn` as the production server, and you set up a `Procfile` or similar with:
```console
gunicorn app:server --workers 4
```
But wait... what's `server`? Well, you need to add another line to your `app.py` (that's the module `app` referred to before the colon):
```python
app = Dash()
...
server = app.server
```
Ugh, a code change! Ripe for problems. Instead we could add a method to Dash, e.g. `wsgi_app`, which returns the Flask app; this way the `Procfile` can be written as:
```console
gunicorn app:app --workers 4
```
And the `app` object already in your Dash app is picked up. Simpler, one fewer code change. Nice. Vizro does this: https://vizro.readthedocs.io/en/stable/pages/user-guides/run/#jupyter . | open | 2025-01-23T16:23:40Z | 2025-01-23T20:27:00Z | https://github.com/plotly/dash/issues/3131 | [
"feature",
"P1"
] | ndrezn | 0 |
tox-dev/tox | automation | 2,473 | Support arbitrary arguments in the environment | Similar to how pip and pytest allow arbitrary arguments to be inferred from the environment, I'd like tox to do the same.
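A hypothetical sketch of the two styles applied to the scenario described below (neither variable exists in tox today):
```console
TOX_ARGS="--site-packages" tox -e py310      # pytest-style catch-all
TOX_SITE_PACKAGES=true tox -e py310          # pip-style, one variable per option
```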
Today, I'd like to be able in one CI environment to declare `--site-packages`, but to do so, I have to update all CI environments to take a `{matrix.toxargs}` and then set `toxargs` only in the relevant environment to `--site-packages`. Better would be to allow for `TOX_ARGS=--site-packages` or `TOX_SITE_PACKAGES=true` to indicate in that environment to act is if `--site-packages` had been passed. I prefer the latter option (variable per option, the pip approach) because it generalizes better, although I recognize that would take more work than `TOX_ARGS` (the pytest approach). | closed | 2022-08-07T20:28:40Z | 2022-08-26T17:41:40Z | https://github.com/tox-dev/tox/issues/2473 | [
"feature:new"
] | jaraco | 3 |
PablocFonseca/streamlit-aggrid | streamlit | 47 | `gridOptions` dict example | First, thank you for this much needed component!
Please, consider the MWE below. I cannot figure out the syntax to pass the list of options directly to `gridOptions`, could you provide an example?
```
from st_aggrid import AgGrid
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/airline-safety/airline-safety.csv')
AgGrid(df, gridOptions= {'configure_selection':{'selection_mode':'multiple', 'use_checkbox':True}})
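# A hedged sketch of the raw dict form: `configure_selection` is a
# GridOptionsBuilder method, not a gridOptions key. gridOptions mirrors
# AG Grid's JS API, so keys like rowSelection/checkboxSelection should
# work (untested against this exact version):
grid_options = {
    "rowSelection": "multiple",
    "columnDefs": [{"field": c, "checkboxSelection": i == 0}
                   for i, c in enumerate(df.columns)],
}
AgGrid(df, gridOptions=grid_options)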
``` | closed | 2021-11-17T15:32:14Z | 2022-01-24T02:22:23Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/47 | [] | GitHunter0 | 2 |
apache/airflow | python | 47,886 | Backfills || Unexpected Scheduled DAG Runs Created After Backfill | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
I observed an issue with backfills in Airflow. I have a DAG with start_date=datetime(2025, 1, 20) and catchup=True. When triggering a backfill for the date range 2025-01-01 to 2025-01-03, backfill DAG runs were correctly created for 2025-01-02 and 2025-01-03. However, I also noticed scheduled DAG runs being generated for dates from 2025-01-04 to 2025-01-19, which was unexpected.

### What you think should happen instead?
_No response_
### How to reproduce
1. Unpause the DAG below.
2. Execute a backfill for the date range 2025-01-01 to 2025-01-03.
3. Notice the unexpected DAG runs for 2025-01-04 to 2025-01-19.
```
from airflow import DAG
from airflow.providers.standard.sensors.bash import BashSensor
from datetime import datetime
date = "{{ ds }}"
with DAG(
"example_bash_sensor",
tags=[ "sensor"],
schedule="@daily",
catchup=True,
start_date=datetime(2025, 1, 20),
) as dag:
task1 = BashSensor(task_id="sleep_10", bash_command="sleep 1")
task2 = BashSensor(
task_id="sleep_total",
bash_command="echo $EXECUTION_DATE",
env={"EXECUTION_DATE": date},
)
```
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-18T05:06:20Z | 2025-03-19T16:10:52Z | https://github.com/apache/airflow/issues/47886 | [
"kind:bug",
"priority:high",
"area:core",
"area:backfill",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 0 |
jstrieb/github-stats | asyncio | 84 | Option to disable stats from private repos | I think it's a cool feature that data from private repos is also considered. However, I would like to only include data from my public repos. Is this possible? | open | 2022-12-13T14:59:57Z | 2023-05-02T08:50:39Z | https://github.com/jstrieb/github-stats/issues/84 | [] | fritzrehde | 3 |
miguelgrinberg/Flask-SocketIO | flask | 1,406 | Nginx + eventlet + https | Following the official guide: https://flask-socketio.readthedocs.io/en/latest/ I am trying to serve my app using nginx and https:
This is my configuration so far:
```
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name myserver.com;
ssl_certificate /etc/letsencrypt/live/myserver.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myserver.com/privkey.pem;
access_log /var/log/nginx/myserver.com.access.log;
error_log /var/log/nginx/myserver.com.error.log;
location / {
include proxy_params;
proxy_pass http://127.0.0.1:5000;
}
location /static {
alias /home/src/static;
expires 30d;
}
location /socket.io {
include proxy_params;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://127.0.0.1:5000/socket.io;
}
}
```
However , when trying to connect the flask server reports:
```
Server initialized for eventlet.
79a87f9d10764559b2f166a8269e394a: Sending packet OPEN data {'sid': '79a87f9d10764559b2f166a8269e394a', 'upgrades': ['websocket'], 'pingTimeout': 60000, 'pingInterval': 25000}
79a87f9d10764559b2f166a8269e394a: Sending packet MESSAGE data 0
https://myserver.com is not an accepted origin.
```
Nginx reports:
```
*1422 connect() failed (111: Connection refused) while connecting to upstream, client: my.ip, server: myserver.com, request: "GET /socket.io/?EIO=3&transport=polling&t=NMut1JN HTTP/2.0", upstream: "http://127.0.0.1:5000/socket.io/?EIO=3&transport=polling&t=NMut1JN", host: "myserver.com", referrer: "https://myserver.com/master"
```
The clients give me:
```
WebSocket connection to '<URL>' failed: WebSocket is closed before the connection is established.
```
Using http instead of https, the app works as intended.
Any help would be much appreciated!
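For what it's worth, the `is not an accepted origin` message points at Flask-SocketIO's CORS check rather than at nginx. A minimal sketch of the relevant server-side setting, assuming the app is served from https://myserver.com:
```python
from flask_socketio import SocketIO

# Allow the HTTPS origin that browsers now use; a list or "*" also works.
socketio = SocketIO(app, cors_allowed_origins="https://myserver.com")
```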
| closed | 2020-11-11T20:21:47Z | 2020-11-12T11:12:19Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1406 | [
"question"
] | brunakov | 5 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 304 | Audio file specification issue | Hello author, when running the ready-to-use code you provided (the project that ships a pretrained model) and using my own audio for prediction, I got the following error:
<img width="893" alt="image" src="https://user-images.githubusercontent.com/72587415/200839263-623f3138-908d-4cb6-9898-36ff30eaaf0c.png">
I suspect it is an audio length issue; I had already preprocessed the audio with ffmpeg, as follows:
<img width="424" alt="image" src="https://user-images.githubusercontent.com/72587415/200839438-590397f2-b81b-4494-98e6-b633f34e30ef.png">
The sample rate should be fine, so I wonder whether it is an audio length problem. If so, could you please tell me how to normalize the input audio? I tried changing the length of the numpy array, but that didn't work either. Any guidance would be much appreciated. Thank you very much! | open | 2022-11-09T13:13:54Z | 2022-11-10T08:37:35Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/304 | [] | YUZHIWANG-bug | 4 |
ivy-llc/ivy | tensorflow | 28,575 | Fix Frontend Failing Test: tensorflow - creation.paddle.tril | To-do List: https://github.com/unifyai/ivy/issues/27499 | closed | 2024-03-13T00:18:59Z | 2024-03-21T19:50:06Z | https://github.com/ivy-llc/ivy/issues/28575 | [
"Sub Task"
] | ZJay07 | 0 |
FactoryBoy/factory_boy | sqlalchemy | 271 | Deprecate Fuzzy Attributes that are already supported in Faker | I'd like to start a conversation around deprecating and eventually removing the Fuzzy Attributes that are already supported by Faker: http://factoryboy.readthedocs.org/en/latest/fuzzy.html
Reasoning:
I'd prefer to limit the scope of FactoryBoy to handling turning input data into models, including complicated scenarios like chaining. And then delegate the fuzzing of the data as much as possible to external libraries like Faker.
Since we now wrap Faker, there's no reason for us to provide FuzzyAttributes that are already provided by Faker--it's just duplicated work. For any duplicate FuzzyAttributes that are more powerful than their Faker equivalents, where possible, I'd rather see that code moved over to Faker.
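For illustration, a sketch of what such a migration could look like (the toy model and fields are mine):
```python
import factory

class UserFactory(factory.Factory):
    class Meta:
        model = dict

    # Faker-backed declarations replacing fuzzy ones:
    name = factory.Faker("name")                              # was fuzzy.FuzzyText()
    age = factory.Faker("pyint", min_value=18, max_value=99)  # was FuzzyInteger(18, 99)
```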
@rbarrois your thoughts?
| open | 2016-02-17T21:07:59Z | 2024-10-10T10:02:37Z | https://github.com/FactoryBoy/factory_boy/issues/271 | [
"DesignDecision"
] | jeffwidman | 34 |
zihangdai/xlnet | nlp | 212 | Procedure to process data from ClueWeb 2012-B and Common Crawl | Thank you very much for releasing the code! I have a question related to the data processing part. In the paper, you mentioned that
> We use heuristics to aggressively filter out short or low-quality articles for ClueWeb 2012-B and Common Crawl, which results in 19GB and 78GB text respectively.
Could you please explain what types of tools or steps to clean the data from ClueWeb 2012-B and Common Crawl?
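Not the authors' pipeline, but heuristics of this shape are commonly used for such filtering; the thresholds below are made up for illustration:
```python
def keep_document(text: str) -> bool:
    """Toy length/quality filter; all thresholds are illustrative."""
    words = text.split()
    if len(words) < 200:                      # drop short articles
        return False
    mean_len = sum(len(w) for w in words) / len(words)
    if mean_len < 3:                          # likely boilerplate or gibberish
        return False
    unique_ratio = len(set(words)) / len(words)
    return unique_ratio > 0.3                 # drop highly repetitive pages
```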
Thanks a lot in advance for your help! | open | 2019-08-14T12:45:47Z | 2019-08-14T12:45:47Z | https://github.com/zihangdai/xlnet/issues/212 | [] | formiel | 0 |
STVIR/pysot | computer-vision | 517 | Unable to get DET dataset | It seems the original link has expired. Anyone could help with getting the DET dataset? Thanks | open | 2021-03-26T13:38:28Z | 2021-04-09T21:14:26Z | https://github.com/STVIR/pysot/issues/517 | [
"critical"
] | 0CTA0 | 1 |
PaddlePaddle/ERNIE | nlp | 899 | No module named 'erniekit.common' | When I run the following command:
!python run_trainer_ernie_gen.py --param_path ./examples/cls_ernie_gen_infilling_ch.json
it raises an error.
I wrote the code exactly as in the official documentation, unchanged.
Traceback (most recent call last):
File "run_trainer_ernie_gen.py", line 8, in <module>
from erniekit.common.register import RegisterSet
ModuleNotFoundError: No module named 'erniekit.common'
Is this a version-compatibility issue? | closed | 2023-04-03T13:23:13Z | 2023-08-13T05:30:35Z | https://github.com/PaddlePaddle/ERNIE/issues/899 | [
"wontfix"
] | BAOKAIGE | 1 |
Neoteroi/BlackSheep | asyncio | 43 | Upgrade to httptools 0.1.1 | closed | 2020-09-08T22:53:13Z | 2020-09-11T17:27:26Z | https://github.com/Neoteroi/BlackSheep/issues/43 | [] | RobertoPrevato | 0 | |
psf/black | python | 3,874 | Don't wrap `| None` on a new line | **Describe the style change**
I don't like current behavior on documented FastAPI handlers.
```python
@router.get("/path")
async def some_handler(
can_restrict_members: bool | None = Query(None, description="Can restrict members?"), # the line is toooooo long
):
...
```
**Examples in the current _Black_ style**
```python
async def some_handler(
can_restrict_members: bool
| None = Query(
None,
description="Can restrict members?",
),
):
...
```
**Desired style**
<!-- How do you think _Black_ should format the above snippets: -->
```python
async def some_handler(
can_restrict_members: bool | None = Query(
None,
description="Can restrict members?",
),
):
...
```
**Additional context**
```txt
black==23.9.0
```
| closed | 2023-09-10T16:07:42Z | 2023-09-10T21:44:06Z | https://github.com/psf/black/issues/3874 | [
"T: style"
] | Olegt0rr | 1 |
yinkaisheng/Python-UIAutomation-for-Windows | automation | 99 | Some control IDs in the control tree retrieved by uiautomation.py are not recognized | Part of the app's page definition file looks like this:
<tr>
<td></td>
<td height="50">
<div align="center" class="prompt_11" id="prompt">请输入身份证号</div>
</td>
</tr>
<tr>
<td name="handTd" id="iDCard_handTd"></td>
<td background="../images/sub/keyboard_input2.gif" width="344" align="center">
<input type="text" id="iDCard" name="iDCard" class="input_2" maxlength="18" size="25" value="" onclick="initKey(this, 18);" >
</td>
</tr>
For the UI defined by this snippet, only the controls with id="iDCard_handTd" and id="iDCard" have their IDs recognized in @AutomationLog.txt; the ID of the control with id="prompt" is not recognized.
In practice, most of the IDs cannot be recognized.
The operating system is Windows XP, with patches applied.
This is the corresponding part of @AutomationLog.txt:
ControlType: DataItemControl ClassName: AutomationId: Rect: (647,172,1003,222)[356x50] Name: 请输入身份证号 Handle: 0x0(0) Depth: 16 GridItemPattern.Row: 1 GridItemPattern.Column: 1 SupportedPattern: GridItemPattern LegacyIAccessiblePattern ScrollItemPattern
ControlType: TextControl ClassName: AutomationId: Rect: (741,180,916,214)[175x34] Name: 请输入身份证号 Handle: 0x0(0) Depth: 17 ValuePattern.Value: SupportedPattern: LegacyIAccessiblePattern ValuePattern
ControlType: DataItemControl ClassName: AutomationId: iDCard_handTd Rect: (569,222,647,306)[78x84] Name: Handle: 0x0(0) Depth: 16 GridItemPattern.Row: 0 GridItemPattern.Column: 0 SupportedPattern: GridItemPattern LegacyIAccessiblePattern ScrollItemPattern
ControlType: ImageControl ClassName: AutomationId: Rect: (569,256,632,272)[63x16] Name: Handle: 0x0(0) Depth: 17 SupportedPattern: LegacyIAccessiblePattern ScrollItemPattern
ControlType: DataItemControl ClassName: AutomationId: Rect: (647,222,1003,306)[356x84] Name: Handle: 0x0(0) Depth: 16 GridItemPattern.Row: 1 GridItemPattern.Column: 1 SupportedPattern: GridItemPattern LegacyIAccessiblePattern ScrollItemPattern
ControlType: EditControl ClassName: AutomationId: iDCard Rect: (671,250,966,276)[295x26] Name: Handle: 0x0(0) Depth: 17 TextPattern.Text: ValuePattern.Value: SupportedPattern: LegacyIAccessiblePattern ScrollItemPattern TextPattern ValuePattern | closed | 2019-10-12T06:53:55Z | 2019-10-13T01:14:27Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/99 | [] | lwwang-lw | 1 |
browser-use/browser-use | python | 779 | Improve extraction tests | ### Problem Description
The current DOM extraction layer tests don't contain assertions, so I'll improve them by adding assertions and more tests to verify that the correct elements are extracted from the DOMs of various websites.
### Proposed Solution
Create new tests and add assertions to existing tests.
### Alternative Solutions
_No response_
### Additional Context
_No response_ | open | 2025-02-20T03:07:19Z | 2025-02-20T03:07:19Z | https://github.com/browser-use/browser-use/issues/779 | [
"enhancement"
] | PaperBoardOfficial | 0 |
AutoGPTQ/AutoGPTQ | nlp | 507 | [BUG] Qwen-14B-Chat-Int4 GPTQ model is slower than original model Qwen-14B-Chat greatly | **Describe the bug**
The Qwen-14B-Chat-Int4 GPTQ model is much slower than the original Qwen-14B-Chat model.
**Hardware details**
A100 80G
**Software version**
Version of relevant software such as operation system, cuda toolkit, python, auto-gptq, pytorch, transformers, accelerate, etc.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
| open | 2024-01-08T11:15:08Z | 2024-03-15T06:24:30Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/507 | [
"bug"
] | micronetboy | 2 |
lucidrains/vit-pytorch | computer-vision | 42 | What if I want to do imagenet transfer learning on my own data set | Do I only need to download the model parameters of VIT-H_14? If this is the case, the connection you provided seems to have no model parameters.https://github.com/rwightman/pytorch-image-models | open | 2020-12-08T07:58:11Z | 2021-01-12T08:19:22Z | https://github.com/lucidrains/vit-pytorch/issues/42 | [] | tianle-BigRice | 4 |
xlwings/xlwings | automation | 2,108 | Imprt Error | Import xlwings as xw
C: "ProgranDat aWAnaconda3"l iblsite-packages lxl w ingslprolembedded_ code. py in <module>
9 from . .main inport Book
11 LicenseHandler . val idate_l icense(pro")
12
13
C: WProgranDat alAnaconda3"l iblsite-packageslxl w indsbrolut i ls. py in val idate_l icense(product , lic
49
def val idate_l icense(product , license_type-None):
50
51
cipher -suite - LicenseHandler . get_cipher()
key = LicenseHandler . get_l i cense()
53
\ icense_info - json . loads( cipher _suite . decrypt (key.encode() ). decode())
C: WProgramDat aWAnaconda3Wl ibWsite- dackages"lxl wingstprowut i ls. py in get_l icense()
34
35
if os.path.exists(config_file):
36
with open(config_file, 'r') as f:
config - f.readl ines()
38
for line in config:
**UnicodeDecodeError:
C949" codec can ' t decode** byte 0x9b in position 0: i llegal mult ibyte seauence
| closed | 2022-12-01T05:15:33Z | 2023-01-18T19:30:45Z | https://github.com/xlwings/xlwings/issues/2108 | [] | SungO-Kim | 2 |
fastapi-users/fastapi-users | fastapi | 1,448 | All parameters shown on requests on register | ## Describe the bug
All parameter of user create is shown on register route

## Expected behavior
Only mail, password, username and role should be shown
## Configuration
- Python version :3.10.12
- fastapi==0.112.2
- fastapi-users==13.0.0
### FastAPI Users configuration
```py
app.include_router(
fastapi_users.get_register_router(user_schema=UserRead, user_create_schema= UserCreate),
prefix="/auth",
tags=["auth"],
)
class UserCreate(schemas.BaseUserCreate):
"""
UserCreate class for creating a new user.
Attributes:
username (str): The username of the user.
role (UserRole): The role assigned to the user. Defaults to UserRole.FREE.
"""
username: str
role: UserRole = UserRole.FREE
```
| closed | 2024-10-11T19:15:19Z | 2024-11-03T12:33:21Z | https://github.com/fastapi-users/fastapi-users/issues/1448 | [
"bug"
] | CamilleBarnier | 2 |
deezer/spleeter | tensorflow | 133 | [Bug] F tensorflow/stream_executor/cuda/cuda_fft.cc:436] failed to initialize batched cufft plan with customized allocator: | <!-- PLEASE READ THIS CAREFULLY :
- Any issue which does not respect following template or lack of information will be considered as invalid and automatically closed
- First check FAQ from wiki to see if your problem is not already known
-->
## Description
I'm not able to get stems getting `F tensorflow/stream_executor/cuda/cuda_fft.cc:436] failed to initialize batched cufft plan with customized allocator:` error when I try to run
`spleeter separate -i BillieJean.mp3 -o audio_output -p spleeter:4stems`.
What I'm trying to achieve is to just be able to separate stems from the songs. As the result I want to be able to [remove only drum stem](https://github.com/deezer/spleeter/issues/8#issuecomment-549121231) so that I can play along with it. I'm not sure if I should train something or I can separate stems right away, so please help me understand what is the correct way of doing this.
<!-- Give us a clear and concise description of the bug you are reporting. -->
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Installed using the documentation
2. Run as `spleeter separate -i BillieJean.mp3 -o audio_output -p spleeter:4stems`
3. Got `F tensorflow/stream_executor/cuda/cuda_fft.cc:436] failed to initialize batched cufft plan with customized allocator:` error
## Output
```bash
D:\Stems>spleeter separate -i BillieJean.mp3 -o audio_output -p spleeter:4stems
2019-11-24 16:16:12.729386: F tensorflow/stream_executor/cuda/cuda_fft.cc:436] failed to initialize batched cufft plan with customized allocator:
D:\Stems>
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 Enterprise |
| Installation type | Conda |
| RAM available | 16 GB |
| Hardware spec | Nvidia GeForce GT 740 / i7 8700K |
## Additional context
<!-- Add any other context about the problem here, references, cites, etc.. -->
I read somewhere that this might be caused by a lack of free space, so (even though I run this command from D drive which has 60 GB of free space) I freed up 10 GB on C drive, which didn't help.
It doesn't matter if I run this in regular cmd or in Anaconda Prompt.
I feel pretty uncomfortable with all this cmd stuff, so I hope you'll be polite. Thanks in advance!
"bug",
"invalid"
] | ilyakonrad | 6 |
roboflow/supervision | machine-learning | 1,004 | AttributeError: type object 'Detections' has no attribute 'from_coco_annotations' | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
I copied a tutorial where somebody used the from_coco_annotations function. Now I get this error: Exception has occurred: AttributeError
type object 'Detections' has no attribute 'from_coco_annoations'
File "/home/labor/Dokumente/Studienarbeit_productiv/Code/transformer/own-detr/agar_loader.py", line 65, in <module>
detections = sv.Detections.from_coco_annotations(coco_annotation=annotations)
AttributeError: type object 'Detections' has no attribute 'from_coco_annoations '
I saw that they renamed `from_coco_annotations` to `from_coco`, but that also didn't work for me.
Any suggestions how to fix this problem?
### Environment
_No response_
### Example
image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm")
train_dataset = AgarDetection(image_directory_path=train_path, image_processor=image_processor, train=True)
val_dataset = AgarDetection(image_directory_path=val_path, image_processor=image_processor, train=False)
print("Number of training examples:", len(train_dataset))
print("Number of validation examples:", len(val_dataset))
image_ids = train_dataset.coco.getImgIds()
image_id = random.choice(image_ids)
print('Image #{}'.format(image_id))
image = train_dataset.coco.loadImgs(image_id)[0]
annotations = train_dataset.coco.imgToAnns[image_id]
image_path = os.path.join(train_dataset.root, image['file_name'])
image = cv2.imread(image_path)
detections = sv.Detections.from_coco(coco_annotation=annotations)
categories = train_dataset.coco.cats
id2label = {k: v['name'] for k,v in categories.items()}
labels = [
f"{id2label[class_id]}"
for _, _, class_id, _
in detections
]
box_annotator = sv.BoxAnnotator()
frame = box_annotator.annotate(scene=image, detections=detections, labels=labels)
cv2.imshow("random image", image)
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-03-15T11:26:38Z | 2024-03-19T10:48:05Z | https://github.com/roboflow/supervision/issues/1004 | [
"bug"
] | Sebi2106 | 6 |
httpie/cli | python | 1,047 | Feature request: load whole request from a file | It's possible that HTTPie has such a feature already, but I was unsuccessful in trying to find it.
## What enhancement would you like to see?
To have a possibility to load the whole request (URL, HTTP method, headers, payload, etc.) from a file.
## What problem does it solve?
The use case is to have a library of requests stored as a hierarchy of directories and files. It should be platform-independent, so it works with HTTPie on any platform it runs on.
**What is it good for?**
When working on an API project, I want to have a library of API requests consisting of prepared complete request files that I can simply execute using HTTPie, so I don't have to look for URLs, headers, parameters, payloads, etc. in documentation or even in a source code every time I need to make a request.
Such library for a single project may look like the following:
project
|
+ users
| |
| + create.http
| + update.http
| + delete.http
| + list.http
|
+ products
|
...
Currently, I'm always creating such a library for API request payloads, storing them in files. But that's not enough to make a request. I still have to look what URL, headers, and other parameters a specific API endpoint or request requires. I can have a separate library of HTTPie commands for each payload or search them in the shell history, but it would be great to have a single file containing everything needed to make a valid API request, so the only information I need to provide to HTTPie is that file:
http --request project/users/list.http
HTTPie is currently even able to create those request files (see below), I just didn't find a way to load them.
## Provide any additional information, screenshots, or code examples below
### Possible solution using shell script
An easy solution would be to store request commands as shell scripts, for example `project/users/create.sh` might contain the following:
http [parameters] [method] <url> < ./create.json
I would have two files for each request – a shell script and a payload file (if it needs one).
The problem with this solution is that it's platform-dependent. Those shell scripts would work with bash only.
### Proposed solution
I have managed to find a feature that is basically doing the first half of what I need. It's the [Offline mode](https://httpie.io/docs#offline-mode) and the `--offline` parameter. The following command saves the whole request including the payload into the file `request.http` (without actually sending any request):
http --offline [parameters] [method] <url> < payload.json > request.http
The file it produces contains the whole RAW HTTP request as is being sent, everything needed to make a request:
POST /api/endpoint HTTP/1.1
User-Agent: HTTPie/2.3.0
Accept-Encoding: gzip, deflate
Accept: application/json, */*;q=0.5
Connection: keep-alive
Content-Type: application/json
Content-Length: 8
Host: test.dev
payload
It's possible to send such request using for example netcat:
nc <domain> <port> < request.http
But I didn't find a way to do the same with HTTPie. I have tried the analogy to the `nc` command (`http <domain> < request.http`), but HTTPie considers the file to be just a payload.
I would like to have a possibility similar to the following:
http --request request.http
**How I would expect it to work:**
1. If a request file is provided, HTTPie reads all request parameters (URL, method, headers, payload, etc.) from there
2. If any parameter is provided on the command line, it overwrites the value from the request file
**Current issues:**
1. The URL parameter is mandatory. It would be nice to read it from the file also, but if it would be against the HTTPie's design, it wouldn't be a big issue. I can provide only the domain part in the URL parameter and have the whole URL path part loaded from the file.
2. HTTPie can save the request to a file but it cannot read it back, or I didn't find a way to do that.
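In the meantime, a rough workaround is possible outside HTTPie. A hedged Python sketch that replays a request file produced by `--offline` (assumes a well-formed HTTP/1.1 file; the scheme is guessed because a raw request doesn't carry it):
```python
import requests

def replay(path: str, scheme: str = "https") -> requests.Response:
    raw = open(path, "rb").read().decode()
    head, _, body = raw.replace("\r\n", "\n").partition("\n\n")
    request_line, *header_lines = head.splitlines()
    method, target, _version = request_line.split()
    headers = dict(line.split(": ", 1) for line in header_lines)
    headers.pop("Content-Length", None)          # let requests recompute it
    url = f"{scheme}://{headers.pop('Host')}{target}"
    return requests.request(method, url, headers=headers, data=body)
```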
### Alternative solution
Everything is the same as in the proposed solution, but instead of RAW HTTP request data, the request file contains HTTPie parameters in some form (e.g. similar to default options in the `config.json` file). | closed | 2021-03-27T14:14:39Z | 2021-05-05T11:10:24Z | https://github.com/httpie/cli/issues/1047 | [
"duplicate"
] | ferenczy | 3 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 138 | C-Eval evaluation results cannot be reproduced | ### Pre-submission checklist
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and searched the existing issues without finding a similar problem or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui); for these it is recommended to look for solutions in the corresponding projects
### Issue type
Performance / quality issue
### Base model
LLaMA-2-7B
### Operating system
Linux
### Detailed description
```
CUDA_VISIBLE_DEVICES=7 python eval.py \
--model_path ${model_path} \
--cot False \
--few_shot False \
--with_prompt True \
--constrained_decoding True \
--temperature 0.2 \
--n_times 1 \
--ntrain 5 \
--do_save_csv False \
--do_test False \
--output_dir ${output_path} \
```
### Dependencies (required for code-related issues)
```
peft 0.3.0
torch 2.0.0
transformers 4.31.0
```
### Run logs or screenshots
Hello, I used the script provided in this repo to evaluate on the C-Eval validation set, with the chinese-llama-2 model from Hugging Face (https://huggingface.co/ziqingyang/chinese-llama-2-7b/tree/main), but I cannot reproduce the reported zero-shot results.
The results I got are as follows:
```
"All": {
"score": 0.2310549777117385,
"num": 1346,
"correct": 311.0
}
```
Is this caused by the hyperparameters? Are there reference hyperparameters that can reproduce the reported results? | closed | 2023-08-15T04:43:28Z | 2023-08-20T08:09:48Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/138 | [] | wang99711123 | 7 |
Anjok07/ultimatevocalremovergui | pytorch | 906 | ERROR | Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:122 onnxruntime::CudaCall D:\a\_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDA failure 719: unspecified launch failure ; GPU=0 ; hostname=DESKTOP-2VPR9IB ; expr=cudaDeviceSynchronize();
"
Traceback Error: "
File "UVR.py", line 6565, in process_start
File "separate.py", line 450, in seperate
File "onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
File "onnxruntime\capi\onnxruntime_inference_collection.py", line 381, in _create_inference_session
"
Error Time Stamp [2023-10-17 11:41:33]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 1
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: 64-bit Float
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | open | 2023-10-17T08:43:38Z | 2023-10-17T08:43:38Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/906 | [] | kwendondale | 0 |
piskvorky/gensim | nlp | 2,811 | Investigate Py2.7 support | [This PR](https://github.com/RaRe-Technologies/gensim/pull/2630) silently removed Py2.7 support as a side-effect. There is no mention of it in the PR description, commit messages or discussion with the reviewers.
I'm the one who made the change, but it was several months ago, and I myself don't remember what the original intention was.
We need Python 2.7 support for the 3.8.3 release, because a significant number of users (not sure how many) still use that version of Python.
- [x] Get gensim building on Py2.7 again. Try working off the current develop branch; if not, consider alternatives.
- [x] Start a new branch to bring back Py2.7 builds in the gensim-wheels repo (see https://github.com/MacPython/gensim-wheels/pull/24). | closed | 2020-04-27T08:15:39Z | 2020-05-01T02:29:26Z | https://github.com/piskvorky/gensim/issues/2811 | [
"impact HIGH"
] | mpenkov | 2 |
slackapi/bolt-python | fastapi | 1,213 | Error triggered when next() is not returned in a middleware using a shortcut | ### Reproducible in:
Only with a "shortcut" command: when a middleware returns a BoltResponse instead of calling next(), an error is triggered in the client (app and web), even though the processing itself completes without error.
Either it should be documented that a middleware must always return next(), or this is a bug.
For information, I tried to use a BoltResponse to ease my unit tests.
#### The `slack_bolt` version
slack-bolt==1.21.2
#### Python runtime version
Python 3.10.15
#### OS info
Linux 6.11.8-200.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 14 20:38:18 UTC 2024
#### Steps to reproduce:
This call triggers the error in the slack client (app and web), however, the treatment is OK and works with no error.
```python
from slack_bolt import App, BoltResponse  # imports added for completeness

slack_app = App(token=SLACK_BOT_TOKEN, signing_secret=SLACK_SIGNING_SECRET)
# Middleware to check if the event is from allowed channels
@slack_app.use # Middleware that runs before events
def filter_channels(client, context, logger, payload, next):
channel_id = context.get("channel_id") or context.get("channel")
http_ok = BoltResponse(status=200, body="OK")
if not CHANNELS_ARE_RESTRICTED:
next()
return http_ok
@slack_app.shortcut("shortcut_test")
def handle_shortcut_test(ack, body, respond):
ack()
print("Shortcut test successful!")
```
### Expected result:
No error
### Actual result:

## Workaround
Return next() instead of just calling it
```python
slack_app = App(token=SLACK_BOT_TOKEN, signing_secret=SLACK_SIGNING_SECRET)
# Middleware to check if the event is from allowed channels
@slack_app.use # Middleware that runs before events
def filter_channels(client, context, logger, payload, next):
channel_id = context.get("channel_id") or context.get("channel")
if not CHANNELS_ARE_RESTRICTED:
return next()
@slack_app.shortcut("shortcut_test")
def handle_shortcut_test(ack, body, respond):
ack()
print("Shortcut test successful!")
```
| closed | 2024-12-02T20:13:31Z | 2024-12-05T09:00:35Z | https://github.com/slackapi/bolt-python/issues/1213 | [
"question"
] | mobidyc | 8 |
microsoft/qlib | machine-learning | 1,377 | Hey, does anyone know a China stock API website? Which sites can be connected to for trading Chinese stocks? | ## ❓ Questions and Help
Hey, does anyone know a website with a China stock data API?
You know, like this; I quote some docs below:
> Users can also provide their own data in CSV format. However, the CSV data must satisfies following criterions:
> CSV file is named after a specific stock or the CSV file includes a column of the stock name.
> CSV file must includes a column for the date, and when dumping the data, user must specify the date column name.
The question is: which website should we get the data from, and for each one (the scheme seems to differ from site to site), how can we format the CSV data correctly so as to follow the quote above? In short: which website, and what is the correct scheme?
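For what it's worth, a per-stock CSV matching the quoted requirements could be produced like this (the column names, file naming, and dump-script details are my assumptions; verify against qlib's `scripts/dump_bin.py`):
```python
import pandas as pd

# One CSV per stock, named after the instrument; the date column's name is
# what you tell the dump script when converting to qlib's binary format.
df = pd.DataFrame({
    "date":   ["2020-01-02", "2020-01-03"],
    "open":   [12.30, 12.50],
    "close":  [12.45, 12.40],
    "high":   [12.60, 12.70],
    "low":    [12.20, 12.35],
    "volume": [1_234_500, 980_300],
})
df.to_csv("sh600000.csv", index=False)
```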
I'm afraid the CSV conversion is OK, but training still won't run!
I think this is quite important, especially for Chinese users, who can easily get confused by the flood of alphabets!
I just can't cope with so many words and English searches, and I would be very grateful if someone could help me!
It would also be very kind if you could point me to some Python skills (and modules) related to your solution!
Auto-generated by QLib, and I've really read:
We sincerely suggest you to carefully read the [documentation](http://qlib.readthedocs.io/) of our library as well as the official [paper](https://arxiv.org/abs/2009.11189). After that, if you still feel puzzled, please describe the question clearly under this issue. | closed | 2022-11-29T18:33:50Z | 2023-05-19T15:02:01Z | https://github.com/microsoft/qlib/issues/1377 | [
"question",
"stale"
] | aquosw | 8 |
Nemo2011/bilibili-api | api | 106 | Cannot find the credential.check_valid method | https://nemo2011.github.io/bilibili-api/#/modules/bilibili_api?id=async-def-check_valid


| closed | 2022-11-25T23:15:10Z | 2022-11-25T23:21:26Z | https://github.com/Nemo2011/bilibili-api/issues/106 | [] | z0z0r4 | 3 |
schemathesis/schemathesis | graphql | 2,115 | [BUG] hypothesis.errors.InvalidArgument: test has already been decorated with a settings object. | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
After upgrading from schemathesis 3.19.5 to 3.26.0, several tests are now failing with this error message:
```
hypothesis.errors.InvalidArgument: test_graphql_schema has already been decorated with a settings object.
```
### To Reproduce
Here is an example test that produces the above error:
```python
import copy
import falcon
import hypothesis
from hypothesis import HealthCheck
import pytest
import schemathesis
from .util import GRAPHQL_HEADERS, create_database
def graphql_schema_from_asgi(db_setup_factory):
setup = db_setup_factory()
# this slows things down but we need this fixture to not be async
# so schemathesis.from_pytest_fixture can use it correctly
falcon.async_to_sync(create_database, setup.sql_engine_manager)
endpoint = "/v1/graphql"
return schemathesis.graphql.from_asgi(endpoint, setup.app, headers=GRAPHQL_HEADERS)
graphql_schema = schemathesis.from_pytest_fixture("graphql_schema_from_asgi")
@graphql_schema.parametrize()
@hypothesis.settings(
deadline=None,
suppress_health_check=[HealthCheck.too_slow, HealthCheck.filter_too_much],
)
def test_graphql_schema(case):
response = case.call_asgi()
case.validate_response(response)
```
The problem seems to be the fact that we are decorating our test with both the `@graphql_schema` decorator generated by schemathesis and the `@hypothesis.settings` decorator. But we need to do this, in order to customize the behavior of hypothesis for our test.
### Expected behavior
The expected behaviour is that a test which passes without issue in 3.19.5 of schemathesis would continue to work in 3.26.0
### Environment
```
- OS: EL7
- Python version: 3.8
- Schemathesis version: 3.26.0
```
### Additional context
| closed | 2024-04-03T02:40:29Z | 2024-04-04T17:50:22Z | https://github.com/schemathesis/schemathesis/issues/2115 | [
"Type: Bug",
"Status: Needs Triage"
] | dkbarn | 10 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 380 | fetch_user_post_videos parameter has no effect | ***Platform where the error occurred?***
Douyin
***Endpoint where the error occurred?***
/api/douyin/web/fetch_user_post_videos
***Submitted input values?***
sec_user_id=MS4wLjABAAAAdA0laclcQnyYhFNhuP3yOWNy1BkeVf3kboZiW9yNIcA
max_cursor was 0 on the first request; afterwards it was always taken from the returned value
***Did you retry?***
Yes. After taking max_cursor from the first response and requesting again, the response became:
{"code":200,"router":"/api/douyin/web/fetch_user_post_videos","data":{"status_code":0}}
***Have you read this project's README or the API documentation?***
Yes
| closed | 2024-05-03T13:08:28Z | 2024-05-05T14:24:09Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/380 | [
"BUG",
"Fixed"
] | xxccll | 10 |
scrapy/scrapy | python | 6,731 | SyntaxError occurs when dynamically modifying generator function source during runtime | ### Description
When a spider callback function is modified at runtime (e.g., during development with auto-reloading tools), Scrapy's `warn_on_generator_with_return_value` check may fail due to a syntax error caused by inconsistent code parsing. This occurs because `inspect.getsource()` retrieves the updated source code, but line/indentation calculations may reference the original version, leading to incorrect code extraction and AST parsing failures.
### Steps to Reproduce
1. Create a Scrapy spider with a generator-based callback (e.g., using `yield`).
2. Run the spider.
3. During runtime, modify the callback function in a way that changes line breaks or indentation (e.g., adding/removing parentheses across lines).
4. Trigger a new request to the modified callback.
**Expected behavior:** Scrapy should handle code changes gracefully or ignore runtime modifications.
**Actual behavior:** A `SyntaxError` (e.g., `unmatched ')'`) is raised during AST parsing due to mismatched code extraction.
**Reproduces how often:** 100% when code modifications alter line structure during runtime inspection.
### Versions
```
Scrapy : 2.12.0
Python : 3.13.2
Platform : Windows-11
```
### Additional context
**Root Cause**:
The `is_generator_with_return_value()` function in `scrapy/utils/misc.py` uses `inspect.getsource()` to retrieve the current source code of a callback. If the function is modified during runtime (especially changes affecting line breaks/indentation), the parsed code snippet may be truncated incorrectly, causing AST failures.
**Minimal Reproduction (Non-Scrapy)**:
Run this script and modify the `target()` function during execution:
```python
import ast  # needed for ast.parse below
import time
from inspect import getsource
def target():
# Edit this function during runtime
yield 1
while True:
try:
src = getsource(target)
ast.parse(src)
except Exception as e:
print(f"Failed: {e}")
time.sleep(1)
```
**Affected Code**:
The issue originates from [`scrapy/utils/misc.py` L245-L279](https://github.com/scrapy/scrapy/blob/master/scrapy/utils/misc.py#L245-L279), where dynamic source retrieval interacts poorly with runtime code changes. While `inspect.getsource()` works as designed, Scrapy's logic doesn't account for source mutations after spider initialization.
**Suggested Fixes**:
1. Cache the initial function source code during spider setup.
2. Add an option to disable generator return checks.
3. Implement robust error handling around AST parsing. | closed | 2025-03-14T09:20:31Z | 2025-03-20T12:02:42Z | https://github.com/scrapy/scrapy/issues/6731 | [
"enhancement",
"good first issue"
] | TES286 | 3 |
python-restx/flask-restx | api | 147 | Contact and other fields not being placed in swagger.json | ### ***** **BEFORE LOGGING AN ISSUE** *****
I have noticed that some fields in the Api module from flask-restx are not translated into the swagger.json file, and do not show up in the documentation. An example is the `contact` parameter, which in the [docs](https://flask-restx.readthedocs.io/en/latest/api.html) says is used for swagger documentation, but is not actually in the swagger.json file. This is the case for `host` and other parameters as well.
### **Code**
```python
blueprint = Blueprint("api", __name__)
api = Api(blueprint,
version='1.0',
title='API Ref',
contact='contact name',
description="blah blah blah"
)
```
### **Repro Steps** (if applicable)
1. Create a flask_restx API as specified above
2. Look at the swagger.json file, and the fields are missing
### **Expected Behavior**
The "contact" key should show up in swagger.json
### **Actual Behavior**
The "contact" key does not show up in swagger.json
### **Environment**
- Python 3.8
- Flask==1.1.1
- Flask-RESTX=0.2.0
| open | 2020-06-01T21:48:57Z | 2021-07-22T06:18:17Z | https://github.com/python-restx/flask-restx/issues/147 | [
"bug"
] | raguiar2 | 2 |
joerick/pyinstrument | django | 101 | C function calls appear as "self" | When the chunk of code I'm profiling contains calls like
```
some_class.some_method()
```
Pyinstrument aggregates all of them under `[self]`, making them indistinguishable. | closed | 2020-08-23T07:00:18Z | 2020-09-22T18:13:20Z | https://github.com/joerick/pyinstrument/issues/101 | [] | ghost | 6 |
jupyter/docker-stacks | jupyter | 1,688 | [ENH] - Add a jupyter/minimal-notebook:python-3.10.x image | ### What docker image(s) is this feature applicable to?
minimal-notebook
### What changes are you proposing?
I would like to see a `jupyter/minimal-notebook:python-3.10.x` image.
### How does this affect the user?
They can use a newer Python version
### Anything else?
_No response_ | closed | 2022-04-25T15:28:04Z | 2022-05-30T23:30:36Z | https://github.com/jupyter/docker-stacks/issues/1688 | [
"type:Enhancement"
] | Croydon | 7 |
napari/napari | numpy | 7,234 | [test-bot] pip install --pre is failing | The --pre Test workflow failed on 2024-09-02 12:18 UTC
The most recent failing test was on ubuntu-latest py3.12 pyqt6
with commit: ff0ec52d74164ac8446cb039733f218d67b47095
Full run: https://github.com/napari/napari/actions/runs/10666822866
(This post will be updated if another test fails, as long as this issue remains open.)
| closed | 2024-09-01T12:13:31Z | 2024-09-02T14:44:05Z | https://github.com/napari/napari/issues/7234 | [
"bug"
] | github-actions[bot] | 0 |
ansible/ansible | python | 83,877 | I'm having an issue with "become" | ### Summary
When I try to deploy playbooks using a non-root user, the playbook never runs as root even though I have "become=True" set in ansible.cfg/playbook/cli.
### Issue Type
Bug Report
### Component Name
ansible.builtin.file
### Ansible Version
```console
$ ansible --version
ansible [core 2.17.3]
config file = /home/josh/repos/Redhat Playbooks/ansible.cfg
configured module search path = ['/home/josh/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/josh/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/josh/.ansible/collections:/usr/share/ansible/collections
executable location = /home/josh/.local/bin/ansible
python version = 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/josh/repos/Redhat Playbooks/ansible.cfg) = True
CONFIG_FILE() = /home/josh/repos/Redhat Playbooks/ansible.cfg
DEFAULT_BECOME(/home/josh/repos/Redhat Playbooks/ansible.cfg) = True
DEFAULT_HOST_LIST(/home/josh/repos/Redhat Playbooks/ansible.cfg) = ['/home/josh/repos/Redhat Playbooks/inventory.yaml']
EDITOR(env: EDITOR) = nano
INTERPRETER_PYTHON(/home/josh/repos/Redhat Playbooks/ansible.cfg) = auto_silent
```
### OS / Environment
Target OS is RedHat 8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Download and Update files
hosts: all
gather_facts: true
vars:
txt_file: "somefile.txt"
txt_full_path: "{{ file_path }}/{{ file_name }}"
tasks:
- name: Validate txt file path exists
ansible.builtin.file:
path: "{{ file_path }}"
state: directory
mode: "0755"
owner: someuser
group: someuser
- name: Remove old txt File
ansible.builtin.file:
path: "{{ txt_full_path }}.old"
state: absent
- name: Test for existing txt file
ansible.builtin.stat:
path: "{{ txt_full_path }}"
register: existing_txt_file
- name: Rename existing txt file
when: existing_txt_file.stat.exists
ansible.builtin.copy:
remote_src: true
src: "{{ txt_full_path }}"
dest: "{{ txt_full_path }}.old"
mode: "0600"
owner: someuser
group: someuser
- name: Copy new txt file to remote server
ansible.builtin.copy:
src: "{{ txt_new_file }}"
dest: "{{ txt_full_path }}"
mode: "0600"
owner: someuser
group: someuser
```
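For comparison, escalation can also be made explicit at the play level, independent of ansible.cfg; a sketch (the user name comes from this report, the rest is an assumption):
```yaml
- name: Download and Update files
  hosts: all
  become: true            # escalate every task in this play
  become_user: root       # run tasks as root
  become_method: sudo     # assumes sudo is configured for the remote user
  remote_user: ansible_automate
  tasks: []               # tasks as in the playbook above
```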
### Expected Results
I expect the playbook to SSH as our 'ansible_automate' user and execute tasks as 'root'.
### Actual Results
```console
https://gist.github.com/tjc-jhol/6008d52620d8937dfb6fbec647b667c4
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-08-30T20:02:07Z | 2024-09-13T13:00:02Z | https://github.com/ansible/ansible/issues/83877 | [
"module",
"bug",
"affects_2.17"
] | tjc-jhol | 5 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 497 | training encoder with LibriSpeech, VoxCeleb1, and VoxCeleb2 is failing | After preprocessing all three datasets, I'm getting the following error while training the encoder. I carefully checked the preprocessed directories; they all have `_sources.txt`.
Can anyone help me with this?
```
..Traceback (most recent call last):
File "encoder_train.py", line 46, in <module>
train(**vars(args))
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/train.py", line 67, in train
for step, speaker_batch in enumerate(loader, init_step):
File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 971, in _next_data
    return self._process_data(data)
  File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
data.reraise()
File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
Exception: Caught Exception in DataLoader worker process 4.
Original Traceback (most recent call last):
File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/anaconda3/envs/voice_cloning/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker_verification_dataset.py", line 55, in $ollate
return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker_batch.py", line 8, in __init__
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker_batch.py", line 8, in <dictcomp>
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker.py", line 34, in random_partial
self._load_utterances()
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/speaker.py", line 18, in _load_utterances
self.utterance_cycler = RandomCycler(self.utterances)
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/encoder/data_objects/random_cycler.py", line 14, in __init__
raise Exception("Can't create RandomCycler from an empty collection")
Exception: Can't create RandomCycler from an empty collection
``` | closed | 2020-08-19T06:51:15Z | 2020-08-22T17:06:59Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/497 | [] | amintavakol | 6 |
s3rius/FastAPI-template | graphql | 119 | Add support for db/orm like mongodb and oss like minio | Please add support for databases/ORMs like MongoDB and object storage services like MinIO; thanks very much!
(for the MongoDB ORM, MongoEngine could perhaps be used) | open | 2022-08-12T03:37:58Z | 2022-10-31T07:01:57Z | https://github.com/s3rius/FastAPI-template/issues/119 | [] | djun | 5 |
seleniumbase/SeleniumBase | pytest | 3,419 | "Unlimited Free Web-Scraping with GitHub Actions" is now on YouTube | "Unlimited Free Web-Scraping with GitHub Actions" is now on YouTube:
<b>https://www.youtube.com/watch?v=gEZhTfaIxHQ</b>
<a href="https://www.youtube.com/watch?v=gEZhTfaIxHQ"><img src="https://github.com/user-attachments/assets/656977e1-5d66-4d1c-9eec-0aaa41f6522f" title="Unlimited Free Web-Scraping with GitHub Actions" width="600" /></a>
| open | 2025-01-14T19:40:27Z | 2025-02-14T08:54:50Z | https://github.com/seleniumbase/SeleniumBase/issues/3419 | [
"News / Announcements",
"Tutorials & Learning",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
tensorflow/datasets | numpy | 4,996 | [data request] <caltech birds> | * Name of dataset: <name>
* URL of dataset: <url>
* License of dataset: <license type>
* Short description of dataset and use case(s): <description>
Folks who would also like to see this dataset in `tensorflow/datasets`, please thumbs-up so the developers can know which requests to prioritize.
And if you'd like to contribute the dataset (thank you!), see our [guide to adding a dataset](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md). | closed | 2023-06-26T04:47:55Z | 2023-07-06T14:14:30Z | https://github.com/tensorflow/datasets/issues/4996 | [
"dataset request"
] | kyuhuikim | 1 |
robotframework/robotframework | automation | 5,125 | Feature request: rebot - Option to embed images in html | Not sure if this is the right place for this feature request?
Feature request to add an option to rebot's command line to embed images (e.g. screenshots) into the HTML body rather than linking to the image files on the filesystem, so the log.html would be a self-contained file.
I'd expect this option to be disabled by default, so images would only be embedded if someone specifically asked for it.
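For what it's worth, here is a minimal sketch of the embedding meant here, converting an image file into a base64 data URI (illustrative only, not rebot's actual implementation):

```python
import base64
from pathlib import Path

def embed_image(path, mime="image/png"):
    # Inline the file as a data URI so the resulting HTML has no external dependencies.
    data = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f'<img src="data:{mime};base64,{data}"/>'
```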
This feature request was inspired by this forum post and other like it I've seen in the past:
https://forum.robotframework.org/t/how-to-add-screenshot-in-allure-report-for-failure-test-cases-allure-repot/7263
| closed | 2024-05-07T14:14:16Z | 2024-11-13T13:08:09Z | https://github.com/robotframework/robotframework/issues/5125 | [] | damies13 | 5 |
gee-community/geemap | streamlit | 743 | zoom_to_object() and centerObject() not fully zooming into small region | ---
I am trying to zoom into an image clipped to a buffered point with a 10 m diameter. My understanding from the documentation is that the zoom parameter should go up to 24; however, increasing values beyond 18 do not further affect my centerObject() zoom level. zoom_to_object() also gets to this zoom level and stops. The + button on the interactive map is greyed out at this point. I'm fairly new to Earth Engine; is there something I'm missing here?
(colab notebook, python v3.7.12, geemap v0.9.4)
---
Simplified example for an arbitrary point.
```
import ee
import geemap
```
```
aoi = ee.Geometry.Point([-93, 42]).buffer(10/2)
image = ee.ImageCollection('USDA/NAIP/DOQQ') \
    .filterBounds(aoi) \
    .first().clip(aoi)
```
```
Map = geemap.Map()
Map.addLayer(image,{})
Map.centerObject(image)
Map
```
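One thing that may be worth checking (hedged: `max_zoom` is an ipyleaflet `Map` trait that geemap appears to pass through, not verified on v0.9.4): the default basemap tile layer caps interactive zoom around 18, so raising the map's maximum zoom may re-enable the + button.

```python
# Sketch only: raise the interactive zoom ceiling above the tile-layer default.
Map = geemap.Map(max_zoom=24)
Map.addLayer(image, {})
Map.centerObject(image, zoom=22)
Map
```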
---
Below is the tightest I can get on the map (zoom=18).

| closed | 2021-11-08T19:56:27Z | 2021-11-08T20:47:01Z | https://github.com/gee-community/geemap/issues/743 | [] | vawalker | 5 |
huggingface/datasets | deep-learning | 7,222 | TypeError: Couldn't cast array of type string to null in long json | ### Describe the bug
In general, changing the type from string to null is allowed within a dataset — there are even examples of this in the documentation.
However, if the dataset is large and unevenly distributed, this allowance stops working. The schema gets locked in after reading a chunk.
Consequently, if all values in the first chunk of a field are, for example, null, the field will be locked as type null, and if a string appears in that field in the second chunk, it will trigger this error:
<details>
<summary>Traceback </summary>
```
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1868 try:
-> 1869 writer.write_table(table)
1870 except CastError as cast_error:
14 frames
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size)
579 pa_table = pa_table.combine_chunks()
--> 580 pa_table = table_cast(pa_table, self._schema)
581 if self.embed_local_files:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema)
2291 if table.schema != schema:
-> 2292 return cast_table_to_schema(table, schema)
2293 elif table.schema.metadata != schema.metadata:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema)
2244 )
-> 2245 arrays = [
2246 cast_array_to_feature(
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0)
2245 arrays = [
-> 2246 cast_array_to_feature(
2247 table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type),
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs)
1794 if isinstance(array, pa.ChunkedArray):
-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1796 else:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0)
1794 if isinstance(array, pa.ChunkedArray):
-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1796 else:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_array_to_feature(array, feature, allow_primitive_to_str, allow_decimal_to_str)
2101 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 2102 return array_cast(
2103 array,
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs)
1796 else:
-> 1797 return func(array, *args, **kwargs)
1798
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str)
1947 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
-> 1948 raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
1949 return array.cast(pa_type)
TypeError: Couldn't cast array of type string to null
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-353-e02f83980611>](https://localhost:8080/#) in <cell line: 1>()
----> 1 dd = load_dataset("json", data_files=["TEST.json"])
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2094
2095 # Download and prepare data
-> 2096 builder_instance.download_and_prepare(
2097 download_config=download_config,
2098 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
922 if num_proc is not None:
923 prepare_split_kwargs["num_proc"] = num_proc
--> 924 self._download_and_prepare(
925 dl_manager=dl_manager,
926 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
997 try:
998 # Prepare split will record examples associated to the split
--> 999 self._prepare_split(split_generator, **prepare_split_kwargs)
1000 except OSError as e:
1001 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1738 job_id = 0
1739 with pbar:
-> 1740 for job_id, done, content in self._prepare_split_single(
1741 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1742 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1894 if isinstance(e, DatasetGenerationError):
1895 raise
-> 1896 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1897
1898 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
</details>
### Steps to reproduce the bug
```python
import json
from datasets import load_dataset
with open("TEST.json", "w") as f:
row = {"ballast": "qwerty" * 1000, "b": None}
row_str = json.dumps(row) + "\n"
line_size = len(row_str)
chunk_size = 10 << 20
lines_in_chunk = chunk_size // line_size + 1
print(f"Writing {lines_in_chunk} lines")
for i in range(lines_in_chunk):
f.write(row_str)
null_row = {"ballast": "Gotcha", "b": "Not Null"}
f.write(json.dumps(null_row) + "\n")
load_dataset("json", data_files=["TEST.json"])
```
### Expected behavior
Concatenation of the chunks without errors
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.24.7
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | open | 2024-10-12T08:14:59Z | 2025-02-23T13:01:47Z | https://github.com/huggingface/datasets/issues/7222 | [] | nokados | 4 |
open-mmlab/mmdetection | pytorch | 11,793 | I am trying to replicate the Hybrid Task Cascade | I downloaded the full mmdetection folder and followed the structure but not able to understand the following things
there are other files too not able to figure out which files to keep and which not for HTC
coco dataset I tried to download but not able to
Please guide me through this | open | 2024-06-13T13:17:02Z | 2024-09-18T08:43:59Z | https://github.com/open-mmlab/mmdetection/issues/11793 | [] | jyotiahluwalia | 3 |
docarray/docarray | fastapi | 1,903 | Python 3.10 TypeError: issubclass() arg 1 must be a class | ### Initial Checks
- [X] I have read and followed [the docs](https://docs.docarray.org/) and still think this is a bug
### Description
A TypeError is raised by `issubclass` on Python 3.10; it works on Python 3.11.
I think it is due to the use of `issubclass` in the following:
https://github.com/docarray/docarray/blob/75e0033a361a31280709899e94d6f5e14ff4b8ae/docarray/base_doc/doc.py#L328-L330
Related: #1557 #1584
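For context, on Python 3.10 `isinstance(list[str], type)` evaluates to `True` while `issubclass(list[str], ...)` still raises `TypeError`; on 3.11 the `isinstance` check returns `False`, which is why only 3.10 crashes. A hedged sketch of a guard that would tolerate this (not the project's actual fix):

```python
def safe_issubclass(tp, cls):
    # list[str] passes isinstance(tp, type) on 3.10 but is not a real class,
    # so issubclass must be wrapped rather than trusted after the isinstance check.
    try:
        return isinstance(tp, type) and issubclass(tp, cls)
    except TypeError:
        return False
```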
```
Traceback (most recent call last):
File "/.../test.py", line 12, in <module>
item.model_dump()
File "/.../lib/python3.10/site-packages/docarray/base_doc/doc.py", line 520, in model_dump
return _model_dump(super())
File "/.../lib/python3.10/site-packages/docarray/base_doc/doc.py", line 488, in _model_dump
) = self._exclude_doclist(exclude=exclude)
File "/.../lib/python3.10/site-packages/docarray/base_doc/doc.py", line 329, in _exclude_doclist
if isinstance(type_, type) and issubclass(type_, AnyDocArray):
File "/.../lib/python3.10/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
```
### Example Code
```Python
from __future__ import annotations
from docarray import BaseDoc, __version__
class Document(BaseDoc):
    list_str: list[str]

item = Document(list_str=["a", "b", "c"])
item.model_dump()
```
### Python, DocArray & OS Version
```Text
python: 3.10
docarray: 0.40.0
platform: macOS-14.6.1-x86_64-i386-64bit
```
### Affected Components
- [ ] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [X] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [ ] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | closed | 2024-08-14T09:47:06Z | 2024-08-17T07:09:25Z | https://github.com/docarray/docarray/issues/1903 | [] | daip-yxtay | 0 |
dunossauro/fastapi-do-zero | sqlalchemy | 156 | Lesson 10 - Missing guidance or pointers on installing/configuring Docker | Hello Duno, I am following along with the course here and the learning has been great.
While going through lesson 10, I identified a few situations that need adjustments; I will open separate issues.
1. There is no guidance, or pointer to material, on installing/configuring Docker. Lesson 01 (dev environment setup) has a footnote saying there is no need to worry about Docker at first, which I agree with. But lesson 10 has no guidance on installing/configuring Docker.
From what I noticed, the prerequisites to follow the lesson are:
- Install and configure Docker. I am using an Ubuntu VM, followed the steps at https://docs.docker.com/engine/install/ubuntu/, and everything went fine.
- Configure Docker to run with your terminal user (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user) | closed | 2024-05-30T14:54:53Z | 2024-06-03T00:06:18Z | https://github.com/dunossauro/fastapi-do-zero/issues/156 | [] | lbmendes | 0 |
Kludex/mangum | fastapi | 128 | API Gateway returns timeout error after connecting to WebSocket endpoint | API Gateway returns a timeout error 30 seconds after I connect to the WebSocket.
I can suppress the error, but I don't know why it happens.
The error message is "Endpoint request timed out".
| closed | 2020-06-21T13:56:46Z | 2020-06-23T15:12:59Z | https://github.com/Kludex/mangum/issues/128 | [] | koxudaxi | 4 |
giotto-ai/giotto-tda | scikit-learn | 683 | [BUG] Cannot run Scaler() | I re-ran the Lorenz attractor notebook from giotto-tda examples. In the part using scaler,
diagramScaler = Scaler()
X_scaled = diagramScaler.fit_transform(X_diagrams)
diagramScaler.plot(X_scaled, sample=window_number)
I get the following error:
TypeError: Parameter `function` is of type <class 'numpy._ArrayFunctionDispatcher'> while it should be of type (<class 'function'>, <class 'NoneType'>).
I also ran this using the data that I have and the Scaler method also produced the same error. | closed | 2023-10-20T11:54:17Z | 2024-05-29T20:44:17Z | https://github.com/giotto-ai/giotto-tda/issues/683 | [
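A hedged observation: `numpy._ArrayFunctionDispatcher` is the wrapper NumPy 1.25+ uses for functions such as `np.max` (the default `function` of `Scaler`), so the validator's type check likely predates that NumPy change. A quick way to confirm, plus a possible stopgap:

```python
import numpy as np

print(type(np.max))  # numpy >= 1.25 reports numpy._ArrayFunctionDispatcher
# Possible stopgap (an assumption, not an official fix): pin NumPy below 1.25,
# e.g. `pip install "numpy<1.25"`, until the validation accepts dispatchers.
```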
"bug"
] | albienaculan14 | 4 |
unionai-oss/pandera | pandas | 1,513 | Passing DataFrameSchema to function that check_output is decorating? | Is there a way to pass a DataFrameSchema as an argument to the function that check_output (or other check decorator) is decorating? In the examples in the documentation (https://pandera.readthedocs.io/en/stable/decorators.html) the function being decorated is only passed the dataframe to be processed (and validated) and the DataFrameSchema is assigned in the global scope. I'd ideally like to be able to load the DataFrameSchema from a yaml file inside the main function of the script and then make calls to helper dataframe processing functions that make use of validation decorators. To do this I'd need to pass the DataFrameSchema I loaded from yaml as an argument to the dataframe processing functions where the decorators could somehow also access the schema. I'm think of something like code below (if it were possible). Not sure if I'm missing some obvious solution or if I need to create a workaround (maybe define an inner function that gets decorated?) to use this workflow in my script.
```
import pandas as pd
from pandera import check_output
from pandera.io import from_yaml
@check_output(output_schema)
def load_data(path_to_data, output_schema):
    df = pd.read_csv(path_to_data)
    return df

def main(config_file):
    ''' Main function for script. Note that config file is a yaml file with read/write paths for script '''
    # load DataFrameSchema from yaml for validating raw data
    raw_data_schema = from_yaml(config_file['input_schema'])
    # load raw data and pass validation schema to decorator
    raw_data_df = load_data(config_file['raw_data_path'], raw_data_schema)
    # ... do more data processing in further steps after raw data has been validated
```
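One workaround sketch (hedged: this is just one way to get the schema into scope at decoration time, not an official pandera pattern): decorate an inner function when the outer one runs, so the YAML-loaded schema is available to `check_output`:

```python
import pandas as pd
from pandera import check_output

def load_data(path_to_data, output_schema):
    @check_output(output_schema)  # schema is in scope here, at call time
    def _load():
        return pd.read_csv(path_to_data)
    return _load()
```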
| open | 2024-02-23T20:45:29Z | 2024-02-24T02:22:18Z | https://github.com/unionai-oss/pandera/issues/1513 | [
"question"
] | dwinski | 0 |
QuivrHQ/quivr | api | 3591 | Strategy dynamic embedding for chunks | * The vector table should support a dynamic embedding dimension (DIM)
* Create an EmbeddingModel table with the model name and dim | open | 2025-02-11T08:14:25Z | 2025-03-19T08:28:41Z | https://github.com/QuivrHQ/quivr/issues/3591 | [] | AmineDiro | 1 |
PokeAPI/pokeapi | graphql | 328 | Can't get any data from API with emberjs | Hi, I'm studying EmberJS. I downloaded and installed the API locally to avoid making many requests, and while I was studying, I used a website that was made with EmberJS and uses the API to guide me in some parts of the project.
Two weeks ago the site stopped listing the Pokémon. I changed my host to https://pokeapi and got the same error. Has anything changed in the way API data is fetched? It not only stopped working in EmberJS; AngularJS and ReactJS also stopped getting API data.
Sorry for my English, and thank you for the PokeAPI project. | closed | 2018-03-18T13:32:47Z | 2018-03-20T08:03:35Z | https://github.com/PokeAPI/pokeapi/issues/328 | [
"question",
"cloudflare"
] | cassianpry | 2 |
hbldh/bleak | asyncio | 1,013 | cannot connect multiple devices | * bleak version: 0.17.0
* Python version: 3.10.6
* Operating System: Archlinux
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.65
Network adapter:
```
02:00.0 Network controller [0280]: Intel Corporation Dual Band Wireless-AC 3168NGW [Stone Peak] [8086:24fb] (rev 10)
Subsystem: Intel Corporation Device [8086:2110]
Kernel driver in use: iwlwifi
```
### Description
I am trying to connect to several devices at the same time.
### What I Did
Execute the following script. It waits for the first device to disconnect before connecting the second one, whereas it should connect to both at the same time.
```
import asyncio
from bleak import BleakClient, BleakScanner
from bleak.backends.device import BLEDevice
import logging
logging.basicConfig(
    format='%(asctime)s %(levelname)-8s %(message)s',
    level=logging.DEBUG,
    datefmt='%Y-%m-%d %H:%M:%S')

async def find_all_devices():
    scanner = BleakScanner()
    b1 = await scanner.find_device_by_address("D1:A5:FA:F1:96:B2")
    b2 = await scanner.find_device_by_address("EA:7E:44:43:9A:80")
    devices = [b1, b2]
    for d in devices:
        async with BleakClient(d) as client:
            print(client._device_info)
asyncio.run(find_all_devices())
```
logs:
```
2022-09-19 00:48:19 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_42_6F_EF_74_A6_7B): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -85)>, 'ManufacturerData': <dbus_fast.signature.Variant ('a{qv}', {117: <dbus_fast.signature.Variant ('ay', bytearray(b'\x01\x12\x9a\x84 Z Flip3\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'))>})>}, []]
2022-09-19 00:48:19 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_4D_F1_2C_51_1F_68): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -87)>}, []]
2022-09-19 00:48:19 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_7C_A6_B0_5D_32_27): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -60)>}, []]
2022-09-19 00:48:19 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_EA_7E_44_43_9A_80): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -59)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_4D_F1_2C_51_1F_68): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -85)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -58)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_4D_F1_2C_51_1F_68): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -85)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_7C_A6_B0_5D_32_27): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -60)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_42_6F_EF_74_A6_7B): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -86)>, 'ManufacturerData': <dbus_fast.signature.Variant ('a{qv}', {117: <dbus_fast.signature.Variant ('ay', bytearray(b'\x01\x12\x12\x1e\xe9{\x00\x00\x00\x00\x02T\x80l>\x01\x01Shuai \xe7'))>})>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_7C_A6_B0_5D_32_27): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -59)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -61)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_EA_7E_44_43_9A_80): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -54)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_C6_4C_3E_22_26_68): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -85)>}, []]
2022-09-19 00:48:20 DEBUG Connecting to device @ D1:A5:FA:F1:96:B2
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_4D_F1_2C_51_1F_68): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -86)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_42_6F_EF_74_A6_7B): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -90)>, 'ManufacturerData': <dbus_fast.signature.Variant ('a{qv}', {117: <dbus_fast.signature.Variant ('ay', bytearray(b'\x01\x12\x9a\x84 Z Flip3\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'))>})>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_7C_A6_B0_5D_32_27): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -66)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_4D_F1_2C_51_1F_68): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -88)>}, []]
2022-09-19 00:48:20 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -64)>}, []]
2022-09-19 00:48:21 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2): ['org.bluez.Device1', {'Connected': <dbus_fast.signature.Variant ('b', True)>}, []]
2022-09-19 00:48:21 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2): ['org.bluez.Device1', {'ServicesResolved': <dbus_fast.signature.Variant ('b', True)>}, []]
{'Address': 'D1:A5:FA:F1:96:B2', 'AddressType': 'random', 'Name': 'WoHand', 'Alias': 'WoHand', 'Paired': False, 'Bonded': False, 'Trusted': False, 'Blocked': False, 'LegacyPairing': False, 'RSSI': -58, 'Connected': False, 'UUIDs': ['00001800-0000-1000-8000-00805f9b34fb', '00001801-0000-1000-8000-00805f9b34fb', '0000fee7-0000-1000-8000-00805f9b34fb', 'cba20d00-224d-11e6-9fb8-0002a5d5c51b'], 'Adapter': '/org/bluez/hci0', 'ManufacturerData': {89: bytearray(b'\xd1\xa5\xfa\xf1\x96\xb2')}, 'ServiceData': {'00000d00-0000-1000-8000-00805f9b34fb': bytearray(b'\xc8\x90\xb5')}, 'ServicesResolved': False}
2022-09-19 00:48:23 DEBUG Disconnecting (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2)
2022-09-19 00:48:26 DEBUG received D-Bus signal: org.freedesktop.DBus.ObjectManager.InterfacesRemoved (/): ['/org/bluez/hci0/dev_E0_5D_7C_E4_D3_78', ['org.freedesktop.DBus.Properties', 'org.freedesktop.DBus.Introspectable', 'org.bluez.Device1']]
2022-09-19 00:48:26 DEBUG received D-Bus signal: org.freedesktop.DBus.ObjectManager.InterfacesRemoved (/): ['/org/bluez/hci0/dev_45_CC_FE_F4_E8_17', ['org.freedesktop.DBus.Properties', 'org.freedesktop.DBus.Introspectable', 'org.bluez.Device1']]
2022-09-19 00:48:26 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2): ['org.bluez.Device1', {'ServicesResolved': <dbus_fast.signature.Variant ('b', False)>}, []]
2022-09-19 00:48:26 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2): ['org.bluez.Device1', {'Connected': <dbus_fast.signature.Variant ('b', False)>}, []]
2022-09-19 00:48:26 DEBUG Device disconnected (/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2)
2022-09-19 00:48:26 DEBUG _cleanup_all(/org/bluez/hci0/dev_D1_A5_FA_F1_96_B2)
2022-09-19 00:48:26 DEBUG Connecting to device @ EA:7E:44:43:9A:80
2022-09-19 00:48:31 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_EA_7E_44_43_9A_80): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -72)>}, []]
2022-09-19 00:48:32 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_EA_7E_44_43_9A_80): ['org.bluez.Device1', {'Connected': <dbus_fast.signature.Variant ('b', True)>}, []]
2022-09-19 00:48:32 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_EA_7E_44_43_9A_80): ['org.bluez.Device1', {'ServicesResolved': <dbus_fast.signature.Variant ('b', True)>}, []]
{'Address': 'EA:7E:44:43:9A:80', 'AddressType': 'random', 'Name': 'WoHand', 'Alias': 'WoHand', 'Paired': False, 'Bonded': False, 'Trusted': False, 'Blocked': False, 'LegacyPairing': False, 'RSSI': -54, 'Connected': False, 'UUIDs': ['00001800-0000-1000-8000-00805f9b34fb', '00001801-0000-1000-8000-00805f9b34fb', '0000fee7-0000-1000-8000-00805f9b34fb', 'cba20d00-224d-11e6-9fb8-0002a5d5c51b'], 'Adapter': '/org/bluez/hci0', 'ManufacturerData': {89: bytearray(b'\xea~DC\x9a\x80')}, 'ServiceData': {'00000d00-0000-1000-8000-00805f9b34fb': bytearray(b'\xc8\x90\xd3')}, 'ServicesResolved': False}
2022-09-19 00:48:34 DEBUG Disconnecting (/org/bluez/hci0/dev_EA_7E_44_43_9A_80)
2022-09-19 00:48:36 DEBUG received D-Bus signal: org.freedesktop.DBus.ObjectManager.InterfacesRemoved (/): ['/org/bluez/hci0/dev_ED_16_A7_16_E2_7D', ['org.freedesktop.DBus.Properties', 'org.freedesktop.DBus.Introspectable', 'org.bluez.Device1']]
2022-09-19 00:48:37 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_EA_7E_44_43_9A_80): ['org.bluez.Device1', {'ServicesResolved': <dbus_fast.signature.Variant ('b', False)>}, []]
2022-09-19 00:48:37 DEBUG received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_EA_7E_44_43_9A_80): ['org.bluez.Device1', {'Connected': <dbus_fast.signature.Variant ('b', False)>}, []]
2022-09-19 00:48:37 DEBUG Device disconnected (/org/bluez/hci0/dev_EA_7E_44_43_9A_80)
2022-09-19 00:48:37 DEBUG _cleanup_all(/org/bluez/hci0/dev_EA_7E_44_43_9A_80)
```
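For reference, the loop above awaits each `async with` block to completion before starting the next, so the sequential behaviour is expected; a sketch of genuinely simultaneous connections (adapted from the script above, not tested on this adapter) would run both clients under `asyncio.gather`:

```python
import asyncio
from bleak import BleakClient, BleakScanner

async def connect_one(address):
    device = await BleakScanner.find_device_by_address(address)
    async with BleakClient(device) as client:
        print(address, "connected:", client.is_connected)
        await asyncio.sleep(5.0)  # hold the connection so both overlap

async def main():
    # Both coroutines run concurrently, so both links are up at the same time.
    await asyncio.gather(
        connect_one("D1:A5:FA:F1:96:B2"),
        connect_one("EA:7E:44:43:9A:80"),
    )

asyncio.run(main())
```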
| closed | 2022-09-18T22:49:42Z | 2022-09-18T22:52:18Z | https://github.com/hbldh/bleak/issues/1013 | [] | gilcu3 | 0 |
mwaskom/seaborn | matplotlib | 3,362 | Histogram plotting not working as `pandas` option `use_inf_as_null` has been removed. | I am currently unable to use `histplot` as it appears that the `pandas` option `use_inf_as_null` has been removed. Error log below.
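For context before the log: `mode.use_inf_as_null` was removed in pandas 2.0, so this points to a seaborn/pandas version mismatch. A quick check (a hedged suggestion, not an official compatibility matrix):

```python
import pandas as pd
import seaborn as sns

print(pd.__version__, sns.__version__)
# If pandas is >= 2.0, either upgrade seaborn to a release with pandas-2.0
# support or pin pandas, e.g. `pip install "pandas<2.0"` (workaround sketch).
```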
```
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/seaborn/distributions.py:1438, in histplot(data, x, y, hue, weights, stat, bins, binwidth, binrange, discrete, cumulative, common_bins, common_norm, multiple, element, fill, shrink, kde, kde_kws, line_kws, thresh, pthresh, pmax, cbar, cbar_ax, cbar_kws, palette, hue_order, hue_norm, color, log_scale, legend, ax, **kwargs)
1427 estimate_kws = dict(
1428 stat=stat,
1429 bins=bins,
(...)
1433 cumulative=cumulative,
1434 )
1436 if p.univariate:
-> 1438 p.plot_univariate_histogram(
1439 multiple=multiple,
1440 element=element,
1441 fill=fill,
1442 shrink=shrink,
1443 common_norm=common_norm,
1444 common_bins=common_bins,
1445 kde=kde,
1446 kde_kws=kde_kws,
1447 color=color,
1448 legend=legend,
1449 estimate_kws=estimate_kws,
1450 line_kws=line_kws,
1451 **kwargs,
1452 )
1454 else:
1456 p.plot_bivariate_histogram(
1457 common_bins=common_bins,
1458 common_norm=common_norm,
(...)
1468 **kwargs,
1469 )
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/seaborn/distributions.py:431, in _DistributionPlotter.plot_univariate_histogram(self, multiple, element, fill, common_norm, common_bins, shrink, kde, kde_kws, color, legend, line_kws, estimate_kws, **plot_kws)
428 histograms = {}
430 # Do pre-compute housekeeping related to multiple groups
--> 431 all_data = self.comp_data.dropna()
432 all_weights = all_data.get("weights", None)
434 if set(self.variables) - {"x", "y"}: # Check if we'll have multiple histograms
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/seaborn/_oldcore.py:1119, in VectorPlotter.comp_data(self)
1117 grouped = self.plot_data[var].groupby(self.converters[var], sort=False)
1118 for converter, orig in grouped:
-> 1119 with pd.option_context('mode.use_inf_as_null', True):
1120 orig = orig.dropna()
1121 if var in self.var_levels:
1122 # TODO this should happen in some centralized location
1123 # it is similar to GH2419, but more complicated because
1124 # supporting `order` in categorical plots is tricky
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/pandas/_config/config.py:441, in option_context.__enter__(self)
440 def __enter__(self) -> None:
--> 441 self.undo = [(pat, _get_option(pat, silent=True)) for pat, val in self.ops]
443 for pat, val in self.ops:
444 _set_option(pat, val, silent=True)
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/pandas/_config/config.py:441, in <listcomp>(.0)
440 def __enter__(self) -> None:
--> 441 self.undo = [(pat, _get_option(pat, silent=True)) for pat, val in self.ops]
443 for pat, val in self.ops:
444 _set_option(pat, val, silent=True)
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/pandas/_config/config.py:135, in _get_option(pat, silent)
134 def _get_option(pat: str, silent: bool = False) -> Any:
--> 135 key = _get_single_key(pat, silent)
137 # walk the nested dict
138 root, k = _get_root(key)
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/pandas/_config/config.py:121, in _get_single_key(pat, silent)
119 if not silent:
120 _warn_if_deprecated(pat)
--> 121 raise OptionError(f"No such keys(s): {repr(pat)}")
122 if len(keys) > 1:
123 raise OptionError("Pattern matched multiple keys")
OptionError: "No such keys(s): 'mode.use_inf_as_null'"
``` | closed | 2023-05-11T14:32:34Z | 2023-05-15T23:18:18Z | https://github.com/mwaskom/seaborn/issues/3362 | [] | MattWenham | 1 |
TencentARC/GFPGAN | deep-learning | 373 | Jol kkyj | open | 2023-05-02T15:48:06Z | 2023-05-02T15:48:06Z | https://github.com/TencentARC/GFPGAN/issues/373 | [] | akther121 | 0 | |
huggingface/transformers | deep-learning | 36,475 | Load siglip2 error | ### System Info
I load the siglip2 model as follows:
```
import torch
from transformers import AutoModel, AutoProcessor
from transformers.image_utils import load_image
# load the model and processor
ckpt = "google/siglip2-base-patch16-512"
model = AutoModel.from_pretrained(ckpt, device_map="auto").eval()
processor = AutoProcessor.from_pretrained(ckpt)
# load the image
image = load_image("https://huggingface.co/datasets/merve/coco/resolve/main/val2017/000000000285.jpg")
inputs = processor(images=[image], return_tensors="pt").to(model.device)
# run inference
with torch.no_grad():
    image_embeddings = model.get_image_features(**inputs)

print(image_embeddings.shape)
```
But I got an error like this:
```
processor = AutoProcessor.from_pretrained(ckpt)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoModel, AutoProcessor
from transformers.image_utils import load_image
# load the model and processor
ckpt = "google/siglip2-base-patch16-512"
model = AutoModel.from_pretrained(ckpt, device_map="auto").eval()
processor = AutoProcessor.from_pretrained(ckpt)
# load the image
image = load_image("https://huggingface.co/datasets/merve/coco/resolve/main/val2017/000000000285.jpg")
inputs = processor(images=[image], return_tensors="pt").to(model.device)
# run inference
with torch.no_grad():
    image_embeddings = model.get_image_features(**inputs)

print(image_embeddings.shape)
```
pip list:
`transformers 4.49.0`
### Expected behavior
Why does this error happen?
"bug"
] | yiyexy | 4 |
deeppavlov/DeepPavlov | tensorflow | 801 | Problem with value_error, creating train_model on ner_rus | Hello again
I tried to train ner_rus on some custom dataset, and it failed doing parse_ner_file(self, file_name).
It says:
> parse_ner_file(self, file_name)
> 64 pos_tags.append(pos)
> 65 else:
> ---> 66 token, *_, tag = line.split()
> 67 tags.append(tag)
> 68 tokens.append(token)
>
> ValueError: not enough values to unpack (expected at least 2, got 1)
But I have no information about which file is the cause, or at what position in it.
I created my train, test, and valid text files exactly like the ones in the .deeppavlov/downloads/total_rus folder. If I use only those, the training code from your tutorial runs without any errors.
But even if I add some text from these example files to my custom ones, it still fails with the same error: not enough values to unpack.
So I think that for some reason it sees only a token without a corresponding tag. But if I add the original tokens and tags, why doesn't it see them?
I attached a part from my train file (one token and its tag per line):

    Федеральное O
    налоговое O
    ведомство O
    США O
    по O
    итогам O
    аудита O
    обвинило O
    американскую O
    корпорацию O
    Coca-Cola OFF
    в O
    неуплате O
    налогов O
    на O
    ...
I separate the strings with '\t', just as I see them separated in the original files.
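Since the traceback does not say which file or line is at fault, here is a quick diagnostic sketch (the file names below are placeholders for your actual train/valid/test files) that reports the first line with fewer than two whitespace-separated fields, i.e. exactly the lines that make `token, *_, tag = line.split()` fail:

```python
for name in ("train.txt", "valid.txt", "test.txt"):  # placeholder file names
    with open(name, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            # Non-empty lines must contain at least "token<TAB>tag".
            if line.strip() and len(line.split()) < 2:
                print(f"{name}:{lineno}: {line.rstrip()!r}")
                break
```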
And, by the way, I use custom tags. But they can't be the reason for the error, right? | closed | 2019-04-16T10:35:34Z | 2019-09-24T12:34:30Z | https://github.com/deeppavlov/DeepPavlov/issues/801 | [] | MNCTTY | 8 |
ageitgey/face_recognition | machine-learning | 1,321 | CUDA docker image build: dlib invalid syntax(license_files) | * face_recognition version: from master
* Python version: 3.5
* Operating System: Linux 18.04.05
### Description
I am trying to build dlib with CUDA enabled,
and I get an error:
```
Step 13/22 : RUN cd /dlib; python3 /dlib/setup.py install
---> Running in 2a6f8e09f8d2
Traceback (most recent call last):
File "/dlib/setup.py", line 42, in <module>
from setuptools import setup, Extension
File "/usr/local/lib/python3.5/dist-packages/setuptools/__init__.py", line 18, in <module>
from setuptools.dist import Distribution
File "/usr/local/lib/python3.5/dist-packages/setuptools/dist.py", line 585
license_files: Optional[List[str]] = self.metadata.license_files
^
SyntaxError: invalid syntax
```
### What I Did
I am trying to build docker image using this Docerfile:
```
FROM nvidia/cuda:9.0-cudnn7-devel
# Install face recognition dependencies
RUN apt update -y; apt install -y \
git \
cmake \
libsm6 \
libxext6 \
libxrender-dev \
python3 \
python3-pip
RUN pip3 install scikit-build
# Install compilers
RUN apt install -y software-properties-common
RUN add-apt-repository ppa:ubuntu-toolchain-r/test
RUN apt update -y; apt install -y gcc-6 g++-6
RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-6 50
RUN update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-6 50
#Install dlib
RUN git clone -b 'v19.16' --single-branch https://github.com/davisking/dlib.git
RUN mkdir -p /dlib/build
RUN cmake -H/dlib -B/dlib/build -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
RUN cmake --build /dlib/build
RUN cd /dlib; python3 /dlib/setup.py install
# Install the face recognition package
RUN pip3 install face_recognition
RUN python3 -m pip install grpcio
RUN python3 -m pip install grpcio-tools
COPY face_pb2_grpc.py /root
COPY face_pb2.py /root
COPY identifier.py /root
VOLUME [ "/unknown_people", "/known_people" ]
EXPOSE 50051
CMD [ "python3", "/root/identifier.py" ]
```
I tried building the image with dlib v19.6 / v19.5 / v19.7 and the problem still exists.
| closed | 2021-06-03T16:20:22Z | 2021-07-08T12:30:17Z | https://github.com/ageitgey/face_recognition/issues/1321 | [] | GHRik | 3 |
Avaiga/taipy | automation | 1,515 | Possibility to use decimator on Plotly charts | ### Description
The goal would be to allow users to use the decimator alongside Plotly charts.
```python
import yfinance as yf
from taipy.gui import Gui
from taipy.gui.data.decimator import MinMaxDecimator, RDP, LTTB
df_AAPL = yf.Ticker("AAPL").history(interval="1d", period = "max")
df_AAPL["DATE"] = df_AAPL.index.astype('int64').astype(float)
n_out = 50
decimator_instance = MinMaxDecimator(n_out=n_out)
decimate_data_count = len(df_AAPL)
import plotly.express as px
fig = px.line(df_AAPL, x="DATE", y="Open", markers=True)
page = """
# Decimator
From a data length of <|{len(df_AAPL)}|> to <|{n_out}|>
## Without decimator
<|chart|figure={fig}|>
## With decimator
<|chart|figure={fig}|decimator=decimator_instance|>
"""
gui = Gui(page)
gui.run(port=5026, title="Decimator")
```
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-07-15T08:47:36Z | 2024-10-07T10:02:49Z | https://github.com/Avaiga/taipy/issues/1515 | [
"📈 Improvement",
"🖰 GUI",
"🟨 Priority: Medium",
"💬 Discussion"
] | FlorianJacta | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,635 | Voice question problem with Tor | ### What version of GlobaLeaks are you using?
The issue persists from 412.3.0 up to the latest one (4.13.9).
### What browser(s) are you seeing the problem on?
Tor Browser
### What operating system(s) are you seeing the problem on?
Windows, macOS
### Describe the issue
When I reach the voice question in the questionnaire I cannot record my voice message, and I see:
{{seconds}}/{{fields.attrs.max_len.value}}

### Proposed solution
_No response_ | open | 2023-09-15T10:13:18Z | 2023-09-20T06:43:31Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3635 | [
"T: Bug",
"C: Client"
] | simosimi993 | 2 |
yt-dlp/yt-dlp | python | 12,651 | there is a problem with the site bilibili when i try to download a playlist | ### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Canada
### Provide a description that is worded well enough to be understood
There is a problem with the site bilibili when I try to download a playlist.
The command is:
yt-dlp https://www.bilibili.com/list/520819684
The output is:
[BilibiliPlaylist] Extracting URL: https://www.bilibili.com/list/520819684
[BilibiliPlaylist] 520819684: Downloading webpage
ERROR: [BilibiliPlaylist] 520819684: Could not access playlist: None None; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-v', 'https://www.bilibili.com/list/520819684']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.02.19 from yt-dlp/yt-dlp [4985a4041] (pip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-133-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2020.06.20, mutagen-1.47.0, pyxattr-0.7.2, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.2.0, websockets-13.1
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1841 extractors
[BilibiliPlaylist] Extracting URL: https://www.bilibili.com/list/520819684
[BilibiliPlaylist] 520819684: Downloading webpage
ERROR: [BilibiliPlaylist] 520819684: Could not access playlist: None None; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/general/.local/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 746, in extract
ie_result = self._real_extract(url)
File "/home/general/.local/lib/python3.10/site-packages/yt_dlp/extractor/bilibili.py", line 1608, in _real_extract
raise ExtractorError(f'Could not access playlist: {error_code} {error_message}')
``` | closed | 2025-03-18T07:48:29Z | 2025-03-21T23:05:00Z | https://github.com/yt-dlp/yt-dlp/issues/12651 | [
"site-bug",
"patch-available"
] | smallhousebythelake | 2 |
d2l-ai/d2l-en | pytorch | 2440 | Wrong Epanechnikov kernel | Chapter 11.2: the kernel shown is the triangular kernel, not the Epanechnikov kernel. For reference, the Epanechnikov kernel is $K(u) = \frac{3}{4}(1 - u^2)$ for $|u| \le 1$, while the triangular kernel is $K(u) = 1 - |u|$ for $|u| \le 1$. | open | 2023-02-12T15:50:31Z | 2023-02-12T15:50:31Z | https://github.com/d2l-ai/d2l-en/issues/2440 | [] | yongduek | 0 |
ivy-llc/ivy | pytorch | 28,589 | Fix `count_nonzero` at `tf_frontend` | closed | 2024-03-13T21:20:39Z | 2024-03-15T17:33:54Z | https://github.com/ivy-llc/ivy/issues/28589 | [
"Sub Task"
] | samthakur587 | 0 | |
miguelgrinberg/microblog | flask | 107 | SearchableMixin wont work like this with more than one class | I tried to use the events for my own purposes and I came across the following issue with the `after_commit` method:
```
@classmethod
def after_commit(cls, session):
    [...]
    session._changes = None
```
The event is triggered and hits the first `after_commit` method. This first `after_commit` will reset `session._changes` to `None`. So if you have a second class which is listening on the `after_commit` event (e.g. is using the `SearchableMixin`) and queries the `_changes`, it will run into an error because `_changes` is `None` at that point. At least that is what happened in my version.
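A minimal guard sketch (hedged: names follow the tutorial's `SearchableMixin`, and this is just one way to avoid the crash, not necessarily the best fix): bail out if an earlier listener already consumed and cleared the recorded changes.

```python
@classmethod
def after_commit(cls, session):
    # Another SearchableMixin subclass may have handled this commit already
    # and reset the shared attribute, so don't assume it is still a dict.
    if getattr(session, '_changes', None) is None:
        return
    ...  # index the recorded adds/updates/deletes as in the tutorial
    session._changes = None
```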
Other than that thanks for the great tutorial. I'd love to see a bit more about testing of the routes. | closed | 2018-05-23T18:55:52Z | 2018-06-03T20:32:34Z | https://github.com/miguelgrinberg/microblog/issues/107 | [
"bug"
] | coolkau | 5 |
OthersideAI/self-operating-computer | automation | 4 | What is the average token consumption for a single operation? | What is the cost? | closed | 2023-11-28T02:50:08Z | 2023-12-02T16:52:08Z | https://github.com/OthersideAI/self-operating-computer/issues/4 | [] | lucasjinreal | 0 |
exaloop/codon | numpy | 124 | CommandLine Error: Option 'use-dbg-addr' registered more than once | I installed Codon with the install.sh script, in order to use it in my existing project via @codon.jit
codon==0.15.3
codon-jit==0.1.1
but when I launch my program I get:
```
: CommandLine Error: Option 'use-dbg-addr' registered more than once!
LLVM ERROR: inconsistency in registered CommandLine options
[1] 39236 IOT instruction (core dumped) python3 main.py
``` | closed | 2022-12-20T19:06:05Z | 2023-10-13T06:24:30Z | https://github.com/exaloop/codon/issues/124 | [] | DanInSpace104 | 3 |
Kludex/mangum | asyncio | 302 | Wrong domain when using AWS Lambda FunctionUrls and CloudFront with FastAPI | I am trying to deploy a site using AWS CloudFront with an AWS Lambda FunctionUrl as Custom Origin.
The deployment works fine, with one big problem:
When I visit the CloudFront domain, every link on the site that is generated with FastAPI's [`url_for`](https://fastapi.tiangolo.com/advanced/templates/#templates-and-static-files) uses the FunctionUrl domain instead of the CloudFront domain.
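One possible direction, sketched with plain ASGI (hedged: this assumes CloudFront is configured to send an `X-Forwarded-Host` header; it is not an official Mangum feature): rewrite the `Host` header before the app sees it, so `url_for` builds links with the CloudFront domain.

```python
class ForwardedHostMiddleware:
    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            headers = dict(scope["headers"])
            forwarded = headers.get(b"x-forwarded-host")
            if forwarded:
                # Replace the Lambda Function URL host with the CloudFront host.
                scope = dict(scope)
                scope["headers"] = [
                    (k, v) for k, v in scope["headers"] if k != b"host"
                ] + [(b"host", forwarded)]
        await self.app(scope, receive, send)
```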
Can I somehow set the base_url? | open | 2023-08-17T06:45:26Z | 2023-08-17T06:45:26Z | https://github.com/Kludex/mangum/issues/302 | [] | fabge | 0 |
mckinsey/vizro | plotly | 889 | Localize Vizro: external scripts and custom localization | ### Question
I have two questions related to localization in Vizro:
1. Is it possible to use the same approach as in Plotly Dash to localize Vizro by loading an external JS script?
For example, loading the file [https://cdn.plot.ly/plotly-locale-de-latest.js](https://cdn.plot.ly/plotly-locale-de-latest.js). I tried adding this file directly to `assets/js` or embedding its content into `assets/js/custom.js`, but neither approach seems to work. Is this feature supported or are there specific steps required to enable it?
2. Are there any supported ways to create a custom localization for Vizro with overriding default texts?
If so, could you provide guidance or an example on how to implement it?
It would be helpful to know the current state of localization features in Vizro and any planned enhancements in this area.
### Code/Examples
_No response_
### Which package?
vizro
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-11-20T00:18:29Z | 2024-11-21T09:20:35Z | https://github.com/mckinsey/vizro/issues/889 | [
"General Question :question:"
] | baltic-tea | 5 |
scikit-learn-contrib/metric-learn | scikit-learn | 261 | [DOC] Add warning that *_Supervised versions interpret label -1 as "unlabeled" | In the following page of the doc :
http://contrib.scikit-learn.org/metric-learn/supervised.html#supervised-versions-of-weakly-supervised-algorithms
We should add a visible warning for users that supervised versions of weakly-supervised algorithms interpret the label -1 (which is commonly used in binary classification) as unlabeled. | closed | 2019-11-13T12:56:55Z | 2019-11-21T15:12:07Z | https://github.com/scikit-learn-contrib/metric-learn/issues/261 | [] | bellet | 1 |
zappa/Zappa | flask | 968 | Promoting Zappa To the Next Generation of Developers | I recently published a [blog post on Zappa](https://www.mslinn.com/blog/2021/04/14/serverless-ecommerce.html#zappa). Feedback / corrections and suggestions are invited. The main points I am attempting to make are:
1. Zappa is a seminal project that has inspired other well-known projects.
2. Zappa is still very useful; in fact, its best days lie ahead.
3. Zappa needs new contributors (a "new generation") to help work through the backlog and hone the messaging.
I would like to update the blog post (and the follow-on posts that are in progress) with more information about the old-time developers and recent additions, and tell the world the skills necessary for new contributors to be most effective in addressing the backlog. Please respond publicly here or in private to me at mslinn@mslinn.com.
Has anyone communicated with @jezdez at [JazzBand](https://jazzband.co) about discussing the possibility of having Zappa join JazzBand? | closed | 2021-04-24T11:47:49Z | 2021-09-28T12:25:53Z | https://github.com/zappa/Zappa/issues/968 | [] | mslinn | 3 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1349 | How to install the undetected script | Hello, what is the proper way to install this script? Is it just the following command:
pip install undetected-chrome
Or are there more steps needed to put this script to work?
Thanks in advance! | open | 2023-06-15T11:25:55Z | 2023-06-19T13:44:09Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1349 | [] | stefanobabyy777 | 4 |
HIT-SCIR/ltp | nlp | 186 | ltp_server terminates abnormally | Started via nohup, it frequently terminates for no apparent reason.
| closed | 2016-09-20T10:17:27Z | 2017-02-27T12:54:38Z | https://github.com/HIT-SCIR/ltp/issues/186 | [] | lisisong | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,211 | Cant launched it!! | i got this message
melspectrogram() takes 0 positional arguments but 2 positional arguments (and 2 keyword-only arguments) were given
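For context, this error usually comes from librosa >= 0.10, where `melspectrogram` and related feature functions became keyword-only. A hedged sketch of the difference (the waveform and rate below are placeholders):

```python
import librosa
import numpy as np

wav = np.zeros(22050, dtype=np.float32)  # placeholder waveform
sample_rate = 22050

# librosa >= 0.10 rejects positional arguments for feature functions:
# mel = librosa.feature.melspectrogram(wav, sample_rate)      # raises this TypeError
mel = librosa.feature.melspectrogram(y=wav, sr=sample_rate)   # keyword form works
# Alternatively, pinning an older librosa (e.g. 0.8.x) restores the old call style.
```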
What should I do? | open | 2023-05-05T07:08:28Z | 2023-08-19T11:37:47Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1211 | [] | slim1592 | 2 |
sloria/TextBlob | nlp | 279 | TextBlob word counting parses curly single quote as a word | I am doing an example for a live training seminar in which I use the plain text version of the play Romeo and Juliet from Project Gutenberg to display the top 20 words in the text.
There are many curly apostrophes (`’`)--867 of them, to be exact. TextBlob splits words containing them, such as `Romeo’s`, and counts the apostrophe (`’`) as a word.
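A small reproduction sketch (exact tokens may vary by TextBlob/NLTK version; the reported behavior is that the curly character ends up counted as its own word):

```python
from textblob import TextBlob

print(TextBlob("Romeo's sword").words)   # straight apostrophe: handled normally
print(TextBlob("Romeo’s sword").words)   # curly apostrophe: reportedly split apart
```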
I am assuming this is a bug, but I am just wondering if I am doing something wrong. TextBlob does not do this for straight single-quote characters. | open | 2019-07-25T15:03:40Z | 2019-07-25T15:03:40Z | https://github.com/sloria/TextBlob/issues/279 | [] | pdeitel | 0 |
wagtail/wagtail | django | 12,652 | Public page inside private page tree | ### Is your proposal related to a problem?
As explained in the documentation [here ](https://docs.wagtail.org/en/stable/topics/permissions.html#page-permissions)it is currently not possible to set differents permissions for pages within a private page tree. All pages inside a private tree inherit the permissions of the parent private page.
### Describe the solution you'd like
In some cases, specific child pages might need to be accessible to certain user groups or individuals, while others remain restricted.
Currently, this use case cannot be accommodated due to inherited permissions across the tree.
Could this feature be considered for a future release? What is your opinion on this feature? Do you think it would be useful?
| closed | 2024-12-02T15:44:17Z | 2024-12-02T17:29:54Z | https://github.com/wagtail/wagtail/issues/12652 | [
"type:Enhancement"
] | matteonext | 1 |
hankcs/HanLP | nlp | 1278 | Synonyms that appear multiple times get overwritten | ## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer in any of them:
- [Home page documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a voluntary community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [√] I put an x in these brackets to confirm the items above
## Version
The current latest version is: 1.7.4
The version I am using is: 1.7.4
## My question
If the same word appears multiple times, it gets replaced (overwritten). For example: Aa01A01=北京 北京人 Aa01A02=北京 北京天安门
## Reproducing the problem
### Steps
1. First...
2. Then...
3. Next...
### Triggering code
```
for (Synonym synonym : synonymList)
{
    treeMap.put(synonym.realWord, new SynonymItem(synonym, synonymList, type));
    // do a small test here
    //assert synonym.getIdString().startsWith(line.split(" ")[0].substring(0, line.split(" ")[0].length() - 1)) : "词典有问题" + line + synonym.toString();
}
### 期望输出
<!-- 你希望输出什么样的正确结果?-->
这里可以判断是否存在,存在则追加,反之新增
```
北京 北京人 北京天安门
```
### 实际输出
<!-- HanLP实际输出了什么?产生了什么效果?错在哪里?-->
```
北京 北京天安门
```
## 其他信息
<!-- 任何可能有用的信息,包括截图、日志、配置文件、相关issue等等。-->
| closed | 2019-09-04T02:19:55Z | 2019-09-04T15:29:57Z | https://github.com/hankcs/HanLP/issues/1278 | [] | yangyang303 | 1 |
plotly/dash | data-visualization | 2687 | pio.renderers.config not reflected in dcc.Graph figure config | **Is your feature request related to a problem? Please describe.**
As shown in this post: https://stackoverflow.com/a/70824247 we are able to change the default config of `fig.show` by setting `pio.renderers[renderer].config` beforehand. The problem is that this is not reflected in the config of a `dcc.Graph` figure.
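For concreteness, a hedged sketch of the two sides (the renderer name and config keys are illustrative):

```python
import plotly.express as px
import plotly.io as pio
from dash import dcc

fig = px.line(x=[0, 1], y=[0, 1])

# Renderer-level defaults affect fig.show() ...
pio.renderers["browser"].config = {"displayModeBar": False}

# ... but dcc.Graph only honors the config passed per component, which is
# what this request asks to be settable globally:
graph = dcc.Graph(figure=fig, config={"displayModeBar": False})
```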
**Describe the solution you'd like**
Be able to globally set default dcc.Graph figure config.
**Describe alternatives you've considered**
Importing default custom config every time a figure is created.
| open | 2023-11-10T15:28:56Z | 2024-08-13T19:42:32Z | https://github.com/plotly/dash/issues/2687 | [
"bug",
"P3"
] | subsurfaceiodev | 0 |
rougier/from-python-to-numpy | numpy | 93 | Can you provide an info file for reading this in Emacs? | I'm having trouble converting with rst2texinfo due to issues with the docutils library. Is it possible to provide an info file? I'd like to read this in Emacs when I'm offline. Thanks in advance. | open | 2020-03-29T01:10:29Z | 2020-03-30T07:21:57Z | https://github.com/rougier/from-python-to-numpy/issues/93 | [] | stevem995 | 1 |
pallets/flask | python | 5,380 | Improve error messages on simple after request hook bugs | When a user forgets to return the response in an `after_request` hook, the message is particularly unhelpful as it does not point to the involvement of the hook:
```python
from flask import Flask
app = Flask("test")
# oops
app.after_request(lambda _: None)
@app.get("/hello")
def get_hello():
return "hello world!", 200
```
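For contrast, a minimal sketch of a hook that Flask accepts (the header name is just illustrative):
```python
@app.after_request
def add_demo_header(response):
    response.headers["X-Demo"] = "1"  # illustrative header
    return response  # the hook must return the response object
```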
Run:
```sh
flask --debug -A app run
curl -si -X GET http://0.0.0.0:5000/hello
```
Results in:
```
TypeError
TypeError: 'NoneType' object is not callable
Traceback (most recent call last)
File "/home/fabien/tmp/test/venv/lib/python3.12/site-packages/flask/app.py", line 1478, in __call__
return self.wsgi_app(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fabien/tmp/test/venv/lib/python3.12/site-packages/flask/app.py", line 1462, in wsgi_app
return response(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
```
I suggest that the error message *should* point to the involvement of the after request hook. | closed | 2024-01-13T10:24:40Z | 2024-01-30T00:05:36Z | https://github.com/pallets/flask/issues/5380 | [] | zx80 | 1 |
streamlit/streamlit | data-visualization | 10,634 | Feature request for st.data_editor | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
I hope that st.data_editor can include a button.
### Why?
Because I want to add a button in the last column that could be used to delete or export an entire row.
### How?
st.column_config.ButtonColumn()
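A sketch of how the proposed column type might be used (hypothetical: `ButtonColumn` does not exist in Streamlit today; this only illustrates the requested API shape):
```python
import pandas as pd
import streamlit as st

df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})

# Hypothetical API sketch for the feature request
st.data_editor(
    df,
    column_config={
        "actions": st.column_config.ButtonColumn("Delete row"),
    },
)
```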
### Additional Context
_No response_ | closed | 2025-03-04T03:10:51Z | 2025-03-05T01:19:30Z | https://github.com/streamlit/streamlit/issues/10634 | [
"type:enhancement",
"status:awaiting-user-response",
"feature:st.data_editor"
] | z0983357347 | 3 |
cchen156/Learning-to-See-in-the-Dark | tensorflow | 94 | Will the Sony model work for dual camera iPhone? | I don't have an iPhone 6s handy, but I heard some people are getting good results with the iPhone 6s. If I try the Sony model with raw images taken on an iPhone with two or three cameras (iPhone 8 or iPhone 11), will the result be roughly the same as with the iPhone 6s? Thanks! | open | 2019-09-29T08:29:51Z | 2020-10-16T06:35:32Z | https://github.com/cchen156/Learning-to-See-in-the-Dark/issues/94 | [] | JasonVann | 15 |
qubvel-org/segmentation_models.pytorch | computer-vision | 1,085 | Crash on Colab When Switching Encoder to 'mobilenet_v2' for the Binary Segmentation Tutorial Example | When running the [binary segmentation intro tutorial](https://github.com/qubvel-org/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb) on Google Colab, changing the encoder type to `mobilenet_v2` crashes the environment during training. No exceptions are thrown, and the application logs do not show any errors. | open | 2025-03-09T03:34:43Z | 2025-03-09T03:34:43Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/1085 | [] | taiyungwang | 0 |
yt-dlp/yt-dlp | python | 12,616 | [twitter] login with `--username`+`--password`/`--netrc` fails and flags account as suspicious | ### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Any
### Provide a description that is worded well enough to be understood
Using the `--username`/`--password` login method does not work for X.com and gets your account flagged for suspicious activity.
yt-dlp shows:
"Twitter rejected this login attempt as suspicious. Use --cookies-from-browser or --cookies for the authentication."
And when I attempt any interaction on X (liking included) I get:
"This request looks like it might be automated. To protect our users from spam and other malicious activity, we can’t complete this action right now. Please try again later."
I cannot show the verbose output since it may expose my login details or get my account blocked for longer, and I don't think it would help much anyway, given that the issue lies with X.
My suggestion is that any attempt to log in to X this way should be blocked with an error, allowing only cookie authentication, rather than letting users mess up their accounts.
UPDATE: My account was unblocked after about 24 hours.
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
Justification for not showing this is in the description.
``` | open | 2025-03-15T03:23:18Z | 2025-03-16T00:15:52Z | https://github.com/yt-dlp/yt-dlp/issues/12616 | [
"account-needed",
"site-bug",
"triage"
] | unknown-intentions | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,489 | ValueError: Unknown resampling filter | I need your help!
I use the training command `python train.py --dataroot /gemini/data-1/render_data1 --name r_cyclegan --model cycle_gan --preprocess scale_width_and_crop --load_size 800 --crop_size 400`, and then get this error: `ValueError: Unknown resampling filter (InterpolationMode.BICUBIC). Use Image.NEAREST (0), Image.LANCZOS (1), Image.BILINEAR (2), Image.BICUBIC (3), Image.BOX (4) or Image.HAMMING (5)`

| open | 2022-10-04T05:20:16Z | 2022-10-15T03:22:39Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1489 | [] | yourbikun | 6 |
521xueweihan/HelloGitHub | python | 2,629 | [Open-source self-recommendation] Card-style links | ## Project Recommendation
<!-- This is the entry point for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: [https://github.com/Lete114/CardLink](https://github.com/Lete114/CardLink)
<!-- Please choose from: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: JavaScript
<!-- Please describe what it does in roughly 20 characters, like an article title that is clear at a glance -->
- Project title: Display cross-origin hyperlinks on a page as card-style links
<!-- What is this project, what can it be used for, what features does it have or which pain points does it solve, which scenarios does it fit, and what can beginners learn from it? Length: 32-256 characters -->
- Project description: Card-style links similar to Zhihu's, converting links to other domains into cards
- Example code:
```html
<!-- Usage 1 -->
<script>
  // Generate card links for all <a> tags under <article></article> (article) tags that open in a new tab
  cardLink(document.querySelectorAll('article a[target=_blank]'))
</script>
<!-- Usage 2 -->
<script>
  // Set the cardlink attribute on the desired <a> tags, then call cardLink()
  document.querySelectorAll('article a[target=_blank]').forEach((el) => {
    el.setAttribute('cardlink', '')
  })
  // or
  document.querySelector('a#example').setAttribute('cardlink', '')
  // By default, card links are generated for all a[cardlink] elements on the page
  cardLink()
</script>
```
**Known issue**
Since the front end sends the requests to fetch the HTML, some sites may raise cross-origin (CORS) issues, so `cardLink` allows you to use a proxy server to request the target site's HTML.
```html
<script>
  // Note: cardLink only sends the request to the proxy server when a cross-origin request occurs (this reduces the load on the proxy server)
  // Preset the proxy server before calling cardLink
  cardLink.server = 'https://api.allorigins.win/raw?url='
  // Generate card links for all <a> tags under <article></article> (article) tags that open in a new tab
  cardLink(document.querySelectorAll('article a[target=_blank]'))
</script>
```
- Screenshot:

| closed | 2023-10-24T08:50:06Z | 2024-01-25T11:59:33Z | https://github.com/521xueweihan/HelloGitHub/issues/2629 | [
"JavaScript 项目"
] | Lete114 | 0 |
waditu/tushare | pandas | 1,450 | index_weight data is severely incomplete | `df = pro.index_weight(index_code='399001.SZ', start_date='20190101', end_date='20191231')` returns 2,500 records, i.e. 5 adjustments.
`df = pro.index_weight(index_code='399001.SZ', start_date='20200101', end_date='20201231')` returns 500 records, i.e. only 1 adjustment.
This matches neither the rule for periodic constituent adjustment (once every six months) nor the rule of removing constituents placed under risk warning (ST or *ST) on a monthly basis.
ID: 397319 | closed | 2020-10-31T13:29:38Z | 2020-10-31T13:31:48Z | https://github.com/waditu/tushare/issues/1450 | [] | shytotoro | 0 |
Farama-Foundation/PettingZoo | api | 828 | [Proposal] Classic env GUI upgrade | We wish to upgrade the rendering of some of the classic environments to bring them in line with our standards (see https://pettingzoo.farama.org/environments/classic/).
- [x] Gin Rummy should be rendered in the same style as Texas Holdem
- [x] Leduc Holdem should be rendered in the same style as Texas Holdem
- [ ] Mahjong should be rendered using https://gamblemountain.itch.io/riichi-asset
- [x] Chess should be rendered using https://byandrox.itch.io/pixchess (reach out to Jordan for funds)
- [ ] Hanabi should be rendered using [TBC]
- [x] TicTacToe should be rendered using [TBC]
Notes:
- Usage terms of the asset packs should be double-checked
- The option to render Chess, Hanabi and TicTacToe without any dependencies/assets should be kept as a non-default option (`render_mode='ascii'`) or something | closed | 2022-10-14T08:33:59Z | 2023-08-14T00:32:12Z | https://github.com/Farama-Foundation/PettingZoo/issues/828 | [
"enhancement"
] | WillDudley | 0 |
apache/airflow | machine-learning | 47,654 | DAG Versioning || versions not getting generated | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
I noticed that when I change DAG parameters, for example the `tags`, new versions are not getting logged in the DAG version table.
### What you think should happen instead?
Versions should get created. This worked fine when I tried it in alpha4.
### How to reproduce
1. Update a tag in an existing DAG
2. Check the DAG_Version table for a new version (example query below)
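For example, step 2 could be something like the following query (table and column names are assumptions based on the issue's wording, not verified against the Airflow 3 schema):

```sql
-- List recorded versions for the DAG, newest first
SELECT dag_id, version_number, created_at
FROM dag_version
WHERE dag_id = 'your_dag_id'
ORDER BY created_at DESC;
```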
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-12T05:34:25Z | 2025-03-12T13:41:31Z | https://github.com/apache/airflow/issues/47654 | [
"kind:bug",
"priority:critical",
"area:core",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 6 |
hack4impact/flask-base | sqlalchemy | 198 | install error | Hi guys, while trying to use your template project to learn how to create a simple sandbox, I'm getting this error after running
`pip install -r requirements.txt`
` ERROR: Command errored out with exit status 1:
command: ~/flaskPython/env/bin/python2.7 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/setup.py'"'"'; __file__='"'"'/private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/pip-egg-info
cwd: /private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/
Complete output (23 lines):
running egg_info
creating /private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/pip-egg-info/psycopg2.egg-info
writing /private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/pip-egg-info/psycopg2.egg-info/PKG-INFO
writing top-level names to /private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/pip-egg-info/psycopg2.egg-info/top_level.txt
writing dependency_links to /private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing manifest file '/private/var/folders/5j/b27k95js4rg4g4byh2glxhwc0000gn/T/pip-install-QYde2H/psycopg2/pip-egg-info/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<http://initd.org/psycopg/docs/install.html>).
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.` | closed | 2020-01-05T06:51:59Z | 2020-06-21T09:11:14Z | https://github.com/hack4impact/flask-base/issues/198 | [] | W-Lawless | 5 |
graphql-python/gql | graphql | 432 | RequestsHTTPTransport keeps retrying the query even though it is invalid | **Describe the bug**
Hi,
The program (given below) will freeze at the last line, where I forget to pass the `skip` parameter in the variables.
It appears that the server answers a message with a 500 error:
```text
Variable "$skip" of required type "Int!" was not provided.'
```
And since I specified `retries=5` and the default retry code `_default_retry_codes = (429, 500, 502, 503, 504)`, requests keeps resending the same query again and again, even though the server gave an answer.
When I disable the retries or exclude 500 from the `retry_status_forcelist`, gql raises the TransportQueryError immediately.
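For reference, a minimal sketch of that second workaround (the `retry_status_forcelist` parameter name is taken from the gql 3.5 transport signature, so treat it as an assumption; `api_endpoint` is defined as in the repro below):

```python
from gql.transport.requests import RequestsHTTPTransport

gql_transport = RequestsHTTPTransport(
    url=api_endpoint,
    retries=5,
    retry_backoff_factor=0.5,
    # Exclude 500 so a server-side validation error is not retried
    retry_status_forcelist=(429, 502, 503, 504),
)
```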
**To Reproduce**
Code below:
<details>
<summary>Details</summary>
```python
from typing import Any, Dict, Optional, Union
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport
from graphql import DocumentNode
api_key = ""
api_endpoint = "https://staging.cloud.kili-technology.com/api/label/v2/graphql"
gql_transport = RequestsHTTPTransport(
url=api_endpoint,
headers={
"Authorization": f"X-API-Key: {api_key}",
"Accept": "application/json",
"Content-Type": "application/json",
},
use_json=True,
timeout=30,
verify=True,
retries=5,
method="POST",
retry_backoff_factor=0.5,
)
client = Client(
transport=gql_transport,
fetch_schema_from_transport=True,
introspection_args={
"descriptions": True, # descriptions for the schema, types, fields, and arguments
"specified_by_url": False, # https://spec.graphql.org/draft/#sec--specifiedBy
"directive_is_repeatable": True, # include repeatability of directives
"schema_description": True, # include schema description
"input_value_deprecation": True, # request deprecated input fields
},
)
def execute(
query: Union[str, DocumentNode], variables: Optional[Dict] = None, **kwargs
) -> Dict[str, Any]:
document = query if isinstance(query, DocumentNode) else gql(query)
result = client.execute(
document=document,
variable_values=variables,
**kwargs,
)
return result
query = """
query projects($where: ProjectWhere!, $first: PageSize!, $skip: Int!) {
data: projects(where: $where, first: $first, skip: $skip) {
id title
}
}
"""
project_id = "clm6gd8b701yu01324m3cesih"
result = execute(query=query, variables={"where": {"id": project_id}, "first": 1, "skip": 0}) # WORKS
print(result)
result = execute(query=query, variables={"where": {"id": project_id}, "first": 1}) # STUCK HERE
```
</details>
**Expected behavior**
I'm actually not sure if it's really a bug...
I expected gql to check both the query along with the variables, but it looks like gql only checks the query with respect to the schema, but doesn't check the variables with respect to the query?
I'm not a graphql expert, so I wonder if a server is supposed to answer a 500 error for such a query? If not, the bug is server-side, but if yes, maybe the `_default_retry_codes` should not include the 500 code?
**System info (please complete the following information):**
- OS: macos
- Python version: 3.8
- gql version: v3.5.0b5
- graphql-core version: 3.3.0a3
Thanks! | closed | 2023-09-05T19:01:21Z | 2023-09-08T19:26:10Z | https://github.com/graphql-python/gql/issues/432 | [
"type: question or discussion"
] | Jonas1312 | 4 |
d2l-ai/d2l-en | tensorflow | 2,570 | MLX support | I plan on contributing support for MLX, the new ML framework by Apple for Apple silicon: https://github.com/ml-explore/mlx
I tried setting up Jupyter Notebook to directly edit the markdown files using these resources:
1. https://d2l.ai/chapter_appendix-tools-for-deep-learning/contributing.html
2. https://github.com/d2l-ai/d2l-en/blob/master/CONTRIBUTING.md
I still can't run the code in the .md files, as Jupyter Notebook opens them as plain text only.
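For context, the setup I understood from those docs (treat the exact commands as assumptions on my part) is notedown-based, so Jupyter can open the .md files as notebooks:

```sh
pip install mu-notedown
jupyter notebook --NotebookApp.contents_manager_class='notedown.NotedownContentsManager'
```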
What is the recommended approach to add new framework support? | open | 2023-12-10T06:52:32Z | 2024-01-17T05:19:37Z | https://github.com/d2l-ai/d2l-en/issues/2570 | [] | rahulchittimalla | 1 |
gradio-app/gradio | data-science | 9,971 | [Gradio 5] Input text inside Slider component alignment and size | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
I like the input inside the Slider component when it is aligned to the right by default, but when info text is present, the input sometimes drops down to the left on the next line. It would be nice to have two additional attributes: one to specify the size of the input text and another to specify its alignment.
**Describe the solution you'd like**
Here is one solution I implemented using external CSS:
```python
import gradio as gr
css="""
.gradio-slider-input {
input[type="number"] {
width: 8em;
}
}
.slider-input-right > .wrap > .head {
display: flex;
}
.slider-input-right > .wrap > .head > .tab-like-container {
margin-left: auto;
}
"""
#css="" #uncomment this line to test without css
with gr.Blocks(css=css) as app:
with gr.Row():
with gr.Column():
freeu_s1_scale = gr.Slider(
elem_classes="gradio-slider-input slider-input-right",
interactive=False,
label="S1 scale",
minimum=0.1,
maximum=1.0,
step=0.1,
value=0.9,
info="Scaling factor to prevent over-smoothing during image generation in stage 1 of the diffusion process"
)
with gr.Column():
freeu_s2_scale = gr.Slider(
elem_classes="gradio-slider-input slider-input-right",
interactive=False,
label="S2 scale",
minimum=0.1,
maximum=1.0,
step=0.1,
value=0.2,
info="Scale factor to prevent over-smoothing during image generation in stage 2 of the diffusion process"
)
with gr.Column():
freeu_b1_scale = gr.Slider(
elem_classes="gradio-slider-input slider-input-right",
interactive=False,
label="B1 scale",
minimum=1.0,
maximum=1.2,
step=0.1,
value=1.2,
info="Scaling factor for stage 1 to amplify the contributions of backbone features"
)
with gr.Column():
freeu_b2_scale = gr.Slider(
elem_classes="gradio-slider-input slider-input-right",
interactive=False,
label="B2 scale",
minimum=1.2,
maximum=1.6,
step=0.1,
value=1.4,
info="Scaling factor for stage 2 to amplify the contributions of backbone features"
)
app.launch(inbrowser=True)
```
**Screenshots**
**Default (without css)**

**Aligned to the right and input text changed**

| open | 2024-11-16T18:06:19Z | 2024-11-20T17:09:01Z | https://github.com/gradio-app/gradio/issues/9971 | [
"bug"
] | elismasilva | 0 |
littlecodersh/ItChat | api | 607 | No module named 'itchat.content' | Before submitting, please make sure you have checked the following!
- [ ] You can log in to your WeChat account in a browser, but cannot log in with `itchat`
- [ ] I have read and followed the instructions in the [documentation][document]
- [ ] This problem has not already been reported in [issues][issues]; otherwise, please report it under the existing issue
- [ ] This issue is really about `itchat`, not some other project.
- [ ] If your issue is about stability, consider trying the [itchatmp][itchatmp] project, which has very low requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
[paste the full log here]
```
Your itchat version is: 1.3.10. (Obtainable via `python -c "import itchat;print(itchat.__version__)"`)
Any other content or a more detailed description of the problem can be added below:
```
$ pip3 list
certifi (2018.1.18)
chardet (3.0.4)
idna (2.6)
itchat (1.3.10)
pip (9.0.1)
pypng (0.0.18)
PyQRCode (1.2.1)
requests (2.18.4)
setuptools (28.8.0)
urllib3 (1.22)
```
```
import itchat,sys
from itchat.content import *
Traceback (most recent call last):
  File "./itchat.py", line 1, in <module>
    import itchat,sys
  File "/data/python/itchat.py", line 2, in <module>
    from itchat.content import *
ModuleNotFoundError: No module named 'itchat.content'; 'itchat' is not a package
```
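Note: judging from the traceback (an observation on my part, not confirmed by the reporter), the script itself is named `itchat.py`, which shadows the installed package and produces the "'itchat' is not a package" message. Renaming the local file should let the import resolve, e.g.:

```python
# Rename the local script first, e.g. itchat.py -> my_bot.py, then:
import itchat
from itchat.content import TEXT

@itchat.msg_register(TEXT)
def text_reply(msg):
    # Echo back the received text message
    return msg['Text']

itchat.auto_login()
itchat.run()
```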
| closed | 2018-03-14T07:27:11Z | 2023-02-17T02:30:53Z | https://github.com/littlecodersh/ItChat/issues/607 | [] | ai400 | 5 |