repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
|---|---|---|---|---|---|---|---|---|---|---|---|
drivendataorg/cookiecutter-data-science | data-science | 296 | Fix v2 aws sync commands | https://github.com/drivendata/cookiecutter-data-science/blob/b4c0c12243653c493c188239af14c835b9768fbc/%7B%7B%20cookiecutter.repo_name%20%7D%7D/Makefile#L62
In v2, `sync_data_up` incorrectly lists the bucket as the source directory, rather than the local `data/` folder. The order should be flipped.
In addition, there is inconsistent use of environment variables. `sync_data_down` uses the templatized AWS profile from the cookiecutter form:
https://github.com/drivendata/cookiecutter-data-science/blob/b4c0c12243653c493c188239af14c835b9768fbc/%7B%7B%20cookiecutter.repo_name%20%7D%7D/Makefile#L51
while `sync_data_up` uses the `PROFILE` environment variable which is not set in the Makefile.
https://github.com/drivendata/cookiecutter-data-science/blob/b4c0c12243653c493c188239af14c835b9768fbc/%7B%7B%20cookiecutter.repo_name%20%7D%7D/Makefile#L63
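For reference, a sketch of what the corrected targets could look like — assuming `BUCKET` and `PROFILE` are both defined as variables at the top of the Makefile (variable names here are illustrative, not the exact fix):

```make
## Upload local data to S3 (local data/ is the source, the bucket is the destination)
sync_data_up:
	aws s3 sync data/ s3://$(BUCKET)/data/ --profile $(PROFILE)

## Download data from S3 (the bucket is the source, local data/ is the destination)
sync_data_down:
	aws s3 sync s3://$(BUCKET)/data/ data/ --profile $(PROFILE)
```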
| closed | 2022-12-04T01:21:38Z | 2025-03-07T18:57:05Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/296 | [
"bug"
] | chrisjkuch | 1 |
Kanaries/pygwalker | matplotlib | 281 | [BUG] Got error when pyg_html = pyg.to_html(all_df, spec=vis_spec) | Hi,
I got an error when running `pyg_html = pyg.to_html(all_df, spec=vis_spec)`,
where `vis_spec` is copied from the PygWalker UI. I tried both `copy python string` and `export as json file`, and neither works.
```
vis_spec = r"""{"config":"[{\"visId\":\"gw_gk-j\",\"name\":\"Chart 1\",\"encodings\":{\"dimensions\":[{\"dragId\":\"gw_Hx5M\",\"fid\":\"GW_3A7K11RPVIJPBTY6A7K\",\"name\":\"model_name\",\"basename\":\"model_name\",\"semanticType\":\"nominal\",\"analyticType\":\"dimension\"},{\"dragId\":\"gw_V6W-\",\"fid\":\"GW_CHHI4ZTJ98JUWG\",\"name\":\"summary\",\"basename\":\"summary\",\"semanticType\":\"nominal\",\"analyticType\":\"dimension\"},{\"dragId\":\"gw_mea_key_fid\",\"fid\":\"gw_mea_key_fid\",\"name\":\"Measure names\",\"analyticType\":\"dimension\",\"semanticType\":\"nominal\"}],\"measures\":[{\"dragId\":\"gw_0XY8\",\"fid\":\"GW_1HZ6NYRW6QVWK180\",\"name\":\"F1 score\",\"basename\":\"F1 score\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"},{\"dragId\":\"gw_3s6c\",\"fid\":\"GW_9CHH66MJ5QKKVYRMLDMZ8J4I38DS\",\"name\":\"perplexity score\",\"basename\":\"perplexity score\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"},{\"dragId\":\"gw_U1vn\",\"fid\":\"GW_2YBCWMOE99T9YAYPKQ8\",\"name\":\"blue score\",\"basename\":\"blue score\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"},{\"dragId\":\"gw_qKqY\",\"fid\":\"GW_OF0FTHBL7GFA5XY4VZ1S\",\"name\":\"rouge score\",\"basename\":\"rouge score\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"},{\"dragId\":\"gw_H29e\",\"fid\":\"GW_1DBEOXMH0OY1ZND69Y5DUKQ4MVUY40\",\"name\":\"SentenceSim score\",\"basename\":\"SentenceSim score\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"},{\"dragId\":\"gw_YZGf\",\"fid\":\"GW_45C50BNKG6PHNLX03KJ4TS\",\"name\":\"bleurt score\",\"basename\":\"bleurt score\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"},{\"dragId\":\"gw_VfaT\",\"fid\":\"GW_2YAAKBPK6VUQDJWVZ1S\",\"name\":\"bert score\",\"basename\":\"bert 
score\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"},{\"dragId\":\"gw_count_fid\",\"fid\":\"gw_count_fid\",\"name\":\"Row count\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\",\"computed\":true,\"expression\":{\"op\":\"one\",\"params\":[],\"as\":\"gw_count_fid\"}},{\"dragId\":\"gw_mea_val_fid\",\"fid\":\"gw_mea_val_fid\",\"name\":\"Measure values\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"}],\"rows\":[{\"dragId\":\"gw_KMu1\",\"fid\":\"gw_mea_val_fid\",\"name\":\"Measure values\",\"analyticType\":\"measure\",\"semanticType\":\"quantitative\",\"aggName\":\"sum\"}],\"columns\":[{\"dragId\":\"gw_NXxS\",\"fid\":\"gw_mea_key_fid\",\"name\":\"Measure names\",\"analyticType\":\"dimension\",\"semanticType\":\"nominal\"}],\"color\":[{\"dragId\":\"gw_zE7F\",\"fid\":\"GW_3A7K11RPVIJPBTY6A7K\",\"name\":\"model_name\",\"basename\":\"model_name\",\"semanticType\":\"nominal\",\"analyticType\":\"dimension\"}],\"opacity\":[],\"size\":[{\"dragId\":\"gw_MFYP\",\"fid\":\"GW_CHHI4ZTJ98JUWG\",\"name\":\"summary\",\"basename\":\"summary\",\"semanticType\":\"nominal\",\"analyticType\":\"dimension\"}],\"shape\":[],\"radius\":[],\"theta\":[],\"longitude\":[],\"latitude\":[],\"geoId\":[],\"details\":[],\"filters\":[],\"text\":[]},\"config\":{\"defaultAggregated\":false,\"geoms\":[\"line\"],\"showTableSummary\":false,\"coordSystem\":\"generic\",\"stack\":\"stack\",\"showActions\":false,\"interactiveScale\":false,\"sorted\":\"none\",\"zeroScale\":true,\"scaleIncludeUnmatchedChoropleth\":false,\"size\":{\"mode\":\"fixed\",\"width\":654,\"height\":411},\"format\":{},\"geoKey\":\"name\",\"resolve\":{\"x\":false,\"y\":false,\"color\":false,\"opacity\":false,\"shape\":false,\"size\":false},\"limit\":-1,\"folds\":[\"GW_1HZ6NYRW6QVWK180\",\"GW_2YBCWMOE99T9YAYPKQ8\",\"GW_OF0FTHBL7GFA5XY4VZ1S\",\"GW_1DBEOXMH0OY1ZND69Y5DUKQ4MVUY40\",\"GW_45C50BNKG6PHNLX03KJ4TS\",\"GW_2YAAKBPK6VUQDJWVZ1S\"]}
}]","chart_map":{},"version":"0.3.9"}"""
```
```
python3.10/site-packages/pygwalker/api/html.py", line 46, in to_html
walker = PygWalker(
TypeError: pygwalker.api.pygwalker.PygWalker() got multiple values for keyword argument 'spec'
``` | closed | 2023-10-24T19:01:15Z | 2023-11-03T10:56:07Z | https://github.com/Kanaries/pygwalker/issues/281 | [
"documentation"
] | Louis-udm | 8 |
OFA-Sys/Chinese-CLIP | computer-vision | 17 | evaluation datasets | Could you please provide the MUGE, Flickr30K-CN, and COCO-CN datasets you used for evaluation?
thanks | open | 2022-11-18T22:21:48Z | 2022-11-19T11:21:37Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/17 | [] | rom1504 | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,237 | Tutorial on How to Use Audio Files | Hello everyone,
I've been using the software for a few days now, and I've grasped the concept. I've realized that adding multiple recordings of the same voice improves the voice for text dictation. I have also successfully extracted audio files using Audacity and converted them to MP4 or FLAC, importing and using them correctly. However, I'm trying to figure out how to create a folder with 3 or 4 audio files of the same voice so that I can work with them properly.
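Since the question is about laying out several clips of one voice, here is a minimal stdlib sketch of one possible folder layout — the folder and file names are assumptions for illustration, not a structure the toolbox requires:

```python
# Hypothetical layout: one folder per speaker, several audio clips inside it.
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp())
speaker_dir = root / "my_speaker"
speaker_dir.mkdir()
for i in range(4):
    (speaker_dir / f"clip_{i}.flac").touch()  # placeholder audio files

# Collect all clips of this speaker so they can be loaded together.
clips = sorted(p.name for p in speaker_dir.glob("*.flac"))
print(clips)  # ['clip_0.flac', 'clip_1.flac', 'clip_2.flac', 'clip_3.flac']
```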
Thank you. | open | 2023-07-26T07:59:02Z | 2023-07-26T07:59:02Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1237 | [] | lettreabc | 0 |
scikit-tda/kepler-mapper | data-visualization | 178 | Higher Dimensional Simplex Visualizations | Hi KMapper Team,
This is a great library for TDA applications. It has really helped me a lot!! I have checked lots of examples, but I only find nodes connected by edges, not any surfaces (or higher-dimensional equivalents). For example, if 3 nodes have a common intersection of data points above the threshold, then there should be a "filled" triangle to represent it rather than an "empty" triangle. Could you please explain why this happens?
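To make the triple-intersection idea concrete, here is a small pure-Python sketch on toy data (not KeplerMapper's API): an edge exists when two nodes share points, and a filled 2-simplex would additionally require three nodes sharing a common point — which, as observed above, the visualization does not draw:

```python
from itertools import combinations

# Toy cover: each node is a set of data-point ids (illustrative, not real output).
nodes = {"A": {1, 2, 3}, "B": {3, 4, 5}, "C": {3, 5, 6}}

# 1-simplices: pairs of nodes with a non-empty intersection.
edges = [pair for pair in combinations(nodes, 2)
         if nodes[pair[0]] & nodes[pair[1]]]
# 2-simplices: triples of nodes whose mutual intersection is non-empty.
triangles = [tri for tri in combinations(nodes, 3)
             if nodes[tri[0]] & nodes[tri[1]] & nodes[tri[2]]]

print(edges)      # [('A', 'B'), ('A', 'C'), ('B', 'C')]
print(triangles)  # [('A', 'B', 'C')] -- point 3 is shared by all three
```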
Thank you,
Raunak | open | 2019-08-07T21:16:59Z | 2023-01-20T19:00:47Z | https://github.com/scikit-tda/kepler-mapper/issues/178 | [] | raunaak | 2 |
nltk/nltk | nlp | 3,244 | Duplicates in wordnet hypernyms closure | The relation closure for wordnet synsets is supposed to prevent duplicates in the output. However, the duplicates check fails to detect some repetitions, which occur when there are multiple paths to a given synset. In the following output for ex., the branch going from 'taxonomic_group.n.01' to 'entity.n.01' appears twice, because it is reachable by two different paths:
```
from nltk.corpus import wordnet as wn
ss=wn.synset("calamagrostis.n.01")
print(list(ss.closure(lambda s: s.hypernyms())))
```
> [Synset('gramineae.n.01'), Synset('monocot_genus.n.01'), Synset('monocot_family.n.01'), Synset('genus.n.02'), Synset('family.n.06'), Synset('taxonomic_group.n.01'), Synset('taxonomic_group.n.01'), Synset('biological_group.n.01'), Synset('biological_group.n.01'), Synset('group.n.01'), Synset('group.n.01'), Synset('abstraction.n.06'), Synset('abstraction.n.06'), Synset('entity.n.01'), Synset('entity.n.01')]
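A minimal breadth-first sketch of how a `seen` set would prevent such duplicates on a diamond-shaped toy hierarchy (the names are illustrative, not real WordNet synsets):

```python
from collections import deque

# Diamond-shaped toy hierarchy: 'genus' and 'family' both lead to 'taxon'.
hypernyms = {
    "calamagrostis": ["gramineae"],
    "gramineae": ["genus", "family"],
    "genus": ["taxon"],
    "family": ["taxon"],
    "taxon": ["entity"],
    "entity": [],
}

def closure(start):
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for parent in hypernyms[node]:
            if parent not in seen:  # the check that prevents revisiting via a second path
                seen.add(parent)
                queue.append(parent)
                yield parent

result = list(closure("calamagrostis"))
print(result)  # ['gramineae', 'genus', 'family', 'taxon', 'entity'] -- each node once
```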
The following produces an SVG image illustrating this graph:
```
from nltk.parse.dependencygraph import dot2img
print(dot2img(wn.digraph([ss])))
```

| closed | 2024-03-25T08:12:49Z | 2024-08-18T02:01:29Z | https://github.com/nltk/nltk/issues/3244 | [] | ekaf | 0 |
polakowo/vectorbt | data-visualization | 473 | Is this correctly setup for long/short trading? | Hello there,
Can someone check whether, from a vectorbt perspective, this is correctly set up to backtest a long/short strategy? The performance seems too good to be true, that's why :).
To generate entries and exits data:
```
long_entries = pd.DataFrame(np.select(condlist = [(signals.shift() == 0) & (signals == 1), (signals.shift() == -1) & (signals == 1) ], choicelist = [True, True], default = False ), index = crypto_signals_4hours.index, columns = ['close'])
long_exits = pd.DataFrame(np.select(condlist = [(signals.shift() == 1) & (signals == 0)], choicelist = [True], default = False ), index = crypto_signals_4hours.index, columns = ['close'])
short_entries = pd.DataFrame(np.select(condlist = [(signals.shift() == 0) & (signals == -1), (signals.shift() == 1) & (signals == -1) ], choicelist = [True, True], default = False ), index = crypto_signals_4hours.index, columns = ['close'])
short_exits = pd.DataFrame(np.select(condlist = [(signals.shift() == -1) & (signals == 0)], choicelist = [True], default = False ), index = crypto_signals_4hours.index, columns = ['close'])
crypto_prices_4hours.vbt.drop_levels(-1, inplace=True)
pf = vbt.Portfolio.from_signals(crypto_prices_4hours[ticker],
entries = long_entries['close'],
exits = long_exits['close'],
short_entries = short_entries['close'],
short_exits = short_exits['close'],
init_cash=1000,
fees = 0.001,
slippage=0.001,
call_seq='auto')
```
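As a sanity check of the transition logic above (not a check of vectorbt itself), note that the long-entry condition `0→1 or -1→1` is equivalent to "now 1, previously not 1", and similarly for the other signals. A pure-Python sketch on a toy signal series:

```python
# Toy signal series: 1 = long, -1 = short, 0 = flat (hypothetical values).
signals = [0, 1, 0, -1, 0, 1, 1]
prev = [None] + signals[:-1]  # shifted series; None marks "no previous bar"

long_entries  = [p is not None and c == 1 and p != 1 for p, c in zip(prev, signals)]
long_exits    = [p == 1 and c == 0 for p, c in zip(prev, signals)]
short_entries = [p is not None and c == -1 and p != -1 for p, c in zip(prev, signals)]
short_exits   = [p == -1 and c == 0 for p, c in zip(prev, signals)]

print(long_entries)  # [False, True, False, False, False, True, False]
```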
<img width="773" alt="image" src="https://user-images.githubusercontent.com/103491704/179400709-b9cd149f-5e0f-44c7-bdf2-6b5baea9489d.png">
<img width="719" alt="image" src="https://user-images.githubusercontent.com/103491704/179400718-44721cd8-6066-4777-b025-469cfa1f9ec2.png">
| open | 2022-07-17T13:30:49Z | 2022-07-17T13:30:49Z | https://github.com/polakowo/vectorbt/issues/473 | [] | AlexanderMoonBit | 0 |
521xueweihan/HelloGitHub | python | 2,888 | Project recommendation | DeepSeek-R1: an open-source AI large model whose performance matches the official OpenAI o1 | ## Recommended Project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content immediately -->
<!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/deepseek-ai/DeepSeek-R1
<!-- Please choose from: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: Machine Learning
<!-- Please describe what it does in about 20 characters, like an article title, so it is clear at a glance -->
- Project title: An open-source AI large model whose performance matches the official OpenAI o1
<!-- What is this project, what can it be used for, what features does it have or what pain points does it solve, what scenarios does it suit, and what can beginners learn from it. Length 32-256 characters -->
- Project description: A large AI model whose performance matches OpenAI o1; its training techniques are fully open-sourced; it handles Chinese very well and has excellent reasoning ability
<!-- What makes it stand out? What distinguishes it from similar projects? -->
- Highlights: Performance on par with OpenAI o1, open-sourced training techniques, excellent Chinese handling, strong reasoning ability
- Screenshot:

- Future update plan: Unknown | open | 2025-01-25T14:39:35Z | 2025-01-25T14:40:49Z | https://github.com/521xueweihan/HelloGitHub/issues/2888 | [] | SekiBetu | 0 |
davidsandberg/facenet | tensorflow | 473 | Trained CASIA to 98.92/93.3 without using center loss | When I applied center loss, the final cross entropy is low (~0.5), but the accuracy/TAR combination is not as good as just training with softmax. When training with softmax only, the final cross entropy is higher (0.8). This means that introducing center loss to inception_resnet_v1 with the CASIA dataset actually causes some overfitting.
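For reference, the standard center-loss term from Wen et al. (2016) is L_c = ½ Σᵢ ‖xᵢ − c_{yᵢ}‖², i.e. it penalizes each embedding's squared distance to its class center. A minimal pure-Python sketch with toy 2-D embeddings (this is the textbook formulation, not necessarily this repo's exact implementation):

```python
# features: per-sample embeddings; labels: class ids; centers: per-class centers.
def center_loss(features, labels, centers):
    total = 0.0
    for x, y in zip(features, labels):
        total += sum((xi - ci) ** 2 for xi, ci in zip(x, centers[y]))
    return 0.5 * total

features = [[1.0, 0.0], [0.0, 1.0]]
labels = [0, 1]
centers = {0: [0.0, 0.0], 1: [0.0, 0.0]}
print(center_loss(features, labels, centers))  # 1.0
```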
| open | 2017-09-28T13:42:55Z | 2017-09-28T13:42:55Z | https://github.com/davidsandberg/facenet/issues/473 | [] | JianbangZ | 0 |
autogluon/autogluon | scikit-learn | 4,776 | [BUG] Chronos & Chronos-Bolt unable to hyperparameter tune, and fails to output best found parameters when it succeeds. | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When attempting to tune hyperparameters, none of the hypertuning options work.
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
When running:
```
predictor.fit(
train_data,
hyperparameters={
"ChronosModel": {
"model_path": "bolt_base",
"context_length": 2048,
},
},
enable_ensemble=False,
)
```
Training occurs correctly. However, when running:
```
predictor.fit(
train_data,
hyperparameters={
"ChronosModel": {
"model_path": "bolt_base",
"context_length": space.Int(64, 4096),
},
},
hyperparameter_tune_kwargs="auto",
enable_ensemble=False,
)
```
It fails with:
```
Traceback (most recent call last):
File "/mnt/bc457ffc-58c4-4dfe-a922-2b44ae3fa37e/gluon/train.py", line 212, in <module>
predictor.fit(
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/utils/decorators.py", line 31, in _call
return f(*gargs, **gkwargs)
^^^^^^^^^^^^^^^^^^^^
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/timeseries/predictor.py", line 753, in fit
self._learner.fit(
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/timeseries/learner.py", line 66, in fit
return self._fit(
^^^^^^^^^^
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/timeseries/learner.py", line 126, in _fit
self.trainer.fit(
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/timeseries/trainer/auto_trainer.py", line 67, in fit
self._train_multi(
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/timeseries/trainer/abstract_trainer.py", line 593, in _train_multi
models = self.construct_model_templates(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/timeseries/trainer/auto_trainer.py", line 17, in construct_model_templates
return get_preset_models(
^^^^^^^^^^^^^^^^^^
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/timeseries/models/presets.py", line 273, in get_preset_models
model = model_type(**model_type_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ghostdog/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/timeseries/models/chronos/model.py", line 219, in __init__
if self.context_length is not None and self.context_length > self.maximum_context_length:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>' not supported between instances of 'Int' and 'int'
```
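The failure mode can be reproduced without AutoGluon: comparing an object that defines no ordering to an `int` with `>` raises exactly this `TypeError`. A hypothetical stand-in class (the real `autogluon.common.space.Int` differs; this only illustrates why the comparison in `model.py` fails when an unresolved search space reaches it):

```python
class Int:  # hypothetical stand-in for autogluon.common.space.Int
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
    # no __gt__/__lt__ defined, like a search-space object

context_length = Int(64, 4096)
try:
    context_length > 2048  # what ChronosModel.__init__ effectively does
    raised = False
except TypeError:
    raised = True
print(raised)  # True
```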
Similar errors happen when attempting to do the following:
```
"covariate_regressor": space.Categorical("LR", "GBM", "CAT", "XGB", "RF")
"target_scaler": space.Categorical("standard", "mean_abs", "robust", "min_max"),
"context_length": space.Int(64, 4096),
"covariate_scaler": space.Categorical("global", None),
```
But "succeeds" with:
```
"model_path": space.Categorical("bolt_base", "bolt_small", "bolt_mini")
"fine_tune_lr": space.Real(0.0001, 0.1, log=True),
"fine_tune_steps": space.Int(100, 10000),
"fine_tune": space.Bool(),
```
In all fine-tuning runs, it seems not to rename the output models, nor does it output the best hyperparameter set found. For example (with `fine_tune_lr` as the search space):
```
Starting training. Start time is 2025-01-08 13:09:07
Models that will be trained: ['Chronos[bolt_base]']
Hyperparameter tuning model Chronos[bolt_base].
Trained 10 models while tuning Chronos[bolt_base].
-0.3823 = Validation score (-MASE)
15.26 s = Total tuning time
Training complete. Models trained: ['Chronos[bolt_base]/T1', 'Chronos[bolt_base]/T2', 'Chronos[bolt_base]/T3', 'Chronos[bolt_base]/T4', 'Chronos[bolt_base]/T5', 'Chronos[bolt_base]/T6', 'Chronos[bolt_base]/T7', 'Chronos[bolt_base]/T8', 'Chronos[bolt_base]/T9', 'Chronos[bolt_base]/T10']
Total runtime: 15.27 s
Best model: Chronos[bolt_base]/T1
Best model score: -0.3823
model score_val pred_time_val fit_time_marginal fit_order
0 Chronos[bolt_base]/T9 -0.382312 1.297168 1.311466 9
1 Chronos[bolt_base]/T8 -0.382312 1.381638 1.395966 8
2 Chronos[bolt_base]/T7 -0.382312 1.366036 1.380749 7
3 Chronos[bolt_base]/T6 -0.382312 1.423013 1.437684 6
4 Chronos[bolt_base]/T5 -0.382312 1.302736 1.317139 5
5 Chronos[bolt_base]/T4 -0.382312 1.295378 1.309779 4
6 Chronos[bolt_base]/T3 -0.382312 1.326350 1.341095 3
7 Chronos[bolt_base]/T2 -0.382312 1.326714 1.341974 2
8 Chronos[bolt_base]/T10 -0.382312 1.318851 1.333019 10
9 Chronos[bolt_base]/T1 -0.382312 2.006882 3.036676 1
Trained the following models:['Chronos[bolt_base]/T1', 'Chronos[bolt_base]/T2', 'Chronos[bolt_base]/T3', 'Chronos[bolt_base]/T4', 'Chronos[bolt_base]/T5', 'Chronos[bolt_base]/T6', 'Chronos[bolt_base]/T7', 'Chronos[bolt_base]/T8', 'Chronos[bolt_base]/T9', 'Chronos[bolt_base]/T10']
Model not specified in predict, will default to the model with the best validation score: Chronos[bolt_base]/T1
```
**Installed Versions**
<details>
```python
INSTALLED VERSIONS
------------------
date : 2025-01-08
time : 13:05:26.112337
python : 3.11.11.final.0
OS : Linux
OS-release : 6.6.69-1-lts
Version : #1 SMP PREEMPT_DYNAMIC Thu, 02 Jan 2025 22:00:49 +0000
machine : x86_64
processor :
num_cores : 12
cpu_ram_mb : 48097.5
cuda version : 12.565.77
num_gpus : 1
gpu_ram_mb : [6598]
avail_disk_size_mb : 311484
accelerate : 1.2.1
autogluon : 1.2
autogluon.common : 1.2
autogluon.core : 1.2
autogluon.features : 1.2
autogluon.multimodal : 1.2
autogluon.tabular : 1.2
autogluon.timeseries : 1.2
boto3 : 1.35.90
catboost : 1.2.7
coreforecast : 0.0.12
defusedxml : 0.7.1
einops : None
evaluate : 0.4.1
fastai : 2.7.18
fugue : 0.9.1
gluonts : 0.16.0
huggingface_hub : 0.26.5
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.5
joblib : 1.4.2
jsonschema : 4.23.0
lightgbm : 4.5.0
lightning : 2.5.0.post0
matplotlib : 3.10.0
mlforecast : 0.13.4
networkx : 3.4.2
nlpaug : 1.1.11
nltk : 3.9.1
numpy : 1.26.4
nvidia-ml-py3 : None
omegaconf : 2.3.0
onnx : None
onnxruntime : None
onnxruntime-gpu : None
openmim : 0.3.7
optimum : None
optimum-intel : None
orjson : 3.10.13
pandas : 2.2.3
pdf2image : 1.17.0
Pillow : 11.0.0
psutil : 6.1.1
pyarrow : 16.1.0
pytesseract : 0.3.10
pytorch-metric-learning: 2.3.0
pytorch_lightning : 2.5.0.post0
ray : 2.31.0
requests : 2.32.3
scikit-image : 0.24.0
scikit-learn : 1.5.2
scikit-learn-intelex : None
scipy : 1.14.1
seqeval : 1.2.2
skl2onnx : None
spacy : 3.8.2
statsforecast : 1.7.8
tabpfn : None
tensorboard : 2.18.0
text-unidecode : 1.3
timm : 1.0.3
torch : 2.4.1.post300
torchmetrics : 1.2.1
torchvision : 0.19.1
tqdm : 4.67.1
transformers : 4.47.1
utilsforecast : 0.2.3
vowpalwabbit : None
xgboost : 2.1.3
```
</details>
| open | 2025-01-08T02:13:40Z | 2025-02-14T00:32:24Z | https://github.com/autogluon/autogluon/issues/4776 | [
"bug",
"module: timeseries"
] | GhostDog98 | 4 |
vitalik/django-ninja | django | 859 | Generate client with "long" datatype? | Hello! Maybe this is a Pydantic specific question - but wanted to start here.
If I have a schema with `int` - like for `id` - how do I generate clients for a language like Java where I want to use `long` instead? Python's `int` is auto-promoting, but for Java clients I'd need to use `long`...
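For context, in the OpenAPI data-type table, Java's `long` corresponds to an integer with `format: int64`; whatever mechanism is used to attach it (for example pydantic's `json_schema_extra`, if your version supports it), the goal is for the generated schema to carry that format. A minimal sketch of the fragment a client generator keys off:

```python
# Hypothetical schema fragment: client generators such as openapi-generator map
# {"type": "integer", "format": "int64"} to Java's long.
schema = {
    "type": "object",
    "properties": {
        "id": {"type": "integer", "format": "int64"},
    },
    "required": ["id"],
}
print(schema["properties"]["id"]["format"])  # int64
```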
Thanks for any help! | closed | 2023-09-17T21:20:50Z | 2023-09-18T04:40:18Z | https://github.com/vitalik/django-ninja/issues/859 | [] | winrid | 4 |
seleniumbase/SeleniumBase | web-scraping | 2,203 | "Customize Chrome to give your browser a new look" pop-up is now appearing | ## "Customize Chrome to give your browser a new look" pop-up is now appearing
It must be raining pop-ups today, because earlier I encountered https://github.com/seleniumbase/SeleniumBase/issues/2201, and now I'm encountering this:
<img width="500" alt="Screenshot 2023-10-20 at 2 29 05 PM" src="https://github.com/seleniumbase/SeleniumBase/assets/6788579/f356b2b5-79c6-4030-8335-f85862563f30">
Thanks to the info in https://github.com/GoogleChrome/chrome-launcher/blob/main/docs/chrome-flags-for-tools.md, I can remove this pop-up by using: `--ash-no-nudges` in Chromium options.
It appears that the latest Chromium release has added multiple pop-ups that haven't been seen before.
| closed | 2023-10-20T18:57:49Z | 2025-02-24T13:26:06Z | https://github.com/seleniumbase/SeleniumBase/issues/2203 | [
"bug"
] | mdmintz | 2 |
JaidedAI/EasyOCR | deep-learning | 675 | Difference between _reader.readtext vs _reader.recognize | Can someone explain the difference between these two methods of EasyOCR?
`readtext` gives better results than `recognize` for some images. But why? :)
Is there a rule of thumb for when to use `readtext` and when to use `recognize`? | closed | 2022-03-02T08:30:40Z | 2022-03-07T06:50:29Z | https://github.com/JaidedAI/EasyOCR/issues/675 | [] | christian-becker-ta | 1 |
pydantic/pydantic | pydantic | 10,920 | 2.10: AnyHttpUrl is no longer subclass of pydantic_core.Url (according to mypy) | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
With pydantic<2.10.0, mypy says `AnyHttpUrl` is a subclass of `Url`. With pydantic 2.10.0, it no longer is.
Is that intended?
### Example Code
```Python
from pydantic import AnyHttpUrl
from pydantic_core import Url
foo = AnyHttpUrl("https://www.google.com/")
print(isinstance(foo, Url))
# True in 2.9
# False in 2.10
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.0
pydantic-core version: 2.27.0
pydantic-core build: profile=release pgo=false
install path: /home/userx/proj/internal/task_runner/.virtualenv/lib/python3.10/site-packages/pydantic
python version: 3.10.8 (main, Oct 10 2023, 11:46:07) [GCC 9.4.0]
platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
related packages: mypy-1.6.1 typing_extensions-4.12.2
commit: unknown
```
| closed | 2024-11-21T10:30:04Z | 2024-11-21T11:23:09Z | https://github.com/pydantic/pydantic/issues/10920 | [
"question"
] | mjuhlin1 | 1 |
thtrieu/darkflow | tensorflow | 1,032 | Referring:- Training on your own dataset | I am following the "Training on your own dataset" guide and getting the following error.
Windows 8.1, Python 3.7
[train.zip](https://github.com/thtrieu/darkflow/files/3124722/train.zip)
##############################################
C:\work\darkflow-master>pip3 list
##############################################
Package Version
-------------------- --------
absl-py 0.7.1
asn1crypto 0.24.0
astor 0.7.1
boost 0.1
certifi 2019.3.9
cffi 1.12.3
chardet 3.0.4
cmake 3.13.3
cryptography 2.6.1
Cython 0.29.7
darkflow 1.0.0
decorator 4.4.0
gast 0.2.2
grpcio 1.20.0
h5py 2.9.0
http-ece 1.1.0
idna 2.8
imutils 0.5.2
Keras-Applications 1.0.7
Keras-Preprocessing 1.0.9
Markdown 3.1
Mastodon.py 1.3.1
mock 2.0.0
numpy 1.16.2
opencv-python 3.4.2.16
pbr 5.1.3
Pillow 6.0.0
pip 19.1
protobuf 3.7.1
pycparser 2.19
python-dateutil 2.8.0
pytz 2019.1
requests 2.21.0
scipy 1.2.1
setuptools 41.0.1
six 1.12.0
SQLAlchemy 1.3.3
tensorboard 1.13.1
tensorflow 1.13.1
tensorflow-estimator 1.13.0
termcolor 1.1.0
urllib3 1.24.2
Werkzeug 0.15.2
wheel 0.33.1
C:\work\darkflow-master>python flow --model cfg/tiny-yolo-voc-1c.cfg --load bi
n/yolov2-tiny-voc.weights --train --annotation new_model_data/nnotations --datas
et new_model/data/images --epoch 300
Traceback (most recent call last):
File "C:\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow
.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow
_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow
_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, descript
ion)
File "C:\Python\Python37\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Python\Python37\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialisation routin
e failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "flow", line 4, in <module>
from darkflow.cli import cliHandler
File "C:\work\darkflow-master\darkflow\cli.py", line 3, in <module>
from .net.build import TFNet
File "C:\work\darkflow-master\darkflow\net\build.py", line 1, in <module>
import tensorflow as tf
File "C:\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 24, i
n <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-im
port
File "C:\Python\Python37\lib\site-packages\tensorflow\python\__init__.py", lin
e 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow
.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow
.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow
_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow
_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, descript
ion)
File "C:\Python\Python37\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Python\Python37\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialisation routin
e failed.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
C:\work\darkflow-master>
################################################## | open | 2019-04-28T07:54:20Z | 2019-05-27T09:32:13Z | https://github.com/thtrieu/darkflow/issues/1032 | [] | pardeep3december | 1 |
nvbn/thefuck | python | 1,146 | [SUGGESTION] Write to stdin instead of using the confirmation prompt | It would be nice to have an option for thefuck to write to stdin instead of using the confirmation prompt; this way the command will appear on a new line, and we can edit it if necessary or hit enter to run it.
It would look like this:
```sh
$ git stats
git: 'stats' is not a git command. See 'git --help'.
The most similar command is
status
$ fuck
$ git status
```
Instead of this:
```sh
$ git stats
git: 'stats' is not a git command. See 'git --help'.
The most similar command is
status
$ fuck
git status [enter/↑/↓/ctrl+c]
```
The keystrokes are exactly the same (`git stats` + `ENTER` + `fuck` + `ENTER ` + `ENTER `)
But this is more convenient if the corrected command needs further fixing. | open | 2020-11-20T16:07:16Z | 2020-11-20T21:01:11Z | https://github.com/nvbn/thefuck/issues/1146 | [] | pilattebe | 9 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 519 | [FEATURE]: Simulation mode | ### Feature summary
Run the bot in "simulation" mode, i.e. doing all the steps except applying for real
### Feature description
In simulation mode, the bot would build the resume & cover letter, and save them to a directory along with the job posting URL. That would allow for further fine-tuning / debug before using it "for real".
### Motivation
Debugging & improving code ; testing in languages other than English
### Alternatives considered
_No response_
### Additional context
I would gladly help & start this work if someone gives me a starting point. Thanks! | closed | 2024-10-12T14:28:32Z | 2024-10-22T23:59:39Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/519 | [
"enhancement"
] | RichP-fr | 4 |
aio-libs/aiomysql | asyncio | 809 | setuptools_scm version 7 generates an incorrect version number for wheels | When switching to `setuptools_scm` version 7, we can also drop the dependency on `setuptools_scm_git_archive`, as the functionality is now provided in `setuptools_scm` directly.
For the archive functionality it also uses a different `.git_archival.txt` template than what we currently have, so we should probably update that: https://github.com/pypa/setuptools_scm#builtin-mechanisms-for-obtaining-version-numbers | open | 2022-06-29T17:40:53Z | 2022-07-11T00:26:38Z | https://github.com/aio-libs/aiomysql/issues/809 | [
"dependencies"
] | Nothing4You | 1 |
Kludex/mangum | fastapi | 26 | Add support for OpenFaas | Add support for [openfaas](https://github.com/openfaas/faas) so that we can use this anywhere we can run Kubernetes. | closed | 2019-01-28T15:34:50Z | 2019-01-30T18:16:22Z | https://github.com/Kludex/mangum/issues/26 | [
"feature",
"maybe"
] | unixorn | 2 |
fastapi/sqlmodel | pydantic | 500 | Column with `list[Enum]` dtype silently removes value if not a valid enum string | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
import enum
from sqlmodel import SQLModel, Field
class SomeEnum(enum.Enum):
VALUE_1 = "value_1"
VALUE_2 = "value_2"
class ModelWithEnumList(SQLModel, table=True):
key: int = Field(primary_key=True)
value: list[SomeEnum]
if __name__ == "__main__":
# This Works Well:
instance_1 = ModelWithEnumList(key=1, value=["value_1", "value_2"])
assert len(instance_1.value) == 2
assert instance_1.value == [SomeEnum.VALUE_1, SomeEnum.VALUE_2]
# This just silently drops the value...
instance_2 = ModelWithEnumList(key=2, value=["value_1", "value_2", "value_3"])
assert len(instance_2.value) == 3, "Something weird happens here, the field just gets dropped silently"
```
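For contrast, the error one would expect comes straight from the stdlib: looking up an invalid member raises `ValueError`, which pydantic normally wraps in a `ValidationError`. A minimal stdlib sketch:

```python
import enum

class SomeEnum(enum.Enum):
    VALUE_1 = "value_1"
    VALUE_2 = "value_2"

# Looking up an unknown value raises rather than silently dropping anything.
try:
    SomeEnum("value_3")
    raised = False
except ValueError:
    raised = True
print(raised)  # True
```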
### Description
`SQLModel` with `table=True` silently drops the field for a `list[Enum]` containing an invalid enum value. The correct error is raised without `table=True` and when using pydantic's `BaseModel`, so I assume this is purely a SQLModel bug.
### Operating System
Linux
### Operating System Details
Docker, using python3.10-slim image.
### SQLModel Version
0.0.8
### Python Version
3.10
### Additional Context
_No response_ | closed | 2022-11-14T17:16:07Z | 2023-04-17T16:44:43Z | https://github.com/fastapi/sqlmodel/issues/500 | [
"question",
"investigate"
] | Vlados09 | 1 |
comfyanonymous/ComfyUI | pytorch | 6,550 | Error | ### Your question
Molmo7BDbnb
all() received an invalid combination of arguments - got (Tensor, dim=tuple, keepdim=bool), but expected one of:
* (Tensor input, *, Tensor out)
didn't match because some of the keywords were incorrect: dim, keepdim
* (Tensor input, int dim, bool keepdim, *, Tensor out)
* (Tensor input, name dim, bool keepdim, *, Tensor out)
### Logs
```powershell
```
### Other
_No response_ | open | 2025-01-21T07:58:01Z | 2025-01-21T07:58:01Z | https://github.com/comfyanonymous/ComfyUI/issues/6550 | [
"User Support"
] | Oxygen-925 | 0 |
home-assistant/core | python | 140,966 | HomeKit devices no longer responsive after 2025.2.x | ### The problem
After upgrading from HA core from 2025.1.4 to 2025.2.x, or even 2025.3.x, all of our HomeKit devices show "no response".
We've tried installing various patches of 2025.2.x & 2025.3.x, reloading the HomeKit Bridge integration, restarting HA, rebooting the VM running HA, but nothing has helped. The only thing that's worked is reverting back to 2025.1.4; however, recently even that workaround has stopped working (I think this was after we tried upgrading to HA core 2025.3.x).
There's a very similar issue here #138781, but I don't know/think our issue has to do with IPv6 since I didn't see similar log messages in our logs (although it's very possible I just missed them).
NOTE: I generated our logs by enabling logging, opening Apple Home and trying to turn on one of our light bulbs, then disabling logging again. Let me know if there's something I should do differently.
### What version of Home Assistant Core has the issue?
2025.2.x & 2025.3.x
### What was the last working version of Home Assistant Core?
2025.1.4
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
HomeKit Bridge
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/homekit/
### Diagnostics information
[home-assistant_homekit_2025-03-20T01-07-46.169Z.log](https://github.com/user-attachments/files/19356961/home-assistant_homekit_2025-03-20T01-07-46.169Z.log)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-20T01:37:31Z | 2025-03-20T01:45:18Z | https://github.com/home-assistant/core/issues/140966 | [
"integration: homekit",
"missing-diagnostics-and-logs"
] | SpencerMcO | 1 |
OFA-Sys/Chinese-CLIP | computer-vision | 218 | Is anyone using this with the immich photo library? How can this model be integrated into immich? | I am currently using the default ViT-B-32::openai model, which has very poor Chinese support. How can I integrate this model into immich?
https://github.com/immich-app/immich | open | 2023-10-15T08:01:09Z | 2023-10-15T08:01:09Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/218 | [] | nodis | 0 |
pytorch/vision | machine-learning | 8,497 | Improve empty import time of torchvision | ### 🚀 The feature
When importing torchvision, a number of libraries are imported by default for more niche functionality of the library. To improve import time, I would favor delaying those imports until they are needed.
### Motivation, pitch
In my case, it is the av library in particular that contributes to the import time:
<img width="2087" alt="image" src="https://github.com/pytorch/vision/assets/2241296/2af05ab0-f97c-44bd-b7f2-fd5111f747d7">
(this assumes that torch, dynamo and onnx are already imported).
The import of `av` can easily be avoided as it is not needed by default.
### Alternatives
_No response_
### Additional context
I checked the code and I found this code here:
```
try:
    import av

    av.logging.set_level(av.logging.ERROR)
    if not hasattr(av.video.frame.VideoFrame, "pict_type"):
        av = ImportError(
            """\
Your version of PyAV is too old for the necessary video operations in torchvision.
If you are on Python 3.5, you will have to build from source (the conda-forge
packages are not up-to-date). See
https://github.com/mikeboers/PyAV#installation for instructions on how to
install PyAV on your system.
"""
        )
except ImportError:
    av = ImportError(
        """\
PyAV is not installed, and is necessary for the video operations in torchvision.
See https://github.com/mikeboers/PyAV#installation for instructions on how to
install PyAV on your system.
"""
    )
```
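A hedged, stdlib-only sketch of checking PyAV's installed version without importing it (the `installed_version` helper name is mine, not torchvision's):

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(dist_name):
    """Return the installed version string of a distribution, or None."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None


# The check never imports the package itself, so import time is unaffected.
print(installed_version("av"))  # a version string, or None if PyAV is absent
```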
The `pict_type` attribute got added somewhere in the 0.5 range (released around 2020), and 6.0 followed shortly after. So I would suggest changing this test to not import `av` but to use `importlib` to check the version, which would make this go away. This applies both to `torchvision/io/video_reader.py` and `torchvision/io/video.py`. I also wonder whether the logging call is still required, given how much has changed since this code was written. | open | 2024-06-18T09:24:43Z | 2024-07-29T12:02:13Z | https://github.com/pytorch/vision/issues/8497 | [] | bschindler | 3 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 743 | how to use multiprocessing in flask-sqlalchemy? | I have two connections in my project.
SQLALCHEMY_BINDS = {"a_bind": mysql1, "b_bind": mysql2}
```python
from multiprocessing import Pool

class Table1:
    __bind_key__ = "a_bind"
    id = ...

class Table2:
    __bind_key__ = "b_bind"
    id = ...

def fun(single_element):
    result = Table1.query(id=single_element).all()
    new_data = [Table2(id=i.name) for i in result]
    db.session.add_all(new_data)
    db.session.commit()
    db.session.remove()

pool = Pool(10)
pool.apply_async(fun, element_list)
pool.close()
pool.join()
```
The above code does not work. Why? Are there any extra settings needed?
The problem can be solved with plain [SQLAlchemy](https://www.sqlalchemy.org/), but the same approach does not work in Flask-SQLAlchemy.
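Not an answer from this thread, but the usual pattern with `multiprocessing` and databases is to give each worker its own connection state via a `Pool` initializer, since connections generally do not survive a fork. A generic, stdlib-only sketch (`init_worker` and the doubling `fun` are stand-ins, not Flask-SQLAlchemy API):

```python
from multiprocessing import Pool


def init_worker():
    # Stand-in: create per-process resources here (e.g. a fresh database
    # engine/session); forked children must not reuse the parent's sockets.
    pass


def fun(single_element):
    # Stand-in for the real per-row database work.
    return single_element * 2


if __name__ == "__main__":
    with Pool(4, initializer=init_worker) as pool:
        print(pool.map(fun, [1, 2, 3]))  # [2, 4, 6]
```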
thanks! | closed | 2019-05-27T16:18:21Z | 2020-12-05T20:21:50Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/743 | [] | zlg358 | 1 |
matplotlib/mplfinance | matplotlib | 205 | Bar colors based on indicator value | Is there a way to color bars based on TA indicator values (obtained using TA-lib or similar) or based on aggregate values of multiple indicators, e.g. 3 different ohlc bar colors representing “long”, “short” and “no trade”? Thank you for any insight.
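Not from the thread: whether mplfinance itself supports per-bar colors depends on the version, but the signal-to-color mapping part can be sketched generically (plain Python, not mplfinance API):

```python
def bar_colors(signals, palette=None):
    """Map per-bar signal labels ('long'/'short'/'no trade') to colors."""
    palette = palette or {"long": "green", "short": "red", "no trade": "gray"}
    # Unknown labels fall back to gray rather than raising.
    return [palette.get(s, "gray") for s in signals]


print(bar_colors(["long", "short", "no trade", "long"]))
# ['green', 'red', 'gray', 'green']
```

The resulting color list could then be handed to whatever per-bar coloring hook the plotting library offers, if one exists in your version.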
| closed | 2020-07-03T02:44:10Z | 2022-09-11T01:50:09Z | https://github.com/matplotlib/mplfinance/issues/205 | [
"enhancement",
"question",
"released"
] | hrpomrx | 8 |
vvbbnn00/WARP-Clash-API | flask | 183 | [Bug] Unable to access IPv6 resources after connecting | It shows that I am connected to an IPv6 endpoint, but after the connection succeeds I cannot access IPv6 resources, only IPv4.
However, connecting directly with the WARP client does allow access to IPv6 resources. | open | 2024-04-28T15:11:14Z | 2024-05-13T18:02:32Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/183 | [
"bug"
] | Jerrychenshen | 0 |
pyppeteer/pyppeteer | automation | 257 | Screenshot fails from some WEB sites | ```
Traceback (most recent call last):
  File "./src/process_url.py", line 260, in load_page
    self.screenshot = await self._take_screenshot(page)
  File "./src/process_url.py", line 241, in _take_screenshot
    await page.screenshot({"path": filename, "fullPage": True})
  File "/usr/local/lib/python3.8/dist-packages/pyppeteer/page.py", line 1253, in screenshot
    return await self._screenshotTask(screenshotType, options)
  File "/usr/local/lib/python3.8/dist-packages/pyppeteer/page.py", line 1300, in _screenshotTask
    result = await self._client.send('Page.captureScreenshot', opt)
pyppeteer.errors.NetworkError: Protocol error (Page.captureScreenshot): Cannot take screenshot with 0 width.
```
The link is unfortunately a phishing URL. I have modified the protocol type just in case:
`hxxp://www.49_13_49.neuss.cl/#aHR0cHM6Ly9ncnlmZmVzcG91bHkuY29tL3RyYXNoL3JxcG1scWtmLmdmdj9naGgwN18xM18wNzQ5MDcwN18wN18xMz1waWFubmlAYmFjYXJkaS5jb20=
` | closed | 2021-05-12T10:02:15Z | 2021-05-12T19:11:52Z | https://github.com/pyppeteer/pyppeteer/issues/257 | [] | larytet | 1 |
joke2k/django-environ | django | 146 | Problem Sending Mail using ENV | I am using Django Edge Project that has django-environ installed,
I need to send email using TLS.
My email string is:
`EMAIL_URL=smtp+tls://myusername@gmail.com:mypassword@smtp.gmail.com:587`
and in settings:
`EMAIL_CONFIG = env.email_url('EMAIL_URL', default='smtp+tls://username@gmaillcom@:password@smtp.gmail.com:587')`
`vars().update(EMAIL_CONFIG)`
But when I send email,
it is sent from webmaster@localhost; it is not picking up the SMTP configuration.
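An aside, not from the original report: a stdlib-only look at how that URL decomposes (this is `urllib.parse`, not django-environ's own parser). Note the unescaped `@` inside the username part, which is worth double-checking against how the library expects credentials to be encoded:

```python
from urllib.parse import urlparse

parts = urlparse("smtp+tls://myusername@gmail.com:mypassword@smtp.gmail.com:587")
# The userinfo is split at the *last* '@', so the first '@' stays in the username.
print(parts.username)  # 'myusername@gmail.com'
print(parts.password)  # 'mypassword'
print(parts.hostname)  # 'smtp.gmail.com'
print(parts.port)      # 587
```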
| closed | 2017-09-26T01:53:39Z | 2021-09-04T21:20:59Z | https://github.com/joke2k/django-environ/issues/146 | [
"question"
] | dinesh829269 | 2 |
python-visualization/folium | data-visualization | 1,263 | Maximum number of popups or bug? | #### Please add a code sample or a nbviewer link, copy-pastable if possible
```python
import folium
import folium
number_of_markers = 1610 #Work!
# number_of_markers = 1611 #Will not work! not render
m = folium.Map()
for i in range(number_of_markers):
mark = folium.CircleMarker(location=(31,31),popup='here')
mark.add_to(m)
display(m)
```
#### Problem description
Hi, I have a regular dataset size of ~5000 and I was trying to add popups to each one of the points to provide explanations about the point. I could not do it; I was able to attach the popup but they are not rendered when I want to display the map.
The example above is a simple example.
My folium version is 0.10.0
So my question is whether this is a bug or is a limit of folium? If it is a bug, is there any workaround that you know to have more popups?
Thanks
| closed | 2020-02-13T20:24:48Z | 2020-02-17T11:20:51Z | https://github.com/python-visualization/folium/issues/1263 | [
"duplicate"
] | jmaralcZ | 5 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 75 | Add an "average_per_class" option to AccuracyCalculator | This option would compute accuracy for each sample, take the average for each class, and then report the average of those averages. This is contrast with the current approach which computes accuracy for each sample and then reports the global average. | closed | 2020-04-29T18:50:30Z | 2020-05-03T12:17:35Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/75 | [
"enhancement",
"fixed in dev branch"
] | KevinMusgrave | 1 |
tfranzel/drf-spectacular | rest-api | 908 | POSTPROCESSING_HOOKS and ENUM_NAME_OVERRIDES clashing | **Describe the bug**
I have defined `ENUM_NAME_OVERRIDES`, which works well. However, when I try to add the documentation's post-processing hook to `POSTPROCESSING_HOOKS`, the enums no longer generate as expected (the empty post-processing hook is copy-pasted from the documentation).
**To Reproduce**
```
# settings.py
SPECTACULAR_SETTINGS = {
    "ENUM_NAME_OVERRIDES": ENUM_NAME_OVERRIDES,
    'POSTPROCESSING_HOOKS': [
        'mypath.drf_spectacular_postprocess_hook'
    ],
}

# mypath.py
def drf_spectacular_postprocess_hook(result, generator, request, public):
    # your modifications to the schema in parameter result
    return result
```
Without the post-processing hook, the enums are generated in components. With the post-processing hook, the enums are generated within each path. I've looked through the repo and tried adding `drf_spectacular.hooks.postprocess_schema_enums` as a post-processing hook, but it gives me the error `AssertionError: Incompatible AutoSchema used on View <class '...'>. Is DRF's DEFAULT_SCHEMA_CLASS pointing to "drf_spectacular.openapi.AutoSchema" or any other drf-spectacular compatible AutoSchema?`
**Expected behavior**
I need to be able to add post processing hooks whilst maintaining the ENUM_NAME_OVERRIDES.
FWIW I'm trying to add this post processing hook to solve this problem https://stackoverflow.com/questions/75005444/union-of-non-model-serializers-in-django-rest-framework if you have any suggestions
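A hedged suggestion, worth verifying against the drf-spectacular docs: setting `POSTPROCESSING_HOOKS` replaces the default hook list, which normally includes the built-in enum postprocessor, so a custom list likely needs to re-add it explicitly. A self-contained sketch of the settings shape (`ENUM_NAME_OVERRIDES` here is an empty stand-in for the real mapping):

```python
ENUM_NAME_OVERRIDES = {}  # stand-in for the project's real mapping

SPECTACULAR_SETTINGS = {
    "ENUM_NAME_OVERRIDES": ENUM_NAME_OVERRIDES,
    "POSTPROCESSING_HOOKS": [
        # keep the built-in enum postprocessor alongside the custom hook
        "drf_spectacular.hooks.postprocess_schema_enums",
        "mypath.drf_spectacular_postprocess_hook",
    ],
}

print(SPECTACULAR_SETTINGS["POSTPROCESSING_HOOKS"])
```

If re-adding it triggers the `AssertionError` quoted above, that error text itself points at checking that `REST_FRAMEWORK['DEFAULT_SCHEMA_CLASS']` is set to `drf_spectacular.openapi.AutoSchema`.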
Thanks for a great library! | closed | 2023-01-04T13:00:24Z | 2023-01-04T15:39:29Z | https://github.com/tfranzel/drf-spectacular/issues/908 | [] | joshbenhamou | 4 |
mljar/mljar-supervised | scikit-learn | 125 | Add EDA for input data set | It would be nice to have Exploratory Data Analysis (EDA) similar to what is in https://mljar.com

The EDA can be saved in a separate Markdown file, with a link to it from the main AutoML readme. | closed | 2020-07-18T08:37:07Z | 2020-08-25T14:44:46Z | https://github.com/mljar/mljar-supervised/issues/125 | [
"enhancement",
"help wanted",
"good first issue"
] | pplonski | 3 |
yzhao062/pyod | data-science | 386 | MO_GAAL 'stop_epochs' parameter not working? | Hi
I have been running MO_GAAL for unsupervised anomaly detection. Even though I have set the `stop_epochs` parameter to 20, I am getting 60 epochs run regardless.
Parameter setup and class instantiation:

While running:

as you can see in the above images.
Please let me know if I am missing anything inadvertently. Thanks in advance.
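For what it's worth (hedged, worth verifying against the pyod docs for your version): `stop_epochs` is documented as one phase of GAAL training, with the total epoch count being three times `stop_epochs`, which would explain 20 turning into 60:

```python
stop_epochs = 20
# Per the pyod documentation for SO_GAAL/MO_GAAL, the total number of
# training epochs equals stop_epochs * 3.
total_epochs = 3 * stop_epochs
print(total_epochs)  # 60
```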
| closed | 2022-04-09T09:58:59Z | 2022-04-20T02:32:02Z | https://github.com/yzhao062/pyod/issues/386 | [] | anirbanmukherjee2709 | 1 |
pytorch/vision | machine-learning | 8,631 | Simplify transfer learning by modifying get_model() | ### 🚀 The feature
Currently `torchvision.models.get_model()` doesn't allow you to build a model architecture with a different number of classes and keep existing pre-trained weights backbone for certain types (namely Image Classification models like EfficientNet).
Could something like this be incorporated into the get_model() method, or could another method be created to accommodate?
```python
model = torchvision.models.get_model(self.model_type, weights=self.weights_backbone)
# fix the in/out features of the final layer of the classifier to match num_classes.
# We have to do this after get_model() so we can retain the pre-trained weights, but
# modify the model architecture for our use case.
classifier_layer = model.classifier
last_layer_index = len(classifier_layer) - 1
original_linear_layer = classifier_layer[last_layer_index]
new_linear_layer = torch.nn.Linear(in_features=original_linear_layer.in_features, out_features=self.num_classes)
classifier_layer[last_layer_index] = new_linear_layer
```
### Motivation, pitch
Raising an error about the backbone weights having a mismatch guides users in a direction that isn't helpful.
### Alternatives
_No response_
### Additional context
_No response_ | open | 2024-09-03T21:09:42Z | 2024-09-04T08:57:34Z | https://github.com/pytorch/vision/issues/8631 | [] | david-csnmedia | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,267 | Reappearance of Deleted Rows while Versioning with a History Table | ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10259
<div type='discussions-op-text'>
<sup>Originally posted by **PHvL** August 18, 2023</sup>
We are using the [Versioned class from history_meta](https://docs.sqlalchemy.org/en/20/orm/examples.html#module-examples.versioned_history) for tables that are synced from some external source. Sometimes rows disappear in the external source and later re-appear with the same primary key.
The result is that both the history table and the original table contain a row with `version=1`, resulting in a violation of the primary key constraint of the history table at the next update (or deletion) of this row.
I've hence changed the default value of the version column in the history table to be the maximum version already present in the history table, plus 1.
I don't know if this is the most elegant solution, but maybe it helps someone with the same issue.
```diff
@@ -11,6 +11,9 @@ from sqlalchemy import inspect
 from sqlalchemy import Integer
 from sqlalchemy import PrimaryKeyConstraint
 from sqlalchemy import util
+from sqlalchemy import select
+from sqlalchemy.sql.expression import func, and_
+from sqlalchemy.engine.default import DefaultExecutionContext
 from sqlalchemy.orm import attributes
 from sqlalchemy.orm import object_mapper
 from sqlalchemy.orm.exc import UnmappedColumnError
@@ -146,8 +149,15 @@ def _history_mapper(local_mapper):
            super_history_table.append_column(col)

    if not super_mapper:
+        def default_version_from_history(context: DefaultExecutionContext):
+            current_parameters = context.get_current_parameters()
+            return context.connection.scalar(select(func.coalesce(func.max(history_table.c.version), 0)+1).where(and_(*[getattr(history_table.c, c.name)==current_parameters.get(c.name, None) for c in inspect(local_mapper.local_table).primary_key])))
+        # Set default value of version column to the maximum of the version in history columns already present +1
+        # Otherwise re-appearance of deleted rows would cause an error with the next update
        local_mapper.local_table.append_column(
-            Column("version", Integer, default=1, nullable=False),
+            Column("version", Integer,
+                   default=default_version_from_history,
+                   nullable=False),
            replace_existing=True,
        )
        local_mapper.add_property(
```</div> | closed | 2023-08-21T13:42:02Z | 2024-08-02T13:37:11Z | https://github.com/sqlalchemy/sqlalchemy/issues/10267 | [
"bug",
"orm",
"patch provided",
"near-term release",
"examples"
] | zzzeek | 3 |
jonaswinkler/paperless-ng | django | 112 | Database errors during consumption lead to index entries referencing non-existing documents | Which will in turn result in search errors when those documents are part of the result set. | closed | 2020-12-09T17:15:09Z | 2020-12-09T21:32:46Z | https://github.com/jonaswinkler/paperless-ng/issues/112 | [
"bug"
] | jonaswinkler | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,556 | Problem sending keystrokes "Undetected" | Hello
I've got a problem with a site I'm running a script for.
They use Cloudflare as protection against DDoS / bots.
Every 24h my "session" times out and I'm stuck on the Cloudflare screen.
So every 24h I have to open the site with my script, wait a couple of seconds, then open a second tab and open the page again; then I get forwarded and it will not show me Cloudflare for another 24h.
I'm now trying to automate a bypass for Cloudflare, but somehow opening a second tab automatically gets detected.
I tried the open-new-window method, and I tried sending "ctrl" + "t" with different libraries (keyboard, undetected_chromedriver, pyDirectInput); it always gets detected somehow and keeps me stuck on the Cloudflare screen forever. But when I manually press "ctrl" + "t", I get forwarded after a couple of seconds.
Has anyone experienced something similar and knows how to open a new tab or send "ctrl" + "t" without getting flagged for automation?
ty <3 | open | 2023-09-10T14:48:23Z | 2023-09-10T14:48:23Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1556 | [] | SAfsfsf | 0 |
marcomusy/vedo | numpy | 129 | vtkplotter.dolfin.plot not working as usual | Hi Marco,
I was experimenting with `vtkplotter.dolfin.plot` in `Jupyter` notebook. I am noticing some unusual behavior between my local machine and a docker container that I built on `binder`.
A sample [notebook](https://mybinder.org/v2/gh/bhaveshshrimali/FEniCSdiscourse/master?filepath=001.ipynb) with the container.
### Initial problem on the container:
I was having trouble in making embedded inline plots using jupyter (+dolfin). A simple example like
```python3
from dolfin import UnitCubeMesh
from vtkplotter.dolfin import plot
plot(UnitCubeMesh(2,2,2))
```
would render the plot in a separate window, but on the container it would spit out the error
```python
ModuleNotFoundError: No module named 'vtkOpenGLKitPython'
```
### Try Installing MESA: namely `libgl1-mesa-glx` for rendering graphics
Following this I simply installed `libgl1-mesa-glx libsm6` using
```
apt-get install libgl1-mesa-glx libsm6
```
on both my local machine and the container. Now everything works perfectly fine on the container, but I do not get any plot on my machine (neither embedded nor in a separate window). Would you happen to know the issue?
Also attached is the notebook's [output.pdf](https://github.com/marcomusy/vtkplotter/files/4500473/001.pdf) on my machine. Note that everything works smoothly when running a python script on the terminal, namely:
```python3
from dolfin import UnitCubeMesh, Function, FunctionSpace
from vtkplotter.dolfin import plot
import numpy as np
msh = UnitCubeMesh(4,4,4)
V = FunctionSpace(msh, "CG", 1)
w = Function(V)
w.vector()[:] = np.linspace(0,1,V.dim())
plot(w)
```
runs absolutely fine on the machine.
| closed | 2020-04-19T23:29:14Z | 2020-04-22T20:13:21Z | https://github.com/marcomusy/vedo/issues/129 | [] | bhaveshshrimali | 6 |
wandb/wandb | tensorflow | 9,151 | [Bug]: Pydantic V2.0 `copy` method is deprecated; use `model_copy` instead | ### Describe the bug
<!--- Describe your issue here --->
ENV:
Using python 3.12 using conda environment on a jupyter notebook:
* pydantic: 2.10.4
* wandb: 0.19.1
-------
Problem:
There seems to be a deprecation error caused by the pydantic v2.0 `copy` method used in this line:
wandb/sdk/wandb_init.py:202:
```settings = self._wl.settings.copy()```
I managed to fix it on my machine by replacing `copy()` with `model_copy()`:
```settings = self._wl.settings.model_copy()```
However no thorough testing has been done.
relevant stack trace:
```
---------------------------------------------------------------------------
PydanticDeprecatedSince20 Traceback (most recent call last)
Cell In[11], line 7
3 port_return = []
6 # start a new wandb run to track this script
----> 7 wandb.init(
8 # set the wandb project where this run will be logged
9 project="my-awesome-project",
10
11 # track hyperparameters and run metadata
12 config={
13 "learning_rate": 0.02,
14 "architecture": "testRL",
15 "dataset": "test",
16 "epochs": num_episodes,
17 }
18 )
File ~/miniconda3/envs/RLbot/lib/python3.12/site-packages/wandb/sdk/wandb_init.py:1312, in init(entity, project, dir, id, name, notes, tags, config, config_exclude_keys, config_include_keys, allow_val_change, group, job_type, mode, force, anonymous, reinit, resume, resume_from, fork_from, save_code, tensorboard, sync_tensorboard, monitor_gym, settings)
1308 logger.exception("error in wandb.init()", exc_info=e)
1310 # Need to build delay into this sentry capture because our exit hooks
1311 # mess with sentry's ability to send out errors before the program ends.
-> 1312 wandb._sentry.reraise(e)
1313 raise AssertionError()
File ~/miniconda3/envs/RLbot/lib/python3.12/site-packages/wandb/analytics/sentry.py:156, in Sentry.reraise(self, exc)
153 self.exception(exc)
154 # this will messily add this "reraise" function to the stack trace,
155 # but hopefully it's not too bad
--> 156 raise exc.with_traceback(sys.exc_info()[2])
File ~/miniconda3/envs/RLbot/lib/python3.12/site-packages/wandb/sdk/wandb_init.py:1290, in init(entity, project, dir, id, name, notes, tags, config, config_exclude_keys, config_include_keys, allow_val_change, group, job_type, mode, force, anonymous, reinit, resume, resume_from, fork_from, save_code, tensorboard, sync_tensorboard, monitor_gym, settings)
1288 try:
1289 wi = _WandbInit()
-> 1290 wi.setup(
1291 init_settings=init_settings,
1292 config=config,
1293 config_exclude_keys=config_exclude_keys,
1294 config_include_keys=config_include_keys,
1295 allow_val_change=allow_val_change,
1296 monitor_gym=monitor_gym,
1297 )
1298 return wi.init()
1300 except KeyboardInterrupt as e:
File ~/miniconda3/envs/RLbot/lib/python3.12/site-packages/wandb/sdk/wandb_init.py:202, in _WandbInit.setup(self, init_settings, config, config_exclude_keys, config_include_keys, allow_val_change, monitor_gym)
199 _set_logger(self._wl._get_logger())
201 # Start with settings from wandb library singleton
--> 202 settings = self._wl.settings.copy()
204 # handle custom sweep- and launch-related logic for init settings
205 if settings.sweep_id:
File ~/miniconda3/envs/RLbot/lib/python3.12/site-packages/pydantic/main.py:1375, in BaseModel.copy(self, include, exclude, update, deep)
1340 @typing_extensions.deprecated(
1341 'The `copy` method is deprecated; use `model_copy` instead. '
1342 'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.',
(...)
1351 deep: bool = False,
1352 ) -> Self: # pragma: no cover
1353 """Returns a copy of the model.
1354
1355 !!! warning "Deprecated"
(...)
1373 A copy of the model with included, excluded and updated fields as specified.
1374 """
-> 1375 warnings.warn(
1376 'The `copy` method is deprecated; use `model_copy` instead. '
1377 'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.',
1378 category=PydanticDeprecatedSince20,
1379 stacklevel=2,
1380 )
1381 from .deprecated import copy_internals
1383 values = dict(
1384 copy_internals._iter(
1385 self, to_dict=False, by_alias=False, include=include, exclude=exclude, exclude_unset=False
1386 ),
1387 **(update or {}),
1388 )
PydanticDeprecatedSince20: The `copy` method is deprecated; use `model_copy` instead. See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.10/migration/
``` | closed | 2024-12-25T13:44:00Z | 2025-01-02T19:34:57Z | https://github.com/wandb/wandb/issues/9151 | [
"ty:bug",
"a:sdk",
"c:sdk:settings"
] | FYYHU | 3 |
pandas-dev/pandas | data-science | 61,126 | DOC: Write user guide page on apply/map/transform methods | There is some information in our documentation regarding how to use user defined functions in pandas. The API pages of the used methods, and these sections:
- https://pandas.pydata.org/docs/user_guide/groupby.html#aggregation-with-user-defined-functions
- https://pandas.pydata.org/docs/user_guide/gotchas.html#gotchas-udf-mutation
My understanding is that we've been mostly discouraging the use of functions like `apply`, or at least the community has, with many posts and comments noting that `apply` is slow, which seems fair. With the ongoing work on supporting JIT compilers for these functions (see https://github.com/pandas-dev/pandas/pull/54666 and https://github.com/pandas-dev/pandas/pull/61032), this can hopefully change, allowing in some cases for clearer code without compromising speed.
I think it may be difficult to communicate all the information related to UDFs in the existing group-by and FAQ sections and in the API docs. A dedicated page in the user guide that covers when to use UDFs, a general idea of the API, the differences between the methods, the options available... seems a better idea.
Also, the APIs of the different methods are quite inconsistent, and in some cases cumbersome. I think writing this page will be a good exercise to identify cases when explaining the functionality to the users is complex and not intuitive, and see if we can address them. | open | 2025-03-15T04:03:22Z | 2025-03-24T18:33:19Z | https://github.com/pandas-dev/pandas/issues/61126 | [
"Docs",
"Apply"
] | datapythonista | 2 |
jupyter-incubator/sparkmagic | jupyter | 430 | Unable to manage sessions after shutting down notebook | I have installed sparkmagic and am creating sessions with `%manage_spark`.
I referred to a [similar issue](https://github.com/jupyter-incubator/sparkmagic/issues/387) and set the heartbeat timeout to 10 so that my sessions are stopped if I shut down or close-and-halt my notebook.
But without this setting:
If I close my notebook without shutting it down, I see the application running in YARN and my Livy session as idle in the Livy UI.
However, if I shut down the notebook, I am not able to delete the previous sessions using `%manage_spark`, as I am not able to see them in the Manage Sessions option. It says "No sessions to manage" in the Manage Sessions tab, and there is nothing in the Manage Endpoints tab, whereas in the Livy UI I see the session as idle. The application does not stop in YARN either; I need to manually kill it using `yarn kill`.
If I do not set the heartbeat parameter, do I need to manually kill applications every time, since a previously shut-down notebook's sessions are not visible in the `%manage_spark` magic? Otherwise, after how long are these sessions and their running YARN applications stopped?
I am using jupyterhub 0.7.0 and jupyter 4.2.1. | closed | 2018-01-12T09:50:49Z | 2018-01-16T18:43:05Z | https://github.com/jupyter-incubator/sparkmagic/issues/430 | [] | mrunmayeejog | 3 |
Significant-Gravitas/AutoGPT | python | 8,784 | Implement agent output rendering | * Part of #8780
* Also needed for #8776
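A first-cut sketch of the parsing side of output rendering (illustrative Python only; the real frontend stack may differ): split model output into prose and fenced-code segments so each can be rendered appropriately.

```python
import re

FENCE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)


def split_markdown(text):
    """Yield ('text', chunk) and ('code', lang, body) segments in order."""
    pos = 0
    for m in FENCE.finditer(text):
        if m.start() > pos:
            yield ("text", text[pos:m.start()])
        yield ("code", m.group(1), m.group(2))
        pos = m.end()
    if pos < len(text):
        yield ("text", text[pos:])


sample = "Here is code:\n```python\nprint('hi')\n```\nDone."
print(list(split_markdown(sample)))
```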
LLMs can output markdown and/or code, and we should implement rendering for this. | open | 2024-11-26T18:22:17Z | 2025-02-28T16:38:21Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8784 | [
"platform/frontend"
] | Pwuts | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,320 | Rectangle Images | Hi,
I noticed that we should crop rectangular images into square patches during training. But why are rectangular images supported during testing?
Thank you in advance for any help! | closed | 2021-09-25T23:05:23Z | 2021-11-03T19:34:27Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1320 | [] | zhangdan8962 | 1 |
joke2k/django-environ | django | 127 | PgBouncer and django.db.utils.OperationalError: FATAL | Hello,
Currently it seems it's not possible to use PgBouncer:
`DATABASE_URL=postgis://myuserdb:mydbpasswd@/var/run/postgresql:6432/mydb`
I got
`django.db.utils.OperationalError: FATAL: database "var/run/postgresql:6432/mydb" does not exist`
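An aside (stdlib `urllib.parse`, not django-environ's actual parser) showing why the whole socket path can end up treated as the database name: the host portion of such a URL is empty, so everything after the `@` lands in the path.

```python
from urllib.parse import urlparse

parts = urlparse("postgis://myuserdb:mydbpasswd@/var/run/postgresql:6432/mydb")
print(parts.hostname)  # None, there is no host between '@' and the first '/'
print(parts.path)      # '/var/run/postgresql:6432/mydb'
```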
Thanks,
D
| closed | 2017-06-02T06:30:08Z | 2018-02-06T05:50:30Z | https://github.com/joke2k/django-environ/issues/127 | [] | murdav | 3 |
cvat-ai/cvat | pytorch | 8,502 | Add a new method in the SDK | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
I've added two new methods in **cvat-sdk/core/proxies/project**, but they're not recognised because the SDK is loaded from `/.local/lib/python3.12/site-packages/cvat_sdk/api_client/model_utils.py`.
I've tried changing `PYTHONPATH` to call the modified sources in the `cvat_sdk` directory directly, but that causes quite a few problems. In the documentation I can't find how to customise the SDK.
I have added several backend functions and I need to interact with them from the client. I have a view of this type:
```
@action(detail=True, methods=['PUT'], url_path='update-project')
def update...
```
and in the URL configuration:
```
path('api/visionia/<int:pk>/update-project/', visionia.VisioniaViewSet.as_view({'get': 'project_update'}),
     name='project_update'),
```
### Describe the solution you'd like
In my client I try to do
```
def update_project_visionia(self):
    try:
        with make_client(self.host, port=self.port, credentials=(self.username, self.password)) as client:
            project = client.projects.retrieve(self.id)
            return project.update_project_visionia()
    except exceptions.ApiException as e:
        print(f'Exception on API call for update_project_visionia: {e}')
        raise
```
but I get the error...
`cvat_sdk.api_client.exceptions.ApiAttributeError: ProjectRead has no attribute 'update_project_visionia'`
In the file `cvat-sdk/core/proxies/project`, in class `ProjectsRepo()`, I add:
```
def update_project_visionia(self, id: int):
    project = self.retrieve(id)
    return project.update_project_visionia()
```
and in class `Project()`:
```
def update_project_visionia(self):
    (_, response) = self._client.api_client.put(
        f"/api/visionia/{self.id}/update-project/"
    )
    self.fetch()
    return response
```
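An aside, not from the thread: if editing the installed package is the obstacle, one generic Python option is attaching the method at runtime (monkey-patching). Sketch on a stand-in class: `Project` here is a dummy, not cvat_sdk's real proxy, and the method name is the issue's own hypothetical one.

```python
class Project:  # stand-in for the real cvat_sdk project proxy class
    def __init__(self, id):
        self.id = id


def update_project_visionia(self):
    # Stand-in body: the real method would issue the PUT request.
    return f"/api/visionia/{self.id}/update-project/"


# Attach the new method without modifying the installed sources.
Project.update_project_visionia = update_project_visionia

print(Project(7).update_project_visionia())  # /api/visionia/7/update-project/
```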
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-10-02T14:00:26Z | 2024-10-02T14:43:16Z | https://github.com/cvat-ai/cvat/issues/8502 | [
"enhancement"
] | davy-blavette | 1 |
huggingface/transformers | pytorch | 36,271 | Layer activation discrepancy when running DeepSeekV3 after swapping `nn.Linear` with custom `FakeLinear` with disk offloading | ### System Info
- `transformers` version: 4.49.0
- Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Using `accelerate` for big model inference and offloading layers to disk
- Using GPU in script?: Yes 8x80GB (A100) GPUs
- GPU type: NVIDIA A100 80GB PCIe
### Who can help?
@SunMarc @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

# This code just registers the DeepSeekV3 with HF autoclass
from deepseekv3.hf_utils import register_with_auto_class

register_with_auto_class()


def get_causal_model_and_tokenizer(model_name, torch_dtype, model_kwargs={}, tokenizer_kwargs={}):
    tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
    config = AutoConfig.from_pretrained(model_name)
    with init_empty_weights():
        model = AutoModelForCausalLM.from_config(
            config,
            attn_implementation=model_kwargs.pop("attn_implementation", "eager"),
        )
    model = load_checkpoint_and_dispatch(
        model=model,
        checkpoint=model_name,
        dtype=torch_dtype,
        no_split_module_classes=model._no_split_modules,
        **model_kwargs,
    )
    return model, tokenizer


# I have 8 A100 GPUs with 80GB memory in each
# I offload the rest to disk. I have also attached my `hf_device_map`
model, tokenizer = get_causal_model_and_tokenizer(
    model_name="/data/custom_model_downloads/DeepSeek-V3-bf16",
    torch_dtype=torch.bfloat16,
    model_kwargs={
        "attn_implementation": "eager",
        "offload_folder": "/data/model_offload_cache/deepseekv3_671b",
        "offload_buffers": True,
        "device_map": "auto",
        "max_memory": {
            0: 60000000000,
            1: 60000000000,
            2: 60000000000,
            3: 60000000000,
            4: 60000000000,
            5: 60000000000,
            6: 60000000000,
            7: 60000000000,
        },
    },
)

print(model)
model = model.eval()
print(model.device)


def forward_hook(name):
    def tmp(layer, inp, out):
        print(f"Done with layer {name}")
    return tmp


def forward_prehook(name):
    def tmp(layer, inp):
        print(f"Starting layer {name}")
    return tmp


forward_hook_handles = []
forward_prehook_handles = []
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        forward_hook_handles.append(module.register_forward_hook(forward_hook(name)))
        forward_prehook_handles.append(module.register_forward_pre_hook(forward_prehook(name)))

text = "Hello I am not sure what is happening can you help me debug this?"
tokens = tokenizer(text, return_tensors="pt")

# Model before swapping. Works as intended. This prints the entry and exit from the `nn.Linear` using the hooks
output = model.generate(tokens.input_ids, max_new_tokens=1, use_cache=False)

for hook in forward_hook_handles:
    hook.remove()
for hook in forward_prehook_handles:
    hook.remove()


class FakeLinear(torch.nn.Linear):
    """
    FakeLinear: A fake linear layer that can be used to replace a real linear layer.
    It is exactly the same as `nn.Linear`, as forward() is inherited from `nn.Linear`.
    """

    @classmethod
    @torch.no_grad()
    def from_float(
        cls,
        mod,
    ) -> "FakeLinear":
        with torch.device("meta"):
            super_kwargs = {
                "in_features": mod.in_features,
                "out_features": mod.out_features,
                "bias": False,
            }
            new_mod = cls(**super_kwargs)
        new_mod.weight = mod.weight
        new_mod.bias = mod.bias
        return new_mod


from tqdm import tqdm


def _replace_with_custom_fn_if_matches_filter(
    model,
    replacement_fn,
    filter_fn,
    pbar: tqdm,
    cur_fqn="",
) -> None:
    """
    For each `child` in `model`, replaces it with `replacement_fn(child)`
    if `filter_fn(child)` is `True`
    """
    name_to_child = dict(model.named_children())
    for name, child in name_to_child.items():
        pbar.update(1)
        if cur_fqn == "":
            new_fqn = name
        else:
            new_fqn = f"{cur_fqn}.{name}"
        if filter_fn(child, new_fqn):
            new_child = replacement_fn(child)
            setattr(model, name, new_child)
        else:
            _replace_with_custom_fn_if_matches_filter(
                child, replacement_fn, filter_fn, pbar, new_fqn
            )


def swap_linear_(model: torch.nn.Module):
    _replace_with_custom_fn_if_matches_filter(
        model=model,
        replacement_fn=lambda mod: FakeLinear.from_float(
            mod=mod,
        ),
        filter_fn=lambda mod, fqn: type(mod) == torch.nn.Linear,
        pbar=tqdm(
            desc="Swapping linear layers in model...",
        ),
    )


swap_linear_(model)

# You will see that all the `nn.Linear` are replaced with `FakeLinear` as expected
print(model)

forward_hook_handles = []
forward_prehook_handles = []
for name, module in model.named_modules():
    if type(module) == FakeLinear:
        forward_hook_handles.append(module.register_forward_hook(forward_hook(name)))
        forward_prehook_handles.append(module.register_forward_pre_hook(forward_prehook(name)))

model.hf_device_map

# The activated layers are not the same as before. In fact a lot of layers are not even activated.
# They are the same for the layers on the GPUs, but for the layers on disk (example: layers 19 and above) only experts 0-7 are activated. For layer 18 there is a discrepancy in the layer numbers getting activated. Whereas before the swap almost all the MoEs are activated in each layer.
output = model.generate(tokens.input_ids, max_new_tokens=1, use_cache=False)
```
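One plausible mechanism worth checking (an assumption on my part, not a confirmed diagnosis): with disk offloading, `load_checkpoint_and_dispatch` attaches per-module offloading hooks to each layer (as module attributes plus a wrapped `forward`), while `FakeLinear.from_float` builds a brand-new module and copies only `weight` and `bias`, so anything accelerate attached to the old instance would be silently dropped for the swapped modules. A dependency-free illustration of that general hazard (plain Python objects, no torch or accelerate):

```python
class Layer:
    """Minimal stand-in for a module; no torch involved."""
    def __init__(self, weight):
        self.weight = weight


old = Layer(weight=[1.0, 2.0])
old._offload_hook = "loads this layer's weights from disk before forward"  # attached externally

# Mimic FakeLinear.from_float: build a fresh instance and copy only the weights.
new = Layer(weight=old.weight)

assert new.weight == old.weight           # the copied state survives
assert not hasattr(new, "_offload_hook")  # the externally attached hook does not
```

If that is the cause here, the first-job/GPU layers would still work (their weights are materialized) while disk-offloaded layers would silently lose the machinery that loads their weights at forward time.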
I have attached 3 files:
- [hf_device_map.txt](https://github.com/user-attachments/files/18857452/hf_device_map.txt)
- [after_swapping.txt](https://github.com/user-attachments/files/18857450/after_swapping.txt): Layer activation numbers before swapping
- [before_swapping.txt](https://github.com/user-attachments/files/18857453/before_swapping.txt): Layer activation numbers after swapping
### Expected behavior
I expect both of the files to have the same layer activation numbers. The numbers match exactly until Layer 17 (The last layer which resides in the GPU). The problem starts after layer 18 (First layer in disk). In Layer 18 there are a few missing layers and from layer 19 on only 0-7 experts are activated. | closed | 2025-02-19T02:29:38Z | 2025-02-19T14:57:28Z | https://github.com/huggingface/transformers/issues/36271 | [
"bug"
] | balaabhijit | 1 |
yt-dlp/yt-dlp | python | 11,778 | [fc2] `HTTP Error 400: Bad Request` | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
Example URL:
https://video.fc2.com/content/20121129xMeT3Czt
For _some_ fc2 videos, yt-dlp takes an eternity to start downloading, only to then fail on the first fragment.
What makes this so weird is that using the direct download link extracted by yt-dlp works just fine.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--abort-on-unavailable-fragments', 'https://video.fc2.com/content/20121129xMeT3Czt']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8 (No VT), error utf-8 (No VT), screen utf-8 (No VT)
[debug] yt-dlp version nightly@2024.12.06.161513 from yt-dlp/yt-dlp-nightly-builds [6fef82402] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-8.1-6.3.9600-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 4.3, ffprobe 2022-07-24-git-39a538f430-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.12.06.161513 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.12.06.161513 from yt-dlp/yt-dlp-nightly-builds)
[fc2] Extracting URL: https://video.fc2.com/content/20121129xMeT3Czt
[fc2] 20121129xMeT3Czt: Downloading webpage
[fc2] 20121129xMeT3Czt: Downloading info page
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 20121129xMeT3Czt: Downloading 1 format(s): 0
[debug] Invoking hlsnative downloader on "https://video.fc2.com/api/v3/videoplay/veoh.20121129xMeT3Czt/3?signature=$6EX67M.VEOH&t=1733758153"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 6379303
[download] Destination: Rotfux [20121129xMeT3Czt].mp4
[debug] File locking is not supported. Proceeding without locking
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (1/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (2/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (3/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (4/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (5/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (6/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (7/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (8/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (9/10)...
[download] Got error: HTTP Error 400: Bad Request. Retrying fragment 1 (10/10)...
[download] Got error: HTTP Error 400: Bad Request. Giving up after 10 retries
File "yt_dlp\__main__.py", line 17, in <module>
File "yt_dlp\__init__.py", line 1093, in main
File "yt_dlp\__init__.py", line 1083, in _real_main
File "yt_dlp\YoutubeDL.py", line 3605, in download
File "yt_dlp\YoutubeDL.py", line 3578, in wrapper
File "yt_dlp\YoutubeDL.py", line 1613, in extract_info
File "yt_dlp\YoutubeDL.py", line 1624, in wrapper
File "yt_dlp\YoutubeDL.py", line 1780, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1839, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 3011, in process_video_result
File "yt_dlp\YoutubeDL.py", line 177, in wrapper
File "yt_dlp\YoutubeDL.py", line 3479, in process_info
File "yt_dlp\YoutubeDL.py", line 3199, in dl
File "yt_dlp\downloader\common.py", line 464, in download
File "yt_dlp\downloader\hls.py", line 381, in real_download
File "yt_dlp\downloader\fragment.py", line 513, in download_and_append_fragments
File "yt_dlp\downloader\fragment.py", line 459, in download_fragment
File "yt_dlp\utils\_utils.py", line 5251, in __iter__
File "yt_dlp\downloader\fragment.py", line 456, in error_callback
File "yt_dlp\downloader\common.py", line 410, in report_retry
File "yt_dlp\utils\_utils.py", line 5258, in report_retry
File "yt_dlp\downloader\common.py", line 413, in <lambda>
File "yt_dlp\YoutubeDL.py", line 1090, in report_error
File "yt_dlp\YoutubeDL.py", line 1018, in trouble
ERROR: fragment 1 not found, unable to continue
File "yt_dlp\__main__.py", line 17, in <module>
File "yt_dlp\__init__.py", line 1093, in main
File "yt_dlp\__init__.py", line 1083, in _real_main
File "yt_dlp\YoutubeDL.py", line 3605, in download
File "yt_dlp\YoutubeDL.py", line 3578, in wrapper
File "yt_dlp\YoutubeDL.py", line 1613, in extract_info
File "yt_dlp\YoutubeDL.py", line 1624, in wrapper
File "yt_dlp\YoutubeDL.py", line 1780, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1839, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 3011, in process_video_result
File "yt_dlp\YoutubeDL.py", line 177, in wrapper
File "yt_dlp\YoutubeDL.py", line 3479, in process_info
File "yt_dlp\YoutubeDL.py", line 3199, in dl
File "yt_dlp\downloader\common.py", line 464, in download
File "yt_dlp\downloader\hls.py", line 381, in real_download
File "yt_dlp\downloader\fragment.py", line 514, in download_and_append_fragments
File "yt_dlp\downloader\fragment.py", line 479, in append_fragment
File "yt_dlp\YoutubeDL.py", line 1090, in report_error
File "yt_dlp\YoutubeDL.py", line 1018, in trouble
# Using the direct download link
[debug] Command-line config: ['-v', 'https://video.fc2.com/api/v3/videoplay/veoh.20121129xMeT3Czt/3?signature=$6EX67M.VEOH&t=1733758153']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8 (No VT), error utf-8 (No VT), screen utf-8 (No VT)
[debug] yt-dlp version nightly@2024.12.06.161513 from yt-dlp/yt-dlp-nightly-builds [6fef82402] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-8.1-6.3.9600-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 4.3, ffprobe 2022-07-24-git-39a538f430-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[generic] Extracting URL: https://video.fc2.com/api/v3/videoplay/veoh.20121129xMeT3Czt/3?signature=$6EX67M.VEOH&t=1733758153
[generic] 3?signature=$6EX67M: Downloading webpage
[redirect] Following redirect to https://acache.veoh.com/file/f/l40850948.mp4?e=1733777229&rs=100&h=f884e7c17e5c422f3fc3c1c33c1f6e2a
[generic] Extracting URL: https://acache.veoh.com/file/f/l40850948.mp4?e=1733777229&rs=100&h=f884e7c17e5c422f3fc3c1c33c1f6e2a
[generic] l40850948: Downloading webpage
[debug] Identified a direct video link
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] l40850948: Downloading 1 format(s): mp4
[debug] Invoking http downloader on "https://acache.veoh.com/file/f/l40850948.mp4?e=1733777229&rs=100&h=f884e7c17e5c422f3fc3c1c33c1f6e2a"
[debug] File locking is not supported. Proceeding without locking
[download] Destination: l40850948 [l40850948].mp4
[download] 100% of 236.03MiB in 00:40:17 at 99.99KiB/s
```
| open | 2024-12-09T21:40:14Z | 2024-12-09T21:49:17Z | https://github.com/yt-dlp/yt-dlp/issues/11778 | [
"site-bug"
] | clusterized | 1 |
polakowo/vectorbt | data-visualization | 726 | Plotting with custom benchmark_rets got error (VectorBT used undefined attribute 'obj') | When I tried to override the default benchmark returns https://github.com/polakowo/vectorbt/discussions/247 .
I found that VectorBT is trying to broadcast it to an undefined "obj".
`AttributeError: 'Portfolio' object has no attribute 'obj'`
https://github.com/polakowo/vectorbt/blob/67da37e1a963d9408c1046bf6467bcccde6e9a94/vectorbt/portfolio/base.py#L5154
Reproduction code
> Simulate comparing two Portfolio results
```python
import vectorbt as vbt
import numpy as np
import pandas as pd

close1 = pd.DataFrame(
    {
        "symbol_a": np.abs(np.random.randn(10)) * 10,
        "symbol_b": np.abs(np.random.randn(10)) * 10,
    }
)
close2 = close1[["symbol_a"]]

raw_signal1 = np.random.randn(10)
signal1_a = (raw_signal1 > 0.5) * 1 + (raw_signal1 < -0.5) * -1
signal1_b = (raw_signal1 > 0.5) * 1 + (raw_signal1 < -0.5) * -1
signal1 = pd.DataFrame({"symbol_a": signal1_a, "symbol_b": signal1_b})

raw_signal2 = pd.Series(np.random.randn(10), name="symbol_a")
signal2 = (raw_signal2 > 0.5) * 1 + (raw_signal2 < -0.5) * -1

pf1 = vbt.Portfolio.from_signals(
    close1, entries=signal1 == 1, exits=signal1 == -1, freq="T"
)
pf2 = vbt.Portfolio.from_signals(
    close2, entries=signal2 == 1, exits=signal2 == -1, freq="T"
)

# ok
print(pf1.stats(settings=dict(benchmark_rets=pf2.returns())))
# ok
print(pf1.stats(column="symbol_a", settings=dict(benchmark_rets=pf2.returns())))
# ok
print(pf2.stats(settings=dict(benchmark_rets=pf1['symbol_a'].returns())))

# ---- error ----
# works if I comment the bug line
pf1.plot(
    column="symbol_a",
    settings=dict(benchmark_rets=pf2.returns()),
    subplots=[
        "orders",
        "trade_pnl",
        "cum_returns",
        "drawdowns",
        "underwater",
        "asset_flow",
        "asset_value",
        "assets",
        "cash",
        "cash_flow",
        "gross_exposure",
        "net_exposure",
        "trades",
        "value",
    ],
)
# works if I comment the bug line
pf2.plot(
    column="symbol_a",
    settings=dict(benchmark_rets=pf1.returns()),
    subplots=[
        "orders",
        "trade_pnl",
        "cum_returns",
        "drawdowns",
        "underwater",
        "asset_flow",
        "asset_value",
        "assets",
        "cash",
        "cash_flow",
        "gross_exposure",
        "net_exposure",
        "trades",
        "value",
    ],
)
```
Version
- vectorbt: 0.26.1
- Python 3.8.13 | open | 2024-07-04T05:33:33Z | 2024-07-04T05:33:33Z | https://github.com/polakowo/vectorbt/issues/726 | [] | daviddwlee84 | 0 |
cvat-ai/cvat | pytorch | 9,146 | Bug when using a "Connected file share" and "Segment size" parameter [UI & SDK] or "job_file_mapping" parameter [SDK]. (Regression since CVAT 2.23) | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
### Steps to reproduce via UI:
1. Go to the "Tasks" view
2. Click [+] "Create a new task"
3. Set task name
4. Create a label
5. Select resource "Connected file share"
6. Select 3 images (or at least 2 images)
7. Click "Advanced configuration"
8. Set "Segment size" to 2 (or at least 1)
9. Click "Submit & Open"
10. Click on 2nd job (one with a higher ID).
11. See the error message "Could not receive image data"




### Code sample to reproduce using the Python SDK:
```
from cvat_sdk import make_client
from cvat_sdk.core.proxies.tasks import ResourceType

images = [
    "./images/image_0.png",
    "./images/image_1.png",
    "./images/image_2.png",
]

with make_client(host="localhost", credentials=('user', 'password')) as client:
    task_A_spec = {
        "name": "TaskA",
        "segment_size": 2,
        "labels": [{"name": "LabelName", "color": "#0572f7"}],
    }
    task_B_spec = {
        "name": "TaskB",
        "labels": [{"name": "LabelName", "color": "#0572f7"}],
    }
    job_file_mapping = [[images[0], images[1]], [images[2]]]

    # Segment Size Test
    task_A = client.tasks.create(task_A_spec)
    task_A.upload_data(
        resources=images,
        resource_type=ResourceType.SHARE,
    )

    # Job File Mapping Test
    task_B = client.tasks.create(task_B_spec)
    task_B.upload_data(
        resources=images,
        resource_type=ResourceType.SHARE,
        params={"job_file_mapping": job_file_mapping},
    )
```
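For reference, `segment_size` splits the uploaded file list into consecutive chunks of that size, so with 3 images and `segment_size: 2` the expected jobs are `[image_0, image_1]` and `[image_2]`, which is exactly the split the `job_file_mapping` above spells out explicitly. An illustrative chunking sketch (not CVAT's actual implementation):

```python
def chunk_into_jobs(files, segment_size):
    """Split a file list into consecutive segments of at most segment_size."""
    return [files[i:i + segment_size] for i in range(0, len(files), segment_size)]


images = ["./images/image_0.png", "./images/image_1.png", "./images/image_2.png"]
print(chunk_into_jobs(images, 2))
# → [['./images/image_0.png', './images/image_1.png'], ['./images/image_2.png']]
```

This is why the two repro paths (TaskA via `segment_size` and TaskB via `job_file_mapping`) should produce identical job layouts, and both hit the same bug.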
### Expected Behavior
Image data from other jobs should be received.
### Possible Solution
The bug is not present in CVAT 2.23
### Context
Using a connected file share, we are trying to create multiple jobs using either the "segment size" parameter in the task creation form UI, or in the task specification with the Python SDK.
In the SDK, the bug also impacts the `job_file_mapping` parameter in `task.upload_data()`.
Other information:
- If the segment size parameter is not set, all the images are correctly visible in the annotation view.
- If the `job_file_mapping` parameter is not used, all the images are correctly visible in the annotation view.
- The images in the first job are always correctly uploaded, but the bug impacts all the other jobs. This happens regardless of the number of jobs; only the images in the first job are correctly uploaded.
- There is no issue if the resource type is local (i.e. when selecting "My computer" as the source in the task creation form).
### Environment
```Markdown
- CVAT Server version: 2.30.0
- CVAT UI version: 2.30.0
- CVAT SDK version: 2.30.0
- Docker version:
Client:
Version: 27.5.1
API version: 1.47
Go version: go1.22.11
Git commit: 9f9e405
Built: Wed Jan 22 13:37:19 2025
OS/Arch: darwin/arm64
Context: desktop-linux
Server: Docker Desktop 4.38.0 (181591)
Engine:
Version: 27.5.1
API version: 1.47 (minimum version 1.24)
Go version: go1.22.11
Git commit: 4c9b3b0
Built: Wed Jan 22 13:41:25 2025
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.7.25
GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
runc:
Version: 1.1.12
GitCommit: v1.1.12-0-g51d5e946
docker-init:
Version: 0.19.0
GitCommit: de40ad0
- Are you using Docker Swarm or Kubernetes?
No
- Operating System and version:
MacOS Sequoia 15.3
- Other diagnostic information / logs:
Logs from `cvat_server` container:
2025-02-21 18:22:17 2025-02-21 23:22:17,899 DEBG 'uvicorn-0' stderr output:
2025-02-21 18:22:17 [2025-02-21 23:22:17,899] ERROR django.request: Internal Server Error: /api/jobs/58/preview
2025-02-21 18:22:17 Traceback (most recent call last):
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 518, in thread_handler
2025-02-21 18:22:17 raise exc_info[1]
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 42, in inner
2025-02-21 18:22:17 response = await get_response(request)
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
2025-02-21 18:22:17 response = await wrapped_callback(
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 468, in __call__
2025-02-21 18:22:17 ret = await asyncio.shield(exec_coro)
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 40, in run
2025-02-21 18:22:17 result = self.fn(*self.args, **self.kwargs)
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 522, in thread_handler
2025-02-21 18:22:17 return func(*args, **kwargs)
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
2025-02-21 18:22:17 return view_func(*args, **kwargs)
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/rest_framework/viewsets.py", line 124, in view
2025-02-21 18:22:17 return self.dispatch(request, *args, **kwargs)
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
2025-02-21 18:22:17 response = self.handle_exception(exc)
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
2025-02-21 18:22:17 self.raise_uncaught_exception(exc)
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
2025-02-21 18:22:17 raise exc
2025-02-21 18:22:17 File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
2025-02-21 18:22:17 response = handler(request, *args, **kwargs)
2025-02-21 18:22:17 File "/home/django/cvat/apps/engine/views.py", line 2431, in preview
2025-02-21 18:22:17 return data_getter()
2025-02-21 18:22:17 File "/home/django/cvat/apps/engine/views.py", line 910, in __call__
2025-02-21 18:22:17 return super().__call__()
2025-02-21 18:22:17 File "/home/django/cvat/apps/engine/views.py", line 782, in __call__
2025-02-21 18:22:17 data = frame_provider.get_preview()
2025-02-21 18:22:17 File "/home/django/cvat/apps/engine/frame_provider.py", line 553, in get_preview
2025-02-21 18:22:17 preview, mime = cache.get_or_set_segment_preview(self._db_segment)
2025-02-21 18:22:17 File "/home/django/cvat/apps/engine/cache.py", line 467, in get_or_set_segment_preview
2025-02-21 18:22:17 self._get_or_set_cache_item(
2025-02-21 18:22:17 File "/home/django/cvat/apps/engine/cache.py", line 192, in _get_or_set_cache_item
2025-02-21 18:22:17 return self._create_cache_item(
2025-02-21 18:22:17 File "/home/django/cvat/apps/engine/cache.py", line 267, in _create_cache_item
2025-02-21 18:22:17 wait_for_rq_job(rq_job)
2025-02-21 18:22:17 File "/home/django/cvat/apps/engine/cache.py", line 113, in wait_for_rq_job
2025-02-21 18:22:17 raise exc_type(*exc_args)
2025-02-21 18:22:17 FileNotFoundError: [Errno 2] No such file or directory
``` | closed | 2025-02-24T16:17:49Z | 2025-02-27T13:00:50Z | https://github.com/cvat-ai/cvat/issues/9146 | [
"documentation"
] | jimytim | 3 |
pykaldi/pykaldi | numpy | 297 | pyclif build - SVN down | Since the LLVM server is retired, the relevant version is here, so you can clone it and insert it into the INSTALL.sh script for pyclif:
https://github.com/bobzsj87/llvm-307315 | open | 2022-02-18T11:09:02Z | 2022-02-18T11:09:02Z | https://github.com/pykaldi/pykaldi/issues/297 | [] | rleaver152 | 0 |
falconry/falcon | api | 1,966 | CPython 3.10 support | CPython 3.10 has been released.
Although it may already work out of the box, we need to add official first class support anyway:
- [x] Add a CPython 3.10 CI gate: (https://github.com/falconry/falcon/pull/1922).
- [x] Build CPython 3.10 wheels.
- [x] Advertise support using ["trove classifiers"](https://pypi.org/classifiers/).
- [x] Check if anything needs an update in `CONTRIBUTING.md`.
In addition, check for any new warnings emitted when running tests, e.g., whether we are relying on any deprecated functionality that will be removed in future Python versions:
- [x] Multiple `DeprecationWarning`: non-integer arguments to randrange() have been deprecated since Python 3.10 and will be removed in a subsequent version https://github.com/falconry/falcon/pull/1972
- [x] `falcon/util/sync.py`:224: `DeprecationWarning`: There is no current event loop
loop = asyncio.get_event_loop()
[`asyncio.get_event_loop()`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_event_loop): _Deprecated since version 3.10:_ Deprecation warning is emitted if there is no running event loop. In future Python releases, this function will be an alias of [`get_running_loop()`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_running_loop).
- [x] `tests/asgi/test_ws.py`:344: `DeprecationWarning`: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11.
- [x] Anything else? | closed | 2021-10-12T07:44:32Z | 2021-11-08T08:44:07Z | https://github.com/falconry/falcon/issues/1966 | [
"maintenance"
] | vytas7 | 0 |
plotly/dash | data-science | 2,719 | [BUG] callback in infinite loop but only triggered once | Here is a minimal example of how it breaks.
Error appears as the app keeps loading forever and debug mode suggests 'Maximum Depth Exceeded', indicating the callback is in an infinite loop. However, printing out the trigger_id suggests the callback is only triggered once and everything before return is ran once. **This is almost impossible to debug since the messages don't give any actionable feedback.**
The real problem seems to be the 0 Amount: changing it to non-zero OR changing` if x['Type'] == 'C'` to `if x['Type'] == 'C' and float(x['Amount'])` fixes the front-end problem.
**However, I don't understand why this would be a problem if get_data() is only called once in the callback and the callback is only triggered once.** Maybe it is stuck updating 0 to -0 and -0 to 0 forever? But what is causing *(-1) to be applied over and over again when it is only supposed to be called once?
It seems to relate to componentDidUpdate function here: [Dash GitHub Link](https://github.com/plotly/dash/blob/dev/dash/dash-renderer/src/components/error/ComponentErrorBoundary.react.js), but I don't have any JS knowledge to further understand this. Please help, thanks!
```
import dash_ag_grid as dag
import pandas as pd
from dash import dcc, html, Input, Output, no_update
import dash


def get_data():
    df = pd.DataFrame([{'Amount': 11111.0, 'Type': 'C'}, {'Amount': 0.0, 'Type': 'C'}])
    df['Amount'] = df.apply(lambda x: float(x['Amount']) * (-1) if x['Type'] == 'C' else float(x['Amount']), axis=1)
    return df


app = dash.Dash(__name__)
app.layout = dcc.Loading([html.Div(id='placeholder'), dag.AgGrid(id='datatable', columnDefs=[{'field': 'Amount'}]),
                          dcc.Store(id='data_store', storage_type='session')])


@dash.callback(
    Output('placeholder', 'children'),
    Output('datatable', 'rowData'),
    Output('data_store', 'data'),
    Input('placeholder', 'children')
)
def update_table(dummy):
    # trigger_id = dash.callback_context.triggered[0]['prop_id'].split('.')[0]
    # trigger_property = dash.callback_context.triggered[0]['prop_id'].split('.')[1] if trigger_id else ""
    # print(f"Callback triggered: {trigger_id}, {trigger_property}")
    df_all = get_data()
    cached_data = df_all.to_dict('records')
    return no_update, df_all.to_dict('records'), cached_data


if __name__ == '__main__':
    app.run_server(debug=True)
``` | closed | 2023-12-25T22:16:34Z | 2024-09-25T15:17:40Z | https://github.com/plotly/dash/issues/2719 | [
"bug",
"sev-2"
] | yxc8 | 7 |
assafelovic/gpt-researcher | automation | 359 | Enable editing the api-keys not from the env-vars | Sometimes it is useful to pass the API key into the method explicitly; for example, it would let the app hold multiple API keys and use a different one on each call.
I am referring to the OpenAI API key, Tavily API key, etc.
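A minimal sketch of the kind of key rotation this request would enable (the call site shown in the comment is hypothetical; it is the feature being requested, not an existing gpt-researcher argument, and the key strings are placeholders):

```python
from itertools import cycle

# Pool of keys to rotate through; the key strings here are placeholders.
api_keys = cycle(["sk-openai-key-1", "sk-openai-key-2", "sk-openai-key-3"])


def next_api_key():
    """Return the next key in round-robin order, one per call."""
    return next(api_keys)


# Hypothetical call site once the feature exists:
#   report = researcher.run(query, api_key=next_api_key())
print(next_api_key())  # → 'sk-openai-key-1'
print(next_api_key())  # → 'sk-openai-key-2'
```

With keys only read from environment variables, this per-call selection is not possible without mutating `os.environ` between calls.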
| closed | 2024-02-16T21:04:58Z | 2024-02-27T10:16:56Z | https://github.com/assafelovic/gpt-researcher/issues/359 | [] | uripeled2 | 2 |
paulpierre/RasaGPT | fastapi | 41 | Container is not running - Windows-WSL-Ubuntu | Hi,
Problem while running "make install"
Error response from daemon: Container f73041640f9c5db75dcdd5cd62338ae63aa1caf79692fb8262fd359cc8da3ac9 is not running
make[1]: *** [Makefile:291: rasa-train] Error 1
make[1]: Leaving directory '/mnt/d/RasaGPT'
make: *** [Makefile:57: install] Error 2 | open | 2023-07-11T15:57:28Z | 2023-09-25T14:16:32Z | https://github.com/paulpierre/RasaGPT/issues/41 | [] | shanumas | 3 |
holoviz/panel | matplotlib | 7,156 | Terminal widget set as `sys.stdout` does *not* write to terminal in synchronous functions and is delayed in appearing, using Jupyter notebook | ### ALL software version info
- panel: 1.4.5
- bokeh: 3.4.2
- python: 3.10.14
- IPython : 8.14.0
- ipykernel : 6.29.5
- ipywidgets : 8.1.2
- jupyter_client : 8.6.2
- jupyter_core : 5.7.2
- jupyter_server : 2.14.2
- jupyterlab : 4.2.4
- nbclient : 0.10.0
- notebook : 7.2.1
- qtconsole : 5.5.2
- traitlets : 5.14.3
- OS: Mac
- Browser: Chrome
### Description of expected behavior and the observed behavior
When setting `sys.stdout = terminal` as documented in https://panel.holoviz.org/reference/widgets/Terminal.html#writing-stdout-to-the-terminal for the Terminal widget, there are 2 bugs when rendering inside a Jupyter notebook with synchronous callback functions that write to stdout (e.g. `print()`):
1. The output does *not* show up in the Terminal widget and instead is printed to the notebook cell output.
2. The output does not show up until after the entire function completes, rather than as each `print` is called (just as described in https://github.com/holoviz/panel/issues/3261), cc @MarcSkovMadsen
Using an `async` version of the callback seems to avoid both issues.
I would expect/need both sync and async functions to have their stdout directed to the Terminal.
### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
import sys
import time
import asyncio

pn.extension('terminal')

terminal = pn.widgets.Terminal()
sys.stdout = terminal


def train_sync(e, n=5):
    print("train_sync")
    for n in range(n):
        time.sleep(0.5)
        print(n)


async def train_async(e, n=5):
    print("train_async")
    for n in range(n):
        await asyncio.sleep(0.5)
        print(n)


button_sync = pn.widgets.Button(name="Train sync", button_type="primary", on_click=train_sync)  # <- has issues
button_async = pn.widgets.Button(name="Train async", button_type="primary", on_click=train_async)  # <- does not have issues

pn.Column(pn.Row(button_sync, button_async), terminal)
```
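As an aside rather than a fix for the bug itself: globally reassigning `sys.stdout` is easy to get wrong across threads and event loops. A more targeted pattern is scoping the redirection to the callback body with `contextlib.redirect_stdout`, which accepts any object exposing `write` (the `Terminal` widget qualifies, since that is why `sys.stdout = terminal` works at all). A dependency-free sketch of that pattern, using a stand-in for the widget:

```python
import contextlib
import io


class FakeTerminal(io.StringIO):
    """Stand-in for pn.widgets.Terminal: anything with a write() method works."""


term = FakeTerminal()


def train_sync(n=3):
    # Redirect only for the duration of this callback instead of globally.
    with contextlib.redirect_stdout(term):
        for i in range(n):
            print(i)


train_sync()
print(repr(term.getvalue()))  # '0\n1\n2\n'
```

This does not address the flushing/ordering bug reported here, but it keeps the redirection local to the code that needs it.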
### Stack traceback and/or browser JavaScript console output
JS console output for the notebook shows only the following (after clicking _both_ async and sync buttons); seems odd to me though not the bug I'm reporting:
```
Python callback returned following output:
train_sync
0
1
2
3
4
```
### Screenshots or screencasts of the bug in action
This takes a few different forms depending on where the code is run.
#### Jupyter Notebook (both issues 1 & 2 mentioned above)

#### Jupyter Notebook "Panel Preview" (just issue 2 mentioned above)

#### panel serve (neither issue — shows the behavior expected for Notebooks)

---
- [x] I may be interested in making a pull request to address this
- I am not familiar with the cause/internals, but happy to PR if folks could point me in the right direction
| open | 2024-08-16T01:33:14Z | 2024-08-16T01:54:05Z | https://github.com/holoviz/panel/issues/7156 | [] | sjdemartini | 0 |
WeblateOrg/weblate | django | 13,647 | Enforced checks not applied to strings imported from the repository | ### Describe the issue
Target string added in the repository that fails the enforced check will not be marked as `Needs editing`.
Example: https://hosted.weblate.org/translate/weblate/documentation/zh_Hans/?checksum=5826a7ff56f512cf#history
https://github.com/WeblateOrg/weblate/commit/7be41bc69f981ea941668c092de070173ff40b1e#diff-06441696C896E897AA1359E2373AE291 only adjusts the source of the above string in the translation file.
I remember that one of the reStructuredText checks had failed (it was fixed by another translator by the time I took this screenshot), but the state of this string was `translated`.
### I already tried
- [x] I've read and searched [the documentation](https://docs.weblate.org/).
- [x] I've searched for similar filed issues in this repository.
### Steps to reproduce the behavior
1. Updating a string in the repository which fails the enforced check
2. Updating in Weblate
3. Go to the full editor view of this string
### Expected behavior
Target string added in the repository that fails the enforced check should be marked as `Needs editing`.
### Screenshots

### Exception traceback
```pytb
```
### How do you run Weblate?
weblate.org service
### Weblate versions
Weblate 5.10-dev
### Weblate deploy checks
```shell
```
### Additional context
_No response_ | closed | 2025-01-24T13:33:35Z | 2025-01-29T16:04:24Z | https://github.com/WeblateOrg/weblate/issues/13647 | [
"enhancement",
"backlog"
] | Geeyun-JY3 | 4 |
iterative/dvc | data-science | 9,880 | dvc get/artifacts get: repo clone is slow | # Bug Report
## Description
`dvc get` is slow to clone the git repo.
This is shown in the DVC benchmarks:

### Reproduce
I can also see it by profiling the example command `dvc get -f git@github.com:iterative/example-get-started-experiments.git models/model.pkl --viztracer --viztracer-depth=32` (CLI `git clone` for this repo takes <1 second):
<img width="1440" alt="Screenshot 2023-08-28 at 9 17 06 AM" src="https://github.com/iterative/dvc/assets/2308172/f4f7fe4f-a203-49a4-be95-5fb432699a41">
[viztracer.tar.gz](https://github.com/iterative/dvc/files/12454794/viztracer.tar.gz)
### Expected
The operation should take a few seconds at most instead of almost a minute. | closed | 2023-08-28T13:23:10Z | 2023-08-29T17:16:37Z | https://github.com/iterative/dvc/issues/9880 | [
"p1-important",
"performance",
"regression",
"git",
"A: data-sync"
] | dberenbaum | 11 |
pytorch/pytorch | machine-learning | 149,640 | torch.distributed.checkpoint CUDA OOM with broadcast_from_rank0 | I am trying to load an FSDP checkpoint by broadcasting weights from rank 0. The model is already correctly set up on GPU on each rank. I use
```python
model_state_dict = torch.distributed.checkpoint.state_dict.set_model_state_dict(
model=self._model,
model_state_dict=model_state_dict,
options=torch.distributed.checkpoint.state_dict.StateDictOptions(
full_state_dict=True,
cpu_offload=True,
ignore_frozen_params=False,
broadcast_from_rank0=True,
),
)
```
When this call starts executing, I can see the CUDA memory on each GPU rapidly rising and from ~20GB → 40GB of memory per GPU on nvidia-smi. Eventually it fails with CUDA OOM (see stack trace below). When I set `broadcast_from_rank0=False`, it works fine. This is observed with both FSDP1 and FSDP2. `model_state_dict` is empty on all ranks except rank 0
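For context, a back-of-envelope sketch of my reading of the traceback (an assumption, not a confirmed diagnosis): the `_alloc_padded_unsharded_flat_param` call suggests each rank transiently materializes a full unsharded flat parameter next to the shards it already holds in order to receive the broadcast, which would explain the rapid memory climb. The sizes below are made up:

```python
# Toy model of peak per-rank parameter memory during a rank-0 broadcast load.
# Assumption: an FSDP unit transiently allocates its FULL flat parameter
# (the _alloc_padded_unsharded_flat_param call in the traceback) on top of
# the sharded copy it already holds.
world_size = 8
full_numel = 1_000_000_000      # hypothetical parameter count of one unit
shard = full_numel // world_size

baseline = shard                # steady-state per-rank parameter memory
peak = shard + full_numel       # shard kept alive + transient full buffer
print(peak // baseline)         # 9
```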
```
Traceback (most recent call last):
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/components/scripts/train/train.py", line 19, i
n <module>
main()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/components/scripts/train/train.py", line 15, i
n main
TrainJob().run(config)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/components/scripts/train/train_job.py", line 1
8, in run
self.run_trainer(config)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/components/scripts/train/train_job.py", line 1
18, in run_trainer
trainer = Trainer(config=config)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/trainer/trainer.py", line 102, in __init_
_
self._maybe_restore_checkpoint()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/trainer/trainer.py", line 124, in _maybe_
restore_checkpoint
self.load_state_dict(state_dict)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/common/state_dict.py", line 96, in load_s
tate_dict
load_state_dict_method(value)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/common/state_dict.py", line 82, in load_s
tate_dict
self._load_custom_state_dict(state_dict)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/trainer/training_module.py", line 164, in
_load_custom_state_dict
model_state_dict = torch.distributed.checkpoint.state_dict.set_model_state_dict(
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/checkpoint/st
ate_dict.py", line 1184, in set_model_state_dict
return _load_model_state_dict(model, model_state_dict, info)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/checkpoint/st
ate_dict.py", line 566, in _load_model_state_dict
_state_dict_fn(model, "load_state_dict")(
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2201, in load_state_dict
load(self, state_dict)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2189, in load
load(child, child_state_dict, child_prefix) # noqa: F821
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2189, in load
load(child, child_state_dict, child_prefix) # noqa: F821
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2189, in load
load(child, child_state_dict, child_prefix) # noqa: F821
[Previous line repeated 2 more times]
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2183, in load
module._load_from_state_dict(
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2034, in _load_from_state_dict
hook(state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 73, in __call__
return self.hook(module, *args, **kwargs)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/utils/_contextlib.py", li
ne 116, in decorate_context
return func(*args, **kwargs)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_state_d
ict_utils.py", line 849, in _pre_load_state_dict_hook
_pre_load_state_dict_hook_fn[fsdp_state._state_dict_type](
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_state_d
ict_utils.py", line 371, in _full_pre_load_state_dict_hook
_enter_unshard_params_ctx(module, fsdp_state, writeback=True)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_state_d
ict_utils.py", line 139, in _enter_unshard_params_ctx
fsdp_state._unshard_params_ctx[module].__enter__()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/external/python_runtime_x86_64-unknown-linux-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_unshard
_param_utils.py", line 197, in _unshard_fsdp_state_params
_unshard(state, handle, computation_stream, computation_stream)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_runtime
_utils.py", line 300, in _unshard
handle.unshard()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_flat_pa
ram.py", line 1310, in unshard
unsharded_flat_param = self._alloc_padded_unsharded_flat_param()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_flat_pa
ram.py", line 1337, in _alloc_padded_unsharded_flat_param
_alloc_storage(unsharded_flat_param, flat_param._padded_unsharded_size) # type: ignore[attr-defined]
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/utils.py", li
ne 186, in _alloc_storage
tensor._typed_storage()._resize_(size.numel())
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/storage.py", line 1027, i
n _resize_
self._untyped_storage.resize_(size * self._element_size())
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.16 GiB. GPU 0 has a total capacity of 39.38 GiB of which 547.38 MiB is free. Including non-PyTorch memory, this process has 38.84 GiB memory in use. Of the allocated memory
35.91 GiB is allocated by PyTorch, and 363.02 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentati
on for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
Related to:
- https://discuss.pytorch.org/t/torch-distributed-checkpoint-cuda-oom-with-broadcast-from-rank0/209240
- https://github.com/pytorch/pytorch/issues/148756#issuecomment-2722593066
Environment:
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.13 (main, Oct 3 2023, 01:22:22) [Clang 17.0.1 ] (64-bit runtime)
Python platform: Linux-5.15.0-1048-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy==1.7.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
```
Same issue observed with torch 2.6.0
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ptrblck @msaroufim @eqy @zhaojuanmao @mrshenli @rohan-varma @chauhang @LucasLLC @pradeepfn | open | 2025-03-20T18:15:13Z | 2025-03-24T17:20:44Z | https://github.com/pytorch/pytorch/issues/149640 | [
"oncall: distributed",
"module: cuda",
"module: fsdp",
"oncall: distributed checkpointing"
] | nikonikolov | 0 |
d2l-ai/d2l-en | machine-learning | 2,588 | WikiText-2 is not a zip file | When I executed the following part:
```python
from d2l import torch as d2l
batch_size, max_len = 512, 64
train_iter, vocab = d2l.load_data_wiki(batch_size, max_len)
```
```python
from d2l import mxnet as d2l
batch_size, max_len = 512, 64
train_iter, vocab = d2l.load_data_wiki(batch_size, max_len)
```
I met this error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/charry/miniconda3/envs/d2l/lib/python3.9/site-packages/d2l/torch.py", line 2443, in load_data_wiki
data_dir = d2l.download_extract('wikitext-2', 'wikitext-2')
File "/home/charry/miniconda3/envs/d2l/lib/python3.9/site-packages/d2l/torch.py", line 3247, in download_extract
fp = zipfile.ZipFile(fname, 'r')
File "/home/charry/miniconda3/envs/d2l/lib/python3.9/zipfile.py", line 1266, in __init__
self._RealGetContents()
File "/home/charry/miniconda3/envs/d2l/lib/python3.9/zipfile.py", line 1333, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
```
I think this is because the dataset on the server has been damaged. I reproduced this error with d2l 1.0.0 through 1.0.3, and it will cause errors whenever the WikiText-2 dataset is needed.
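One way to confirm that the cached download is corrupted (for example an HTML error page saved under the `.zip` name, which is a guess on my part) is to check the file's zip magic bytes before deleting the cache and re-downloading. A small sketch:

```python
import io
import zipfile

ZIP_MAGIC = b"PK\x03\x04"  # local-file-header signature at offset 0


def looks_like_zip(data: bytes) -> bool:
    """True if the payload starts with the zip local-file-header magic."""
    return data[:4] == ZIP_MAGIC


# A corrupted download (e.g. an HTML error page saved as the .zip) fails:
print(looks_like_zip(b"<html><body>404 Not Found"))  # False

# A real zip archive built in memory passes:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("wikitext-2/wiki.train.tokens", "hello")
print(looks_like_zip(buf.getvalue()))  # True
```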
I have a pull request failed due to this error. I also mentioned that there are some pull requests related fixing typo errors also failed check due to this error.
I hope this error can be fixed as soon as possible. | open | 2024-03-07T18:49:37Z | 2025-03-18T03:50:25Z | https://github.com/d2l-ai/d2l-en/issues/2588 | [] | CharryLee0426 | 4 |
mwaskom/seaborn | data-science | 3,633 | `so.Hist` ignores `common_norm=True` for the `"density"` aggregate statistic | ### Current behaviour
`so.Hist` ignores `common_norm=True` for the `"density"` aggregate statistic. `sns.histplot` works correctly.
### Desired behaviour
`so.Hist` should take into account `common_norm` when applying density normalization. The objects interface adds some neat functionality for [normalizing per facet](https://github.com/mwaskom/seaborn/issues/3112#issuecomment-1292870232). Currently it can't be used with `"density"`.
### Examples
I would expect the below examples to produce the same histogram.
```python
(
so.Plot(
penguins,
x='bill_length_mm',
color='species'
)
.add(
so.Bars(),
so.Hist(
stat='density',
common_norm=True,
),
)
)
```

```python
sns.histplot(
data=penguins,
x='bill_length_mm',
hue='species',
stat='density',
common_norm=True,
)
```

### Versions
- seaborn 0.13.2
- matplotlib 3.8.2
- pandas 2.2.0 | open | 2024-02-14T12:37:43Z | 2024-02-25T16:39:36Z | https://github.com/mwaskom/seaborn/issues/3633 | [
"bug"
] | defiori | 0 |
mitmproxy/pdoc | api | 386 | Installation of version 11.1.0 fails with: No matching distribution found for pygments>=2.12.0 | #### Problem Description
Trying to install version 11.1.0 of pdoc fails with "ERROR: No matching distribution found for pygments>=2.12.0".
#### Steps to reproduce the behavior:
1. Set up environment with python 3.8.12
2. pip install pdoc
Or: with pdoc < 11.0.0 installed, run "pip install pdoc --upgrade"; the upgrade only installs version 11.0.0.
#### System Information
pdoc: 11.0.0
Python: 3.8.12
Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.10
pdoc: 11.0.0
Python: 3.9.10
Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.31
I could not find any Python version requirement for pygments, so I tested with Python 3.8 and 3.9 and received the same error message. I am not sure whether this is really a pdoc issue, but I thought it should be noted here first. Happy to also post the issue with pygments.
| closed | 2022-04-25T08:34:46Z | 2022-04-25T09:33:08Z | https://github.com/mitmproxy/pdoc/issues/386 | [
"bug"
] | JenniferHem | 2 |
iperov/DeepFaceLab | deep-learning | 5,526 | Train Quick96 press any key Forever | On step 6, after loading samples it says "Press any key", but nothing happens after pressing any key... Is there any way I can fix it? Thanks.
```
Running trainer.
[new] No saved models found. Enter a name of a new model : 1
1
Model first run.
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : NVIDIA GeForce GTX 1060 6GB
[0] Which GPU indexes to choose? :
0
Initializing models: 100%|###############################################################| 5/5 [00:01<00:00, 2.72it/s]
Loading samples: 100%|############################################################| 2951/2951 [00:05<00:00, 497.68it/s]
Loading samples: 100%|##########################################################| 33410/33410 [01:09<00:00, 478.53it/s]
Для продолжения нажмите любую клавишу . . .   ("Press any key to continue . . .")
```
| open | 2022-05-29T12:27:07Z | 2023-07-25T09:36:26Z | https://github.com/iperov/DeepFaceLab/issues/5526 | [] | huebez | 5 |
chiphuyen/stanford-tensorflow-tutorials | nlp | 90 | memory error struct.unpack() | When I was using the utils module, an error occurred at struct.unpack():
```python
train, val, test = tools.read_mnist(Data_Dir, flatten=True)
# raises MemoryError
```
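Assuming the MemoryError comes from unpacking too much of the IDX file at once (a guess on my part, since `read_mnist`'s internals may differ), the usual remedy is parsing only the 16-byte header with `struct` and then reading the images in chunks. A self-contained sketch with a tiny fake file:

```python
import io
import struct


def read_idx_images(stream, batch=1000):
    """Yield image bytes in chunks instead of unpacking the whole file."""
    magic, n, rows, cols = struct.unpack(">IIII", stream.read(16))
    assert magic == 2051, "not an IDX image file"  # 2051 = MNIST image magic
    per_image = rows * cols
    done = 0
    while done < n:
        k = min(batch, n - done)
        yield stream.read(k * per_image)
        done += k


# Tiny fake IDX file: 2 images of 2x2 pixels.
fake = io.BytesIO(struct.pack(">IIII", 2051, 2, 2, 2) + bytes(range(8)))
chunks = list(read_idx_images(fake, batch=1))
print(len(chunks), len(chunks[0]))  # 2 4
```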
| closed | 2018-01-26T15:11:35Z | 2019-02-27T08:13:40Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/90 | [] | phyorch | 3 |
tflearn/tflearn | tensorflow | 460 | UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. | Hi TFLearn Team,
I am new to TensorFlow and TFLearn and am implementing a classifier using LSTM.
Meanwhile, I encountered the following warning:
```
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients.py:90: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
```
I cannot understand what it means. Could you give me a solution?
Here is a part of my code:
```python
net = tflearn.input_data([None, 500])
net = tflearn.embedding(net, input_dim=len(dic), output_dim=128)
net = tflearn.lstm(net, 1024, dropout=0.8, dynamic=True)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='adam', learning_rate=args.rate,
                         loss='categorical_crossentropy')
model = tflearn.DNN(net, tensorboard_verbose=3)
```
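For what it's worth, my understanding (an assumption, not a confirmed diagnosis): the gradient of the embedding lookup is sparse (an `IndexedSlices` touching only the looked-up rows), and some op in the graph forces it dense, which materializes the full `input_dim x output_dim` matrix. A back-of-envelope sketch with made-up sizes of why the warning mentions memory:

```python
# Made-up sizes echoing the model above (input_dim=len(dic), output_dim=128).
vocab, dim = 50_000, 128       # assumed vocabulary size and embedding width
rows_in_batch = 500            # embedding rows actually looked up per step

sparse_floats = rows_in_batch * dim   # gradient kept as IndexedSlices
dense_floats = vocab * dim            # gradient after densification
print(dense_floats // sparse_floats)  # 100x larger once densified
```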
Thanks. | open | 2016-11-14T07:37:04Z | 2020-04-11T15:40:24Z | https://github.com/tflearn/tflearn/issues/460 | [] | KihongHeo | 6 |
babysor/MockingBird | pytorch | 138 | librosa.load cannot read a FileStorage audio object | When calling it from the web I get "File contains data in an unknown format."
I found that __init__.py reads the uploaded audio with librosa.load(request.files['file']).
I could not find any documentation saying librosa can read a FileStorage object directly.
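A possible disk-free alternative, assuming a librosa/soundfile version that accepts file-like objects (worth verifying), is wrapping the upload's bytes in `io.BytesIO`, for example `librosa.load(io.BytesIO(request.files['file'].read()))`. A stdlib-only sketch of the in-memory-file pattern it relies on:

```python
import io
import struct
import wave

# Build a tiny in-memory WAV as a stand-in for the uploaded FileStorage payload.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(struct.pack("<4h", 0, 1000, -1000, 0))
buf.seek(0)

# Any loader that accepts a file-like object can consume it without a temp file.
with wave.open(buf, "rb") as r:
    print(r.getnframes())  # 4
```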
In the end I worked around it by saving the received file locally. | open | 2021-10-12T08:27:09Z | 2022-01-24T13:05:59Z | https://github.com/babysor/MockingBird/issues/138 | [
"bug"
] | wxtt522 | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,065 | TypeError in ORM model using an enum type hint (1.4) | ### Ensure stubs packages are not installed
- [] No sqlalchemy stub packages *are* installed, this is against 1.4.x
### Verify if the api is typed
- [X] The api is not in a module listed in [#6810](https://github.com/sqlalchemy/sqlalchemy/issues/6810) so it should pass type checking
### Describe the typing issue
"INTERNAL ERROR" when running mypy on a file with basic ORM models.
### To Reproduce
```python
import enum
import sqlalchemy
from sqlalchemy import orm
Base = orm.declarative_base()
class BlaEnum(enum.Enum):
SOMETHING = enum.auto()
class Model(Base):
...
class Bar(Model):
size: BlaEnum = sqlalchemy.Column(sqlalchemy.Text)
```
### Error
```
version: 1.5.0+dev.c13f1d416e907f58bc77d086b84819f500f1bde9
Traceback (most recent call last):
File "/path/to/project/venv/bin/mypy", line 8, in <module>
sys.exit(console_entry())
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/__main__.py", line 15, in console_entry
main()
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/main.py", line 94, in main
res, messages, blockers = run_build(sources, options, fscache, t0, stdout, stderr)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/main.py", line 173, in run_build
res = build.build(sources, options, None, flush_errors, fscache, stdout, stderr)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/build.py", line 195, in build
result = _build(
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/build.py", line 268, in _build
graph = dispatch(sources, manager, stdout)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/build.py", line 2927, in dispatch
process_graph(graph, manager)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/build.py", line 3325, in process_graph
process_stale_scc(graph, scc, manager)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/build.py", line 3420, in process_stale_scc
mypy.semanal_main.semantic_analysis_for_scc(graph, scc, manager.errors)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal_main.py", line 93, in semantic_analysis_for_scc
process_top_levels(graph, scc, patches)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal_main.py", line 220, in process_top_levels
deferred, incomplete, progress = semantic_analyze_target(
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal_main.py", line 349, in semantic_analyze_target
analyzer.refresh_partial(
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal.py", line 600, in refresh_partial
self.refresh_top_level(node)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal.py", line 611, in refresh_top_level
self.accept(d)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal.py", line 6475, in accept
node.accept(self)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/nodes.py", line 1141, in accept
return visitor.visit_class_def(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal.py", line 1600, in visit_class_def
self.analyze_class(defn)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal.py", line 1685, in analyze_class
self.analyze_class_body_common(defn)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal.py", line 1714, in analyze_class_body_common
self.apply_class_plugin_hooks(defn)
File "/path/to/project/venv/lib/python3.11/site-packages/mypy/semanal.py", line 1801, in apply_class_plugin_hooks
hook(ClassDefContext(defn, base_expr, self))
File "/path/to/project/venv/lib/python3.11/site-packages/sqlalchemy/ext/mypy/plugin.py", line 261, in _base_cls_hook
decl_class.scan_declarative_assignments_and_apply_types(ctx.cls, ctx.api)
File "/path/to/project/venv/lib/python3.11/site-packages/sqlalchemy/ext/mypy/decl_class.py", line 96, in scan_declarative_assignments_and_apply_types
_scan_declarative_assignment_stmt(
File "/path/to/project/venv/lib/python3.11/site-packages/sqlalchemy/ext/mypy/decl_class.py", line 459, in _scan_declarative_assignment_stmt
python_type_for_type = infer.infer_type_from_right_hand_nameexpr(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/venv/lib/python3.11/site-packages/sqlalchemy/ext/mypy/infer.py", line 52, in infer_type_from_right_hand_nameexpr
python_type_for_type = _infer_type_from_decl_column(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/venv/lib/python3.11/site-packages/sqlalchemy/ext/mypy/infer.py", line 410, in _infer_type_from_decl_column
return _infer_type_from_left_and_inferred_right(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/venv/lib/python3.11/site-packages/sqlalchemy/ext/mypy/infer.py", line 457, in _infer_type_from_left_and_inferred_right
format_type(orig_left_hand_type),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: format_type() missing 1 required positional argument: 'options'
```
### Versions
- OS: Ubuntu 22.04 LTS
- Python: 3.11.3
- SQLAlchemy: 1.4.48
- Type checker (eg: mypy 0.991, pyright 1.1.290, etc): mypy 1.4.1 (and current master)
## requirements.txt:
```
mypy==1.4.1
mypy-extensions==1.0.0
sqlalchemy[mypy]==1.4.48
sqlalchemy2-stubs==0.0.2a34
types-sqlalchemy==1.4.53.38
types-sqlalchemy-utils==1.0.1
typing-extensions==4.6.3
```
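For anyone hitting this before a fixed release lands: the traceback's `format_type() missing 1 required positional argument: 'options'` looks like the mypy 1.4 API change to `mypy.messages.format_type` (my assumption from the error, not confirmed). Pinning mypy below 1.4 in the requirements is a plausible stopgap until the plugin catches up:

```
mypy<1.4                     # assumed last series before the format_type() change
sqlalchemy[mypy]==1.4.48
```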
### Additional context
This is a condensed minimal reproducible example. In short, `Model` is my abstract base model for all ORM classes. `Bar` is an ORM model that has an enum property.
Declaring the column as enum would be correct, but it's not necessary to trigger the bug. `__abstract__ = True` is missing of course. Code just serves as a minimal example and doesn't make sense. I just tried to remove all distractions. | closed | 2023-07-05T13:08:34Z | 2023-07-05T13:56:19Z | https://github.com/sqlalchemy/sqlalchemy/issues/10065 | [
"SQLA mypy plugin",
"typing"
] | hofrob | 7 |
holoviz/panel | plotly | 7,567 | Make TextInput work for forms | When comparing to [Reflex](https://reflex.dev/) its clear they have given more thought to their TextInput for forms.
We are missing basic attributes like
- `required` (`True` or `False`)
- `type` (for example `"email"`)
- `pattern` (`".+@example\.com"`)
That can help users when they enter text.
Reflex will
- make a TextInput active is its required and nothing is input
- show a tooltip explaining what is missing (for example a `@` in an email)
https://github.com/user-attachments/assets/a10a04c9-8537-48e4-80b5-6fc876052fe3
| open | 2024-12-23T07:18:32Z | 2025-01-20T21:32:15Z | https://github.com/holoviz/panel/issues/7567 | [
"type: enhancement"
] | MarcSkovMadsen | 3 |
ageitgey/face_recognition | python | 956 | face_recognition too slow | * face_recognition version:1.2.3
* Python version:3
* Operating System:ubuntu 16.04
Hi, I am using a MinnowBoard (Intel Atom E3845 processor). The facerec_from_webcam.py and facerec_from_webcam_multiprocessing.py examples are too slow; they run at only 1 fps. But when I run the same examples on my laptop with an Intel i3 processor, I get 15 fps. How can I improve the speed on the MinnowBoard?
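A common way to speed up the webcam examples on weak CPUs is to run detection on a downscaled frame (and/or skip every other frame). Here is a minimal numpy-only sketch of the downscaling step; this is an illustration, not face_recognition API:

```python
import numpy as np

def downscale(frame, factor=4):
    # Nearest-neighbour downscale by striding: detection on a quarter-size
    # frame does roughly 1/16 of the per-pixel work.
    return frame[::factor, ::factor]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # a typical webcam frame
small = downscale(frame)
print(small.shape)  # (120, 160, 3)
```

Detected face locations then just need to be multiplied back by the factor before drawing on the full-size frame.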
| open | 2019-10-18T05:24:48Z | 2019-12-26T15:31:12Z | https://github.com/ageitgey/face_recognition/issues/956 | [] | Harishrelysys | 1 |
Colin-b/pytest_httpx | pytest | 167 | Allow usage with `httpx.MockTransport` | In certain use cases, `pytest_httpx` is used only to mock a single httpx client, specifically one responsible for connecting to a cluster of servers for service `A`. However, while mocking this specific client, we still want the flexibility to make normal, non-mocked requests to other services without impacting the entire environment.
Previously, we initialized `HTTPXMock` directly, but with recent changes, this is no longer supported. While the focus on clean interfaces is understandable, this change limits the ability to easily create single mocked clients while keeping other requests unaltered.
To address this, I propose adding support for a MockTransport, as outlined below:
```python
TCallable = TypeVar("TCallable", bound=Callable)


def copy_docs(source_func: Callable) -> Callable[[TCallable], TCallable]:
    def apply_docs(target_func: TCallable) -> TCallable:
        target_func.__doc__ = source_func.__doc__
        return target_func

    return apply_docs


class MockedTransport(httpx.MockTransport):
    def __init__(
        self,
        assert_all_responses_were_requested: bool = True,
        assert_all_requests_were_expected: bool = True,
        can_send_already_matched_responses: bool = False,
    ):
        _pool = None  # ensure _proxy_url does not fail
        options = _HTTPXMockOptions(
            can_send_already_matched_responses=can_send_already_matched_responses,
            assert_all_responses_were_requested=assert_all_responses_were_requested,
            assert_all_requests_were_expected=assert_all_requests_were_expected,
        )
        self.mock = HTTPXMock(options)
        super().__init__(lambda request: self.mock._handle_request(self, request))

    # TODO copy call signature
    # see https://stackoverflow.com/a/71968448/3813064 or
    # https://github.com/python/cpython/pull/121693
    @copy_docs(HTTPXMock.add_response)
    def add_response(self, *args, **kwargs) -> None:
        self.mock.add_response(*args, **kwargs)

    @copy_docs(HTTPXMock.add_callback)
    def add_callback(self, *args, **kwargs) -> None:
        self.mock.add_callback(*args, **kwargs)

    @copy_docs(HTTPXMock.add_exception)
    def add_exception(self, *args, **kwargs) -> None:
        self.mock.add_exception(*args, **kwargs)
```
The `MockedTransport` class extends integrate smoothly with `HTTPXMock`, allowing targeted client mocking while enabling other clients to make live requests as usual.
| open | 2024-11-04T17:33:05Z | 2024-12-23T12:22:19Z | https://github.com/Colin-b/pytest_httpx/issues/167 | [
"question"
] | CarliJoy | 4 |
ultralytics/yolov5 | machine-learning | 12,978 | How do Yolo target assignments to anchors work? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am trying to understand exactly how YOLO makes its predictions. I have found that YOLO assigns each target to an anchor, and that there are 3 anchors per detection head per grid cell. I understand this is what gives YOLO some problems when detecting small objects that are close together, as only one prediction is made per grid cell.
I am not sure if this is how it exactly works, but I think it is something like that. Working in some training experiments, I did not quite understand the results, so I decided to run a minimal experiment to understand this fact. Taking into account Yolo's limitations on detecting close and small objects, I decided to use a black image as input (with noise) and a 10x10 pixels white square located somewhere in the image. I then assigned two labels (5 pixels away one from each other) in the white square.
If my understanding were correct, the model should reach, more or less, 50% in P and R, as it should only be able to detect one of the labels. However, the model is able to predict both labels.
How is this happening? Where is the error in my understanding?
### Additional
_No response_ | closed | 2024-04-30T12:16:16Z | 2024-06-13T00:22:16Z | https://github.com/ultralytics/yolov5/issues/12978 | [
"question",
"Stale"
] | nachoogriis | 4 |
sktime/pytorch-forecasting | pandas | 947 | 'DataFrame' object has no attribute 'dtype' | - PyTorch-Forecasting version: 0.10.1
- PyTorch version: 1.11.0
- Python version: 3.7
- Operating System: Windows
- Pandas version: 1.3.5
### Expected behavior
I tried to use a TemporalFusionTransformer model to make a prediction
### Actual behavior
However, the result was an AttributeError: 'DataFrame' object has no attribute 'dtype', pointing to line 423 in the file pytorch_forecasting\data\encoders.py
### Code to reproduce the problem
The model was trained using TorchNormalizer in scalers when I set the TimeSeriesDataSet.
After testing I found that line 903 in the file pytorch_forecasting\data\timeseries.py removes the dtype attribute from the dataframe.
### Code to resolve the problem in attachments
[code.zip](https://github.com/jdb78/pytorch-forecasting/files/8456197/code.zip)
| open | 2022-04-09T03:59:47Z | 2023-02-20T10:14:20Z | https://github.com/sktime/pytorch-forecasting/issues/947 | [] | JoMaCaCha | 1 |
albumentations-team/albumentations | machine-learning | 2,055 | Extra keypoints when using Rotate with keypoints | ## Describe the bug
Using `A.Rotate` in `A.Compose` with `remove_invisible=False` returns 9 times more keypoints than the input number of keypoints.
### To Reproduce
Steps to reproduce the behavior:
```
import numpy as np
import albumentations as A
image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
keypoints = np.random.randint(0, 100, (50, 2), dtype=np.uint8)
# Check shape
print(keypoints.shape) # (50, 2)
result = A.Compose([A.Rotate(limit=45, p=1.0)], keypoint_params=A.KeypointParams(format='yx', remove_invisible=False))(image=image, keypoints=keypoints)
# Check shape
print(result['keypoints'].shape) # (450, 2)
```
### Expected behavior
`result['keypoints']` should have the same shape as the input keypoints. (50, 2)
### Actual behavior
`result['keypoints']` has 9 times more keypoints than the input keypoints. (9 * 50, 2)
### Additional context
The bug was not present in albumentations==1.4.18 and was introduced since albumentations==1.4.19
| closed | 2024-11-04T08:50:59Z | 2024-11-04T15:51:35Z | https://github.com/albumentations-team/albumentations/issues/2055 | [
"bug"
] | Chxresubles | 1 |
pandas-dev/pandas | pandas | 60,421 | BUG: pd.read_json fails with newer versions (>1.26.4) of numpy | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import io
json_data = '{"name": ["John", "Jane", "Bob"],"age": [25, 30, 35],"city": ["New York", "San Francisco", "Chicago"]}'
df = pd.read_json(io.StringIO(json_data))
print(df)
```
### Issue Description
With numpy versions larger than 1.26.4 (haven't tested every version), it appears to fail in [StringFormatter._join_multiline ](https://github.com/pandas-dev/pandas/blob/main/pandas/io/formats/string.py#L127), because of the call to
`np.array([self.adj.len(x) for x in idx]).max()`.
Similar call is in line [130](https://github.com/pandas-dev/pandas/blob/main/pandas/io/formats/string.py#L130).
It seems broken in numpy as `np.array([0,3]).max()` fails with the same error. Reported it [here](https://github.com/numpy/numpy/issues/27857).
A quick fix is to just change it to:
`np.max(np.array([self.adj.len(x) for x in idx]))`
### Expected Behavior
Should print out the dataframe (which it does with numpy==1.26.4):
```
name age city
0 John 25 New York
1 Jane 30 San Francisco
2 Bob 35 Chicago
```
### Installed Versions
<details>
pandas : 2.2.2
numpy : 2.1.3
pytz : 2024.2
dateutil : 2.9.0.post0
setuptools : 56.0.0
pip : 21.1
pytest : 8.3.3
scipy : 1.14.1
sqlalchemy : 2.0.35
xlrd : 2.0.1
tzdata : 2024.2
</details>
| closed | 2024-11-26T14:14:29Z | 2024-11-26T15:04:35Z | https://github.com/pandas-dev/pandas/issues/60421 | [
"Bug",
"Needs Triage"
] | MagnusCaspersen184 | 1 |
tflearn/tflearn | data-science | 1,168 | LSTM stateful? | In keras, there is the option to specify whether the lstm is stateful and the next batch continues the previous sequence or not.
How are the LSTMs in tflearn handled?
When I start prediction with data of shape (1, 1, 10) and pass afterwards new data of shape (1, 1, 10), will the lstm take this as (1, 2, 10) data and continue the sequence or does it take it as passing (2, 1, 10) and think it is a new training sequence with timesteps 1.
I'm in a reinforcement setting and can only pass my time series one timestep at a time, so I need to know if I can achieve this using tflearn or if I need to keep track of the state of the lstm.
Also, keras requires the initial_state during calling the lstm, not at creation. This also seems to not be the case here. | open | 2022-05-25T19:36:13Z | 2022-05-25T19:36:13Z | https://github.com/tflearn/tflearn/issues/1168 | [] | Lukas3226 | 0 |
joke2k/django-environ | django | 89 | Add documentation for proxied values | With django-environ it's possible to have one environment variable refer to another, using this syntax:
```
CLOUDAMQP_URL=amqp://.....
BROKER_URL='$CLOUDAMQP_URL'
```
(Heroku escapes the environment variables, and so they are not expanded in `os.environ`, so this support from django-environ is very handy)
It would be good to document this feature in the readme :-)
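For context, the resolution behaviour can be sketched in plain Python; this is only an illustration of the idea, not django-environ's actual implementation:

```python
import os

def resolve_proxied(name, environ=os.environ):
    """Follow $VAR references until a concrete value is found."""
    value = environ[name]
    seen = set()
    while value.startswith("$"):
        target = value[1:]
        if target in seen:
            raise ValueError(f"circular reference via {target}")
        seen.add(target)
        value = environ[target]
    return value

os.environ["CLOUDAMQP_URL"] = "amqp://user:pass@host/vhost"
os.environ["BROKER_URL"] = "$CLOUDAMQP_URL"
print(resolve_proxied("BROKER_URL"))  # amqp://user:pass@host/vhost
```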
| closed | 2016-07-07T15:24:53Z | 2022-06-19T14:15:50Z | https://github.com/joke2k/django-environ/issues/89 | [
"documentation"
] | edmorley | 2 |
keras-team/autokeras | tensorflow | 891 | Roc curve, other metr | Hey, has anyone tried to plot a ROC curve for AutoKeras? I tried sklearn.metrics.roc_curve but it did not work because the array sizes were wrong. Maybe someone has an example.
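The "size of array" problem is often just a shape mismatch: the model returns (n, 1) scores while the labels are flat (n,). Flattening both before computing the curve fixes it; a minimal numpy-only sketch (illustrative, not AutoKeras-specific):

```python
import numpy as np

def roc_points(y_true, y_score):
    """Manual ROC: sweep thresholds over the scores (flattens inputs first)."""
    y_true = np.ravel(y_true)    # turns (n, 1) model output into (n,)
    y_score = np.ravel(y_score)
    order = np.argsort(-y_score)  # highest score first
    y = y_true[order]
    tps = np.cumsum(y)            # true positives at each threshold
    fps = np.cumsum(1 - y)        # false positives at each threshold
    tpr = tps / max(y.sum(), 1)
    fpr = fps / max((1 - y).sum(), 1)
    return fpr, tpr

fpr, tpr = roc_points([0, 0, 1, 1], [[0.1], [0.4], [0.35], [0.8]])
print(fpr[-1], tpr[-1])  # 1.0 1.0
```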
Thank you very much | closed | 2020-01-12T00:04:20Z | 2020-03-22T22:53:12Z | https://github.com/keras-team/autokeras/issues/891 | [
"bug report",
"wontfix"
] | taylor4712 | 4 |
3b1b/manim | python | 1,747 | I've installed manim, but can't get it to run the example scenes | I've installed manim, but can't get it to run the example scenes
python -m manim example_scenes.py SquareToCircle -pl
And I get the following back:
RuntimeWarning: 'manim.__main__' found in sys.modules after import of package 'manim', but prior to execution of 'manim.__main__'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Manim Community v0.14.0
Usage: python -m manim render [OPTIONS] FILE [SCENE_NAMES]...
Try 'python -m manim render --help' for help.
Error: No such option: -l
I'm running Python 3.10 on a Windows 11 OS. | closed | 2022-02-17T02:46:04Z | 2022-02-17T07:45:32Z | https://github.com/3b1b/manim/issues/1747 | [] | hoonle131 | 1 |
joeyespo/grip | flask | 187 | Error while displaying special characters | So I have a file with characters like <kbd>ê</kbd> <kbd>õ</kbd> <kbd>á</kbd> <kbd>ç</kbd> <kbd>ì</kbd>. When I start grip it looks normal (`* Running on http://localhost:6419/ (Press CTRL+C to quit)`) but when I open the browser window and navigate to localhost:6419, it gives me a 500 Internal Error. On the terminal window this is the traceback:
``` bash
[2016-07-08 18:59:47,107] ERROR in app: Exception on / [GET]
Traceback (most recent call last):
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\flask\app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\flask\app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\flask\app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\flask\_compat.py", line 33, in reraise
raise value
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\flask\app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\flask\app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\grip\app.py", line 163, in _render_page
text = self.reader.read(subpath)
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\grip\readers.py", line 265, in read
return self._read_text(filename)
File "c:\users\coppe\appdata\local\programs\python\python35\lib\site-packages\grip\readers.py", line 146, in _read_text
return f.read()
File "c:\users\coppe\appdata\local\programs\python\python35\lib\codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf5 in position 68: invalid start byte
127.0.0.1 - - [08/Jul/2016 18:59:47] "GET / HTTP/1.1" 500 -
```
This error only shows up on Windows (tested on Windows 10, Firefox/Chrome). On my Linux (Fedora 23) it is working without issues.
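For what it's worth, the failure is reproducible in isolation: the byte 0xf5 is 'õ' in Latin-1 but is not a valid UTF-8 start byte. A sketch of the problem and one possible fallback, assuming the file is Latin-1 encoded:

```python
data = "p\u00f5e".encode("latin-1")  # bytes as a Latin-1 editor would save "põe"
try:
    text = data.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)                       # ...can't decode byte 0xf5...: invalid start byte
    text = data.decode("latin-1")    # fall back to the file's real encoding
print(text)  # põe
```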
| open | 2016-07-08T22:04:01Z | 2017-05-10T09:44:42Z | https://github.com/joeyespo/grip/issues/187 | [
"bug"
] | L30Bola | 2 |
PablocFonseca/streamlit-aggrid | streamlit | 174 | V.03 theme no longer available in AgGrid | Code works fine in 2.3 but not in 3 .. can you theme to the config file in V3?
ValueError: light is not valid. Available options: {'STREAMLIT': <AgGridTheme.STREAMLIT: 'streamlit'>, 'ALPINE': <AgGridTheme.ALPINE: 'alpine'>, 'BALHAM': <AgGridTheme.BALHAM: 'balham'>, 'MATERIAL': <AgGridTheme.MATERIAL: 'material'>}
File "C:\Users\sstapinski\pollen\lib\site-packages\st_aggrid__init.py", line 295, in AgGrid
raise ValueError(f"{theme} is not valid. Available options: {AgGridTheme.members__}")
streamlit-aggrid==0.3.3
```
import streamlit as st
import pandas as pd
import numpy as np
from st_aggrid import AgGrid, GridOptionsBuilder

df = pd.DataFrame(
    np.random.randint(0, 100, 50).reshape(-1, 5),
    index=range(10),
    columns=list("abcde"),
)

available_themes = ["streamlit", "light", "dark", "blue", "fresh", "material"]
selected_theme = st.selectbox("Theme", available_themes)

gb = GridOptionsBuilder.from_dataframe(df)
if st.checkbox('Pre-select rows 4 and 6 when loading.'):
    gb.configure_selection('multiple', pre_selected_rows=[3, 5])

response = AgGrid(
    df,
    editable=True,
    gridOptions=gb.build(),
    data_return_mode="filtered_and_sorted",
    update_mode="no_update",
    fit_columns_on_grid_load=True,
    theme=selected_theme
)
| closed | 2022-12-19T18:12:22Z | 2024-04-04T17:54:36Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/174 | [] | nafets33 | 2 |
axnsan12/drf-yasg | django | 98 | How to add custom header in javascript ??? | (rest_framework_swagger/index.html) using :
window.swaggerUi.api.clientAuthorizations.add("appVersion",
new SwaggerClient.ApiKeyAuthorization("appVersion", appVersion, "header"));
window.swaggerUi.api.clientAuthorizations.add("userType",
new SwaggerClient.ApiKeyAuthorization("appType", appType, "header"))
window.swaggerUi.api.clientAuthorizations.add("osType",
new SwaggerClient.ApiKeyAuthorization("osType", osType, "header"))
(drf-yasg/swagger-ui.html ) not working... | closed | 2018-04-12T06:08:53Z | 2018-08-13T19:36:25Z | https://github.com/axnsan12/drf-yasg/issues/98 | [] | jyseeeee | 6 |
Gozargah/Marzban | api | 1,628 | Question: is it safe to upgrade Marzban? | Hi all!
Sorry to bother you, but I want to ask: is it safe to upgrade a Marzban host from v0.4.1 to v0.8.4?
Thank you! | closed | 2025-01-30T17:01:27Z | 2025-01-31T05:47:46Z | https://github.com/Gozargah/Marzban/issues/1628 | [] | yagee | 1 |
MorvanZhou/tutorials | numpy | 80 | pandas values | The displayed results are different.
The original data in the pandas DataFrame is:
A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
After I use pd.values it becomes:
[[-0.16379874 -0.83048248 2.47750063 0.88128204]
[ 0.99402871 1.36858439 -0.73823559 0.22736741]
[ 1.23171536 0.36008269 0.63591057 0.84275845]
[-0.80315072 -0.97444123 0.38838448 0.73846645]
[-0.76956867 1.28131155 1.3510347 1.01261566]
[-0.67800466 0.73231185 1.14658132 0.20543637]]
The documentation does not mention this. Do you know about this issue? Thank you very much!
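For reference, `df.values` on a frame with mixed column dtypes returns one NumPy array in a common dtype (`object` here), not per-column values; the all-float numbers shown above look like they came from a different, purely numeric DataFrame. A small check:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"A": [1.0, 1.0], "E": ["test", "train"], "D": [3, 3]})
print(df.values.dtype)               # object: mixed dtypes are upcast
print(df[["A", "D"]].values.dtype)   # float64: homogeneous numeric subset
```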
| open | 2019-08-07T03:45:46Z | 2024-01-09T07:04:54Z | https://github.com/MorvanZhou/tutorials/issues/80 | [] | 99sun99 | 4 |
aiortc/aiortc | asyncio | 512 | Displaying extracted image is not proper | I followed the VideoClient-cli French-flag example and wrote a class that returns frames of a ball bouncing off the screen and sends them from the server to the client. When I received the images on the client side and converted them to ndarrays so that I could display them using cv2, the images weren't right: the whole screen had the color of the ball. The screen was supposed to be black and the ball was supposed to be green, but I'm getting a green screen instead of the image. I created the class as an extension of VideoStreamTrack | closed | 2021-03-19T05:04:11Z | 2021-03-19T06:06:57Z | https://github.com/aiortc/aiortc/issues/512 | [
"invalid"
] | Mohitkumaraian | 1 |
liangliangyy/DjangoBlog | django | 379 | My merge request was blocked | My pull request to beautify the django-admin backend with simpleui could not be merged successfully.
<!--
If you do not carefully check the items below, I may close your Issue directly.
Before asking, it is recommended to read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way first.
-->
**I confirm that I have already checked** (mark `[ ]` as `[x]`)
- [x] [the DjangoBlog readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [the configuration guide](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [x] [other Issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am requesting** (mark `[ ]` as `[x]`)
- [ ] Bug report
- [x] Add a new feature or function
- [ ] Request technical support
| closed | 2020-03-13T03:54:52Z | 2020-03-16T01:35:35Z | https://github.com/liangliangyy/DjangoBlog/issues/379 | [] | Chise1 | 1 |
modin-project/modin | data-science | 7,150 | Reduce peak memory consumption | closed | 2024-04-05T08:17:12Z | 2024-04-08T17:41:52Z | https://github.com/modin-project/modin/issues/7150 | [
"Code Quality 💯",
"Memory 💾",
"P1"
] | anmyachev | 0 | |
whitphx/streamlit-webrtc | streamlit | 1,408 | Incompatibility between the av==10.0.0 library and streamlit-webrtc==0.47.1 | ### Code:
```
class OpenRecognitionVideoTransformer(VideoTransformerBase):
    def __init__(self) -> None:
        self.type = "noop"

    def transform(self, frame: av.VideoFrame) -> av.VideoFrame:
        img = frame.to_image()
        return av.VideoFrame.from_image(img)
```
```
webrtc_ctx = webrtc_streamer(
    key="opencv-filter",
    mode=WebRtcMode.SENDRECV,
    client_settings=WEBRTC_CLIENT_SETTINGS,
    video_transformer_factory=OpenRecognitionVideoTransformer,
    async_processing=True,
)
```
### Error
`2023-10-13 01:28:10,497 - streamlit_webrtc.process - ERROR - Error occurred in the WebRTC thread: - None
2023-10-13 01:28:10,514 - streamlit_webrtc.process - ERROR - Traceback (most recent call last): - None
2023-10-13 01:28:10,519 - streamlit_webrtc.process - ERROR - File "/usr/local/lib/python3.10/site-packages/streamlit_webrtc/process.py", line 108, in _run_worker_thread - None
2023-10-13 01:28:10,523 - streamlit_webrtc.process - ERROR - self._worker_thread() - None
2023-10-13 01:28:10,530 - streamlit_webrtc.process - ERROR - File "/usr/local/lib/python3.10/site-packages/streamlit_webrtc/process.py", line 196, in _worker_thread - None
2023-10-13 01:28:10,534 - streamlit_webrtc.process - ERROR - new_frames = finished.result() - None
2023-10-13 01:28:10,539 - streamlit_webrtc.process - ERROR - File "/usr/local/lib/python3.10/site-packages/streamlit_webrtc/models.py", line 115, in recv_queued - None
2023-10-13 01:28:10,544 - streamlit_webrtc.process - ERROR - return [self.recv(frames[-1])] - None
2023-10-13 01:28:10,551 - streamlit_webrtc.process - ERROR - File "/usr/local/lib/python3.10/site-packages/streamlit_webrtc/models.py", line 107, in recv - None
2023-10-13 01:28:10,556 - **streamlit_webrtc.process - ERROR - return av.VideoFrame.from_ndarray(new_image, format="bgr24") - None**
2023-10-13 01:28:10,560 - streamlit_webrtc.process - ERROR - File "av/video/frame.pyx", line 408, in av.video.frame.VideoFrame.from_ndarray - None
2023-10-13 01:28:10,566 - streamlit_webrtc.process - ERROR - File "av/utils.pyx", line 69, in av.utils.check_ndarray - None
2023-10-13 01:28:10,570 - streamlit_webrtc.process - ERROR - AttributeError: 'av.video.frame.VideoFrame' object has no attribute 'dtype' - None`
**python 3.10**
### requirements.txt
aioice==0.9.0
aiortc==1.5.0
altair==5.1.2
attrs==23.1.0
av==10.0.0
blinker==1.6.3
cachetools==5.3.1
certifi==2023.7.22
cffi==1.16.0
charset-normalizer==3.3.0
click==8.1.7
cryptography==41.0.4
dlib==19.24.2
dnspython==2.4.2
face-recognition==1.3.0
face-recognition-models==0.3.0
gitdb==4.0.10
GitPython==3.1.37
google-crc32c==1.5.0
idna==3.4
ifaddr==0.2.0
importlib-metadata==6.8.0
Jinja2==3.1.2
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
markdown-it-py==3.0.0
MarkupSafe==2.1.3
mdurl==0.1.2
numpy==1.26.0
packaging==23.2
pandas==2.1.1
Pillow==10.0.1
protobuf==4.24.4
pyarrow==13.0.0
pycparser==2.21
pydeck==0.8.1b0
pyee==11.0.0
Pygments==2.16.1
pylibsrtp==0.8.0
pymongo==4.5.0
pyOpenSSL==23.2.0
python-dateutil==2.8.2
pytz==2023.3.post1
referencing==0.30.2
requests==2.31.0
rich==13.6.0
rpds-py==0.10.6
six==1.16.0
smmap==5.0.1
streamlit==1.27.2
streamlit-webrtc==0.47.1
tenacity==8.2.3
toml==0.10.2
toolz==0.12.0
tornado==6.3.3
typing_extensions==4.8.0
tzdata==2023.3
tzlocal==5.1
urllib3==2.0.6
validators==0.22.0
watchdog==3.0.0
zipp==3.17.0
| open | 2023-10-13T01:42:55Z | 2023-10-20T03:02:27Z | https://github.com/whitphx/streamlit-webrtc/issues/1408 | [] | gugaucb | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,938 | Enable multiple columns ordering for sqla | https://github.com/dpgaspar/Flask-AppBuilder/blob/a996b9b7aeed6b93b62a2c4a75db8375dd34268f/flask_appbuilder/models/sqla/interface.py#L179
I propose to change the code as follow (adding very few lines to original code)
Could you review and tell me doing a Pull Request please?
Ordering with multiple columns would then be declared as
base_order = (('owner','categorie','propriete'), 'asc') in a view
```python
def apply_order_by(
self,
query: Query,
order_column: Any,
order_direction: str,
aliases_mapping: Dict[str, AliasedClass] = None,
) -> Query:
if isinstance(order_column, str):
if order_column != "":
# if Model has custom decorator **renders('<COL_NAME>')**
# this decorator will add a property to the method named *_col_name*
if hasattr(self.obj, order_column):
if hasattr(getattr(self.obj, order_column), "_col_name"):
order_column = getattr(self._get_attr(order_column), "_col_name")
_order_column = self._get_attr(order_column) or order_column
if is_column_dotted(order_column):
root_relation = get_column_root_relation(order_column)
# On MVC we still allow for joins to happen here
if not self.is_model_already_joined(
query, self.get_related_model(root_relation)
):
query = self._query_join_relation(
query, root_relation, aliases_mapping=aliases_mapping
)
column_leaf = get_column_leaf(order_column)
_alias = self.get_alias_mapping(root_relation, aliases_mapping)
_order_column = getattr(_alias, column_leaf)
if order_direction == "asc":
query = query.order_by(asc(_order_column))
else:
query = query.order_by(desc(_order_column))
return query
elif isinstance(order_column, tuple):
for col in order_column:
query = self.apply_order_by(query, col, order_direction)
return query
else:
return query
``` | open | 2022-10-19T09:54:18Z | 2022-12-05T14:11:14Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1938 | [] | gbrault | 2 |
svc-develop-team/so-vits-svc | pytorch | 18 | Many speaker voice conversion task | Hi
If I use your model for a voice conversion task with around 100 speakers, would its performance be better than that of FreeVC?
And can I get the checkpoint repository link? | closed | 2023-03-13T09:00:06Z | 2023-03-23T08:38:00Z | https://github.com/svc-develop-team/so-vits-svc/issues/18 | [] | lsw5835 | 1
quantumlib/Cirq | api | 6,582 | Implement dynamical decoupling transformer | **Enable dynamical decoupling operations insertion**
**Acceptance criteria - Users are able to use the transformer to insert dynamical decoupling sequences based on their preferences** | closed | 2024-05-01T20:17:53Z | 2024-12-26T20:34:10Z | https://github.com/quantumlib/Cirq/issues/6582 | [
"kind/task"
] | babacry | 4 |
kornia/kornia | computer-vision | 2,352 | Create `draw_point2d` | Right now we have `render_gaussian2d` to "draw" 2d points, but it would be great to have a user-friendly function that you can pass an input tensor to and have it draw some points.
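A minimal sketch of what such a function could look like, using plain tensor indexing; the name, signature, and (x, y) point convention here are assumptions, not an existing kornia API:

```python
import torch

def draw_point2d(image: torch.Tensor, points: torch.Tensor,
                 color: torch.Tensor) -> torch.Tensor:
    # image: (C, H, W); points: (N, 2) as (x, y); color: (C,)
    x = points[:, 0].long().clamp(0, image.shape[-1] - 1)
    y = points[:, 1].long().clamp(0, image.shape[-2] - 1)
    image[:, y, x] = color[:, None]   # advanced indexing sets all N pixels
    return image

img = torch.zeros(3, 8, 8)
pts = torch.tensor([[1.0, 2.0], [5.0, 5.0]])
out = draw_point2d(img, pts, torch.tensor([1.0, 0.0, 0.0]))
print(out[0, 2, 1].item(), out[0, 5, 5].item())  # 1.0 1.0
```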
https://kornia.readthedocs.io/en/latest/geometry.subpix.html#kornia.geometry.subpix.render_gaussian2d | closed | 2023-04-28T10:36:36Z | 2023-07-27T06:55:29Z | https://github.com/kornia/kornia/issues/2352 | [
"help wanted",
"module: geometry"
] | edgarriba | 2 |
long2ice/fastapi-cache | fastapi | 70 | Help understanding cache | So I have an endpoint I need to cache for a short time because it can get a lot of the same request at the same time.
I use InMemory cache, and the request takes a second or more to process.
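As far as I can tell, the in-memory backend does not coalesce concurrent misses by itself, so simultaneous cold requests can each run the handler; if that matters, a per-key lock gives single-flight behaviour. A minimal asyncio sketch (illustrative only, not the fastapi-cache API):

```python
import asyncio

_cache: dict = {}
_locks: dict = {}
calls = 0

async def slow_process(key):
    global calls
    calls += 1                     # counts how often the expensive path runs
    await asyncio.sleep(0.05)      # stands in for the ~1 s handler
    return f"result-for-{key}"

async def get_cached(key):
    lock = _locks.setdefault(key, asyncio.Lock())
    async with lock:               # later callers wait for the first one
        if key not in _cache:
            _cache[key] = await slow_process(key)
        return _cache[key]

async def main():
    return await asyncio.gather(*(get_cached("route") for _ in range(100)))

results = asyncio.run(main())
print(calls, len(set(results)))    # 1 1 -> handler ran once, all callers got the same value
```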
If I get, say, 100 requests for the route "at the same time", would they all run the process because the cache wasn't ready, or would they all wait for the first one to finish and use its cached result? | open | 2022-08-07T15:41:45Z | 2023-05-15T11:25:17Z | https://github.com/long2ice/fastapi-cache/issues/70 | [
"documentation",
"question"
] | ShayBox | 3 |
ultralytics/ultralytics | computer-vision | 19,508 | resume in YOLOv8 | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
When I was using resume on a YOLOv8-based network, hoping to continue training, I ran into a problem: it did not resume from where the previous run stopped, but instead started again from the first epoch, and the total number of epochs was not the 100 I had set at all. I hope to get your help. Here are my code and a screenshot of the problem I encountered.
```python
task_type = {
    "train": YOLO(model_conf).train(resume=True)
    # "val": YOLO(model_conf).val(**args),
    # "test": YOLO(model_conf).test(**args),
}
```

And the command:

```
python mbyolo_train.py --task train --config /{weight path}/last.pt --data /data.yaml
```
<img width="739" alt="Image" src="https://github.com/user-attachments/assets/b69ca4ff-2d2f-472a-a213-895599ebd9a2" />
### Additional
_No response_ | open | 2025-03-04T01:43:09Z | 2025-03-04T02:10:04Z | https://github.com/ultralytics/ultralytics/issues/19508 | [
"question",
"detect"
] | li-25-creater | 3 |
autokey/autokey | automation | 937 | Is there a way to initiate script execution with mouse click rather than key press? | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Unknown
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Choose one or more terms that describe this issue:
- [X] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [X] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
Lubuntu 20.04.
### Which AutoKey GUI did you use?
None
### Which AutoKey version did you use?
0.96.0. GTK.
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
Frankly, I am not sure if the following is a bug, but I do not see any logical rationale not to provide the described functionality (of course, there may be rather serious technical reasons). So, I wonder if this was inadvertently omitted.
The listener behind the «Set hotkey» dialogue fails at registering mouse click events. Is there any way to bypass this? For example, to edit the corresponding json-file (I tried but my attempts to find proper syntax to make it work were futile)?
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | closed | 2024-03-04T12:16:08Z | 2024-04-12T15:24:00Z | https://github.com/autokey/autokey/issues/937 | [
"duplicate",
"enhancement",
"autokey triggers"
] | SN-CH | 6 |
davidsandberg/facenet | tensorflow | 1,227 | ValueError: Node 'gradients/InceptionResnetV1/Bottleneck/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 512. Shapes are [0] and [512]. | Hi, I'm trying to test validate_on_lfw following Wiki.
I'm done align the LFW dataset, and I have a hard time on number 6. Run the test.
My computer is Ubuntu Linux.
How can I run the test correctly?
2022-07-11 18:15:04.304549: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
2022-07-11 18:15:04.304574: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: ubuntu
2022-07-11 18:15:04.304579: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: ubuntu
2022-07-11 18:15:04.304641: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 510.47.3
2022-07-11 18:15:04.304655: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 510.47.3
2022-07-11 18:15:04.304660: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 510.47.3
2022-07-11 18:15:04.304854: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
/home/facenet/src/lfw.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
return np.array(pairs)
WARNING:tensorflow:From /home/facenet/src/facenet.py:114: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.
WARNING:tensorflow:From /home/facenet/src/facenet.py:133: batch_join (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.interleave(...).batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).
WARNING:tensorflow:From /home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/training/input.py:732: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
WARNING:tensorflow:From /home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/training/input.py:732: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
Model directory: /home/models/facenet/20180402-114759
Metagraph file: model-20180402-114759.meta
Checkpoint file: model-20180402-114759.ckpt-275
2022-07-11 18:15:06.172482: W tensorflow/core/common_runtime/graph_constructor.cc:1526] Importing a graph with a lower producer version 24 into an existing graph with producer version 1087. Shape inference will have run different parts of the graph with different producer versions.
Traceback (most recent call last):
File "/home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/framework/importer.py", line 500, in _import_graph_def_internal
results = c_api.TF_GraphImportGraphDefWithResults(
tensorflow.python.framework.errors_impl.InvalidArgumentError: Node 'gradients/InceptionResnetV1/Bottleneck/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 512. Shapes are [0] and [512].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "src/validate_on_lfw.py", line 167, in <module>
main(parse_arguments(sys.argv[1:]))
File "src/validate_on_lfw.py", line 75, in main
facenet.load_model(args.model, input_map=input_map)
File "/home/facenet/src/facenet.py", line 383, in load_model
saver = tf.train.import_meta_graph(os.path.join(model_exp, meta_file), input_map=input_map)
File "/home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/training/saver.py", line 1582, in import_meta_graph
return _import_meta_graph_with_return_elements(meta_graph_or_file,
File "/home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/training/saver.py", line 1603, in _import_meta_graph_with_return_elements
meta_graph.import_scoped_meta_graph_with_return_elements(
File "/home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/framework/meta_graph.py", line 804, in import_scoped_meta_graph_with_return_elements
imported_return_elements = importer.import_graph_def(
File "/home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
return func(*args, **kwargs)
File "/home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/framework/importer.py", line 404, in import_graph_def
return _import_graph_def_internal(
File "/home/heaan-venv/lib/python3.8/site-packages/tensorflow/python/framework/importer.py", line 505, in _import_graph_def_internal
raise ValueError(str(e))
ValueError: Node 'gradients/InceptionResnetV1/Bottleneck/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 512. Shapes are [0] and [512]. | open | 2022-07-11T09:30:48Z | 2023-10-20T12:03:15Z | https://github.com/davidsandberg/facenet/issues/1227 | [] | thoongee | 4 |
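For what it's worth, the `producer version 24` warning in the log suggests the metagraph was exported by TF 1.x while the environment runs TF 2.x. A hypothetical pre-flight check might look like this (assumption: illustrative only — the real fix is matching the TF major version the model was exported with; `metagraph_compatible` is not a facenet function):

```python
def metagraph_compatible(tf_version: str, required_major: int = 1) -> bool:
    # Hypothetical guard: return True only when the running TF major version
    # matches the one the checkpoint was exported with (1.x here).
    try:
        major = int(tf_version.split(".")[0])
    except ValueError:
        return False
    return major == required_major

print(metagraph_compatible("1.15.5"))  # True: a TF 1.x env should load it
print(metagraph_compatible("2.9.1"))   # False: risks errors like the one above
```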
tqdm/tqdm | pandas | 1,257 | Nested progress bars on same line? | - [X] I have marked all applicable categories:
+ [X] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [X] new feature request
- [X] I have visited the [source website], and in particular
read the [known issues]
- [X] I have searched through the [issue tracker] for duplicates
- [X] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
4.2, 3.9, macOs
```
So could we have a double progress bar / nested bar on the same line? e.g.
```
each of these would be on the same line:
Outer Bar 0% | | [time estimate] Inner Bar 50%. |████ | [time estimate]
Outer Bar 0% | | [time estimate] Inner Bar 100% |█████████| [time estimate]
Outer Bar 30% |███ | [time estimate] Inner Bar 100% | | [time estimate]
```
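Until something like this exists natively, the requested layout can be approximated with plain carriage-return rendering (assumption: a stdlib sketch of the visual idea, not a tqdm API):

```python
import sys
import time

def render(outer_frac, inner_frac, width=10):
    """Render two progress bars side by side on a single line."""
    def bar(frac):
        filled = int(round(frac * width))
        return "█" * filled + " " * (width - filled)
    return (f"Outer {outer_frac:4.0%} |{bar(outer_frac)}|  "
            f"Inner {inner_frac:4.0%} |{bar(inner_frac)}|")

for i in range(3):
    for j in range(5):
        # "\r" rewrites the same terminal line, so both bars stay together
        sys.stdout.write("\r" + render(i / 3, (j + 1) / 5))
        sys.stdout.flush()
        time.sleep(0.01)
sys.stdout.write("\n")
```

With tqdm itself, a rough stand-in today is a single outer bar plus `set_postfix_str()` to surface inner progress as text, rather than a true second bar.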
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| open | 2021-10-09T17:36:06Z | 2021-10-09T17:36:06Z | https://github.com/tqdm/tqdm/issues/1257 | [] | SumNeuron | 0 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 437 | Distutils deprecation warnings on Python 3.10 (version ==0.28.0) | The following warnings are raised by v0.28.0 on Python 3.10:
```
/usr/local/lib/python3.10/site-packages/marshmallow_sqlalchemy/convert.py:17: DeprecationWarning:
distutils Version classes are deprecated. Use packaging.version instead.
_META_KWARGS_DEPRECATED = LooseVersion(ma.__version__) >= LooseVersion("3.10.0")
../../../usr/local/lib/python3.10/site-packages/flask_marshmallow/__init__.py:34
/usr/local/lib/python3.10/site-packages/flask_marshmallow/__init__.py:34: DeprecationWarning:
distutils Version classes are deprecated. Use packaging.version instead.
__version_info__ = tuple(LooseVersion(__version__).version)
``` | closed | 2022-05-17T16:39:34Z | 2022-07-18T21:40:12Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/437 | [] | tgross35 | 2 |
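A hedged sketch of the replacement the warning asks for (assumption: a standalone helper mirroring the `convert.py` comparison, not the actual patch; requires the `packaging` distribution to be installed):

```python
from packaging.version import Version

# Drop-in replacement for the deprecated LooseVersion comparison; Version
# compares release segments numerically, so "3.18.0" correctly sorts after
# "3.10.0".
def meta_kwargs_deprecated(ma_version: str) -> bool:
    return Version(ma_version) >= Version("3.10.0")

print(meta_kwargs_deprecated("3.18.0"))  # → True
print(meta_kwargs_deprecated("3.9.1"))   # → False
```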
allenai/allennlp | pytorch | 4,955 | Learning rate scheduler does not work on AllenNLP v2 | Hello, I'm porting my code to the v2 version and realized that the learning rate scheduler was not working.
After inspecting the training code, I realized that in `def _try_train(self)` the variable
`this_epoch_val_metric: float = 0.0` never changes its value.
However, the scheduler API expects a validation metric:
```
# The Scheduler API is agnostic to whether your schedule requires a validation metric -
# if it doesn't, the validation metric passed here is ignored.
if self._learning_rate_scheduler:
self._learning_rate_scheduler.step(this_epoch_val_metric)
if self._momentum_scheduler:
self._momentum_scheduler.step(this_epoch_val_metric)
```
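The effect of the stuck metric can be reproduced with a toy plateau scheduler (assumption: a simplified stand-in in the spirit of PyTorch's `ReduceLROnPlateau`, not AllenNLP's actual classes) — with the metric frozen at 0.0, the learning rate decays on a fixed cadence regardless of real validation results:

```python
# Toy stand-in for a "reduce on plateau" scheduler (NOT AllenNLP's class).
class ReduceOnPlateau:
    def __init__(self, lr=0.1, factor=0.5, patience=2):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, metric):
        if metric < self.best:            # improvement: reset the counter
            self.best, self.bad_epochs = metric, 0
        else:                             # plateau: count and eventually decay
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0

sched = ReduceOnPlateau()
for epoch in range(10):
    this_epoch_val_metric = 0.0  # the bug: never updated from validation
    sched.step(this_epoch_val_metric)

print(sched.lr)  # → 0.0125: three decays despite no real metric being seen
```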
I guess it is related to the `_metric_tracker` change, which now receives a list of metrics. But I guess @dirkgr would solve it more elegantly, as he was the designer of the improved metric tracker. | closed | 2021-02-02T21:26:48Z | 2021-02-04T19:21:59Z | https://github.com/allenai/allennlp/issues/4955 | [
"bug"
] | bratao | 4 |
davidsandberg/facenet | tensorflow | 815 | Distilling facenet (Hinton, Distilling the Knowledge in a Neural Network) | Dear All,
Has anyone tried distillation on David's facenet?
According to these 3 lines in github.com/DushyantaDhyani/kdtf:
```python
soft_targets = teacher_model.predict(batch_x, self.temperature)
self.sess.run([self.train_op, self.merged_summary_op], ..., self.soft_Y: soft_targets, ...)
softmax_cross_entropy_with_logits(..., labels=self.soft_Y)
```
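For reference, the temperature-scaled soft-target loss those lines implement can be sketched in plain Python (assumption: a toy, list-based version of Hinton's formulation — not the queued TF graph):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a plain list of logits."""
    z = [x / temperature for x in logits]
    m = max(z)                            # shift by the max for stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soft-target cross-entropy; the T**2 factor keeps gradient magnitudes
    # comparable when this term is mixed with the usual hard-label loss.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(pt * math.log(ps)
                for pt, ps in zip(p_teacher, p_student)) * T * T
```

The loss is minimized when the student's softened distribution matches the teacher's, which is exactly what the `soft_Y` placeholder feeds into `softmax_cross_entropy_with_logits` above.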
I am looking to feed Inception-ResNet v1's temperature-scaled softmax probabilities into nn4's `softmax_cross_entropy_with_logits` labels.
But then I embarrassingly found out that the input images and labels are queued.
Does anyone know how to enqueue the Inception-ResNet v1 softmax probabilities alongside the images and labels?
Another question: `data_flow_ops` is supposed to make things efficient.
If I come up with some solution to enqueue the Inception-ResNet v1 softmax probabilities, would it destroy the original performance gain?
Or, instead of dynamically running inference on the teacher and feeding its softmax probabilities to the student, is it better to do things in 2 passes?
Pass 1: run Inception-ResNet v1 on all images and store the softmax probabilities in a file.
Pass 2: read the teacher softmax probabilities from the file, then enqueue them alongside the original images and labels?
BR,
Jimmy
ps. some related references, study materials :
https://www.tensorflow.org/api_docs/python/tf/train/batch_join
https://stackoverflow.com/questions/38827264/how-to-use-tensorflow-reader-and-queue-to-read-two-file-at-same-time
https://stackoverflow.com/questions/37581671/how-to-mix-queue-based-and-feed-based-input-in-tensorflow
https://www.tensorflow.org/api_docs/python/tf/FIFOQueue | open | 2018-07-17T02:29:59Z | 2018-07-17T02:29:59Z | https://github.com/davidsandberg/facenet/issues/815 | [] | speculaas | 0 |
Sanster/IOPaint | pytorch | 308 | [BUG] Memory keeps rising and is not released | **Model**
Lama
**Describe the bug**
```
lama-cleaner --model=lama --device=cuda --port=9003
```
During use, memory keeps increasing and is never released. Is there any way to free it automatically?
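For the CUDA side, `torch.cuda.empty_cache()` is the usual lever for releasing cached allocator blocks (though it cannot shrink what live tensors still hold). As a host-side illustration only, a minimal RSS watchdog might look like this (assumption: Linux `resource` semantics; this is not a lama-cleaner API):

```python
import gc
import resource

def maybe_free_memory(limit_mb=2048):
    # Host-side only: ru_maxrss is the *peak* RSS (KiB on Linux). This cannot
    # release memory held by CUDA's caching allocator; for that, PyTorch's
    # torch.cuda.empty_cache() is the usual lever.
    rss_mb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
    if rss_mb > limit_mb:
        gc.collect()   # drop unreachable Python objects
        return True
    return False
```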
**Screenshots**

**System Info**
Software version used
- lama-cleaner: 1.1.2
- pytorch: 2.0.1
- CUDA: 12.0.0
| open | 2023-05-18T02:34:24Z | 2023-05-20T05:33:15Z | https://github.com/Sanster/IOPaint/issues/308 | [] | AnJoiner | 2 |