| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
deepset-ai/haystack | machine-learning | 8,930 | Remove explicit mention of Haystack "2.x" in cookbooks | closed | 2025-02-25T10:56:07Z | 2025-03-11T09:05:31Z | https://github.com/deepset-ai/haystack/issues/8930 | [
"P2"
] | julian-risch | 0 | |
httpie/cli | api | 1,001 | https command not found after fresh installation | Hi guys,
Any time I install httpie on Ubuntu (`sudo apt-get install httpie`), the `http` command works perfectly fine afterwards.
However, `https` is never found.

I have had this on all machines where I tried this so far, WSL on Windows, as well as native Ubuntu 19.x and 20.x.
What do I need to do to get https command working as well, and what needs to change in the installation instructions, because I can't imagine I'm the only one encountering this? | closed | 2020-12-04T00:11:38Z | 2021-09-22T15:56:45Z | https://github.com/httpie/cli/issues/1001 | [
"packaging"
] | batjko | 3 |
autogluon/autogluon | data-science | 3,838 | [BUG] GPU is not used in v1.0.0 | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
I specified `num_gpus=2` in `predictor.fit()` but GPUs are not used at all during training. However, if I specify `"ag.num_gpus"` for the CAT model, GPUs are used as normal. This problem only exists in v1.0.0.
**Expected behavior**
GPUs should be used when num_gpus=2 is specified in predictor.fit()
**To Reproduce**
This code fails to use GPUs:
```python
predictor = TabularPredictor(label='target', eval_metric='accuracy', groups='groups')
predictor.fit(df_train, num_gpus=2, hyperparameters={'CAT': {}}, presets='medium_quality')
```
This code uses GPUs:
```python
predictor = TabularPredictor(label='target', eval_metric='accuracy', groups='groups')
predictor.fit(df_train, num_gpus=2, hyperparameters={'CAT': {'ag.num_gpus': 1}}, presets='medium_quality')
```
**Installed Versions**
v1.0.0
```python
INSTALLED VERSIONS
------------------
date : 2023-12-22
time : 16:09:39.297051
python : 3.10.12.final.0
OS : Linux
OS-release : 5.4.0-166-generic
Version : #183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023
machine : x86_64
processor : x86_64
num_cores : 128
cpu_ram_mb : 1031560.64453125
cuda version : 12.535.54.03
num_gpus : 2
gpu_ram_mb : [23673, 24203]
avail_disk_size_mb : 574503
accelerate : 0.21.0
async-timeout : 4.0.2
autogluon : 1.0.0
autogluon.common : 1.0.0
autogluon.core : 1.0.0
autogluon.eda : 0.8.1b20230802
autogluon.features : 1.0.0
autogluon.multimodal : 1.0.0
autogluon.tabular : 1.0.0
autogluon.timeseries : 1.0.0
boto3 : 1.28.15
catboost : 1.2.2
defusedxml : 0.7.1
evaluate : 0.4.1
fastai : 2.7.12
gluonts : 0.14.3
hyperopt : 0.2.7
imodels : 1.3.18
ipython : 8.12.2
ipywidgets : 8.0.7
jinja2 : 3.1.2
joblib : 1.3.1
jsonschema : 4.17.3
lightgbm : 3.3.5
lightning : 2.0.9.post0
matplotlib : 3.6.3
missingno : 0.5.2
mlforecast : 0.10.0
networkx : 3.1
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.24.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : 1.13.1
openmim : 0.3.9
orjson : 3.9.10
pandas : 2.1.4
phik : 0.12.3
Pillow : 10.1.0
psutil : 5.9.5
PyMuPDF : 1.21.1
pyod : 1.0.9
pytesseract : 0.3.10
pytorch-lightning : 2.0.9.post0
pytorch-metric-learning: 1.7.3
ray : 2.6.3
requests : 2.31.0
scikit-image : 0.19.3
scikit-learn : 1.3.0
scikit-learn-intelex : None
scipy : 1.11.1
seaborn : 0.12.2
seqeval : 1.2.2
setuptools : 68.0.0
shap : 0.41.0
skl2onnx : 1.13
statsforecast : 1.4.0
statsmodels : 0.14.0
suod : 0.0.9
tabpfn : 0.1.9
tensorboard : 2.14.1
text-unidecode : 1.3
timm : 0.9.12
torch : 2.0.1
torchmetrics : 1.1.2
torchvision : 0.15.2
tqdm : 4.65.0
transformers : 4.31.0
utilsforecast : 0.0.10
vowpalwabbit : 9.4.0
xgboost : 1.7.6
yellowbrick : 1.5
```
| open | 2023-12-22T16:10:14Z | 2024-11-05T18:04:12Z | https://github.com/autogluon/autogluon/issues/3838 | [
"bug",
"module: tabular",
"Needs Triage",
"priority: 1"
] | hanxuh-hub | 0 |
huggingface/diffusers | deep-learning | 10,969 | Run FLUX-controlnet zero3 training failed: 'weight' must be 2-D | ### Describe the bug
I am attempting to use Zero-3 for Flux ControlNet training on 8 GPUs following the guidance of the [README](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_flux.md#apply-deepspeed-zero3). The error below occurred:
```
[rank0]: RuntimeError: 'weight' must be 2-D
```
### Reproduction
accelerate config:
```
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
gradient_accumulation_steps: 8
offload_optimizer_device: cpu
offload_param_device: cpu
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
training command:
```
accelerate launch --config_file "./accelerate_config_zero3.yaml" train_controlnet_flux_zero3.py --pretrained_model_name_or_path=/srv/mindone/wty/flux.1-dev/ --jsonl_for_train=/srv/mindone/wty/diffusers/examples/controlnet/train_1000.jsonl --conditioning_image_column=conditioning_image --image_column=image --caption_column=text --output_dir=/srv/mindone/wty/diffusers/examples/controlnet/single_layer --mixed_precision="bf16" --resolution=512 --learning_rate=1e-5 --max_train_steps=100 --train_batch_size=1 --gradient_accumulation_steps=8 --num_double_layers=4 --num_single_layers=0 --seed=42 --gradient_checkpointing --cache_dir=/srv/mindone/wty/diffusers/examples/controlnet/cache --dataloader_num_workers=8 --resume_from_checkpoint="latest"
```
### Logs
```shell
Map: 0%| | 0/1000 [00:00<?, ? examples/s]
[rank0]: Traceback (most recent call last):
[rank0]: File "/srv/mindone/wty/diffusers/examples/controlnet/train_controlnet_flux_zero3.py", line 1481, in <module>
[rank0]: main(args)
[rank0]: File "/srv/mindone/wty/diffusers/examples/controlnet/train_controlnet_flux_zero3.py", line 1182, in main
[rank0]: train_dataset = train_dataset.map(
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
[rank0]: out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3079, in map
[rank0]: for rank, done, content in Dataset._map_single(**dataset_kwargs):
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3519, in _map_single
[rank0]: for i, batch in iter_outputs(shard_iterable):
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3469, in iter_outputs
[rank0]: yield i, apply_function(example, i, offset=offset)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3392, in apply_function
[rank0]: processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
[rank0]: File "/srv/mindone/wty/diffusers/examples/controlnet/train_controlnet_flux_zero3.py", line 1094, in compute_embeddings
[rank0]: prompt_embeds, pooled_prompt_embeds, text_ids = flux_controlnet_pipeline.encode_prompt(
[rank0]: File "/srv/mindone/wty/diffusers/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py", line 396, in encode_prompt
[rank0]: pooled_prompt_embeds = self._get_clip_prompt_embeds(
[rank0]: File "/srv/mindone/wty/diffusers/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py", line 328, in _get_clip_prompt_embeds
[rank0]: prompt_embeds = self.text_encoder(text_input_ids.to(device), output_hidden_states=False)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 1056, in forward
[rank0]: return self.text_model(
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 947, in forward
[rank0]: hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 292, in forward
[rank0]: inputs_embeds = self.token_embedding(input_ids)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 190, in forward
[rank0]: return F.embedding(
[rank0]: File "/home/miniconda3/envs/flux-perf/lib/python3.9/site-packages/torch/nn/functional.py", line 2551, in embedding
[rank0]: return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
[rank0]: RuntimeError: 'weight' must be 2-D
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0(HEAD on #10945)
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.27
- Running on Google Colab?: No
- Python version: 3.9.21
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.1
- Transformers version: 4.49.0
- Accelerate version: 1.4.0
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
NVIDIA A100-SXM4-80GB, 81920 MiB
### Who can help?
@yiyixuxu @sayakpaul | open | 2025-03-05T02:14:09Z | 2025-03-24T02:24:04Z | https://github.com/huggingface/diffusers/issues/10969 | [
"bug"
] | alien-0119 | 1 |
tensorflow/tensor2tensor | deep-learning | 1,212 | Loading weights before decoding starts in interactive decoding | Hi,
I am trying to use interactive decoding using
```
t2t-decoder \
--data_dir=$DATA_DIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--decode_interactive
```
But in this case the checkpoint only gets loaded once, after I enter the first sentence, and then it receives inputs continuously. I am not sure whether this is a bug or intended behaviour. Is it possible to load the model before the first sentence is entered, so that decoding can start as soon as a sentence arrives?
Thank you. | open | 2018-11-12T05:41:26Z | 2018-11-13T06:14:05Z | https://github.com/tensorflow/tensor2tensor/issues/1212 | [] | sugeeth14 | 2 |
Johnserf-Seed/TikTokDownload | api | 375 | Cookie does not last long [BUG] | closed | 2023-03-29T10:07:15Z | 2023-04-03T07:11:36Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/375 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | SCxiaozhouM | 1 | |
plotly/dash-core-components | dash | 67 | `dcc.DatePickerSingle` and `dcc.DatePickerRange` missing `style` and `className` properties | One of the recent updates gave all components `style` and `className` properties to make styling easier. The two components added for date picking are missing them.
These properties would be useful when one wants to disable the border on the pickers so they blend in better, or just change the cursor to a pointer when hovering.
"dash-type-enhancement"
] | radekwlsk | 3 |
hankcs/HanLP | nlp | 727 | How to segment using only a custom dictionary in index tokenization | ## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer in any of them:
- [Home page documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and did not find an answer either.
* I understand that an open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I put an x in these brackets to confirm the items above.
## Version
The current latest version is: 1.5.2
The version I am using is: 1.3.4
## My question
Hello, I would like index tokenization to segment using only my custom dictionary, with all other dictionaries disabled. Is there a way to do this?
Because index tokenization supports full segmentation, I hope to use it to avoid the drawback of dictionary-based segmentation matching only the longest word.
Looking forward to your reply, thank you.
## Reproducing the problem
The dictionary contains "夏洛特烦恼 n, 夏洛 nr"
List<Term> termList = IndexTokenizer.segment("夏洛特烦恼");
### Expected output
```
夏洛特烦恼/n [0:5]
夏洛/nr [0:2]
```
### Actual output
```
夏洛特烦恼/n [0:5]
夏洛/nr [0:2]
夏洛特/nrf [0:3]
烦恼/an [3:5]
```
| closed | 2017-12-29T07:42:58Z | 2018-01-18T10:08:55Z | https://github.com/hankcs/HanLP/issues/727 | [
"question"
] | jimmy-walker | 4 |
psf/requests | python | 6,763 | Body with Special Characters Gets Cut | When sending a request with special characters using the requests module, the request body gets cut off and is not sent in full.
This seems to be caused by the requests module calculating the length of the original string, but once the request arrives at the urllib3 module, the urllib3 module encodes the request and calculates the content length again. Unfortunately, because the request content length was already calculated and included in the headers dictionary, it gets overwritten.
## Expected Result
The request should be sent fully, including all special characters.
## Actual Result
The request gets cut off, and not all data is sent.
## Reproduction Steps
```python
import requests
### Note that the x are special characters ###
response = requests.post(url='http://127.0.0.1', data="""{"test": "××××"}""")
print(response.text)
```
## Example of the actual request:
```
POST / HTTP/1.1
Host: 127.0.0.1
User-Agent: python-requests/2.31.0
Accept-Encoding: gzip, deflate, br
Accept: */*
Connection: keep-alive
Content-Length: 16
{"test": "×××
```
As it looks the overwrite happens here:
[urllib3 connection.py#L396](https://github.com/urllib3/urllib3/blob/0ce5a89a81943e0153d3655415192e2d82f080cf/src/urllib3/connection.py#L396)
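The length mismatch can be demonstrated without any HTTP traffic: `len()` on a Python `str` counts characters, while the wire format needs the UTF-8 byte count. A minimal sketch follows; the pre-encoding workaround shown in the comment is an assumption, not an officially documented fix:

```python
payload = '{"test": "××××"}'

# Character count: this is what ends up in the Content-Length header here.
print(len(payload))                  # 16

# Byte count after UTF-8 encoding: this is what the server actually needs.
print(len(payload.encode("utf-8")))  # 20, since each '×' encodes to 2 bytes

# Possible workaround: encode the body yourself before handing it to
# requests, so the length is computed on bytes from the start, e.g.
#     requests.post(url, data=payload.encode("utf-8"))
```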
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "3.3.2"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.7"
},
"implementation": {
"name": "CPython",
"version": "3.11.9"
},
"platform": {
"release": "23.5.0",
"system": "Darwin"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "300000d0"
},
"urllib3": {
"version": "2.0.7"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
<!-- This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c). -->
| closed | 2024-07-08T14:33:37Z | 2024-07-08T18:35:44Z | https://github.com/psf/requests/issues/6763 | [] | Boris-Rozenfeld | 9 |
matplotlib/matplotlib | data-science | 29,595 | [Bug]: Setting alpha with an array is ignored with jupyterlab %inline backend | ### Bug summary
If alpha is set as an array (of equal shape as the image data), the alpha value is ignored.
This is failing for inline plots in Jupyter Lab.
### Code for reproduction
```Python
import matplotlib
print(matplotlib.__version__)
import numpy as np
import matplotlib.pyplot as plt
img_data = np.random.rand(256, 200)
alpha = np.ones((256, 200))
alpha[:, 0:100] = 0.5
fig, ax = plt.subplots(1, 2, figsize=(10, 8))
ax[0].imshow(img_data, alpha=alpha)
ax[1].imshow(alpha)
```
### Actual outcome
<img width="894" alt="Image" src="https://github.com/user-attachments/assets/dde97c0a-f48b-436f-ad7d-174784bae280" />
### Expected outcome
<img width="869" alt="Image" src="https://github.com/user-attachments/assets/2afff9fe-b9bf-465f-9013-3deda8de3f1a" />
### Additional information
Worked with Matplotlib 3.9.4 with numpy 2.2.1
Fails in matplotlib 3.10.0 (same numpy), running in a Jupyter Lab cell.
Worked as expected when run from the command line, with backend 'macosx', or if `%matplotlib osx` or `%matplotlib ipympl` is used to decorate the cell before running the code above.
### Operating system
OS/X
### Matplotlib Version
3.10.0
### Matplotlib Backend
inline
### Python version
3.13.1
### Jupyter version
4.3.4
### Installation
conda | open | 2025-02-09T02:21:38Z | 2025-02-10T16:14:46Z | https://github.com/matplotlib/matplotlib/issues/29595 | [] | rhiannonlynne | 5 |
SALib/SALib | numpy | 41 | Compute Si for multiple outputs in parallel | It would be good to extend the existing Morris analysis code so that multiple result vectors could be computed from one call, with results passed as a numpy array rather than just a vector.
At present, it is necessary to loop over each output you wish to compute the metrics for, calling the analysis procedure each time.
``` python
import SALib.analyze.morris
for results in array_of_results:
Si.append(analyze(problem, X, results))
```
It would be preferable to do this:
``` python
import SALib.analyze.morris
Si = analyze(problem, X, array_of_results)
```
A parallel implementation would be equally desirable, and trivial, as each output can be computed independently of the others.
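Since each output is independent, such a wrapper can be sketched in a few lines. This is a hypothetical helper, not SALib API: `analyze_fn` stands in for a SALib-style `analyze(problem, X, Y)` function such as `SALib.analyze.morris.analyze`, and `ThreadPool` could be swapped for `multiprocessing.Pool` for process-level parallelism:

```python
from functools import partial
from multiprocessing.pool import ThreadPool

def analyze_outputs(problem, X, outputs, analyze_fn, processes=None):
    """Apply a SALib-style ``analyze(problem, X, Y)`` function to each
    output vector independently and in parallel.

    ``outputs`` is a sequence of result vectors, one per model output;
    the Si results are returned as a list in the same order.
    """
    worker = partial(analyze_fn, problem, X)   # fix problem and samples
    with ThreadPool(processes) as pool:
        return pool.map(worker, outputs)       # one task per output vector
```

Because every output is analyzed against the same `problem` definition and sample matrix `X`, the workers need no synchronization at all.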
| open | 2015-03-09T15:22:54Z | 2023-12-08T12:31:50Z | https://github.com/SALib/SALib/issues/41 | [
"enhancement"
] | willu47 | 10 |
cvat-ai/cvat | tensorflow | 8,859 | Why does the position of an already marked annotation box change? | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Adjust an already-labeled object box drawn with the rectangle shape, for example by changing the size of the box
2. Press the `F` key to move forward, then press the `D` key to go back to the image
3. The box reverts to its original size instead of keeping the changed size
Why does this happen, and how can it be fixed?
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | closed | 2024-12-23T09:44:11Z | 2025-03-05T20:45:37Z | https://github.com/cvat-ai/cvat/issues/8859 | [
"bug"
] | jaffe-fly | 11 |
httpie/cli | python | 1,417 | some issues with the copy button | When I click on the copy button, this is what gets copied:
"# Install httpie
choco install httpie"
So I think it would be helpful if I could just copy the "choco install httpie" part.
"bug",
"website"
] | alidauda | 3 |
apragacz/django-rest-registration | rest-api | 143 | Cannot install dev dependencies | ### Describe the bug
Running `make install_dev` with Python 3.8 crashes, preventing the dev dependencies from being installed.
### Expected behavior
Dependencies installed.
### Actual behavior
Crashes with:
```log
ERROR: Cannot install -r requirements/requirements-dev.lock.txt (line 164) and ipython==7.16.1 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested ipython==7.16.1
ipdb 0.13.7 depends on ipython>=7.17.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
```
### Steps to reproduce
```sh
git clone git@github.com:apragacz/django-rest-registration.git
cd django-rest-registration
# create virtual environment
git checkout 0.6.2 # optional
make install_dev
``` | closed | 2021-05-26T15:18:56Z | 2021-05-27T05:02:50Z | https://github.com/apragacz/django-rest-registration/issues/143 | [
"type:bug"
] | Neraste | 2 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 209 | Performance Confusion | Hi, appreciate the awesome work, very impressive and concise implementation of the original paper!
Something confuses me: is the performance benchmark linked on the [home page](https://github.com/jadore801120/attention-is-all-you-need-pytorch#performance) for the WMT 2016 dataset or the WMT 2017 dataset?
| open | 2023-05-26T06:55:57Z | 2023-05-26T08:21:02Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/209 | [] | Zarca | 0 |
yeongpin/cursor-free-vip | automation | 365 | [Discussion]: What to do once the quota reaches 150 | ### Issue Checklist
- [x] I understand that Issues are for feedback and solving problems, not a comments section, and I will provide as much information as possible to help resolve the problem.
- [x] I confirm that what I need is to raise and discuss a question, not file a bug report or feature request.
- [x] I have read the [Github Issues](https://github.com/yeongpin/cursor-free-vip/issues) and searched the existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar question.
### Platform
Windows x32
### Version
v1.7.17
### Your question
I registered lifetime access with my own GitHub account, but it shows a trial rather than Pro, with only a 150 quota.
### Additional information
```shell
```
### Priority
Low (look at it when there is time)
"question"
] | tianhuahao | 1 |
plotly/dash | flask | 2,547 | [BUG] Graphs in vertical tabs do not use available space | Installed Dash versions:
```
dash 2.9.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash-testing-stub 0.0.2
```
Browsers/OS
- OS: Linux
- Browser ungoogled-chromium, firefox
**Describe the bug**
The width of the graph object does not extend to the right side of the screen. When removing `vertical=True` the graph fills the entire screen.
**Expected behavior**
The graph object should use the available space to the right.
**Screenshots**
`vertical=True`

`vertical=False`

**Minimal example used**
```
from dash import Dash, html, dcc
import plotly.express as px
import pandas as pd
app = Dash(__name__)
daily_profile = [0, 0, 0, 0, 0, 0, 0, 0.05, 0.15, 0.2, 0.4, 0.8, 0.7, 0.4, 0.2, 0.15, 0.05, 0, 0, 0, 0, 0, 0, 0]
daily_production = pd.DataFrame(data=daily_profile)
tab = dcc.Tab([dcc.Graph(figure=px.bar(daily_production))], label="tab")
app.layout = html.Div([
dcc.Tabs([tab]
, vertical=True # remove to fix graph behavior
)
])
if __name__ == "__main__":
app.run_server(debug=True)
```
| closed | 2023-05-28T13:11:46Z | 2023-05-31T13:06:14Z | https://github.com/plotly/dash/issues/2547 | [] | TheNyneR | 10 |
fastapi/sqlmodel | pydantic | 542 | Order of columns in the table created does not have 'id' first, despite the order in the SQLModel. Looks like it's prioritising fields with sa_column | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from sqlmodel import Field, SQLModel, JSON, Column, Time
class MyTable(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
name: str
type: str
slug: str = Field(index=True, unique=True)
resource_data: dict | None = Field(default=None, sa_column=Column(JSON)) # type: ignore
# ... create engine
SQLModel.metadata.create_all(engine)
```
### Description
The CREATE table script generated for the model above ends up putting resource_data as the first column, instead of preserving the natural order of 'id' first
```
CREATE TABLE mytable (
resource_data JSON, <----- why is this the FIRST column created?
id SERIAL NOT NULL,
name VARCHAR NOT NULL,
type VARCHAR NOT NULL,
slug VARCHAR NOT NULL,
PRIMARY KEY (id)
)
```
This feels unusual when I inspect my postgresql tables in a db tool like pgAdmin.
How do I ensure the table is created with the 'natural' order?
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
3.11.1
### Additional Context
_No response_ | open | 2023-01-29T14:11:08Z | 2024-11-22T11:57:33Z | https://github.com/fastapi/sqlmodel/issues/542 | [
"question"
] | epicwhale | 8 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 118 | How should inference be run on the merged, unquantized full Chinese LLaMA model? The original LLaMA inference code keeps failing with a vocabulary size divisibility error | AssertionError: 49953 is not divisible by 2
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2276862 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2276861) of binary: /home/platform/anaconda3/envs/hcs/bin/python
Traceback (most recent call last):
| closed | 2023-04-11T04:06:56Z | 2023-06-15T12:51:47Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/118 | [] | WUHU-G | 1 |
gee-community/geemap | streamlit | 1,525 | I encountered an error using function: netcdf_to_ee | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
```python
colab
```
### Description
Hello, Professor Wu. I hope to get your help. I want to load a netCDF file from my drive into Colab and convert it into an image type for subsequent computation.
**The code is as follows:**
### Code
```python
image = geemap.netcdf_to_ee("/content/drive/MyDrive/VOD/Monthly/svodi_2005-09-01.nc", "svodi", band_names=None, lon="lon", lat="lat")
image = image.updateMask(image.neq(9999.0))
Map.addLayer(image)
Map
```
### Issue
```python
HttpError: <HttpError 400 when requesting https://earthengine.googleapis.com/v1alpha/projects/earthengine-legacy/maps?fields=name&alt=json returned "Request payload size exceeds the limit: 10485760 bytes.". Details: "Request payload size exceeds the limit: 10485760 bytes.">
During handling of the above exception, another exception occurred:
EEException Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/ee/data.py](https://localhost:8080/#) in _execute_cloud_call(call, num_retries)
337 return call.execute(num_retries=num_retries)
338 except googleapiclient.errors.HttpError as e:
--> 339 raise _translate_cloud_exception(e)
340
341
EEException: Request payload size exceeds the limit: 10485760 bytes.
```
| closed | 2023-04-29T14:22:48Z | 2023-04-29T14:55:42Z | https://github.com/gee-community/geemap/issues/1525 | [
"bug"
] | xlsadai | 1 |
MagicStack/asyncpg | asyncio | 1,094 | Provide wheels for Python 3.12 as installing without C compiler currently fails | * **asyncpg version**: 0.28.0
* **Python version**: 3.12
* **Platform**: Linux
* **Did you install asyncpg with pip?**: yes
Hey, could you please either build the wheels for 0.28.0 / Python 3.12 or make a new release? We are moving the codebase to Python 3.12 and installations are currently failing without a C compiler. Thanks! | closed | 2023-10-26T17:01:26Z | 2025-02-18T19:53:39Z | https://github.com/MagicStack/asyncpg/issues/1094 | [] | zyv | 4 |
dantaki/vapeplot | seaborn | 5 | Pip install error | pip install raises the following error:
`Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/_1/hdrhn2y9719c6vnr54tsk2tc0000gn/T/pip-build-sl19a10_/vapeplot/` | closed | 2018-02-04T18:25:02Z | 2018-02-06T18:07:35Z | https://github.com/dantaki/vapeplot/issues/5 | [] | Dpananos | 5 |
noirbizarre/flask-restplus | api | 548 | "return make_response(body, status)" behaves differently from "return body, status" when @marshal_with is used | With "return make_response(body, status)" the status value is ignored (and set 200 by default)
With "return body, status" it isn't.
Is this intended behavior?
It must be set somewhere here
https://github.com/noirbizarre/flask-restplus/blob/master/flask_restplus/marshalling.py#L248 | open | 2018-11-01T19:31:18Z | 2018-11-01T19:31:18Z | https://github.com/noirbizarre/flask-restplus/issues/548 | [] | andy-landy | 0 |
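The asymmetry can be sketched with a toy stand-in (plain Python, not actual Flask-RESTPlus code; the class and decorator here are illustrative only): if the marshalling decorator unpacks `(body, status)` tuples but re-wraps full response objects without carrying their status code over, the status from `make_response` falls back to the default 200.

```python
class Response:
    """Minimal stand-in for a framework response object."""
    def __init__(self, body, status=200):
        self.body, self.status = body, status

def marshal_with(fields):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            rv = fn(*args, **kwargs)
            if isinstance(rv, tuple):                 # "return body, status"
                body, status = rv                     # the status survives
                return Response({"fields": fields, "data": body}, status)
            if isinstance(rv, Response):              # "return make_response(body, status)"
                return Response({"fields": fields, "data": rv.body})  # re-wrapped: back to 200
            return Response({"fields": fields, "data": rv})
        return wrapper
    return decorator

@marshal_with(["ok"])
def tuple_view():
    return {"ok": False}, 404

@marshal_with(["ok"])
def response_view():
    return Response({"ok": False}, 404)

print(tuple_view().status)     # 404
print(response_view().status)  # 200: the explicit status was dropped
```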
paperless-ngx/paperless-ngx | django | 8,795 | [BUG] Concise description of the issue | ### Description
While using devcontainer on VSCode I get the following error:
Start: Run: docker-compose -f /home/kanak/work/AI4Bhārat/contrib/paperless-ngx/.devcontainer/docker-compose.devcontainer.sqlite-tika.yml config
Stop (277 ms): Run: docker-compose -f /home/kanak/work/AI4Bhārat/contrib/paperless-ngx/.devcontainer/docker-compose.devcontainer.sqlite-tika.yml config
The Compose file '.../.devcontainer/docker-compose.devcontainer.sqlite-tika.yml' is invalid because:
**services.paperless-development.environment.PAPERLESS_DEBUG contains true, which is an invalid type, it should be a string, number, or a null**
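A likely fix (an assumption based on the error message, not verified against the repository) is to quote the value in `.devcontainer/docker-compose.devcontainer.sqlite-tika.yml` so that it is parsed as a string rather than a YAML boolean:

```yaml
services:
  paperless-development:
    environment:
      PAPERLESS_DEBUG: "true"  # quoted: the compose schema wants a string, number, or null here
```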
### Steps to reproduce
1. Clone repo
2. Open in VS Code
3. Reopen in Container
### Webserver logs
```bash
No webserver logs.
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14
### Host OS
Ubuntu 22.04.4 LTS
### Installation method
Other (please describe above)
### System status
```json
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-01-18T06:47:31Z | 2025-02-18T03:06:11Z | https://github.com/paperless-ngx/paperless-ngx/issues/8795 | [
"not a bug"
] | dteklavya | 3 |
matplotlib/matplotlib | data-visualization | 28,892 | [Doc]: Be more specific on dependencies that need to be installed for a "reasonable" dev environment | ### Documentation Link
https://matplotlib.org/devdocs/devel/development_setup.html#install-dependencies
### Problem
> Most Python dependencies will be installed when [setting up the environment](https://matplotlib.org/devdocs/devel/development_setup.html#dev-environment) but non-Python dependencies like C++ compilers, LaTeX, and other system applications must be installed separately.
### Suggested improvement
This is not actionable.
"*most*" and "*non-Python dependencies like [...] and other system applications*" makes impossible for a user to know what to install. Digging through following links is cumbersome.
We may not need to give a detailed description, but we should mention what additionally has to be installed manually (or verified to be present) for a reasonably complete working installation, e.g. (not checked for completeness):
> You additionally need
> - for a minimal working development environment: a [C++ compiler]()
> - for building the docs: [Graphviz]() and a [LaTeX]() distribution.
>
> The full list of required and optional dependencies is available here:
>
> [current links]
| closed | 2024-09-26T15:34:15Z | 2024-11-01T01:50:06Z | https://github.com/matplotlib/matplotlib/issues/28892 | [
"Documentation"
] | timhoffm | 1 |
dpgaspar/Flask-AppBuilder | rest-api | 2,275 | Demo url not working | http://flaskappbuilder.pythonanywhere.com is currently not working.
I have seen this problem come up at other times in the past, with issues opened about it.
| open | 2024-10-11T14:42:51Z | 2025-02-21T12:30:31Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2275 | [] | fedepad | 2 |
graphistry/pygraphistry | jupyter | 221 | [FEA] multi-gpu demo | See https://github.com/graphistry/graph-app-kit/issues/56
- [ ] download nb
- [ ] single gpu nb
- [ ] multi-gpu nb
- [ ] parallel io nb
- [ ] tutorial | open | 2021-03-18T06:31:33Z | 2021-03-18T06:32:22Z | https://github.com/graphistry/pygraphistry/issues/221 | [
"enhancement"
] | lmeyerov | 0 |
apachecn/ailearning | scikit-learn | 585 | AI | closed | 2020-05-13T11:04:46Z | 2020-11-23T02:05:17Z | https://github.com/apachecn/ailearning/issues/585 | [] | LiangJiaxin115 | 0 | |
wkentaro/labelme | computer-vision | 629 | [BUG] QT Error on Windows to Launch GUI | QT Error
```
qt.qpa.plugin: Could not find the Qt platform plugin "windows" in ""
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
``` | closed | 2020-03-24T19:14:12Z | 2021-09-23T15:19:28Z | https://github.com/wkentaro/labelme/issues/629 | [
"issue::bug"
] | Zumbalamambo | 4 |
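Regarding the labelme Qt error above: it usually means Qt cannot locate its platform plugin directory. A hedged, best-effort workaround (the function name is mine; it assumes a PyQt5 wheel with a bundled `Qt/plugins` or `Qt5/plugins` layout, which varies across PyQt5 versions) is to point `QT_QPA_PLATFORM_PLUGIN_PATH` at the bundled plugins before launching the GUI:

```python
import importlib.util
import os
from pathlib import Path

def qt_platform_plugin_dir():
    """Best-effort lookup of PyQt5's bundled 'platforms' plugin directory."""
    spec = importlib.util.find_spec("PyQt5")
    if spec is None or not spec.origin:
        return None  # PyQt5 not installed
    base = Path(spec.origin).parent
    # The layout differs across PyQt5 versions; try the common candidates.
    for sub in ("Qt/plugins/platforms", "Qt5/plugins/platforms"):
        candidate = base / sub
        if candidate.is_dir():
            return str(candidate)
    return None

plugin_dir = qt_platform_plugin_dir()
if plugin_dir:
    os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = plugin_dir
```

Reinstalling PyQt5 into a clean environment, as the error message itself suggests, is often the simpler fix.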
piskvorky/gensim | data-science | 2,957 | Clean up OOP / stub methods | Do we really need such stub methods that only call the same superclass method with the same arguments? That's already the default which occurs if no method is present. By my understanding, doc-comment tools like Sphinx will, in their current versions, already propagate superclass API docs down to subclasses.
The only thing that's varying is the comment, and while it expresses a different expected-type from the superclass, in practice that doc may be misleading: I **think** (but have not recently checked) that these `SaveLoad` `.load()` methods can return objects that **may not be** what the caller expects. They return **the class that's in the file**, not the class-that-`.load()`-was-called-on.
If so, it might be a worthwhile short-term step as soon as 4.0.0 – for limiting the risk of confusion & requirement for redundant/caveat-filled docs – to **deprecate the practice of calling *SpecificClass*.load(filename) entirely**, despite its common appearance in previously-idiomatic gensim example code. Instead, either (1) call it only on class `SaveLoad` itself, to express that the only expectation for the returned type is that it's a `SaveLoad` subclass; (2) promote load functionality to model-specific top-level functions in each relevant model – a bit more like the `load_facebook_model()` function for loading Facebook-FasttText-native-format models – which might themselves do some type-checking, so any docs which imply they return a certain type are true; (3) just make one `utils.py` generic `load()` or `load_model()`, perhaps with an optional class-enforcement parameter, and encourage its use.
(For explicitness, I think I like this third option. In practice, it might appear in example code as:
```
from gensim.utils import load_model
from gensim.models import Word2Vec
w2v_model_we_hope = load_model('w2v.bin')
w2v_model_or_error = load_model('w2v.bin', expected_class=Word2Vec)
```
Plenty of code where the file is saved/loaded in the same example block, or under strong expectations & naming conventions, might skip the enforced-type-checking – but it'd be an option & true/explicit, rather than something that's implied-but-not-really-enforced in the current idiom `Word2Vec.load('kv.bin')`)
Despite the effort involved in making such changes, they could minimize duplicated code/comments & avoid some unintuitive gotchas in the current `SaveLoad` approach. They might also help make a future migration to some more standard big-model-serialization convention (as proposed by #2848) cleaner.
_Originally posted by @gojomo in https://github.com/RaRe-Technologies/gensim/pull/2939#discussion_r493807649_ | open | 2020-09-24T10:09:00Z | 2020-09-24T10:27:53Z | https://github.com/piskvorky/gensim/issues/2957 | [
"housekeeping"
] | piskvorky | 0 |
pytest-dev/pytest-cov | pytest | 288 | Regression in 2.7.1 when validating notebooks | I am running pytest for the [`krotov` package](https://travis-ci.org/qucontrol/krotov) with both pytest-cov and the [nbval plugin](https://nbval.readthedocs.io/en/latest/) to validate jupyter notebook in the documentation. Since `pytest-cov` was updated to version 2.7.1, there is extra output related to repr-strings of internal coverage objects appearing in the output of some notebook cells. See https://travis-ci.org/goerz/krotov/jobs/528562228 for the failure, and compare this with the working run at https://travis-ci.org/qucontrol/krotov/jobs/527616527. The first failure is for Cell 14 of https://krotov.readthedocs.io/en/latest/notebooks/05_example_transmon_xgate.html. I'm able to reproduce the problem locally (both on macOS and Linux), not just on Travis, and I can also verify that the problem disappears if I pin `pytest-cov` to version 2.6.1 in `krotov`'s `setup.py`. | open | 2019-05-06T06:46:08Z | 2019-05-06T07:21:35Z | https://github.com/pytest-dev/pytest-cov/issues/288 | [] | goerz | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,802 | Ui "screwed" up in Networks Tabs | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
The UI in all the networks tabs shows grey icons and is missing the scrollbar, which makes it really hard to use.
Tested on different browsers and without extensions.
### Steps to reproduce the problem
Just use it (at least for me)
### What should have happened?
Show icons and scroll bar
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-05-15-14-38.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15323126/sysinfo-2024-05-15-14-38.json)
### Console logs
```Shell
Already up to date.
venv "H:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers --ckpt-dir H:\SD_MODEL_DIR\Models\StableDiffusion --embeddings-dir H:\SD_MODEL_DIR\SD_EMBEDDINGS --lora-dir H:\SD_MODEL_DIR\Models\Lora --gfpgan-dir H:\SD_MODEL_DIR\Models\GFPGAN --esrgan-models-path H:\SD_MODEL_DIR\Models\ESRGAN --realesrgan-models-path H:\SD_MODEL_DIR\Models\RealESRGAN
2024-05-15 16:33:51.134800: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-05-15 16:33:52.270905: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Loading weights [ef76aa2332] from H:\SD_MODEL_DIR\Models\StableDiffusion\realisticVisionV51_v51VAE.safetensors
Creating model from config: H:\stable-diffusion-webui\configs\v1-inference.yaml
H:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 23.7s (prepare environment: 3.6s, import torch: 7.5s, import gradio: 1.3s, setup paths: 5.3s, initialize shared: 0.4s, other imports: 0.8s, opts onchange: 0.3s, load scripts: 2.6s, create ui: 1.0s, gradio launch: 0.5s).
Applying attention optimization: xformers... done.
Model loaded in 8.4s (load weights from disk: 0.4s, create model: 1.3s, apply weights to model: 3.7s, apply dtype to VAE: 0.2s, load textual inversion embeddings: 2.4s, calculate empty prompt: 0.2s).
```
### Additional information
I have updated my GPU driver recently.

| open | 2024-05-15T14:39:23Z | 2024-06-20T19:20:36Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15802 | [
"bug-report"
] | OleJ1964 | 3 |
httpie/cli | api | 606 | output formatting on debian... | hi and thanks for a great tool :)
I'm not seeing JSON output formatted as it is in the screenshots here - I get just a single string of JSON that wraps at the terminal window edge. Is there anything I should be doing differently than:
http [domain.blah]
?
Thanks
| closed | 2017-09-05T20:57:23Z | 2017-09-05T22:59:31Z | https://github.com/httpie/cli/issues/606 | [] | fake-fur | 7 |
paperless-ngx/paperless-ngx | machine-learning | 9,390 | [BUG] Edit Permission not set | ### Description
The "Edit" Permission is not set in a document, although the permission is set in the workflow.
Workflow:
<img width="531" alt="Image" src="https://github.com/user-attachments/assets/6e79e8ca-08a7-4ca0-8d71-4afa647e3e9a" />
Document:
<img width="413" alt="Image" src="https://github.com/user-attachments/assets/4a2ccd4a-a25a-435a-8d10-0456c4cb824c" />
### Steps to reproduce
1. Create a workflow
2. Assign an owner under Actions and add another user to the edit permissions
3. Add a document that meets the workflow's criteria
### Webserver logs
```bash
Not related
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.7
### Host OS
Linux-4.4.302+-x86_64-with-glibc2.36
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.14.7",
"server_os": "Linux-4.4.302+-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 469010432000,
"available": 216089976832
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "documents.1061_workflowactionwebhook_as_json",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2025-03-13T14:37:54.904629+01:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2025-03-13T14:05:03.298540Z",
"classifier_error": null
}
}
```
### Browser
Safari
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-03-13T15:00:45Z | 2025-03-13T17:59:59Z | https://github.com/paperless-ngx/paperless-ngx/issues/9390 | [
"cant-reproduce"
] | weizenmanncom | 5 |
3b1b/manim | python | 1,947 | Unused Function `digest_mobject_attrs` | I'm not quite sure if this function is still used. It was last changed about 3 years ago, appears to have no remaining callers, and has no references either.
https://github.com/3b1b/manim/blob/fcff44a66b58a3af4070381afed0b4fad80768be/manimlib/mobject/mobject.py#L416-L423
(I'm currently working on updating the CE version to the new state of your repo, so if there's anything I can do for you while I'm looking at the code anyway, just leave it here. Always happy to help.)
yt-dlp/yt-dlp | python | 12,377 | Add support for Public Radio Exchange (PRX) | ### Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [x] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar requests **including closed ones**. DO NOT post duplicates
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
United States
### Example URLs
Sample link with a single stream: https://exchange.prx.org/pieces/558187?m=false
Sample link with multiple streams: https://exchange.prx.org/pieces/507265?m=false
Playlist link: https://exchange.prx.org/playlists/354104
### Provide a description that is worded well enough to be understood
This site features streams of public radio programs and podcasts using an embedded player.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://exchange.prx.org/pieces/507265?m=false']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.02.11.232920 from yt-dlp/yt-dlp-nightly-builds [6ca23ffaa] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.0.1-full_build-www.gyan.dev (setts)
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1841 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.02.11.232920 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.02.11.232920 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://exchange.prx.org/pieces/507265?m=false
[generic] 507265?m=false: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 507265?m=false: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://exchange.prx.org/pieces/507265?m=false
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1637, in wrapper
File "yt_dlp\YoutubeDL.py", line 1772, in __extract_info
File "yt_dlp\extractor\common.py", line 747, in extract
File "yt_dlp\extractor\generic.py", line 2566, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://exchange.prx.org/pieces/507265?m=false
``` | open | 2025-02-16T02:08:15Z | 2025-02-16T16:02:51Z | https://github.com/yt-dlp/yt-dlp/issues/12377 | [
"site-request"
] | wkrick | 1 |
huggingface/datasets | numpy | 7,427 | Error splitting the input into NAL units. | ### Describe the bug
I am trying to finetune Qwen2.5-VL on 16 * 80G GPUs, using `LLaMA-Factory` with `preprocessing_num_workers=16`. However, I hit the following error and the program seems to have crashed. The error appears to come from the `datasets` library.
The error log looks like the following:
```text
Converting format of dataset (num_proc=16): 100%|█████████▉| 19265/19267 [11:44<00:00, 5.88 examples/s]
Converting format of dataset (num_proc=16): 100%|█████████▉| 19266/19267 [11:44<00:00, 5.02 examples/s]
Converting format of dataset (num_proc=16): 100%|██████████| 19267/19267 [11:44<00:00, 5.44 examples/s]
Converting format of dataset (num_proc=16): 100%|██████████| 19267/19267 [11:44<00:00, 27.34 examples/s]
Running tokenizer on dataset (num_proc=16): 0%| | 0/19267 [00:00<?, ? examples/s]
Invalid NAL unit size (45405 > 35540).
Invalid NAL unit size (86720 > 54856).
Invalid NAL unit size (7131 > 3225).
missing picture in access unit with size 54860
Invalid NAL unit size (48042 > 33645).
missing picture in access unit with size 3229
missing picture in access unit with size 33649
Invalid NAL unit size (86720 > 54856).
Invalid NAL unit size (48042 > 33645).
Error splitting the input into NAL units.
missing picture in access unit with size 35544
Invalid NAL unit size (45405 > 35540).
Error splitting the input into NAL units.
Error splitting the input into NAL units.
Invalid NAL unit size (8187 > 7069).
missing picture in access unit with size 7073
Invalid NAL unit size (8187 > 7069).
Error splitting the input into NAL units.
Invalid NAL unit size (7131 > 3225).
Error splitting the input into NAL units.
Invalid NAL unit size (14013 > 5998).
missing picture in access unit with size 6002
Invalid NAL unit size (14013 > 5998).
Error splitting the input into NAL units.
Invalid NAL unit size (17173 > 7231).
missing picture in access unit with size 7235
Invalid NAL unit size (17173 > 7231).
Error splitting the input into NAL units.
Invalid NAL unit size (16964 > 6055).
missing picture in access unit with size 6059
Invalid NAL unit size (16964 > 6055).
Exception in thread Thread-9 (accepter)Error splitting the input into NAL units.
:
Traceback (most recent call last):
File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
Running tokenizer on dataset (num_proc=16): 0%| | 0/19267 [13:22<?, ? examples/s] self.run()
File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 953, in run
Invalid NAL unit size (7032 > 2927).
missing picture in access unit with size 2931
self._target(*self._args, **self._kwargs)
File "/opt/conda/envs/python3.10.13/lib/python3.10/site-packages/multiprocess/managers.py", line 194, in accepter
Invalid NAL unit size (7032 > 2927).
Error splitting the input into NAL units.
t.start()
File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 935, in start
Invalid NAL unit size (28973 > 6121).
missing picture in access unit with size 6125
_start_new_thread(self._bootstrap, ())Invalid NAL unit size (28973 > 6121).
RuntimeError: can't start new threadError splitting the input into NAL units.
Invalid NAL unit size (4411 > 296).
missing picture in access unit with size 300
Invalid NAL unit size (4411 > 296).
Error splitting the input into NAL units.
Invalid NAL unit size (14414 > 1471).
missing picture in access unit with size 1475
Invalid NAL unit size (14414 > 1471).
Error splitting the input into NAL units.
Invalid NAL unit size (5283 > 1792).
missing picture in access unit with size 1796
Invalid NAL unit size (5283 > 1792).
Error splitting the input into NAL units.
Invalid NAL unit size (79147 > 10042).
missing picture in access unit with size 10046
Invalid NAL unit size (79147 > 10042).
Error splitting the input into NAL units.
```
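The `Invalid NAL unit size` / `missing picture` lines appear to come from FFmpeg (via PyAV, which is in the requirements below) failing to decode corrupt or truncated H.264 video samples during tokenization, and the `RuntimeError: can't start new thread` in the traceback suggests the worker pool then ran out of thread resources. A hedged pre-filtering sketch — the function name and the deliberately broad exception handling are mine, since PyAV's error hierarchy varies across versions — to find undecodable files before handing the dataset to 16 workers:

```python
def is_decodable(path):
    """Return True if PyAV can decode at least one video frame from `path`,
    False if it cannot, and None if PyAV is not installed."""
    try:
        import av  # PyAV; optional dependency
    except ImportError:
        return None
    try:
        with av.open(path) as container:
            for _frame in container.decode(video=0):
                return True
        return False  # file opened, but no video frames decoded
    except Exception:
        # Broad on purpose: PyAV raises different error classes across versions
        # (av.AVError, av.error.*), and corrupt streams can fail in many ways.
        return False
```

Samples for which this returns `False` could be dropped or re-encoded before the `map`/tokenization step, so a single bad clip doesn't stall a whole worker.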
### Others
_No response_
### Steps to reproduce the bug
None
### Expected behavior
excpect to run successfully
### Environment info
```
transformers==4.49.0
datasets==3.2.0
accelerate==1.2.1
peft==0.12.0
trl==0.9.6
tokenizers==0.21.0
gradio>=4.38.0,<=5.18.0
pandas>=2.0.0
scipy
einops
sentencepiece
tiktoken
protobuf
uvicorn
pydantic
fastapi
sse-starlette
matplotlib>=3.7.0
fire
packaging
pyyaml
numpy<2.0.0
av
librosa
tyro<0.9.0
openlm-hub
qwen-vl-utils
``` | open | 2025-02-28T02:30:15Z | 2025-03-04T01:40:28Z | https://github.com/huggingface/datasets/issues/7427 | [] | MengHao666 | 2 |
huggingface/datasets | tensorflow | 6,896 | Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset | ### Describe the bug
While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:
```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
[<ipython-input-1-d6a3c721d3b8>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("pysentimiento/spanish-tweets-small")
3 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150
2151 # Download and prepare data
-> 2152 builder_instance.download_and_prepare(
2153 download_config=download_config,
2154 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
946 if num_proc is not None:
947 prepare_split_kwargs["num_proc"] = num_proc
--> 948 self._download_and_prepare(
949 dl_manager=dl_manager,
950 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1059
1060 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:
-> 1061 verify_splits(self.info.splits, split_dict)
1062
1063 # Update the info object with the splits.
[/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_splits(expected_splits, recorded_splits)
98 ]
99 if len(bad_splits) > 0:
--> 100 raise NonMatchingSplitsSizesError(str(bad_splits))
101 logger.info("All the splits matched successfully.")
102
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=82649695458, num_examples=597433111, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3358310095, num_examples=24898932, shard_lengths=[3626991, 3716991, 4036990, 3506990, 3676990, 3716990, 2616990], dataset_name='spanish-tweets-small')}]
```
I think I had updated this dataset; it might be related to #6271
It works fine as late as `2.10.0`, but not from `2.13.0` onwards.
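For context on why this raises: the loader compares the split sizes recorded in the dataset card's metadata against what it actually downloaded. A simplified reconstruction of that check (my own sketch, not the library's exact code):

```python
from dataclasses import dataclass

@dataclass
class SplitInfo:
    name: str
    num_bytes: int
    num_examples: int

class NonMatchingSplitsSizesError(Exception):
    pass

def verify_splits(expected, recorded):
    # Mirrors datasets.utils.info_utils.verify_splits in spirit: any split
    # whose recorded size differs from the advertised one is reported.
    bad_splits = [
        {"expected": exp, "recorded": rec}
        for exp, rec in zip(expected, recorded)
        if (exp.num_examples, exp.num_bytes) != (rec.num_examples, rec.num_bytes)
    ]
    if bad_splits:
        raise NonMatchingSplitsSizesError(str(bad_splits))

# The metadata still advertises ~597M train examples while the re-uploaded
# data holds ~24.9M, so the check fails:
expected = [SplitInfo("train", 82649695458, 597433111)]
recorded = [SplitInfo("train", 3358310095, 24898932)]
```

So until the card metadata is regenerated, passing `verification_mode="no_checks"` to `load_dataset` (the parameter is visible in the traceback's signature) should skip this comparison, though I haven't confirmed that on this dataset.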
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("pysentimiento/spanish-tweets-small")
```
You can run it in [this notebook](https://colab.research.google.com/drive/1FdhqLiVimHIlkn7B54DbhizeQ4U3vGVl#scrollTo=YgA50cBSibUg)
### Expected behavior
Load the dataset without any error
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- PyArrow version: 14.0.2
- Pandas version: 2.0.3 | open | 2024-05-13T15:41:57Z | 2024-05-13T15:44:48Z | https://github.com/huggingface/datasets/issues/6896 | [] | finiteautomata | 0 |
nolar/kopf | asyncio | 810 | Handler starts but never finishes | ### Long story short
I have recently deployed our operator to an AKS cluster (it's been running on EKS & on-prem clusters without any issue) & started noticing that it wasn't
handling changes to CRDs; restarting the container would trigger the handler successfully. At first I thought that we were missing events, so I explicitly added some [api timeouts](https://kopf.readthedocs.io/en/stable/configuration/#api-timeouts) at startup.
However this didn't make a difference & events still seemed to be missed. On further investigation (running with the --debug startup param) the event did seem to get picked up & the handler triggered, but it just seems to hang without ever finishing.
```
debug | 2021-07-26T17:14:00.672642+00:00 | [thor/thor-configs] Updating diff: (('change', ('spec', 'patch'), '2021-07-26T17:04:08Z', '2021-07-26T17:13:56Z'),)
debug | 2021-07-26T17:14:00.675029+00:00 | [thor/thor-configs] Handler 'config_update_handler' is invoked.
```
There would be no `Handler 'config_update_handler' succeeded.` log line (I left it for over 1 hour). It seems it would stay in this state forever.
When restarting the container there are some log lines that talk about `Unprocessed streams`, but I haven't been able to figure out why.
```
debug | 2021-07-26T17:20:03.375018+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d2c6df0> starts SSL handshake
debug | 2021-07-26T17:20:03.394016+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d2c6df0>: SSL handshake took 18.8 ms
debug | 2021-07-26T17:20:03.394890+00:00 | <asyncio.TransportSocket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.0.0.48', 57824), raddr=('10.1.0.1', 443)> connected to 10.1.0.1:443: (<asyncio.sslproto._SSLProtocolTransport object at 0x7fe41d4abfa0>, <aiohttp.client_proto.ResponseHandler object at 0x7fe41da1d760>)
debug | 2021-07-26T17:20:03.404854+00:00 | Keep-alive in 'default' cluster-wide: not found.
info | 2021-07-26T17:20:35.960403+00:00 | Signal SIGTERM is received. Operator is stopping.
debug | 2021-07-26T17:20:35.961961+00:00 | Stopping the watch-stream for customresourcedefinitions.v1.apiextensions.k8s.io cluster-wide.
debug | 2021-07-26T17:20:35.965829+00:00 | Namespace observer is cancelled.
debug | 2021-07-26T17:20:35.966790+00:00 | Credentials retriever is cancelled.
debug | 2021-07-26T17:20:35.968802+00:00 | Poster of events is cancelled.
debug | 2021-07-26T17:20:35.976232+00:00 | Stopping the watch-stream for clusterkopfpeerings.v1.kopf.dev cluster-wide.
debug | 2021-07-26T17:20:35.981848+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d8c9af0>: SSL error in data received
debug | 2021-07-26T17:20:35.998038+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d8ec430>: SSL error in data received
debug | 2021-07-26T17:20:36.002980+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d8ec130>: SSL error in data received
debug | 2021-07-26T17:20:36.004243+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d8c9c70>: SSL error in data received
debug | 2021-07-26T17:20:36.007808+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d8ec1f0>: SSL error in data received
debug | 2021-07-26T17:20:36.008655+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d8c9e80>: SSL error in data received
debug | 2021-07-26T17:20:36.010025+00:00 | Stopping the watch-stream for secrets.v1 cluster-wide.
debug | 2021-07-26T17:20:36.012810+00:00 | Stopping the watch-stream for configs.v1.company.com cluster-wide.
debug | 2021-07-26T17:20:36.014347+00:00 | Stopping the watch-stream for apiaccesscredentials.v1.company.com cluster-wide.
debug | 2021-07-26T17:20:36.015304+00:00 | Stopping the watch-stream for certificates.v1.company.com cluster-wide.
debug | 2021-07-26T17:20:36.020593+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d8ec040>: SSL error in data received
debug | 2021-07-26T17:20:36.038528+00:00 | [thor/certificate-server-tls-cert] Timer 'renew_tls_certificate' has exited gracefully.
debug | 2021-07-26T17:20:36.046421+00:00 | Daemon killer is cancelled.
debug | 2021-07-26T17:20:36.053462+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d4b80d0> starts SSL handshake
debug | 2021-07-26T17:20:36.055670+00:00 | Resource observer is cancelled.
debug | 2021-07-26T17:20:36.082348+00:00 | <asyncio.sslproto.SSLProtocol object at 0x7fe41d4b80d0>: SSL handshake took 28.7 ms
debug | 2021-07-26T17:20:36.083909+00:00 | <asyncio.TransportSocket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.0.0.48', 58122), raddr=('10.1.0.1', 443)> connected to 10.1.0.1:443: (<asyncio.sslproto._SSLProtocolTransport object at 0x7fe41d9c5580>, <aiohttp.client_proto.ResponseHandler object at 0x7fe4232cf4c0>)
debug | 2021-07-26T17:20:36.092817+00:00 | Keep-alive in 'default' cluster-wide: not found.
warn | 2021-07-26T17:20:38.033284+00:00 | Unprocessed streams left for [(configs.v1.company.com, 'c1a72afd-d36c-45e3-8cf1-3e713f39ac79')].
debug | 2021-07-26T17:20:45.976898+00:00 | Streaming tasks are not stopped: finishing normally; tasks left: {<Task pending name='watcher for configs.v1.company.com@None' coro=<guard() running at /usr/local/lib/python3.8/site-packages/kopf/utilities/aiotasks.py:69> wait_for=<Future pending cb=[shield.<locals>._outer_done_callback() at /usr/local/lib/python3.8/asyncio/tasks.py:902, <TaskWakeupMethWrapper object at 0x7fe41d2c4a30>()] created at /usr/local/lib/python3.8/asyncio/base_events.py:422> created at /usr/local/lib/python3.8/asyncio/tasks.py:382>}
debug | 2021-07-26T17:20:55.980860+00:00 | Streaming tasks are not stopped: finishing normally; tasks left: {<Task pending name='watcher for configs.v1.company.com@None' coro=<guard() running at /usr/local/lib/python3.8/site-packages/kopf/utilities/aiotasks.py:69> wait_for=<Future pending cb=[shield.<locals>._outer_done_callback() at /usr/local/lib/python3.8/asyncio/tasks.py:902, <TaskWakeupMethWrapper object at 0x7fe41d2c4a30>()] created at /usr/local/lib/python3.8/asyncio/base_events.py:422> created at /usr/local/lib/python3.8/asyncio/tasks.py:382>}
parse error: Invalid numeric literal at line 556, column 4
```
Not sure whether this is related to #718, as it does seem similar. One thing to note is that the cluster is not running at scale; there are fewer than 5 resources being watched.
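One way to at least make such hangs visible (a generic stdlib-asyncio sketch, not kopf's API): wrap the handler body with a deadline so a stuck await surfaces as a timeout instead of silence:

```python
import asyncio

async def stuck_handler():
    # Stand-in for a handler coroutine that never finishes (hypothetical).
    await asyncio.sleep(3600)

async def run_with_deadline(coro, timeout):
    # asyncio.wait_for cancels the coroutine once the deadline passes.
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        return "handler timed out"

print(asyncio.run(run_with_deadline(stuck_handler(), 0.05)))  # handler timed out
```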
### Kopf version
1.29.0
### Kubernetes version
1.19.11
### Python version
3.8
### Code
_No response_
### Logs
_No response_
### Additional information
_No response_ | closed | 2021-07-26T17:42:57Z | 2021-08-05T15:43:06Z | https://github.com/nolar/kopf/issues/810 | [
"bug"
] | euan-tilley | 3 |
widgetti/solara | fastapi | 182 | Passthrough kwargs to Tooltip | The Tooltip only allows a handful of options, but since it's a fairly thin wrapper around ipyvuetify Tooltip we could pass through **kwargs to the underlying tooltip (or enumerate every option but just passing kwargs is easier).
One situation where I would have found this useful is using it together with ipyleaflet: since leaflet maps have a high z-index, it's necessary to set a higher z-index CSS property via the style arg on floating elements like Tooltips and Dialogs. | open | 2023-06-29T07:43:07Z | 2023-06-29T07:43:07Z | https://github.com/widgetti/solara/issues/182 | [] | mangecoeur | 0 |
geopandas/geopandas | pandas | 3,402 | Test with zoneinfo alongside pytz | > [...]we should probably parametrise the tests so we test the behaviour is right with zoneinfo too
_Originally posted by @m-richards in https://github.com/geopandas/geopandas/pull/3401#pullrequestreview-2242358553_
| closed | 2024-08-16T10:08:54Z | 2024-10-28T07:55:16Z | https://github.com/geopandas/geopandas/issues/3402 | [] | martinfleis | 0 |
microsoft/MMdnn | tensorflow | 730 | error converting from keras to caffe | Platform (like ubuntu 16.04/win10):ubuntu 16.04
Python version:Python 2.7.16
Source framework with version (like Tensorflow 1.4.1 with GPU):tensorflow 1.9.0
Destination framework with version (like CNTK 2.3 with GPU):caffe 1.0.0
Pre-trained model path (webpath or webdisk path):custom file
Running scripts: mmconvert -sf keras -iw model_076720.h5 -df caffe -om mycaffemodel
```
Using TensorFlow backend.
Traceback (most recent call last):
File "/home/taeheej/work/nobkup/anaconda2/envs/deephipy2f/bin/mmconvert", line 10, in <module>
sys.exit(_main())
File "/home/taeheej/work/nobkup/anaconda2/envs/deephipy2f/lib/python2.7/site-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
ret = convertToIR._convert(ir_args)
File "/home/taeheej/work/nobkup/anaconda2/envs/deephipy2f/lib/python2.7/site-packages/mmdnn/conversion/_script/convertToIR.py", line 46, in _convert
parser = Keras2Parser(model)
File "/home/taeheej/work/nobkup/anaconda2/envs/deephipy2f/lib/python2.7/site-packages/mmdnn/conversion/keras/keras2_parser.py", line 120, in __init__
'DepthwiseConv2D': layers.DepthwiseConv2D
File "/home/taeheej/work/nobkup/anaconda2/envs/deephipy2f/lib/python2.7/site-packages/keras/engine/saving.py", line 417, in load_model
f = h5dict(filepath, 'r')
File "/home/taeheej/work/nobkup/anaconda2/envs/deephipy2f/lib/python2.7/site-packages/keras/utils/io_utils.py", line 197, in __init__
'Received: {}.'.format(type(path)))
TypeError: Required Group, str or dict. Received: <type 'unicode'>.
``` | open | 2019-09-05T16:46:33Z | 2019-09-09T02:44:19Z | https://github.com/microsoft/MMdnn/issues/730 | [] | TaeheeJeong | 1 |
aminalaee/sqladmin | asyncio | 721 | CSS and Javascript files for the Admin not loaded | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
I deployed my project on the server, but the Admin renders as plain HTML only. I followed the steps in this link (https://aminalaee.dev/sqladmin/cookbook/deployment_with_https/) and added the uvicorn options, but it is still not working and I am getting the same problem
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
Python 3.10 ,FastAPI, SQLAdmin
### Additional context
_No response_ | closed | 2024-02-29T10:12:28Z | 2024-03-13T12:21:05Z | https://github.com/aminalaee/sqladmin/issues/721 | [] | sharbelAllouneh | 2 |
recommenders-team/recommenders | data-science | 1,274 | [ASK] Rename master branch -> main | ### Description
Rename master branch -> main
### Other Comments
| closed | 2021-01-07T14:20:25Z | 2021-01-22T09:06:23Z | https://github.com/recommenders-team/recommenders/issues/1274 | [
"help wanted"
] | gramhagen | 1 |
HumanSignal/labelImg | deep-learning | 293 | Previous labels do not appear when I reopen an image file | <!--
Hi,
I worked on LableImg Windows_v1.6.1 and labelled some images.
Then I had the issue #221:
Before I use the solution on GitHub (i.e. to remove the file .labelImgSettings.pkl), I re-installed the same version of LabelImg. With the solution of GitHub, I can launch the program and continue my annotation task.
However, when I reopen a image file that I have previously annotated (i.e. before the re-installation of LabelImg), the boundary box does not appear anymore (the xml file is still availaible in the same directory as the image file).
How can I fix that?
Many thanks!
-->
- **OS:** Windows 7
- **PyQt version:** I have downloaded LabelImg from prebuilt binaries
| closed | 2018-05-11T09:39:24Z | 2018-05-22T06:51:18Z | https://github.com/HumanSignal/labelImg/issues/293 | [] | Bigoudom | 5 |
deepinsight/insightface | pytorch | 2,373 | Please provide 256 or 512 model for face swap. It will improve output quality. | Thanks for providing 128 size model for face swap.
Please provide 256 or 512 model.
It will greatly improve output quality.
Thanks a lot.
Waiting for your positive reply. | open | 2023-07-16T13:33:21Z | 2024-07-03T03:00:53Z | https://github.com/deepinsight/insightface/issues/2373 | [] | arnold408 | 2 |
fastapi/sqlmodel | sqlalchemy | 312 | TypeError: issubclass() arg 1 must be a class | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from datetime import date
from typing import Optional, Union
from sqlmodel import SQLModel
class Person(SQLModel):
name: str
birthdate: Optional[Union[date, str]]
....
```
### Description
I want to store information about some people in a MySQL database. Due to the nature of the information, birth dates can be full dates (1940-05-02), month and year (1932-07) or years only (1965). I searched the pydantic documentation and it says to use Unions to accept multiple data types. However, when I try this, sqlmodel raises the error `TypeError: issubclass() arg 1 must be a class`. I know the issue comes from the `Union`, because if I remove it the code works just fine.
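As a possible workaround while the `Union` case is unsupported (my assumption, not something the docs confirm): declare the column as `Optional[str]` and normalize incoming values before assignment, e.g. with a small stdlib helper:

```python
from datetime import date

def normalize_birthdate(value):
    # Accepts a date object, "YYYY-MM-DD", "YYYY-MM" or "YYYY" and returns
    # a canonical string for storage in a plain str column.
    if isinstance(value, date):
        return value.isoformat()
    parts = str(value).split("-")
    if not (1 <= len(parts) <= 3) or not all(p.isdigit() for p in parts):
        raise ValueError(f"unrecognized birthdate: {value!r}")
    return "-".join(parts)

print(normalize_birthdate(date(1940, 5, 2)))  # 1940-05-02
print(normalize_birthdate("1932-07"))         # 1932-07
print(normalize_birthdate("1965"))            # 1965
```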
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.10.2
### Additional Context
_No response_ | open | 2022-04-23T23:47:10Z | 2022-04-24T21:43:04Z | https://github.com/fastapi/sqlmodel/issues/312 | [
"question"
] | Maypher | 1 |
litestar-org/polyfactory | pydantic | 563 | Bug: `ModelFactory._create_model` not handling `_build_context = None` | ### Description
I'm getting errors from the `ModelFactory._create_model` function, which does not handle the case where the passed-in build context is `None`: https://github.com/litestar-org/polyfactory/pull/549/files#diff-58f44001d9e3d42e5c8fc55d621181dc66f78b37b46a414bb1bf6b6dd1d2bcbbR504
Should that line be:
```py
if cls._get_build_context(_build_context).get("factory_use_construct"):
```
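In isolation, the shape of that fix is just a None-tolerant lookup (a sketch only; the project's real `_get_build_context` may do more):

```python
def get_build_context(build_context):
    # Treat a missing build context as empty instead of dereferencing None.
    return build_context if build_context is not None else {}

print(get_build_context(None).get("factory_use_construct"))  # None
print(get_build_context({"factory_use_construct": True}).get("factory_use_construct"))  # True
```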
On another note, it seems this issue has highlighted the fact that the build context isn't being passed down to factories for models in collection fields when using `.coverage()` (see traceback for the snippet in the mcve)
EDIT: Reading the PR that introduced this... Seems like this has already been raised.
@guacs @Reskov
### MCVE
```python
from pydantic import BaseModel
from polyfactory.factories.pydantic_factory import ModelFactory
class T(BaseModel):
i: int
class L(BaseModel):
l: list[T]
list(ModelFactory.create_factory(L).coverage())
```
### Logs
```bash
@classmethod
def _create_model(cls, _build_context: PydanticBuildContext, **kwargs: Any) -> T:
"""Create an instance of the factory's __model__
:param _build_context: BuildContext instance.
:param kwargs: Model kwargs.
:returns: An instance of type T.
"""
> if _build_context.get("factory_use_construct"):
E AttributeError: 'NoneType' object has no attribute 'get'
.venv/lib/python3.10/site-packages/polyfactory/factories/pydantic_factory.py:504: AttributeError
```
### Release Version
2.16.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-07-08T05:24:59Z | 2025-03-20T15:53:17Z | https://github.com/litestar-org/polyfactory/issues/563 | [
"bug"
] | sam-or | 1 |
benbusby/whoogle-search | flask | 196 | [BUG] !bang-operators only work as prefix | **Describe the bug**
The !bang-operators adopted from DuckDuckGo only seem to be identified correctly when typed _in front of_ other search terms. On DDG they can be placed anywhere in the search e.g. "hello world !reddit" or even "hello !reddit world". Both would search the website reddit.com for "hello world".
**To Reproduce**
Steps to reproduce the behavior:
1. Search for something using a !bang-operator _at the end_
2. !bang-operator is ignored and no redirection to target website occurs
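A position-independent parse is not much code; here is a sketch (hypothetical, not Whoogle's actual resolver) that accepts the bang anywhere in the query:

```python
import re

def split_bang(query):
    # Find a !bang token anywhere in the query and return (bang, remainder).
    match = re.search(r"(?:^|\s)!(\w+)(?=\s|$)", query)
    if not match:
        return None, query
    rest = (query[: match.start()] + " " + query[match.end():]).strip()
    return match.group(1), re.sub(r"\s+", " ", rest)

print(split_bang("hello world !reddit"))  # ('reddit', 'hello world')
print(split_bang("hello !reddit world"))  # ('reddit', 'hello world')
```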
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [X] Docker (with "buildx-experimental" Docker tag)
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [X] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: macOS 10.15.7
- Browser: Brave
- Version: 1.19.92
| closed | 2021-02-12T08:53:52Z | 2021-02-20T20:32:55Z | https://github.com/benbusby/whoogle-search/issues/196 | [
"bug"
] | Wever1985 | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 781 | CUDA error: an illegal memory access was encountered | I have identified that the `identifyTileRanges` function is causing the issue, but I'm not quite sure how to resolve it. Do you have any constructive suggestions?
https://github.com/graphdeco-inria/diff-gaussian-rasterization/blob/59f5f77e3ddbac3ed9db93ec2cfe99ed6c5d121d/cuda_rasterizer/rasterizer_impl.cu#L116
**Oddly, I didn't encounter any issues when I directly loaded the saved input parameters for rendering.**
[dump_file.zip](https://github.com/graphdeco-inria/gaussian-splatting/files/15159771/dump_file.zip)
```
pytorch 1.13.1
cuda 11.4
A100 SXM4 80G
```
After several days of debugging, I found that the concatenation of keys might have caused anomalies, leading to memory errors. By storing them separately instead of concatenating, this issue was resolved. However, another exception occurred in the `FORWARD::render` function, and the problem couldn't be reproduced with the parameters saved in debug mode. | closed | 2024-04-30T07:21:18Z | 2025-01-05T10:48:49Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/781 | [] | baoachun | 16 |
unionai-oss/pandera | pandas | 1,032 | Cannot set pa.Column.nullable after it's been set | ```python
schm = pa.infer_schema(df)
for name, column in schm.columns.items():
column.checks = []
column.coerce = True
# column.nullable cannot be set :(
column.nullable = False
print(column.nullable)
break
```
I'm trying to correct the DataFrameSchema to do what I need it to do. However,
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
3 column.checks = []
4 column.coerce = True
----> 5 column.nullable = True
6 # column.co
7 print(column.nullable)
AttributeError: can't set attribute 'nullable'
```
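Since `nullable` is exposed as a read-only property, the usual way around it is to rebuild columns with the desired attributes instead of mutating them (and, if I read the API right, `DataFrameSchema.update_columns` exists for exactly this, though I haven't verified it here). The rebuild pattern, sketched with a frozen dataclass standing in for pandera's `Column`:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Column:
    # Hypothetical stand-in for a read-only schema column.
    dtype: str
    nullable: bool = True
    coerce: bool = False
    checks: tuple = ()

def with_overrides(columns, **overrides):
    # Build fresh column objects instead of assigning to read-only attributes.
    return {name: replace(col, **overrides) for name, col in columns.items()}

schema = {"a": Column("int64"), "b": Column("str")}
fixed = with_overrides(schema, nullable=False, coerce=True, checks=())
print(fixed["a"].nullable, fixed["b"].coerce)  # False True
```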
And this is because, as far as I could see, `infer_schema` has no option to prohibit nullables. | open | 2022-11-22T13:36:25Z | 2022-11-23T14:12:12Z | https://github.com/unionai-oss/pandera/issues/1032 | [
"bug"
] | bgalvao | 2 |
ultralytics/ultralytics | deep-learning | 18,796 | Number of class instances does not increase with training parameter `augment=True` | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I was recently testing out the augmentation parameter `augment` to increase my training samples.
I did this via `augment=True`
```
model.train(data='/content/data.yaml', epochs=50, imgsz=800, augment=True)
```
I observed the output but noticed that my instances did not increase.
Is this normal functionality?
<img width="514" alt="Image" src="https://github.com/user-attachments/assets/77644c83-10c8-4305-83fe-5048638de63a" />
### Additional
_No response_ | open | 2025-01-21T10:08:12Z | 2025-01-21T10:15:39Z | https://github.com/ultralytics/ultralytics/issues/18796 | [
"question",
"detect"
] | fninsiima | 2 |
benbusby/whoogle-search | flask | 167 | [BUG] Inconsistent dark mode. | **Describe the bug**
Dark mode is inconsistent.
**To Reproduce**
Just deployed the latest container from docker hub and found it.
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: Windows
- Browser: Vivaldi (based on Chromium)
- Version [e.g. 22]
**Smartphone (please complete the following information):**
None
**Additional context**
Screenshot
https://telegra.ph/file/70bf1f8ab3dd3e1efde7f.png | closed | 2021-01-15T10:21:41Z | 2021-01-23T19:25:37Z | https://github.com/benbusby/whoogle-search/issues/167 | [
"bug"
] | doloresjose | 3 |
twopirllc/pandas-ta | pandas | 27 | Minor issue in the example notebook | The example notebook shows
`e.ta.indicators()`
but I think this needs to be
`e().ta.indicators()`
Or you need to assign `e = pd.DataFrame()` ... coder's choice. :)
| closed | 2019-07-20T04:35:50Z | 2019-07-31T17:38:11Z | https://github.com/twopirllc/pandas-ta/issues/27 | [] | bdowling | 5 |
aio-libs-abandoned/aioredis-py | asyncio | 1,206 | Health check fails when a pubsub has no subscriptions | ### Describe the bug
When a PubSub needs to issue a PING due to the health check feature, it does not consider that there might be no subscriptions at the moment. Redis responds differently to PING depending on whether there are active subscriptions or not: if there are no subscriptions it just returns the argument as a bulk response, instead of a multi-bulk with "pong" and the response. This breaks the code that detects the health check response, and instead the individual bytes of the `aioredis-py-health-check` string get inserted into the returned message.
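Concretely, the two reply shapes look like this, and a shape-aware check has to accept both (a simplified sketch, not the library's actual parser):

```python
HEALTH_CHECK_MESSAGE = b"aioredis-py-health-check"

def is_health_check_reply(reply):
    # With at least one subscription, Redis answers PING inside the pubsub
    # stream as a multi-bulk: [b"pong", <argument>]. With no subscriptions,
    # it returns the bare argument as a bulk string instead.
    if isinstance(reply, list):
        return len(reply) == 2 and reply[0] == b"pong" and reply[1] == HEALTH_CHECK_MESSAGE
    return reply == HEALTH_CHECK_MESSAGE

print(is_health_check_reply([b"pong", HEALTH_CHECK_MESSAGE]))  # True
print(is_health_check_reply(HEALTH_CHECK_MESSAGE))             # True
```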
### To Reproduce
1. Install aioredis 2.0.0
2. Run this code:
```python
#!/usr/bin/env python3
import asyncio
import aioredis
async def poll(ps):
while True:
message = await ps.get_message(timeout=1)
if message is not None:
print(message)
async def main():
r = aioredis.Redis.from_url("redis://localhost", health_check_interval=2)
ps = r.pubsub()
await ps.subscribe("foo")
poller = asyncio.create_task(poll(ps))
await asyncio.sleep(5)
await ps.unsubscribe("foo")
await asyncio.sleep(5)
await ps.subscribe("baz")
poller.cancel()
try:
await poller
except asyncio.CancelledError:
pass
asyncio.run(main())
```
### Expected behavior
Expected all messages printed to have proper types.
### Logs/tracebacks
```python-traceback
{'type': 'subscribe', 'pattern': None, 'channel': b'foo', 'data': 1}
{'type': 'unsubscribe', 'pattern': None, 'channel': b'foo', 'data': 0}
{'type': 97, 'pattern': None, 'channel': 105, 'data': 111}
{'type': 97, 'pattern': None, 'channel': 105, 'data': 111}
```
Note that 97, 105, 111 are the result of indexing b"aioredis-py-health-check" with indices 0, 1, 2.
### Python Version
```console
$ python --version
Python 3.8.10
```
### aioredis Version
```console
$ python -m pip show aioredis
Name: aioredis
Version: 2.0.0
```
### Additional context
redis-py seems to have a [similar bug](https://github.com/redis/redis-py/issues/1720) with the interaction between health checks and pub-sub, but the failure mode is not the same (in redis-py it seems to be some sort of race condition, whereas in aioredis it appears reliably reproducible), so this might need an aioredis-specific fix.
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct | open | 2021-11-17T07:08:41Z | 2021-11-18T05:02:49Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/1206 | [
"bug"
] | bmerry | 2 |
twopirllc/pandas-ta | pandas | 559 | chandelier_exit | Here is a code that i have written for chandelier_exit, if possible add to repo
```python
import pandas_ta as ta

def chandelier_exit(df1, atr_length=14, roll_length=22, mult=2, use_close=False):
df = df1.copy()
df.columns = df.columns.str.lower()
my_atr = ta.Strategy(
name="atr",
ta=[{"kind": "atr", "length": atr_length, "col_names": ("ATR",)}]
)
# Run it
df.ta.strategy(my_atr, append=True)
if use_close:
df['chandelier_long'] = df.rolling(roll_length)["close"].max() + df.iloc[-1]["ATR"] * mult
df['chandelier_short'] = df.rolling(roll_length)["close"].min() - df.iloc[-1]["ATR"] * mult
else:
df['chandelier_long'] = df.rolling(roll_length)["high"].max() - df.iloc[-1]["ATR"] * mult
df['chandelier_short'] = df.rolling(roll_length)["low"].min() + df.iloc[-1]["ATR"] * mult
df.loc[df['close'] > df['chandelier_long'].shift(1), 'chd_dir'] = 1
df.loc[df['close'] < df['chandelier_short'].shift(1), 'chd_dir'] = -1
# chd = df[['chandelier_long', 'chandelier_short', 'chd_dir']]
return df
``` | open | 2022-07-02T06:58:19Z | 2022-07-03T18:00:37Z | https://github.com/twopirllc/pandas-ta/issues/559 | [
"enhancement",
"help wanted",
"good first issue"
] | bibinvargheset | 5 |
521xueweihan/HelloGitHub | python | 2,898 | [Open Source Self-Recommendation] Graph Maker: free online line chart generator, easily create all kinds of charts online | 1. What is Graph Maker
Graph Maker is a powerful online chart generation tool, tailored for users who need to visualize data. Whether you are a student, a teacher, or a working professional, just a few simple steps turn your data into attractive charts that help you better understand and present information.
Graph Maker: a free online line chart generator that lets you easily create all kinds of charts online.
2. Features
Graph Maker is feature-rich; the main highlights are:
- Multiple chart types: supports line charts, bar charts, histograms and more, covering different data presentation needs.
- Free to use: completely free, with no registration or software download required, so users can use it anytime, anywhere.
- User-friendly: the interface is clean and simple; even users with no chart-making experience can get started easily.
- High-quality output: generated charts are high quality and suitable for reports, presentations and other scenarios.
- Live preview: see the chart update in real time while editing, so you can adjust promptly and make sure the final result matches expectations.
Website: https://graph-maker.online/
Source code: https://github.com/zhugezifang/graphmaker | open | 2025-02-08T03:09:23Z | 2025-02-08T03:09:23Z | https://github.com/521xueweihan/HelloGitHub/issues/2898 | [] | zhugezifang | 0 |
alteryx/featuretools | scikit-learn | 1,778 | Bug with parallel feature matrix calculation within sklearn cross-validation | ### Bug with parallel feature matrix calculation within sklearn cross-validation
-----
#### Bug Description
Hello, guys! Thank you for the quick release of featuretools 1.1.0 !
During my research I have faced the following bug:
I have an estimator which is actually an `imblearn` Pipeline. The estimator consists of several steps, including my custom transformer which calculates a feature matrix with `featuretools`. And I want to check the quality of the model with the `sklearn` `cross_validate` function. If I set `n_jobs` > 1 both in `featuretools.calculate_feature_matrix` and in `sklearn.cross_validate`, then I get an unexpected error `ValueError: cannot find context for 'loky'`. When either one of the `n_jobs` is set to 1, everything works fine.
I googled for some time and I understood that such an error might happen when parallelization is used without `if __name__ == '__main__'` - but that's the best information I've got, nothing more valuable. So for me it looks like there is some conflict in how parallelization is used in `sklearn` and `featuretools`. And as both of the libraries are essential, as is parallelization when working with big data, I really hope you will be able to find a way to fix it :)
P.S. This problem existed before the 1.0.0 release - I previously used 0.24.0 and still faced it
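For reference, the guard pattern that keeps loky/spawn-based workers happy looks like this (a generic sketch of the pattern, not the actual sklearn/featuretools internals, and I'm not claiming it resolves the nested case):

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    # Workers must be defined at module top level so child processes can
    # re-import them under spawn/loky start methods.
    return x * x

if __name__ == "__main__":
    # Without this guard, re-importing the module in each worker can blow up
    # with context errors similar to "cannot find context for 'loky'".
    with ProcessPoolExecutor(max_workers=2) as pool:
        print(list(pool.map(square, [1, 2, 3])))  # [1, 4, 9]
```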
#### Output of ``featuretools.show_info()``
<details>
Featuretools version: 1.1.0
SYSTEM INFO
-----------
python: 3.7.5.final.0
python-bits: 64
OS: Darwin
OS-release: 19.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: ru_RU.UTF-8
LOCALE: ru_RU.UTF-8
INSTALLED VERSIONS
------------------
numpy: 1.21.1
pandas: 1.3.2
tqdm: 4.62.2
PyYAML: 5.4.1
cloudpickle: 1.6.0
dask: 2021.10.0
distributed: 2021.10.0
psutil: 5.8.0
pip: 19.2.3
setuptools: 41.2.0
</details>
| closed | 2021-11-11T10:49:38Z | 2021-11-25T13:40:54Z | https://github.com/alteryx/featuretools/issues/1778 | [] | VGODIE | 14 |
CTFd/CTFd | flask | 1,987 | Not able to install/run | I have cloned [CTFd](https://github.com/CTFd/CTFd.git) repository and then created a `secret key` I have docker and docker-compose working as well, but when I try to run `docker-compose` it shows me an error like this one, I have no idea can you please help me, I am installing on a VPS server and a codomain
```
ctfd_cache_1 is up-to-date
ctfd_db_1 is up-to-date
ctfd_ctfd_1 is up-to-date
Recreating ctfd_nginx_1 ...
Recreating ctfd_nginx_1 ... error
ERROR: for ctfd_nginx_1 Cannot start service nginx: driver failed programming external connectivity on endpoint ctfd_nginx_1 (efe970f59d3b0335ab6a43f822240f977f35aa02b4456f8dcbd6b2e11a1a8231): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: for nginx Cannot start service nginx: driver failed programming external connectivity on endpoint ctfd_nginx_1 (efe970f59d3b0335ab6a43f822240f977f35aa02b4456f8dcbd6b2e11a1a8231): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: addressalready in use
ERROR: Encountered errors while bringing up the project.
``` | closed | 2021-09-13T14:02:49Z | 2021-09-14T19:39:22Z | https://github.com/CTFd/CTFd/issues/1987 | [] | TheBlapse | 0 |
giotto-ai/giotto-tda | scikit-learn | 356 | Kernels on diagrams | #### Description
During development of #343 and following the discussion in #348, we noticed that we are missing kernel methods as such. Maybe we could implement a `Kernel` transformer, with an API similar to `PairwiseDistances`, and a parameter `method`.
#### Steps/Code to Reproduce
```
kernel_method = Kernel(method='heat')
X = kernel_method.fit_transform(diagrams)
```
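In terms of output, I'd expect `fit_transform` to return the Gram matrix of pairwise inner products; sketched here with plain finite-dimensional vectors standing in for the feature map:

```python
def gram_matrix(features):
    # K[i][j] = <phi_i, phi_j>, with each phi_i given as a plain vector
    # (illustrative only; the real phi lives in L^2).
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return [[dot(u, v) for v in features] for u in features]

K = gram_matrix([[1.0, 0.0], [0.5, 0.5]])
print(K)  # [[1.0, 0.5], [0.5, 0.5]]
```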
#### Expected Results
Something along the lines of

$$K_{ij} = \langle \phi(\mathrm{diagrams}_i), \phi(\mathrm{diagrams}_j) \rangle_{L^2},$$

where $\phi$ is the feature map of the heat kernel. | closed | 2020-03-10T09:38:03Z | 2020-04-17T11:28:14Z | https://github.com/giotto-ai/giotto-tda/issues/356 | [
"enhancement",
"discussion"
] | wreise | 0 |
modin-project/modin | data-science | 7,051 | Update Exception message for `astype` function in the case of duplicated values | closed | 2024-03-11T19:46:26Z | 2024-03-12T09:44:17Z | https://github.com/modin-project/modin/issues/7051 | [
"Code Quality 💯"
] | anmyachev | 0 | |
miguelgrinberg/Flask-Migrate | flask | 169 | Generate upgrade command is wrong |
```python
"""heheh
Revision ID: 10d1d4585f92
Revises: c6b2d0ca1b68
Create Date: 2017-08-29 15:44:42.160000
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '10d1d4585f92'
down_revision = 'c6b2d0ca1b68'
branch_labels = None
depends_on = None

def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_index(op.f('ix_Bars_bar_name'), 'Bars', ['bar_name'], unique=True)
    op.drop_index('ix_Bars_bar_name', table_name='Bars')
    op.create_index(op.f('ix_Images_image_url'), 'Images', ['image_url'], unique=False)
    op.drop_index('ix_Images_image_url', table_name='Images')
    op.create_index(op.f('ix_Topics_title'), 'Topics', ['title'], unique=False)
    op.drop_index('ix_Topics_title', table_name='Topics')
    op.create_index(op.f('ix_UploadTopics_topic_title'), 'UploadTopics', ['topic_title'], unique=False)
    op.drop_index('ix_UploadTopics_topic_title', table_name='UploadTopics')
    op.create_index(op.f('ix_Users_username'), 'Users', ['username'], unique=True)
    op.drop_index('ix_Users_username', table_name='Users')
    # ### end Alembic commands ###

def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_index('ix_Users_username', 'Users', ['username'], unique=1)
    op.drop_index(op.f('ix_Users_username'), table_name='Users')
    op.create_index('ix_UploadTopics_topic_title', 'UploadTopics', ['topic_title'], unique=False)
    op.drop_index(op.f('ix_UploadTopics_topic_title'), table_name='UploadTopics')
    op.create_index('ix_Topics_title', 'Topics', ['title'], unique=1)
    op.drop_index(op.f('ix_Topics_title'), table_name='Topics')
    op.create_index('ix_Images_image_url', 'Images', ['image_url'], unique=False)
    op.drop_index(op.f('ix_Images_image_url'), table_name='Images')
    op.create_index('ix_Bars_bar_name', 'Bars', ['bar_name'], unique=1)
    op.drop_index(op.f('ix_Bars_bar_name'), table_name='Bars')
    # ### end Alembic commands ###
```
When I run
`python manage.py db migrate -m "heheh"`
the code above was generated.
I guess the order of `op.create_index` and `op.drop_index` is reversed.
When I run
`python manage.py db upgrade`
it always fails with:
`sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) index ix_Bars_bar_name already exists [SQL: u'CREATE UNIQUE INDEX "ix_Bars_bar_name" ON "Bars" (bar_name)']`
| closed | 2017-08-29T07:57:32Z | 2019-01-13T22:20:35Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/169 | [
"question",
"auto-closed"
] | beckjing | 3 |
rio-labs/rio | data-visualization | 95 | Make `MultiLineTextInput` Always Fit Its Content | As it stands, multi-line text inputs are awkward to use. By default they are (vertically) small, fitting only very little text. This requires scrolling in any but the shortest texts.
A common solution to this is to assign a larger `height` on the Python side, but this results in awkwardly large inputs, and still requires scrolling once the text becomes too long.
It would be nice for them to always resize to fit the text, maybe up to a given max-size. | open | 2024-07-07T21:12:56Z | 2024-07-07T21:12:56Z | https://github.com/rio-labs/rio/issues/95 | [
"enhancement"
] | mad-moo | 0 |
awesto/django-shop | django | 819 | Problems accessing customerproxy addresses | > It's `admin:shop_customerproxy_changelist`
>
> Small hint to find out yourself:
>
> * Go to the admin page you are looking for.
> * Copy the URL without the site part, here `/en/admin/shop/customerproxy/`.
> * Run `./manage.py shell`
> * Use that URL in function `resolve`.
> * Recheck with `reverse` if the URL is the same.
>
> ```
> >>> from django.urls import *
> >>> resolve('/en/admin/shop/customerproxy/')
> ResolverMatch(func=django.contrib.admin.options.changelist_view, args=(), kwargs={}, url_name=shop_customerproxy_changelist, app_names=['admin'], namespaces=['admin'])
> >>> reverse('admin:shop_customerproxy_changelist')
> '/en/admin/shop/customerproxy/'
> ```
Regarding that URL, I have problems accessing the following addresses:
http://localhost:8000/es/admin/shop/customerproxy/4/change/
or
http://localhost:8000/es/admin/shop/customerproxy/add/
I have the following error:
```
TypeError at /es/admin/shop/customerproxy/4/change/
has_add_permission() takes 2 positional arguments but 3 were given
Request Method: GET
Request URL: http://localhost:8000/es/admin/shop/customerproxy/4/change/
Django Version: 3.0.7
Exception Type: TypeError
Exception Value:
has_add_permission() takes 2 positional arguments but 3 were given
Exception Location: /home/ikki/.virtualenvs/vmedexample/lib/python3.6/site-packages/django/contrib/admin/options.py in get_inline_instances, line 596
Python Executable: /home/ikki/.virtualenvs/vmedexample/bin/python
Python Version: 3.6.0
Python Path:
['/home/ikki/Desarrollo/implementacion/medexample',
'/home/ikki/Desarrollo/implementacion/medexample',
'/home/ikki/Desarrollo/ide/pycharm-2020.1.1/plugins/python/helpers/pycharm_display',
'/home/ikki/.pyenv/versions/3.6.0/lib/python36.zip',
'/home/ikki/.pyenv/versions/3.6.0/lib/python3.6',
'/home/ikki/.pyenv/versions/3.6.0/lib/python3.6/lib-dynload',
'/home/ikki/.virtualenvs/vmedexample/lib/python3.6/site-packages',
'/home/ikki/Desarrollo/ide/pycharm-2020.1.1/plugins/python/helpers/pycharm_matplotlib_backend']
Server time: Lun, 15 Jun 2020 16:33:36 +0000
```
I made a default installation from `cookiecutter-django-shop` with all the sample data.
what is the solution?
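For reference, a guess from the traceback (not a confirmed project patch): Django 2.1 changed `InlineModelAdmin.has_add_permission` to also receive the parent object, so any override still using the old two-argument signature raises exactly this `TypeError`. A minimal sketch of the adjusted override, with a hypothetical mixin name:

```python
# Hypothetical sketch: accept the extra `obj` argument that
# Django >= 2.1 passes to inline permission checks.
class PatchedInlineMixin:
    def has_add_permission(self, request, obj=None):
        # forward both arguments to the next class in the MRO
        return super().has_add_permission(request, obj)
```

Mixing something like this into the affected inline admin (or fixing the override in place) should restore compatibility.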
_Originally posted by @jhonvidal in https://github.com/awesto/django-shop/issues/785#issuecomment-644244405_ | closed | 2020-06-16T13:29:01Z | 2020-06-16T23:45:12Z | https://github.com/awesto/django-shop/issues/819 | [] | jhonvidal | 2 |
microsoft/hummingbird | scikit-learn | 528 | Fix icon consistency for dark theme | Sorry to say but this bugs the hell out of me 😁 Which one doesn't belong:

Can you change it to be more consistent or add an option for it? | closed | 2021-06-22T20:14:17Z | 2021-06-22T20:50:20Z | https://github.com/microsoft/hummingbird/issues/528 | [] | ronilaukkarinen | 2 |
seleniumbase/SeleniumBase | pytest | 2,252 | Can I change WebGL information | Is there a way to enter random information? When we tested on https://pixelscan.net/
Example:

```python
languages=["en-US", "en"],
vendor="Google Inc.",
platform="Win32",
webgl_vendor="Intel Inc.",
renderer="Intel Iris OpenGL Engine",
fix_hairline=True,
```

More usage in the SB model:
```
with SB(uc=True, mobile=False) as sb:
ua = UserAgent()
user_agent = ua.random
sb.execute_cdp_cmd( "Network.setUserAgentOverride", {"userAgent": user_agent})
sb.get("https://nowsecure.nl")
sb.sleep(150.75)
```
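One possible approach is a sketch like the following. Only `sb.execute_cdp_cmd` (already used above) is real SeleniumBase API; the JS override trick and the helper name are my own, and the vendor/renderer strings are just examples. It injects a script that runs before any page script and overrides WebGL's `getParameter` so the unmasked vendor/renderer report chosen values:

```python
# Sketch: spoof the unmasked WebGL vendor/renderer via an injected script.
WEBGL_SPOOF_JS = """
(() => {
  const patch = (proto) => {
    const orig = proto.getParameter;
    proto.getParameter = function (p) {
      if (p === 37445) return '%(vendor)s';    // UNMASKED_VENDOR_WEBGL
      if (p === 37446) return '%(renderer)s';  // UNMASKED_RENDERER_WEBGL
      return orig.call(this, p);
    };
  };
  patch(WebGLRenderingContext.prototype);
  if (window.WebGL2RenderingContext) patch(WebGL2RenderingContext.prototype);
})();
"""


def spoof_webgl(sb, vendor="Intel Inc.", renderer="Intel Iris OpenGL Engine"):
    source = WEBGL_SPOOF_JS % {"vendor": vendor, "renderer": renderer}
    # register the script so it runs before any page script on each navigation
    sb.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", {"source": source})
```

The constants 37445/37446 are `UNMASKED_VENDOR_WEBGL`/`UNMASKED_RENDERER_WEBGL`; note that fingerprinting sites may still detect the patched function.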
| closed | 2023-11-08T06:20:00Z | 2023-11-08T07:23:49Z | https://github.com/seleniumbase/SeleniumBase/issues/2252 | [
"duplicate",
"UC Mode / CDP Mode"
] | gupta723 | 1 |
thtrieu/darkflow | tensorflow | 934 | Annotation file type | I have 1000 image files and their annotation (file type txt). They are used in YOLO.
But darkflow requires annotations in XML format.
Can I convert my txt files to XML easily,
or do I have to annotate everything again?
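Conversion is straightforward to script. A minimal sketch (the ordered class-name list and the image sizes are assumptions you must supply; darkflow expects Pascal-VOC-style XML):

```python
import xml.etree.ElementTree as ET


def yolo_to_voc(txt_path, img_name, img_w, img_h, class_names):
    """Sketch of a YOLO-txt -> Pascal VOC XML converter. Assumes each
    line is 'class cx cy w h' with coordinates normalized to [0, 1]."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = img_name
    size = ET.SubElement(size_parent := root, "size")
    ET.SubElement(size, "width").text = str(img_w)
    ET.SubElement(size, "height").text = str(img_h)
    ET.SubElement(size, "depth").text = "3"
    with open(txt_path) as f:
        for line in f:
            if not line.strip():
                continue
            cls, cx, cy, w, h = line.split()
            cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
            obj = ET.SubElement(root, "object")
            ET.SubElement(obj, "name").text = class_names[int(cls)]
            box = ET.SubElement(obj, "bndbox")
            # convert normalized center/size to absolute corner coordinates
            ET.SubElement(box, "xmin").text = str(int((cx - w / 2) * img_w))
            ET.SubElement(box, "ymin").text = str(int((cy - h / 2) * img_h))
            ET.SubElement(box, "xmax").text = str(int((cx + w / 2) * img_w))
            ET.SubElement(box, "ymax").text = str(int((cy + h / 2) * img_h))
    return ET.tostring(root, encoding="unicode")
```

You could loop this over the 1000 txt files, reading each image's width/height (e.g. with PIL) and writing the returned XML next to it.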
| closed | 2018-11-14T14:26:14Z | 2018-12-27T13:12:10Z | https://github.com/thtrieu/darkflow/issues/934 | [] | murras | 7 |
Guovin/iptv-api | api | 602 | New Docker image is missing dependencies |

Starting periodic command scheduler: cron.
Traceback (most recent call last):
File "/tv-driver/main.py", line 5, in <module>
from utils.channel import (
File "/tv-driver/utils/channel.py", line 13, in <module>
from utils.speed import (
File "/tv-driver/utils/speed.py", line 9, in <module>
import yt_dlp
ModuleNotFoundError: No module named 'yt_dlp'
/tv_entrypoint.sh: line 15: gunicorn: command not found
Starting periodic command scheduler: cron.
Traceback (most recent call last):
File "/tv-driver/main.py", line 5, in <module>
from utils.channel import (
File "/tv-driver/utils/channel.py", line 13, in <module>
from utils.speed import (
File "/tv-driver/utils/speed.py", line 9, in <module>
import yt_dlp
ModuleNotFoundError: No module named 'yt_dlp'
/tv_entrypoint.sh: line 15: gunicorn: command not found
Starting periodic command scheduler: cron.
Traceback (most recent call last):
File "/tv-driver/main.py", line 5, in <module>
from utils.channel import (
File "/tv-driver/utils/channel.py", line 13, in <module>
from utils.speed import (
File "/tv-driver/utils/speed.py", line 9, in <module>
import yt_dlp
ModuleNotFoundError: No module named 'yt_dlp'
/tv_entrypoint.sh: line 15: gunicorn: command not found
Starting periodic command scheduler: cron.
Traceback (most recent call last):
File "/tv-driver/main.py", line 5, in <module>
from utils.channel import (
File "/tv-driver/utils/channel.py", line 13, in <module>
from utils.speed import (
File "/tv-driver/utils/speed.py", line 9, in <module>
import yt_dlp
ModuleNotFoundError: No module named 'yt_dlp'
/tv_entrypoint.sh: line 15: gunicorn: command not found
Starting periodic command scheduler: cron.
Traceback (most recent call last):
File "/tv-driver/main.py", line 5, in <module>
from utils.channel import (
File "/tv-driver/utils/channel.py", line 13, in <module>
from utils.speed import (
File "/tv-driver/utils/speed.py", line 9, in <module>
import yt_dlp
ModuleNotFoundError: No module named 'yt_dlp'
/tv_entrypoint.sh: line 15: gunicorn: command not found | closed | 2024-11-30T08:17:11Z | 2024-12-02T02:27:37Z | https://github.com/Guovin/iptv-api/issues/602 | [
"invalid"
] | vbskycn | 1 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 271 | Gemini API does not use the proxy, resulting in a 60s timeout | I can't find any specific functionality in the LangChain framework or the langchain_google_genai module that allows you to set or modify the user agent in requests to the Google SDK.
I found that using the following code can force the proxy in Linux, but not in Windows.
```python
import os
os.environ["http_proxy"] = 'http://192.168.166.8:7890'
os.environ["https_proxy"] = 'http://192.168.166.8:7890'
``` | closed | 2024-05-20T05:52:55Z | 2024-10-10T09:11:26Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/271 | [] | wrench1997 | 2 |
flairNLP/flair | pytorch | 2,758 | fine-tuning on ner-pooled with new dataset | Hey!
Right now, I am trying to fine-tune the ner-pooled model on another dataset. After training, I want to evaluate on the validation set and tune hyperparameters, and finalize by testing. However, when I train on the new dataset, I get scores of zeros.
My code is as follows, taking [issue: #1178](https://github.com/flairNLP/flair/issues/1178) and #53 as inspiration:


Epoch 20 is as follows:

And the results are:


The dataset is built up as follows:

I have also tried with WNUT-17, getting the same results. So it is probably my own code, but I am not seeing the issue. Changing the mini-batch size to 2 did not work either.
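Before suspecting the model, it may be worth checking the tag column itself: all-zero scores frequently come from labels that lack the `B-`/`I-` scheme prefixes (or from a label-type mismatch), since the evaluator then never matches a predicted span. A quick, flair-independent sanity check — a sketch based on my assumption about the two-column format:

```python
def check_bio_labels(conll_lines):
    """Sketch of a data sanity check (an assumption about the format, not
    flair API): return CoNLL-style lines whose tag is neither 'O' nor
    prefixed with a BIOES scheme marker, a common cause of zero F1."""
    bad = []
    for line in conll_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip blank lines / sentence separators
        tag = parts[1]
        if tag != "O" and tag[:2] not in ("B-", "I-", "E-", "S-"):
            bad.append(line)
    return bad
```

If this returns anything for your files, the tags need to be rewritten into the BIO(ES) scheme before training.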
Maybe important to add, I am running it on a Windows laptop using cpu. | closed | 2022-05-10T21:23:36Z | 2022-11-01T15:05:06Z | https://github.com/flairNLP/flair/issues/2758 | [
"question",
"wontfix"
] | Nuveyla | 1 |
deeppavlov/DeepPavlov | tensorflow | 1,582 | 👩💻📞DeepPavlov Community Call #18 | Hi, friends!
We are happy to be back this month with a DeepPavlov Community Call in **Russian.**
The upcoming webinar features our invited guest **Boris Galitsky**, associate member of the Laboratory of Intelligent Systems and Structural Analysis at HSE University, founder of several AI startups, ANECA professor, and former Oracle employee, who will give a talk on **"Discourse analysis of text for organizing dialogue."**
Making a conversation with a chatbot logical and engaging is one of the central tasks of Conversational AI. A wide range of approaches is used for this, and one of them is discourse analysis of text. Its idea is that the chatbot helps the user focus on just one sentence of the whole text. In a discourse tree, the text is split into parts connected by logical relations, and the chatbot guides the user along them, developing the dialogue. For example, these can be temporal relations, where the user will likely want to know what happens after the described event or what happened before it.
At our webinar, Boris Galitsky will explain in detail this way of managing dialogue flow in a chatbot based on discourse analysis of text.
> DeepPavlov Community Call #11, Russian version (July 27, 2022)
> We will hold the next call on July 27, 2022 at 7:00 pm Moscow time (19 MSK).
> Add a reminder to your calendar:
> [http://bit.ly/MonthlyDPCommunityCall2021Ru](http://bit.ly/MonthlyDPCommunityCall2021RE)
> Agenda of DeepPavlov Community Call #18, Russian version:
>
> 7:00pm–7:10pm | Welcome
> 7:10pm–7:45pm | Boris Galitsky: Discourse analysis of text for organizing dialogue
> 7:45pm–8:00pm | Q&A with Boris Galitsky and the DeepPavlov engineering team
In case you missed the earlier Community Calls, you can always find them in the [playlist](http://bit.ly/DPCommunityCall10RE_Video).
We invite you to join us to tell us what you think about the latest changes, share your expectations for the upcoming version of the library, and describe how DeepPavlov helps you in your projects!
**Leave feedback on the DeepPavlov Library**
We want to hear from you. You can fill out the form below to let us know how you use the DeepPavlov Library and what you would like us to add or improve!
http://bit.ly/DPLibrary2021Survey
Interested? Don't miss the chance and join us! This call is open to all Conversational AI enthusiasts.
"discussion"
] | PolinaMrdv | 0 |
deepset-ai/haystack | pytorch | 8,114 | docs: clean up docstrings of PyPDFToDocument | closed | 2024-07-30T06:47:44Z | 2024-07-30T09:08:46Z | https://github.com/deepset-ai/haystack/issues/8114 | [] | agnieszka-m | 0 | |
scikit-learn/scikit-learn | machine-learning | 30,512 | Fail to pickle `SplineTransformer` with `scipy==1.15.0rc1` | ### Describe the bug
Spotted in scikit-lego, running `check_estimators_pickle` fails with `SplineTransformer` and `readonly_memmap=True`.
cc: @koaning
### Steps/Code to Reproduce
```py
from sklearn.utils.estimator_checks import check_estimators_pickle
from sklearn.preprocessing import SplineTransformer
check_estimators_pickle(
name="hello",
estimator_orig=SplineTransformer(),
readonly_memmap=True,
)
```
### Expected Results
Not to raise
### Actual Results
```
Traceback (most recent call last):
File "/home/fbruzzesi/open-source/scikit-lego/t.py", line 5, in <module>
check_estimators_pickle(
File "/home/fbruzzesi/open-source/scikit-lego/.venv-pre/lib/python3.10/site-packages/sklearn/utils/_testing.py", line 147, in wrapper
return fn(*args, **kwargs)
File "/home/fbruzzesi/open-source/scikit-lego/.venv-pre/lib/python3.10/site-packages/sklearn/utils/estimator_checks.py", line 2354, in check_estimators_pickle
unpickled_result = getattr(unpickled_estimator, method)(X)
File "/home/fbruzzesi/open-source/scikit-lego/.venv-pre/lib/python3.10/site-packages/sklearn/utils/_set_output.py", line 319, in wrapped
data_to_wrap = f(self, X, *args, **kwargs)
File "/home/fbruzzesi/open-source/scikit-lego/.venv-pre/lib/python3.10/site-packages/sklearn/preprocessing/_polynomial.py", line 1036, in transform
f_min, f_max = spl(xmin), spl(xmax)
File "/home/fbruzzesi/open-source/scikit-lego/.venv-pre/lib/python3.10/site-packages/scipy/interpolate/_bsplines.py", line 523, in __call__
_dierckx.evaluate_spline(self.t, cc.reshape(cc.shape[0], -1),
ValueError: Expected a 1-dim C contiguous array of dtype = 12( got 12 )
```
### Versions
```shell
System:
python: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
machine: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python dependencies:
sklearn: 1.6.0
pip: 24.1
setuptools: None
numpy: 2.2.0
scipy: 1.15.0rc1
Cython: None
pandas: 2.2.3
matplotlib: None
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 12
prefix: libscipy_openblas
filepath: /home/fbruzzesi/open-source/scikit-lego/.venv-pre/lib/python3.10/site-packages/numpy.libs/libscipy_openblas64_-6bb31eeb.so
version: 0.3.28
threading_layer: pthreads
architecture: Haswell
user_api: blas
internal_api: openblas
num_threads: 12
prefix: libscipy_openblas
filepath: /home/fbruzzesi/open-source/scikit-lego/.venv-pre/lib/python3.10/site-packages/scipy.libs/libscipy_openblas-68440149.so
version: 0.3.28
threading_layer: pthreads
architecture: Haswell
user_api: openmp
internal_api: openmp
num_threads: 12
prefix: libgomp
filepath: /home/fbruzzesi/open-source/scikit-lego/.venv-pre/lib/python3.10/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0
version: None
```
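A possible stopgap until this is fixed upstream (a guess based on the contiguity complaint in the error, not a confirmed fix): re-wrap each fitted `BSpline`'s coefficient array as C-contiguous after unpickling. `bsplines_` is the fitted attribute `SplineTransformer` stores; the helper name is hypothetical.

```python
import numpy as np


def fix_splines(spline_transformer):
    """Hypothetical workaround sketch: SciPy 1.15's evaluator appears to
    insist on C-contiguous coefficient arrays, so rewrap each fitted
    BSpline's coefficients after unpickling."""
    for spl in getattr(spline_transformer, "bsplines_", []):
        spl.c = np.ascontiguousarray(spl.c)
    return spline_transformer
```

Calling `fix_splines(unpickled_estimator)` before `transform` may avoid the error; pinning `scipy<1.15` is the other obvious workaround.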
| closed | 2024-12-19T15:36:53Z | 2025-01-04T04:32:31Z | https://github.com/scikit-learn/scikit-learn/issues/30512 | [
"Bug"
] | FBruzzesi | 8 |
profusion/sgqlc | graphql | 244 | Allow updating Operation name and args after its initialization | ## 🚀 Feature Request
## Description
Allow updating Operation name and args after its initialization
```
operation = Operation(query)
operation.name = "newlyNamedQuery"
operation.args = {"some_arg": Arg(SomeKindScalar)}
operation.find_something(some_args=Variable("some_arg"))
```
In some cases (for automated tests) this makes it possible to avoid code duplication when using pytest fixtures
*
## Implementation details
Just putting this piece of code into the `Operation` class worked for me:
```
@property
def name(self):
return self.__name
@name.setter
def name(self, value):
self.__name = value
@property
def args(self):
return self.__args
@args.setter
def args(self, variables_dict):
variable_args = OrderedDict()
for k, v in variables_dict.items():
variable_args['$' + k] = v
self.__args = ArgDict(variable_args)
self.__args._set_container(self.__type.__schema__, self)
```
*
| open | 2024-08-19T18:34:22Z | 2024-09-15T14:19:07Z | https://github.com/profusion/sgqlc/issues/244 | [] | pwyllcrusader | 3 |
huggingface/pytorch-image-models | pytorch | 1,002 | vit_large_patch16_384 has incorrect settings | **Describe the bug**
When trying to use boilerplate code to predict with vit_large_patch16_384, an error occurs. This is due to the config for vit_large_patch16_384 containing input size (3, 224, 224) instead of (3, 384, 384). I suspect that vit_base_patch16_384 has the same issue, and perhaps more of these models.
**To Reproduce**
Steps to reproduce the behavior:
1. load the model named vit_large_patch16_384
2. use timm.data.transforms_factory.create_transform to automatically create the transform
3. try to predict imagenet validation data with this transform
**Expected behavior**
I expected the prediction to work just fine as it does with other models.
**Screenshots**
Not applicable
**Desktop (please complete the following information):**
- OS: Linux ## 4.15.0-55-generic # 60-Ubuntu SMP ## x86_64 x86_64 x86_64 GNU/Linux
- This repository version: timm==0.4.12
- PyTorch version w/ CUDA/cuDNN: torch==1.10.0/Cuda 10.1/Not applicable I think
**Additional context**
None to speak of.
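As a stopgap until the config is corrected, one can derive the intended resolution from the model-name suffix and override the auto-created transform. A sketch, relying on an assumption about timm's `_<res>` naming convention (the helper name is my own):

```python
import re


def expected_input_size(model_name, default_side=224):
    """Workaround sketch: parse a trailing _<res> suffix (e.g. _384)
    from the model name into an input_size tuple."""
    match = re.search(r"_(\d{3,4})$", model_name)
    side = int(match.group(1)) if match else default_side
    return (3, side, side)
```

The result can then override the broken value, e.g. `config['input_size'] = expected_input_size(name)` before calling `create_transform(**config)`.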
| closed | 2021-11-27T14:49:55Z | 2021-11-27T23:05:27Z | https://github.com/huggingface/pytorch-image-models/issues/1002 | [
"bug"
] | bjfranks | 3 |
matplotlib/matplotlib | matplotlib | 29,775 | [Doc]: Improve install guidance | ### Problem
We have
- https://matplotlib.org/stable/#install thought as short instructions on the landing page
- https://matplotlib.org/stable/install/index.html as the complete install guide
They are mostly fine.
Issues:
- [ ] There's https://matplotlib.org/stable/users/getting_started/#installation-quick-start, which seems a bit outdated. I'm not sure whether this is still needed, and I haven't found a link to it from any page below https://matplotlib.org/stable/. However the landing page https://matplotlib.org/ links to it from the "Getting started" icon.
   *Suggested improvement*: Minimally, replace the install instructions at https://matplotlib.org/stable/users/getting_started/#installation-quick-start with a link to https://matplotlib.org/stable/#install. Alternatively, investigate removing https://matplotlib.org/stable/users/getting_started/ completely.
- [ ] AFAICS there is no obvious link from https://matplotlib.org/ to any installation information. The "documentation" icon connects to the version-specific landing page https://matplotlib.org/stable/
   *Suggested improvement*: It may be difficult to resolve this duality of global landing page and version-specific landing page. Minimally, move the "Documentation" icon to the very right in the icon row. It's the most general one. The other icons are direct links to sub-topics.
- [ ] https://matplotlib.org/stable/install/index.html as the complete install guide could/should get more information on package managers. We only focus on the packages wheel/conda, but don't mention the package managers. While not strictly our business, it may be a good service to our users to add 1-2 sentences on the package managers, because that's the interface they will use.
| open | 2025-03-18T23:35:39Z | 2025-03-19T00:08:23Z | https://github.com/matplotlib/matplotlib/issues/29775 | [
"Documentation"
] | timhoffm | 1 |
chatopera/Synonyms | nlp | 72 | nearby returns related words, not synonyms, and often even antonyms | # description
`nearby` returns related words rather than synonyms; much of the time they are even antonyms, for example 喜欢 ("like") and 讨厌 ("dislike").
* version:
python 3.6.5
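For context, this behavior is inherent to the approach: `nearby` ranks candidates by word-vector similarity, which measures distributional relatedness, and antonyms appear in near-identical contexts, so they score high. A small post-filtering sketch (the threshold is an arbitrary assumption; it trims weak matches but cannot separate synonyms from antonyms):

```python
def top_related(nearby_result, k=5, min_score=0.6):
    """Sketch of a post-filter for synonyms.nearby(), which returns
    ([words], [scores]) ranked by cosine similarity of word vectors."""
    words, scores = nearby_result
    return [(w, s) for w, s in zip(words, scores) if s >= min_score][:k]
```

Truly filtering antonyms out would need an extra resource, such as an antonym dictionary.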
| closed | 2018-11-30T07:25:36Z | 2020-10-27T01:11:24Z | https://github.com/chatopera/Synonyms/issues/72 | [] | weihaixiaoseu | 9 |
pyeve/eve | flask | 1,259 | Improper authorization error when authenticating in eve/auth.py | ### Expected Behavior
When invalid credentials are supplied for Token based auth, 401 should be thrown as the last exception and that should be there in response message returned.
### Actual Behavior
```json
{
"_status": "ERR",
"_error": {
"code": 500,
"message": "__init__() got an unexpected keyword argument 'response'"
}
}
```
```pytb
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/eve/methods/common.py", line 317, in rate_limited
return f(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/eve/auth.py", line 79, in decorated
return auth.authenticate()
File "/usr/local/lib/python3.6/site-packages/eve/auth.py", line 270, in authenticate
abort(401, description="Please provide proper credentials", response=resp)
File "/usr/local/lib/python3.6/site-packages/werkzeug/exceptions.py", line 752, in abort
return _aborter(status, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/werkzeug/exceptions.py", line 733, in __call__
raise self.mapping[code](*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'response'
```
### Environment
* Python version: 3.6
* Eve version: 0.8.1
| closed | 2019-04-08T07:27:23Z | 2019-04-08T09:28:58Z | https://github.com/pyeve/eve/issues/1259 | [] | checkaayush | 6 |
ivy-llc/ivy | tensorflow | 27,969 | Fix Ivy Failing Test: torch - elementwise.maximum | closed | 2024-01-20T16:19:21Z | 2024-01-25T09:53:52Z | https://github.com/ivy-llc/ivy/issues/27969 | [
"Sub Task"
] | samthakur587 | 0 | |
microsoft/unilm | nlp | 1,562 | Fine tuning kosmos-2 | Hi @pengzhiliang. I want to fine-tune kosmos-2 on a VQA task where the answer is a single word (like a multi-class classification task); I call this single word the label. I only have question-answer pairs, not bounding boxes. I was wondering whether I should use `<grounding>` or not. I mean, should I use `<grounding> Question: Are there any <phrase>cats</phrase> in the image? Answer: label` or `Question: Are there any <phrase>cats</phrase> in the image? Answer: label`? I am using Kosmos2ForConditionalGeneration.
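For what it's worth, one reading (an assumption, not an official recipe): without box supervision, the plain variant seems safer, since `<grounding>` asks the model to emit `<phrase>`/location tokens that you have no labels for. A sketch of the two prompt forms side by side:

```python
def build_vqa_prompt(question, use_grounding=False):
    """Sketch (my reading of Kosmos-2 prompt conventions): build a
    single-word VQA prompt, optionally with the <grounding> prefix."""
    prefix = "<grounding> " if use_grounding else ""
    return f"{prefix}Question: {question} Answer:"
```

The single-word label would then be appended after `Answer:` as the target during fine-tuning.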
And another question: is it reasonable to use Kosmos2ForConditionalGeneration for fine-tuning or not? | closed | 2024-05-23T11:08:27Z | 2024-06-21T11:22:18Z | https://github.com/microsoft/unilm/issues/1562 | [] | FarzanRahmani | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,402 | [Bug]: ImportError: DLL load failed while importing onnx_cpp2py_export | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
```
venv "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
CUDA 12.1
Launching Web UI with arguments: --xformers --api
CHv1.8.11: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 10
ControlNet preprocessor location: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-08-18 11:21:27,174 - ControlNet - INFO - ControlNet v1.1.455
*** Error loading script: console_log_patch.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 4, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_api.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_api.py", line 28, in <module>
from scripts.reactor_swapper import EnhancementOptions, swap_face, DetectionOptions
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 11, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_faceswap.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_faceswap.py", line 18, in <module>
from reactor_ui import (
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\reactor_ui\__init__.py", line 2, in <module>
import reactor_ui.reactor_tools_ui as ui_tools
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\reactor_ui\reactor_tools_ui.py", line 2, in <module>
from scripts.reactor_swapper import build_face_model, blend_faces
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 11, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
```
### Steps to reproduce the problem
Start the app.
### What should have happened?
It should not show these errors.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-08-18-14-23.json](https://github.com/user-attachments/files/16649048/sysinfo-2024-08-18-14-23.json)
### Console logs
```Shell
venv "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
CUDA 12.1
Launching Web UI with arguments: --xformers --api
CHv1.8.11: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 10
ControlNet preprocessor location: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-08-18 11:21:27,174 - ControlNet - INFO - ControlNet v1.1.455
*** Error loading script: console_log_patch.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 4, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_api.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_api.py", line 28, in <module>
from scripts.reactor_swapper import EnhancementOptions, swap_face, DetectionOptions
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 11, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_faceswap.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_faceswap.py", line 18, in <module>
from reactor_ui import (
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\reactor_ui\__init__.py", line 2, in <module>
import reactor_ui.reactor_tools_ui as ui_tools
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\reactor_ui\reactor_tools_ui.py", line 2, in <module>
from scripts.reactor_swapper import build_face_model, blend_faces
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 11, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_helpers.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_helpers.py", line 10, in <module>
from insightface.app.common import Face
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_logger.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_logger.py", line 7, in <module>
from scripts.reactor_helpers import addLoggingLevel
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_helpers.py", line 10, in <module>
from insightface.app.common import Face
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_swapper.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 11, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_version.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_version.py", line 4, in <module>
from scripts.reactor_logger import logger, get_Run, set_Run
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_logger.py", line 7, in <module>
from scripts.reactor_helpers import addLoggingLevel
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_helpers.py", line 10, in <module>
from insightface.app.common import Face
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_xyz.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_xyz.py", line 8, in <module>
from scripts.reactor_helpers import (
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_helpers.py", line 10, in <module>
from insightface.app.common import Face
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
Loading weights [a3f5346925] from C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\models\Stable-diffusion\1.5\Pony\fastPhotoPony_v40WithT5xxl_2.safetensors
CHv1.8.11: Set Proxy:
2024-08-18 11:21:49,221 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 43.4s (prepare environment: 9.7s, import torch: 5.0s, import gradio: 0.8s, setup paths: 1.2s, initialize shared: 0.3s, other imports: 0.5s, load scripts: 23.8s, create ui: 0.7s, gradio launch: 0.5s, add APIs: 0.6s).
Loading VAE weights specified in settings: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 7.0s (load weights from disk: 0.8s, create model: 0.5s, apply weights to model: 5.1s, load VAE: 0.1s, calculate empty prompt: 0.2s).
```
### Additional information
(venv) C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\Scripts>pip show onnx
Name: onnx
Version: 1.16.2
Summary: Open Neural Network Exchange
Home-page: https://onnx.ai/
Author:
Author-email: ONNX Contributors <onnx-technical-discuss@lists.lfaidata.foundation>
License: Apache License v2.0
Location: c:\users\zerocool22\desktop\autosdxl\stable-diffusion-webui\venv\lib\site-packages
Requires: numpy, protobuf
Required-by: insightface
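To narrow this down, it can help to check the failing import in isolation, outside the webui. Below is a small generic helper of my own (the module names in the example are placeholders, not anything from the webui):

```python
import importlib

def can_import(name):
    """Try to import `name`; return (ok, error message)."""
    try:
        importlib.import_module(name)
        return True, ""
    except Exception as exc:  # DLL failures surface as ImportError on Windows
        return False, f"{type(exc).__name__}: {exc}"

# Example with placeholder module names:
print(can_import("json"))   # a stdlib module that imports fine
print(can_import("onnx"))   # the failing one, run inside the webui venv
```

Running `can_import("onnx")` inside the same venv should reproduce the DLL error with its full message; if a bare interpreter imports onnx fine, the problem is likely specific to the webui environment.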
C:\Users\ZeroCool22>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Fri_Jun_14_16:44:19_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.6, V12.6.20
Build cuda_12.6.r12.6/compiler.34431801_0
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.81 Driver Version: 560.81 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4070 ... WDDM | 00000000:06:00.0 On | N/A |
| 40% 25C P8 8W / 285W | 7822MiB / 16376MiB | 2% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+ | open | 2024-08-18T14:28:25Z | 2024-09-20T15:40:36Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16402 | [
"bug-report"
] | ZeroCool22 | 5 |
graphdeco-inria/gaussian-splatting | computer-vision | 786 | problems with camera pose recovery and rendering in different settings | Hello,
Currently, I'm working on recovering camera poses and 3D reconstruction using Gaussian splatting. The approach involves using depth maps to unproject pixels for a specific frame, fitting them tightly (overfitting), and then determining the relative transformation to the next frame. My aim is to learn these transformations by adjusting parameters in the camera code, such as the world_to_view matrix. However, it appears that gradients will only propagate if I directly manipulate the rotation and translation of the Gaussian parameters themselves. Currently, I'm applying rotation and translation only to the xyz coordinates. Should I also apply them to the rotation parameters (rot params)? Q1: Is it possible for gradients to flow through the camera in the renderer, or is applying R+T to the params directly the only possibility with the current version of diff_gaussian_rasterizer?
I've also conducted an experiment where I've learned an object and aim to render it using a rotation (R) and translation (T). When I apply these R+T transformations to the camera parameters and then render using 3DGS, everything seems fine. However, when I apply R+T to the xyz parameters of the Gaussians and use the default camera (Identity) with zero translation, parts of the Gaussians render strangely. Perhaps you can suggest what might be the problem? I've experimented with the camera_center parameter, and it seems to slightly influence the result.
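For reference, applying a global R+T to the Gaussians themselves has to touch both the xyz means and the per-Gaussian orientation quaternions: the covariances are oriented by the rot params, so rotating only xyz leaves every Gaussian's shape in its old orientation, which may explain the strange rendering. A minimal NumPy sketch of my own, assuming a (w, x, y, z) quaternion convention — not code from this repository:

```python
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of two quaternions in (w, x, y, z) convention.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_mat(q):
    # Rotation matrix from a (w, x, y, z) quaternion.
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def transform_gaussians(xyz, rots, q_world, t_world):
    # xyz: (N, 3) means; rots: (N, 4) unit quaternions.
    # Apply the same rigid transform to means AND orientations.
    R = quat_to_mat(q_world)
    new_xyz = xyz @ R.T + t_world
    new_rots = np.stack([quat_mul(q_world, q) for q in rots])
    new_rots /= np.linalg.norm(new_rots, axis=1, keepdims=True)
    return new_xyz, new_rots
```

In the actual model one would apply the same composition to the stored rotation activations before quaternion normalization; the anisotropic scales are unaffected by a rigid transform.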
Attached is an example: on the left is the result when I directly transform 3DGS, and on the right is when I transform camera parameters:

I would appreciate any help, thanks in advance! | open | 2024-05-02T22:09:51Z | 2024-05-08T03:26:11Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/786 | [] | ostapagon | 1 |
dmlc/gluon-cv | computer-vision | 1,769 | Reporting a vulnerability | Hello!
I hope you are doing well!
We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called **Private vulnerability reporting**, which enables security research to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.
Can you enable it, so that we can report it?
Thanks in advance!
PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository | closed | 2023-04-10T11:07:21Z | 2023-07-18T06:32:57Z | https://github.com/dmlc/gluon-cv/issues/1769 | [
"Stale"
] | igibek | 1 |
pywinauto/pywinauto | automation | 1,152 | COMError when tested application puts data in Datagrid | ## Expected Behavior
The tested application is IO-Link Analyzer 1.1.2, which allows reading the data communication on IO-Link lines (something like Wireshark).
Pywinauto is able to start the application and start sniffing; the controls work properly.
Data is sniffed into the control:
```
ListView - '' (L121, T275, R1087, B653)
['TransfersListView', 'ListView']
child_window(auto_id="lvTransfers", control_type="DataGrid")
```
After that, controlling the other controls should be possible.
## Actual Behavior
Controlling other controls after data has been sniffed into the mentioned DataGrid is possible only in rare cases (ca. 20% of cases); in the other cases it gives an error:
```
  File "D:/Python/Projects/calc_auto.py", line 39, in <module>
    print(dlg.VendorIdEdit.texts())
  File "D:\Python\Python38\lib\site-packages\pywinauto\application.py", line 379, in __getattribute__
    ctrls = self.__resolve_control(self.criteria)
  File "D:\Python\Python38\lib\site-packages\pywinauto\application.py", line 250, in __resolve_control
    ctrl = wait_until_passes(
  File "D:\Python\Python38\lib\site-packages\pywinauto\timings.py", line 436, in wait_until_passes
    func_val = func(*args, **kwargs)
  File "D:\Python\Python38\lib\site-packages\pywinauto\application.py", line 222, in __get_ctrl
    ctrl = self.backend.generic_wrapper_class(findwindows.find_element(**ctrl_criteria))
  File "D:\Python\Python38\lib\site-packages\pywinauto\findwindows.py", line 84, in find_element
    elements = find_elements(**kwargs)
  File "D:\Python\Python38\lib\site-packages\pywinauto\findwindows.py", line 305, in find_elements
    elements = findbestmatch.find_best_control_matches(best_match, wrapped_elems)
  File "D:\Python\Python38\lib\site-packages\pywinauto\findbestmatch.py", line 495, in find_best_control_matches
    name_control_map = build_unique_dict(controls)
  File "D:\Python\Python38\lib\site-packages\pywinauto\findbestmatch.py", line 474, in build_unique_dict
    ctrl_names = get_control_names(ctrl, controls, text_ctrls)
  File "D:\Python\Python38\lib\site-packages\pywinauto\findbestmatch.py", line 320, in get_control_names
    non_text_names = get_non_text_control_name(control, allcontrols, textcontrols)
  File "D:\Python\Python38\lib\site-packages\pywinauto\findbestmatch.py", line 218, in get_non_text_control_name
    text_r = text_ctrl.rectangle()
  File "D:\Python\Python38\lib\site-packages\pywinauto\base_wrapper.py", line 367, in rectangle
    return self.element_info.rectangle
  File "D:\Python\Python38\lib\site-packages\pywinauto\uia_element_info.py", line 326, in rectangle
    bound_rect = self._element.CurrentBoundingRectangle
_ctypes.COMError: (-2147220991, 'An event was unable to invoke any of the subscribers', (None, None, None, 0, None))
```
Extra info:
If the DataGrid is emptied before reading the control values (the app allows doing that manually), then the requested value is always read.
In my opinion, the data in the DataGrid somehow blocks proper control of the other controls.
## Short Example of Code to Demonstrate the Problem
```python
from pywinauto import Application
import requests
import time

app = Application(backend="uia").start('C:\Program Files (x86)\Germbedded GmbH\IO-Link Analyzer 1.1.2\IolinkAnalyzer.exe')
dlg = app.IOLAnalyzer
dlg.btnConnect.click()
dlg.btnStartCapture.click()
time.sleep(10)  # during that time data is sniffed
print(dlg.VendorIdEdit.texts())
```
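A workaround worth trying (this is my assumption, not a confirmed fix): resolve the control by its automation id via `child_window(...)` instead of the best-match attribute lookup (`dlg.VendorIdEdit`), since best-match resolution walks every element — including all the freshly sniffed rows in `lvTransfers` — and wrap the call in a small retry for the transient COMError. The `auto_id` below is hypothetical (check the real one with `print_control_identifiers()`); the retry helper itself is plain Python:

```python
import time

def retry_on_error(fn, attempts=5, delay=1.0, exceptions=(Exception,)):
    """Call fn(), retrying a few times on the given transient exceptions."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except exceptions as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# With pywinauto (only runnable on the target machine), usage would be roughly:
# from _ctypes import COMError
# vendor_edit = dlg.child_window(auto_id="VendorId", control_type="Edit")  # auto_id is a guess
# print(retry_on_error(vendor_edit.texts, exceptions=(COMError,)))
```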
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.8.10
- Platform and OS: Windows 10
| open | 2021-11-29T13:42:51Z | 2021-11-30T13:56:49Z | https://github.com/pywinauto/pywinauto/issues/1152 | [] | danielpyc | 1 |
tensorpack/tensorpack | tensorflow | 779 | HOW to fix assertionError:0, while running this code on single GPU (GeForce GTX 1080 Ti) |
```python
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import numpy as np
import tensorflow as tf
import argparse
import os
import cv2
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
from tensorpack import *
from tensorpack.tfutils.symbolic_functions import *
from tensorpack.tfutils.summary import *
from tensorpack.utils.gpu import get_nr_gpu

"""
CIFAR10 DenseNet example. See: http://arxiv.org/abs/1608.06993
Code is developed based on Yuxin Wu's ResNet implementation: https://github.com/ppwwyyxx/tensorpack/tree/master/examples/ResNet
Results using DenseNet (L=40, K=12) on Cifar10 with data augmentation: ~5.77% test error.
Running time:
On one TITAN X GPU (CUDA 7.5 and cudnn 5.1), the code should run ~5iters/s on a batch size 64.
"""

BATCH_SIZE = 64


class Model(ModelDesc):
    def __init__(self, depth):
        super(Model, self).__init__()
        self.N = int((depth - 4) / 3)
        self.growthRate = 12

    def _get_inputs(self):
        return [InputDesc(tf.float32, [None, 32, 32, 3], 'input'),
                InputDesc(tf.int32, [None], 'label')
                ]

    def _build_graph(self, input_vars):
        image, label = input_vars
        image = image / 128.0 - 1

        def conv(name, l, channel, stride):
            return Conv2D(name, l, channel, 3, stride=stride,
                          nl=tf.identity, use_bias=False,
                          W_init=tf.random_normal_initializer(stddev=np.sqrt(2.0/9/channel)))

        def add_layer(name, l):
            shape = l.get_shape().as_list()
            in_channel = shape[3]
            with tf.variable_scope(name) as scope:
                c = BatchNorm('bn1', l)
                c = tf.nn.relu(c)
                c = conv('conv1', c, self.growthRate, 1)
                l = tf.concat([c, l], 3)
            return l

        def add_transition(name, l):
            shape = l.get_shape().as_list()
            in_channel = shape[3]
            with tf.variable_scope(name) as scope:
                l = BatchNorm('bn1', l)
                l = tf.nn.relu(l)
                l = Conv2D('conv1', l, in_channel, 1, stride=1, use_bias=False, nl=tf.nn.relu)
                l = AvgPooling('pool', l, 2)
            return l

        def dense_net(name):
            l = conv('conv0', image, 16, 1)
            with tf.variable_scope('block1') as scope:
                for i in range(self.N):
                    l = add_layer('dense_layer.{}'.format(i), l)
                l = add_transition('transition1', l)
            with tf.variable_scope('block2') as scope:
                for i in range(self.N):
                    l = add_layer('dense_layer.{}'.format(i), l)
                l = add_transition('transition2', l)
            with tf.variable_scope('block3') as scope:
                for i in range(self.N):
                    l = add_layer('dense_layer.{}'.format(i), l)
            l = BatchNorm('bnlast', l)
            l = tf.nn.relu(l)
            l = GlobalAvgPooling('gap', l)
            logits = FullyConnected('linear', l, out_dim=10, nl=tf.identity)
            return logits

        logits = dense_net("dense_net")
        prob = tf.nn.softmax(logits, name='output')

        cost = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=label)
        cost = tf.reduce_mean(cost, name='cross_entropy_loss')

        wrong = prediction_incorrect(logits, label)
        # monitor training error
        add_moving_summary(tf.reduce_mean(wrong, name='train_error'))

        # weight decay on all W
        wd_cost = tf.multiply(1e-4, regularize_cost('.*/W', tf.nn.l2_loss), name='wd_cost')
        add_moving_summary(cost, wd_cost)

        add_param_summary(('.*/W', ['histogram']))  # monitor W
        self.cost = tf.add_n([cost, wd_cost], name='cost')

    def _get_optimizer(self):
        lr = tf.get_variable('learning_rate', initializer=0.1, trainable=False)
        tf.summary.scalar('learning_rate', lr)
        return tf.train.MomentumOptimizer(lr, 0.9, use_nesterov=True)


def get_batch_factor():
    nr_gpu = get_nr_gpu()
    assert nr_gpu in [0, 1, 2, 4, 8], nr_gpu
    return 8 // nr_gpu


def get_data(train_or_test):
    isTrain = train_or_test == 'train'
    ds = dataset.Cifar10(train_or_test)
    pp_mean = ds.get_per_pixel_mean()
    if isTrain:
        augmentors = [
            imgaug.CenterPaste((40, 40)),
            imgaug.RandomCrop((32, 32)),
            imgaug.Flip(horiz=True),
            # imgaug.Brightness(20),
            # imgaug.Contrast((0.6,1.4)),
            imgaug.MapImage(lambda x: x - pp_mean),
        ]
    else:
        augmentors = [
            imgaug.MapImage(lambda x: x - pp_mean)
        ]
    ds = AugmentImageComponent(ds, augmentors)
    ds = BatchData(ds, BATCH_SIZE, remainder=not isTrain)
    if isTrain:
        ds = PrefetchData(ds, 3, 2)
    return ds


def get_config():
    log_dir = 'train_log/cifar10-single-fisrt%s-second%s-max%s' % (str(args.drop_1), str(args.drop_2), str(args.max_epoch))
    logger.set_logger_dir(log_dir, action='n')

    # prepare dataset
    dataset_train = get_data('train')
    steps_per_epoch = dataset_train.size()
    dataset_test = get_data('test')

    return TrainConfig(
        dataflow=dataset_train,
        callbacks=[
            ModelSaver(),
            InferenceRunner(dataset_test,
                            [ScalarStats('cost'), ClassificationError()]),
            ScheduledHyperParamSetter('learning_rate',
                                      [(1, 0.1), (args.drop_1, 0.01), (args.drop_2, 0.001)])
        ],
        model=Model(depth=args.depth),
        steps_per_epoch=steps_per_epoch,
        max_epoch=args.max_epoch,
    )


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--gpu', help='comma separated list of GPU(s) to use.')  # nargs='*' in multi mode
    parser.add_argument('--load', help='load model')
    parser.add_argument('--drop_1', default=150, help='Epoch to drop learning rate to 0.01.')  # nargs='*' in multi mode
    parser.add_argument('--drop_2', default=225, help='Epoch to drop learning rate to 0.001')
    parser.add_argument('--depth', default=40, help='The depth of densenet')
    parser.add_argument('--max_epoch', default=300, help='max epoch')
    args = parser.parse_args()

    if args.gpu:
        os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu

    config = get_config()
    if args.load:
        config.session_init = SaverRestore(args.load)
    nr_tower = 0
    if args.gpu:
        nr_tower = max(get_nr_gpu(), 1)
    # SyncMultiGPUTrainer(config).train()
    launch_train_with_config(config, SyncMultiGPUTrainerParameterServer(nr_tower))
```
After running it I get the following error:
```
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
[0601 17:45:09 @logger.py:126] Use a new log directory train_log/cifar10-single-fisrt150-second225-max3000601-174509
[0601 17:45:09 @logger.py:74] Argv: cifar10-densenet.py
[0601 17:45:09 @fs.py:89] WRN Env var $TENSORPACK_DATASET not set, using /home/abid/tensorpack_data for datasets.
[0601 17:45:09 @cifar.py:33] Found cifar10 data in /home/abid/tensorpack_data/cifar10_data.
[0601 17:45:10 @parallel.py:185] [MultiProcessPrefetchData] Will fork a dataflow more than one times. This assumes the datapoints are i.i.d.
[0601 17:45:10 @cifar.py:33] Found cifar10 data in /home/abid/tensorpack_data/cifar10_data.
Traceback (most recent call last):
  File "cifar10-densenet.py", line 187, in <module>
    launch_train_with_config(config, SyncMultiGPUTrainerParameterServer(nr_tower))
  File "/usr/local/lib/python3.5/dist-packages/tensorpack/utils/argtools.py", line 36, in wrapper
    argmap[k] = map_func(argmap[k])
  File "/usr/local/lib/python3.5/dist-packages/tensorpack/train/trainers.py", line 42, in _int_to_range
    assert (x > 0), x
AssertionError: 0
```
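Reading the script and the traceback together, my guess (not verified against tensorpack internals beyond the `assert (x > 0), x` shown in the log) is that the command was run without `--gpu`, so `args.gpu` is falsy, the `if args.gpu:` branch never executes, and `nr_tower` stays 0 when handed to `SyncMultiGPUTrainerParameterServer`. A pure-Python sketch of that selection logic (`choose_nr_tower` is my own illustrative helper, not tensorpack code):

```python
def choose_nr_tower(gpu_arg, visible_gpus):
    """Mirror the script's tower selection at the bottom of __main__."""
    nr_tower = 0
    if gpu_arg:                      # only taken when --gpu was passed
        nr_tower = max(visible_gpus, 1)
    return nr_tower

# Launching as `python cifar10-densenet.py` (no --gpu) keeps nr_tower at 0,
# which is exactly the value the trainer's assert rejects:
no_gpu_flag = choose_nr_tower(None, 1)
with_gpu_flag = choose_nr_tower("0", 1)
```

So passing `--gpu 0` on the command line (or dropping the `if args.gpu:` guard and always computing `nr_tower = max(get_nr_gpu(), 1)`) should get past the assertion on a single-GPU machine.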
| closed | 2018-06-01T09:53:59Z | 2018-06-15T07:40:47Z | https://github.com/tensorpack/tensorpack/issues/779 | [
"usage"
] | AbidHussain70 | 1 |
openapi-generators/openapi-python-client | rest-api | 205 | Remove optional generated class attributes set to None from serialized payload | **Is your feature request related to a problem? Please describe.**
Given a generated class like:
```python
class MyModel:
    required_property: str
    optional_property: Optional[Dict[Any, Any]]
```
When communicating with an API, the requests are getting rejected for certain optional properties in which we're sending `None` / `null` which causes 400 Bad Request like:
```
Invalid input at "optional_property": None is not of type 'object'
```
The idea being that the API is expecting if a field is present, it has a non-null value. Not sending `optional_property` solves the problem.
**Describe the solution you'd like**
I'd like to have an option to only serialize fields which have a value unless they are marked `required`, where it's assumed None/null would have semantic value. I'm suggesting a configuration option as I realize this behavior may break other cases where folks may be relying on always sending `None` now.
The result would be that the generated `to_dict()` method in the classes only adds the key/value if the key is required or is not None.
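To make the proposal concrete, a hand-written sketch of the `to_dict()` shape I have in mind (this is illustrative, not actual generator output):

```python
from typing import Any, Dict, Optional

class MyModel:
    def __init__(self, required_property: str,
                 optional_property: Optional[Dict[Any, Any]] = None):
        self.required_property = required_property
        self.optional_property = optional_property

    def to_dict(self) -> Dict[str, Any]:
        # Required fields are always emitted; optional fields only when set.
        field_dict: Dict[str, Any] = {"required_property": self.required_property}
        if self.optional_property is not None:
            field_dict["optional_property"] = self.optional_property
        return field_dict
```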
**Describe alternatives you've considered**
Currently we are extending the generated classes and overriding the `to_dict()` method to perform the desired behavior. | closed | 2020-10-02T21:03:13Z | 2020-11-06T16:48:53Z | https://github.com/openapi-generators/openapi-python-client/issues/205 | [
"✨ enhancement"
] | bowenwr | 7 |
freqtrade/freqtrade | python | 11,291 | FreqAI not finding the datasieve module | Have you searched for similar issues before posting it? I have searched and found one that described the same problem. It said that not all of the requirements had been installed and to reinstall them. So I did, but that didn't fix the issue. I used "pip list" to check whether the module had been installed, and it was. I can't seem to figure out what is going on. Any help would be appreciated.
## Describe your environment
* Operating system: Windows 11 Pro
* Python Version: 3.13.1
* CCXT version: 4.4.49
* Freqtrade Version: FreqAI 2024.12.dev0.dist-info
## Describe the problem:
ModuleNotFoundError: No module named 'datasieve'
### Steps to reproduce:
1. Open VS Code, open an integrated terminal for CCXT and run command "docker-compose up"
2. Open VS Code, open an integrated terminal for FreqAI and run command "docker-compose up"
### Observed Results:
* I get the error code "ModuleNotFoundError: No module named 'datasieve'".
I used "pip list" and it shows that the module is installed.
* I expected it to find all the modules, finish booting and run.
### Relevant code exceptions or logs
```
2025-01-26 23:32:46,922 - freqtrade.loggers - INFO - Verbosity set to 0
2025-01-26 23:32:46,923 - freqtrade.configuration.configuration - INFO - Runmode set to dry_run.
2025-01-26 23:32:46,923 - freqtrade.configuration.configuration - INFO - Parameter --db-url detected ...
2025-01-26 23:32:46,924 - freqtrade.configuration.configuration - WARNING - `force_entry_enable` RPC message enabled.
2025-01-26 23:32:46,924 - freqtrade.configuration.configuration - INFO - Dry run is enabled
2025-01-26 23:32:46,925 - freqtrade.configuration.configuration - INFO - Using DB: "sqlite:////freqtrade/user_data/tradesv3.sqlite"
2025-01-26 23:32:46,925 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 5 ...
2025-01-26 23:32:46,967 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
2025-01-26 23:32:46,968 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/coinbaseadvanced ...
2025-01-26 23:32:46,969 - freqtrade.configuration.configuration - INFO - Using freqaimodel class name: LightGBMRegressor
2025-01-26 23:32:46,969 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2025-01-26 23:32:46,973 - freqtrade.exchange.check_exchange - WARNING - Exchange "coinbaseadvanced" is known to the ccxt library, available for the bot, but not officially supported by the Freqtrade development team. It may work flawlessly (please report back) or have serious issues. Use it at your own discretion.
2025-01-26 23:32:46,973 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2025-01-26 23:32:46,995 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy FreqaiExampleStrategy from '/freqtrade/user_data/strategies/FreqaiExampleStrategy.py'...
2025-01-26 23:32:46,996 - freqtrade.strategy.hyper - INFO - Found no parameter file.
2025-01-26 23:32:46,997 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'timeframe' with value in config file: 5m.
2025-01-26 23:32:46,997 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
2025-01-26 23:32:46,997 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: 200.
2025-01-26 23:32:46,997 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 10, 'exit': 30}.
2025-01-26 23:32:46,997 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 5.
2025-01-26 23:32:46,998 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'0': 0.1, '240': -1}
2025-01-26 23:32:46,998 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 5m
2025-01-26 23:32:46,998 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.05
2025-01-26 23:32:46,999 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False
2025-01-26 23:32:46,999 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
2025-01-26 23:32:47,000 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
2025-01-26 23:32:47,000 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
2025-01-26 23:32:47,000 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: True
2025-01-26 23:32:47,001 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'limit', 'stoploss': 'limit', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 60}
2025-01-26 23:32:47,001 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
2025-01-26 23:32:47,002 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
2025-01-26 23:32:47,002 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: 200
2025-01-26 23:32:47,002 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 40
2025-01-26 23:32:47,003 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 10, 'exit': 30}
2025-01-26 23:32:47,003 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
2025-01-26 23:32:47,003 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
2025-01-26 23:32:47,003 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: False
2025-01-26 23:32:47,003 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
2025-01-26 23:32:47,004 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
2025-01-26 23:32:47,004 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
2025-01-26 23:32:47,004 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
2025-01-26 23:32:47,004 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: -1
2025-01-26 23:32:47,004 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 5
2025-01-26 23:32:47,005 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2025-01-26 23:32:47,007 - freqtrade.resolvers.exchange_resolver - INFO - No Coinbaseadvanced specific subclass found. Using the generic class instead.
2025-01-26 23:32:47,007 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2025-01-26 23:32:47,007 - freqtrade.exchange.exchange - INFO - Using CCXT 4.4.42
2025-01-26 23:32:47,014 - freqtrade.exchange.exchange - INFO - Using Exchange "Coinbase Advanced"
2025-01-26 23:32:49,075 - freqtrade.wallets - INFO - Wallets synced.
2025-01-26 23:32:49,265 - freqtrade.rpc.rpc_manager - INFO - Enabling rpc.api_server
2025-01-26 23:32:49,419 - freqtrade.rpc.api_server.webserver - INFO - Starting HTTP Server at 0.0.0.0:8080
2025-01-26 23:32:49,420 - freqtrade.rpc.api_server.webserver - WARNING - SECURITY WARNING - `jwt_secret_key` seems to be default.Others may be able to log into your bot.
2025-01-26 23:32:49,420 - freqtrade.rpc.api_server.webserver - INFO - Starting Local Rest Server.
2025-01-26 23:32:49,441 - uvicorn.error - INFO - Started server process [1]
2025-01-26 23:32:49,442 - uvicorn.error - INFO - Waiting for application startup.
2025-01-26 23:32:49,443 - uvicorn.error - INFO - Application startup complete.
2025-01-26 23:32:49,443 - uvicorn.error - INFO - Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
2025-01-26 23:32:49,457 - freqtrade.resolvers.iresolver - INFO - Using resolved pairlist StaticPairList from '/freqtrade/freqtrade/plugins/pairlist/StaticPairList.py'...
2025-01-26 23:32:49,461 - freqtrade.plugins.pairlistmanager - INFO - Whitelist with 4 pairs: ['XRP/USDT', 'BTC/USDT', 'ETH/USDT', 'DOGE/USDT']
2025-01-26 23:32:49,471 - freqtrade - ERROR - Fatal exception!
Traceback (most recent call last):
File "/freqtrade/freqtrade/main.py", line 44, in main
return_code = args["func"](args)
^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/commands/trade_commands.py", line 24, in start_trading
worker = Worker(args)
^^^^^^^^^^^^
File "/freqtrade/freqtrade/worker.py", line 39, in __init__
self._init(False)
File "/freqtrade/freqtrade/worker.py", line 55, in _init
self.freqtrade = FreqtradeBot(self._config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/freqtradebot.py", line 177, in __init__
self.strategy.ft_bot_start()
File "/freqtrade/freqtrade/strategy/interface.py", line 208, in ft_bot_start
self.load_freqAI_model()
File "/freqtrade/freqtrade/strategy/interface.py", line 175, in load_freqAI_model
from freqtrade.freqai.utils import download_all_data_for_training
File "/freqtrade/freqtrade/freqai/utils.py", line 16, in <module>
from freqtrade.freqai.data_drawer import FreqaiDataDrawer
File "/freqtrade/freqtrade/freqai/data_drawer.py", line 25, in <module>
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
File "/freqtrade/freqtrade/freqai/data_kitchen.py", line 14, in <module>
from datasieve.pipeline import Pipeline
ModuleNotFoundError: No module named 'datasieve'
```
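One thing worth double-checking (an assumption on my part): since the bot is started with `docker-compose up`, a host-side `pip list` says nothing about the interpreter inside the container — `datasieve` has to be visible to the container's Python. A small helper to test that from inside the container (e.g. via `docker compose exec <service> python - < check.py`, where the service name depends on your compose file):

```python
import importlib.util

def module_available(name: str) -> bool:
    """True if `name` is importable by the interpreter running this code."""
    return importlib.util.find_spec(name) is not None

# Inside the container this should print True for a working FreqAI install:
# print(module_available("datasieve"))
```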
| closed | 2025-01-27T01:41:37Z | 2025-01-29T05:57:02Z | https://github.com/freqtrade/freqtrade/issues/11291 | [
"Question",
"Install",
"freqAI"
] | Jigaliath1 | 1 |
wger-project/wger | django | 1,897 | Proposal: Back button to return to dashboard | ## Use case
When I was using the application, I found it hard to understand how to go back to the main page after I'd clicked into one of the sections. The only way I found was to click the icon in the top left, which isn't the best from a usability-heuristics standpoint.
## Proposal
I'd love to add a simple back button to the 3 details pages once you click into them, so that users can very easily return back to their main dashboard without hitting the back button on their browser or trying to find the icon.
| open | 2025-02-24T06:26:01Z | 2025-03-14T05:51:24Z | https://github.com/wger-project/wger/issues/1897 | [] | bernstna | 8 |
xinntao/Real-ESRGAN | pytorch | 296 | How can I test my model | When I finish training and know the location of the model, how do I test and verify it?
I have put it into experiments/pretrained_models/. | open | 2022-04-12T13:30:08Z | 2022-04-12T13:30:08Z | https://github.com/xinntao/Real-ESRGAN/issues/296 | [] | zzshzyl | 0 |
microsoft/MMdnn | tensorflow | 141 | Warning: MXNet Parser has not supported operator SoftmaxActivation with name cls_prob. | Platform (like ubuntu 16.04/win10): Ubuntu 16.04
Python version: 2.7.12
Source framework with version (like Tensorflow 1.4.1 with GPU): mxnet 1.1.0 with cu80 GPU
Destination framework with version (like CNTK 2.3 with GPU): IR
Pre-trained model path (webpath or webdisk path): N/A
Running scripts: python -m mmdnn.conversion._script.convertToIR -f mxnet -n mynet-symbol.json -w mynet.params -d ir/mynet --inputShape 3 12 12
```
/home/jason/.virtualenvs/caffe-mxnet-cuda8/local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
/home/jason/.virtualenvs/caffe-mxnet-cuda8/local/lib/python2.7/site-packages/mxnet/module/base_module.py:53: UserWarning: You created Module with Module(..., label_names=['softmax_label']) but input with name 'softmax_label' is not found in symbol.list_arguments(). Did you mean one of:
  data
  warnings.warn(msg)
Warning: MXNet Parser has not supported operator null with name data.
Warning: convert the null operator with name [data] into input layer.
Warning: MXNet Parser has not supported operator SoftmaxActivation with name cls_prob.
IR network structure is saved as [../ir/pnet.json].
IR network structure is saved as [../ir/pnet.pb].
IR weights are saved as [../ir/pnet.npy].
```
And here is the symbol.json of my MXNet model:
{
"nodes": [
{
"op": "null",
"name": "data",
"inputs": []
},
{
"op": "null",
"name": "conv1_weight",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "10"
},
"inputs": []
},
{
"op": "null",
"name": "conv1_bias",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "10"
},
"inputs": []
},
{
"op": "Convolution",
"name": "conv1",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "10"
},
"inputs": [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
},
{
"op": "null",
"name": "prelu1_gamma",
"attrs": {
"__init__": "[\"Constant\", {\"value\": 0.25}]",
"act_type": "prelu"
},
"inputs": []
},
{
"op": "LeakyReLU",
"name": "prelu1",
"attrs": {"act_type": "prelu"},
"inputs": [[3, 0, 0], [4, 0, 0]]
},
{
"op": "Pooling",
"name": "pool1",
"attrs": {
"kernel": "(2, 2)",
"pool_type": "max",
"pooling_convention": "full",
"stride": "(2, 2)"
},
"inputs": [[5, 0, 0]]
},
{
"op": "null",
"name": "conv2_weight",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "16"
},
"inputs": []
},
{
"op": "null",
"name": "conv2_bias",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "16"
},
"inputs": []
},
{
"op": "Convolution",
"name": "conv2",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "16"
},
"inputs": [[6, 0, 0], [7, 0, 0], [8, 0, 0]]
},
{
"op": "null",
"name": "prelu2_gamma",
"attrs": {
"__init__": "[\"Constant\", {\"value\": 0.25}]",
"act_type": "prelu"
},
"inputs": []
},
{
"op": "LeakyReLU",
"name": "prelu2",
"attrs": {"act_type": "prelu"},
"inputs": [[9, 0, 0], [10, 0, 0]]
},
{
"op": "null",
"name": "conv3_weight",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "32"
},
"inputs": []
},
{
"op": "null",
"name": "conv3_bias",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "32"
},
"inputs": []
},
{
"op": "Convolution",
"name": "conv3",
"attrs": {
"kernel": "(3, 3)",
"num_filter": "32"
},
"inputs": [[11, 0, 0], [12, 0, 0], [13, 0, 0]]
},
{
"op": "null",
"name": "prelu3_gamma",
"attrs": {
"__init__": "[\"Constant\", {\"value\": 0.25}]",
"act_type": "prelu"
},
"inputs": []
},
{
"op": "LeakyReLU",
"name": "prelu3",
"attrs": {"act_type": "prelu"},
"inputs": [[14, 0, 0], [15, 0, 0]]
},
{
"op": "null",
"name": "conv4_1_weight",
"attrs": {
"kernel": "(1, 1)",
"num_filter": "2"
},
"inputs": []
},
{
"op": "null",
"name": "conv4_1_bias",
"attrs": {
"kernel": "(1, 1)",
"num_filter": "2"
},
"inputs": []
},
{
"op": "Convolution",
"name": "conv4_1",
"attrs": {
"kernel": "(1, 1)",
"num_filter": "2"
},
"inputs": [[16, 0, 0], [17, 0, 0], [18, 0, 0]]
},
{
"op": "SoftmaxActivation",
"name": "cls_prob",
"attrs": {"mode": "channel"},
"inputs": [[19, 0, 0]]
},
{
"op": "null",
"name": "conv4_2_weight",
"attrs": {
"kernel": "(1, 1)",
"num_filter": "4"
},
"inputs": []
},
{
"op": "null",
"name": "conv4_2_bias",
"attrs": {
"kernel": "(1, 1)",
"num_filter": "4"
},
"inputs": []
},
{
"op": "Convolution",
"name": "conv4_2",
"attrs": {
"kernel": "(1, 1)",
"num_filter": "4"
},
"inputs": [[16, 0, 0], [21, 0, 0], [22, 0, 0]]
}
],
"arg_nodes": [
0,
1,
2,
4,
7,
8,
10,
12,
13,
15,
17,
18,
21,
22
],
"node_row_ptr": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24
],
"heads": [[20, 0, 0], [23, 0, 0]],
"attrs": {"mxnet_version": ["int", 10100]}
}
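One workaround I would try (an assumption — I have not checked which softmax variants MMdnn's MXNet parser does support): rewrite the unsupported op in the symbol file before conversion, e.g. swapping `SoftmaxActivation` (deprecated in MXNet in favor of `softmax`) for a supported equivalent, then re-run `convertToIR` on the patched JSON:

```python
import json

def replace_op(symbol_in: str, symbol_out: str, old_op: str, new_op: str) -> int:
    """Rewrite every node whose op == old_op in an MXNet symbol JSON file.
    Returns the number of nodes changed."""
    with open(symbol_in) as f:
        sym = json.load(f)
    changed = 0
    for node in sym["nodes"]:
        if node["op"] == old_op:
            node["op"] = new_op
            changed += 1
    with open(symbol_out, "w") as f:
        json.dump(sym, f)
    return changed

# e.g. replace_op("mynet-symbol.json", "mynet-patched-symbol.json",
#                 "SoftmaxActivation", "softmax")
```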
| closed | 2018-04-08T04:33:59Z | 2018-08-20T17:25:56Z | https://github.com/microsoft/MMdnn/issues/141 | [] | IamJasonYe | 3 |
gevent/gevent | asyncio | 1,129 | Update to latest libuv release for 1.3 | Currently we're on 1.18.x, should try 1.19.x
To solve #1126 we're going to have to apply patches to the libuv source code *anyway* so there's little point in being sure we work with older versions. | closed | 2018-03-02T12:31:43Z | 2018-03-30T22:13:35Z | https://github.com/gevent/gevent/issues/1129 | [] | jamadden | 0 |
NullArray/AutoSploit | automation | 642 | Unhandled Exception (12d820452) | Autosploit version: `3.0`
OS information: `Linux-4.19.0-parrot1-13t-amd64-x86_64-with-Parrot-4.5-stable`
Running context: `autosploit.py`
Error meesage: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
  File "/root/greenterminal/AutoSploit/autosploit/main.py", line 113, in main
    loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
  File "/root/greenterminal/AutoSploit/lib/jsonize.py", line 61, in load_exploits
    except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
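For anyone hitting this: the traceback points at a plain typo in `lib/jsonize.py` — `Except` is not a Python name, so the handler itself raises `NameError` the moment an exception reaches it; it presumably meant the built-in `Exception`. A minimal sketch of the corrected pattern (the helper below is illustrative, not AutoSploit's actual code):

```python
def load_exploits_safely(loader):
    """Run `loader`, falling back to an empty list on failure --
    the behavior `except Except:` appears to have intended."""
    try:
        return loader()
    except Exception:  # was `except Except:` -> NameError inside the handler
        return []

def boom():
    raise ValueError("broken exploit file")
```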
| closed | 2019-04-08T16:15:50Z | 2019-04-17T18:33:01Z | https://github.com/NullArray/AutoSploit/issues/642 | [] | AutosploitReporter | 0 |
dask/dask | numpy | 11,314 | Expose a blockwise - reshape operation that doesn't guarantee to keep the ordering consistent for downstream libraries | @dcherian when we chatted you mentioned that Xarray would benefit if we could expose a blockwise reshaping operation that doesn't guarantee the same ordering as the NumPy equivalent.
The requirement that Xarray would have is that
```
arr = da.random.random((100, 100, 100), chunks=(10, 10, 10))
result = arr.blockwise_reshape(100, 10_000).blockwise_reshape(100, 100, 100)
```
would result in the same array, correct?
This would keep the graph a lot simpler and allow actual blockwise operations if you would transform things back into the original shape anyway.
@dcherian is this what you are looking for?
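To pin down the invariant, here is a pure-NumPy sketch of the property I mean — `blockwise_reshape` below is the hypothetical API, emulated per block, so element order across blocks deliberately differs from a NumPy C-order reshape:

```python
import numpy as np

def blockwise_reshape(blocks, shape):
    """Reshape every block independently: no inter-block data movement."""
    return [b.reshape(shape) for b in blocks]

# Two blocks of a toy chunked array:
blocks = [np.arange(4).reshape(2, 2), np.arange(4, 8).reshape(2, 2)]
flat = blockwise_reshape(blocks, (4,))       # "reshape" to 1-D, block by block
roundtrip = blockwise_reshape(flat, (2, 2))  # ...and back to the original shape
```

The round trip recovers the original blocks exactly, which is the guarantee Xarray would rely on.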
A side-note: This is something that we should be able to detect automatically if / when we have array-expr
(more context https://github.com/pydata/xarray/issues/5629#issuecomment-960133879) | closed | 2024-08-14T13:29:45Z | 2024-08-27T17:41:07Z | https://github.com/dask/dask/issues/11314 | [
"array",
"array-expr"
] | phofl | 5 |
allure-framework/allure-python | pytest | 752 | allure-pytest: using the same @pytest.mark.parametrize for tests with different browsers appear as retries of the same test in the test report. | Hi, I am conducting login tests on different browsers using `pytest-playwright`. I am using `@pytest.mark.parametrize` to run tests with different emails and passwords. However, I noticed that in the allure report, my tests for Edge and Chrome are grouped together as the same test case. The test that runs first becomes the retries for the subsequent test case. I want them to appear as separate tests.
What I would want it to look like:
```
test_file_name
|__ Chrome
| |__ Test if the website can be successfully logged in
|
|__ Edge
| |__ Test if the website can be successfully logged in
|
|__ Firefox
| |__ Test if the website can be successfully logged in
```
I've tried using `allure.dynamic.tag` and `allure.dynamic.parameter`, but they didn't help. Thanks in advance.
#### I'm submitting a ...
- [ ] bug report
#### What is the current behavior?

#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
test_login.py
```
login_correct = [{"email": "123", "password": "123"}]

@allure.story("Test if the website can be successfully logged in")
@allure.title("{test_input}")
@allure.description("Enter the website and input correct email and password to check if can login.")
@pytest.mark.parametrize("test_input", login_correct)
def test_login_success(setup, test_input):
    page = setup
    login(page, test_input["email"], test_input["password"])
    expect(page.get_by_text("Dashboard").first).to_be_visible(timeout=50*1000)
```
conftest.py
```
@pytest.fixture(scope="function")
def setup(page: Page, pytestconfig):
    page.set_viewport_size({"width": 1920, "height": 1080})
    page.goto(".....")
    browser = pytestconfig.getoption("--browser")
    browser_channel = pytestconfig.getoption("--browser-channel")
    if browser_channel != None:
        allure.dynamic.feature(f"{browser_channel}")
        allure.dynamic.tag(browser_channel)
        allure.dynamic.parameter("browser_name", browser_channel)
    else:
        allure.dynamic.feature(f"{browser[0]}")
        allure.dynamic.tag(browser[0])
        allure.dynamic.parameter("browser_name", browser[0])
    yield page
```
I am testing different browsers using the following command:
```
pytest -n auto --browser-channel=chrome --alluredir=allure-results
pytest -n auto --browser=firefox --alluredir=allure-results
pytest -n auto --browser-channel=msedge --alluredir=allure-results
```
#### What is the expected behavior?
The Edge test case and the Chrome test case need to be separate.
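For what it's worth, my rough mental model (an illustration only, not Allure's exact algorithm) is that Allure folds runs into "retries" when they share a history ID derived from the test's identity plus its recorded parameters — so if the browser never makes it into that identity, runs from different browsers collapse together:

```python
import hashlib

def history_id(nodeid: str, params: dict) -> str:
    # Illustrative only: hash the pytest node ID together with its parameters.
    # Allure's real history ID is computed differently, but the idea is the
    # same: runs with identical (nodeid, params) are treated as retries.
    payload = nodeid + "::" + repr(sorted(params.items()))
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

chrome = history_id("test_login.py::test_login_success[test_input0]",
                    {"test_input": "login_correct[0]", "browser_name": "chrome"})
edge = history_id("test_login.py::test_login_success[test_input0]",
                  {"test_input": "login_correct[0]", "browser_name": "msedge"})
print(chrome != edge)  # a distinct browser parameter gives a distinct ID
```

This is why I expected `allure.dynamic.parameter("browser_name", ...)` in the fixture to split them; since it doesn't here, maybe the parameter is recorded too late to affect the grouping?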
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
- Allure version: 2.22.3
- Test framework: pytest@7.3.2
- Allure adaptor: allure-pytest@2.13.2
- pytest-playwright 0.3.3
| open | 2023-07-19T04:52:55Z | 2023-07-21T09:39:59Z | https://github.com/allure-framework/allure-python/issues/752 | [
"bug",
"theme:pytest",
"contribute"
] | win5923 | 7 |
shibing624/text2vec | nlp | 154 | Could the docs explain what the difference actually is between text2vec.SentenceModel and SentenceTransformer? | Could the docs explain what the difference actually is between text2vec.SentenceModel and SentenceTransformer?
I tried the examples given in the README and found that the sentence embeddings they compute for the sample sentences are identical.
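For reference, this is the kind of check I did — a pure-Python sketch with made-up embedding values (the real vectors come from each model's encode call):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings of the same sentence from the two APIs;
# in my test the real vectors matched element for element.
vec_sentence_model = [0.12, -0.34, 0.56]
vec_sentence_transformer = [0.12, -0.34, 0.56]
print(cosine(vec_sentence_model, vec_sentence_transformer))  # close to 1.0
```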
| closed | 2024-09-24T13:34:52Z | 2024-09-26T11:18:49Z | https://github.com/shibing624/text2vec/issues/154 | [
"question"
] | qiulang | 5 |
flaskbb/flaskbb | flask | 16 | Localization | Use Flask-Babel to support different languages.
| closed | 2014-02-27T13:18:41Z | 2018-04-15T07:47:30Z | https://github.com/flaskbb/flaskbb/issues/16 | [
"enhancement"
] | sh4nks | 27 |
svc-develop-team/so-vits-svc | deep-learning | 126 | [Help]: Training errors out and exits at Epoch: 2 [42%], step: 800 every time | ### Please tick the confirmation boxes below.
- [x] I have carefully read [README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README_zh_CN.md) and the [Quick solution in the wiki](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution).
- [X] I have troubleshot the problem via various search engines; the question I am asking is not a common one.
- [X] I am not using a one-click package / environment package provided by a third-party user.
### OS platform and version
Windows 10 Home
### GPU model
NVIDIA GeForce RTX 2060
### Python version
python3.9.7
### PyTorch version
2.0.0+cu118
### sovits branch
4.0-v2
### Dataset source (used to judge dataset quality)
Collected from anime original audio
### Stage where the problem occurs, or command executed
Training
### Problem description
Every training run errors out and exits at INFO:44k:Train Epoch: 2 [42%], step: 800,
Even after emptying the logs/44k folder and retraining, it errors out and exits at the same point
Since my GPU has 6 GB of VRAM, I changed batch_size in config.json from 6 to 2: "batch_size": 2
### Log
```python
INFO:44k:Loaded checkpoint './logs\44k\D_0.pth' (iteration 1)
E:\Python\lib\site-packages\torch\functional.py:641: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\SpectralOps.cpp:867.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
E:\Python\lib\site-packages\torch\autograd\__init__.py:200: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [32, 1, 4], strides() = [4, 1, 1]
bucket_view.sizes() = [32, 1, 4], strides() = [4, 4, 1] (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\distributed\c10d\reducer.cpp:337.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
INFO:44k:Train Epoch: 1 [0%]
INFO:44k:Losses: [1.8350424766540527, 3.087881326675415, 15.519474029541016, 43.46516799926758, 2.739884614944458], step: 0, lr: 0.0001
INFO:44k:Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth
INFO:44k:Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:44k:Train Epoch: 1 [35%]
INFO:44k:Losses: [3.2042624950408936, 1.6023280620574951, 4.85037899017334, 24.364368438720703, 1.8815621137619019], step: 200, lr: 0.0001
INFO:44k:Train Epoch: 1 [71%]
INFO:44k:Losses: [2.024649143218994, 2.5297534465789795, 12.023140907287598, 29.055946350097656, 1.7351597547531128], step: 400, lr: 0.0001
INFO:44k:====> Epoch: 1, cost 366.75 s
INFO:44k:Train Epoch: 2 [6%]
INFO:44k:Losses: [1.9109034538269043, 3.0717015266418457, 15.881654739379883, 31.35890769958496, 2.1832027435302734], step: 600, lr: 9.99875e-05
INFO:44k:Train Epoch: 2 [42%]
INFO:44k:Losses: [2.1936745643615723, 2.2962450981140137, 8.9740629196167, 23.723608016967773, 1.4403268098831177], step: 800, lr: 9.99875e-05
Traceback (most recent call last):
File "E:\1\2\so-vits-svc\train.py", line 315, in <module>
main()
File "E:\1\2\so-vits-svc\train.py", line 53, in main
mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
File "E:\Python\lib\site-packages\torch\multiprocessing\spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "E:\Python\lib\site-packages\torch\multiprocessing\spawn.py", line 197, in start_processes
while not context.join():
File "E:\Python\lib\site-packages\torch\multiprocessing\spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "E:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "E:\Python\lib\multiprocessing\queues.py", line 114, in get
raise Empty
_queue.Empty
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "E:\Python\lib\site-packages\torch\multiprocessing\spawn.py", line 69, in _wrap
fn(i, *args)
File "E:\1\2\so-vits-svc\train.py", line 124, in run
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
File "E:\1\2\so-vits-svc\train.py", line 245, in train_and_evaluate
evaluate(hps, net_g, eval_loader, writer_eval)
File "E:\1\2\so-vits-svc\train.py", line 269, in evaluate
for batch_idx, items in enumerate(eval_loader):
File "E:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 634, in __next__
data = self._next_data()
File "E:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 1329, in _next_data
idx, data = self._get_data()
File "E:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 1295, in _get_data
success, data = self._try_get_data()
File "E:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 1146, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 18212) exited unexpectedly
```
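For what it's worth, the `_queue.Empty` at the top of the chain just means the evaluation loader's consumer timed out waiting on a queue its worker never fed, because the worker process had already died. A stdlib-only illustration of that failure shape (not so-vits-svc code):

```python
import queue

q = queue.Queue()  # stands in for the DataLoader's results queue
try:
    # No worker ever enqueues a result (it already exited), so the timed
    # get() raises queue.Empty -- the root cause shown in the traceback.
    q.get(timeout=0.01)
    worker_delivered = True
except queue.Empty:
    worker_delivered = False
    print("queue.Empty: worker sent nothing before the timeout, "
          "which DataLoader reports as 'worker exited unexpectedly'")
```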
### Screenshot the `so-vits-svc` and `logs/44k` folders and paste them here


### Additional notes
_No response_ | closed | 2023-04-05T18:59:38Z | 2023-04-13T04:03:13Z | https://github.com/svc-develop-team/so-vits-svc/issues/126 | [
"help wanted"
] | AGuanDao | 5 |
unionai-oss/pandera | pandas | 1,551 | `import pandera` breaks SparkSession in AWS EMR | ## Problem
Whenever I import pandera in my EMR spark application it breaks the SparkSession.
```python
import os
import findspark
# sets SPARK_HOME
findspark.init()
# commenting out this fixes the error
import pandera as pa
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
# errors
spark.sql("show tables").show()
```
Spark fails to connect:
```py
Caused by: java.io.IOException: Failed to connect to localhost/127.0.0.1:34859
```
## Root Cause
I have narrowed the root cause down to this line in `external_config.py`: commenting it out fixes the problem. What was the logic here? It seems dangerous to set the user's environment variables during an `import`.
https://github.com/unionai-oss/pandera/blob/18717fb88f88f4c8ab2ce4bafafe87ac578137a2/pandera/external_config.py#L13
My suspicion is that this `except` clause (the "catch") was meant to be a `finally`:
https://github.com/unionai-oss/pandera/blob/18717fb88f88f4c8ab2ce4bafafe87ac578137a2/pandera/external_config.py#L19-L21
Because adding a `pop` before I create my SparkSession also fixes the problem:
```py
os.environ.pop("SPARK_LOCAL_IP")
```
## Solution
Should `external_config.py` be changed to pop the environment variables in a `finally` instead of in the catch that way its side effects don't live on?
```py
"""Configuration for external packages."""
import os
try:
# try importing pyspark to see if it exists. This is important because the
# pandera.typing module defines a Series type that inherits from
# pandas.Series, and pyspark v1+ injects a __getitem__ method to pandas
# Series and DataFrames to support type hinting:
# https://spark.apache.org/docs/3.2.0/api/python/user_guide/pandas_on_spark/typehints.html#type-hinting-with-names
# pylint: disable=unused-import
if os.getenv("SPARK_LOCAL_IP") is None:
os.environ["SPARK_LOCAL_IP"] = "127.0.0.1"
if os.getenv("PYARROW_IGNORE_TIMEZONE") is None:
# This can be overriden by the user
os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"
import pyspark.pandas
finally:
os.environ.pop("SPARK_LOCAL_IP")
os.environ.pop("PYARROW_IGNORE_TIMEZONE")
```
| closed | 2024-04-03T08:22:53Z | 2024-04-03T19:30:07Z | https://github.com/unionai-oss/pandera/issues/1551 | [
"bug"
] | sam-goodwin | 0 |
bmoscon/cryptofeed | asyncio | 535 | Cannot save TRADES to Arctic DB | I am trying to save TRADES to an Arctic DB using cryptofeed, but I am facing the issue below.
(I can save TICKER without any issues, but TRADES throws the error.)
Can someone please help?
```
Traceback (most recent call last):
File "C:\Anaconda3\lib\site-packages\cryptofeed\connection_handler.py", line 64, in _create_connection
await self._handler(connection, self.handler)
File "C:\Anaconda3\lib\site-packages\cryptofeed\connection_handler.py", line 113, in _handler
await handler(message, connection, self.conn.last_message)
File "C:\Anaconda3\lib\site-packages\cryptofeed\exchange\bitfinex.py", line 293, in message_handler
await chan_handler(msg, timestamp)
File "C:\Anaconda3\lib\site-packages\cryptofeed\exchange\bitfinex.py", line 137, in _trades
await _trade_update(trade, timestamp)
File "C:\Anaconda3\lib\site-packages\cryptofeed\exchange\bitfinex.py", line 132, in _trade_update
receipt_timestamp=timestamp)
File "C:\Anaconda3\lib\site-packages\cryptofeed\feed.py", line 304, in callback
await cb(**kwargs)
File "C:\Anaconda3\lib\site-packages\cryptofeed\backends\backend.py", line 67, in __call__
await self.write(feed, symbol, timestamp, receipt_timestamp, data)
File "C:\Anaconda3\lib\site-packages\cryptofeed\backends\arctic.py", line 48, in write
self.lib.append(self.key, df, upsert=True)
File "C:\Anaconda3\lib\site-packages\arctic\decorators.py", line 50, in f_retry
return f(*args, **kwargs)
File "C:\Anaconda3\lib\site-packages\arctic\store\version_store.py", line 605, in append
raise Exception("Append not implemented for handler %s" % handler)
Exception: Append not implemented for handler <arctic.store._pickle_store.PickleStore object at 0x00000251D8422EC8>
```
Code I'm using to save TRADES:
```
f.add_feed(Bitfinex(channels=[TRADES], symbols=['BTC-USD'], callbacks={TRADES: TradeArctic('cryptofeed-test')}))
```

| closed | 2021-06-24T06:32:03Z | 2021-07-01T23:50:21Z | https://github.com/bmoscon/cryptofeed/issues/535 | [
"bug"
] | sauravskumar | 4 |
healthchecks/healthchecks | django | 1049 | Webhook should accept $SLUG as a placeholder | I am trying to create a Webhook integration that connects to a [Home Assistant binary sensor](https://www.home-assistant.io/integrations/http/#binary-sensor). It requires a URL like `http://IP_ADDRESS:8123/api/states/binary_sensor.DEVICE_NAME`, with `DEVICE_NAME` conforming to Home Assistant's entity-ID character limits, i.e., no spaces and no dashes (`-`).
That means I can't use `$NAME` or `$CODE`. I could work around it with `$TAG1`, but it would be much easier if a `$SLUG` placeholder were available.
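For context on the character limits: a Home Assistant slug is essentially the lowercased name with runs of non-alphanumerics collapsed to underscores — roughly this (a hypothetical sketch, not Home Assistant's actual implementation):

```python
import re

def ha_slugify(name: str) -> str:
    """Rough approximation of a Home Assistant entity-ID slug:
    lowercase, with runs of non-alphanumerics collapsed to '_'."""
    slug = re.sub(r"[^a-z0-9]+", "_", name.lower())
    return slug.strip("_")

print(ha_slugify("My Check-Name"))  # my_check_name
```

A `$SLUG` placeholder along these lines would drop straight into the `binary_sensor.DEVICE_NAME` part of the URL.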
Thank you! I am using Healthchecks v3.4. | closed | 2024-08-16T07:26:50Z | 2024-08-16T10:34:47Z | https://github.com/healthchecks/healthchecks/issues/1049 | [] | timdream | 1 |