| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pytest-dev/pytest-django | pytest | 465 | Update setuptools_scm-1.11.1 requirement or unpin | I have a little question:
Do you really need a hard dependency on setuptools_scm-1.11.1.tar.gz?
I have some packages that need setuptools_scm-1.15.0.tar.gz, and they collide with this pin.
Currently I do the following:
1. install setuptools_scm-1.11.1.tar.gz
2. install pytest-django-3.1.2.tar.gz
3. install setuptools_scm-1.15.0.tar.gz
I am on a Win7 64-bit PC with Python 3.5.3 64-bit, *without* an internet connection.
With this workaround, all my pytest runs with Django work well.
If it's not a problem for you, could you change the dependency to
setuptools_scm-1.15.0, or to setuptools_scm with no version pin?
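As a sketch of the request (not the project's actual packaging code), the difference between an exact pin (`==1.11.1`) and a floor pin (`>=1.11.1`) can be illustrated with a small version comparison:

```python
# Hypothetical illustration: an exact pin rejects 1.15.0, while a
# floor pin (>=) accepts both 1.11.1 and 1.15.0, avoiding the collision.

def parse(version: str) -> tuple:
    """Parse a 'major.minor.patch' string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def satisfies_floor(installed: str, minimum: str) -> bool:
    """True if `installed` meets a `>= minimum` requirement."""
    return parse(installed) >= parse(minimum)

print(satisfies_floor("1.11.1", "1.11.1"))  # True
print(satisfies_floor("1.15.0", "1.11.1"))  # True
print(satisfies_floor("1.10.0", "1.11.1"))  # False
```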
| closed | 2017-02-22T10:22:55Z | 2017-02-22T19:32:23Z | https://github.com/pytest-dev/pytest-django/issues/465 | [] | stephanema | 3 |
miguelgrinberg/microblog | flask | 2 | setup instructions don't mention mysql_config dependency | From the same build circumstances as mentioned in issue https://github.com/miguelgrinberg/microblog/issues/1
The step to build mysql-python fails for lack of mysql_config.
The fix is to run:
```
sudo apt-get -y install libmysqlclient-dev
```
| closed | 2013-06-29T10:57:09Z | 2013-06-30T01:31:19Z | https://github.com/miguelgrinberg/microblog/issues/2 | [] | martinhbramwell | 1 |
davidteather/TikTok-Api | api | 585 | [BUG] - playwright._impl._api_types.Error: Protocol error (Playwright.enable): Browser closed. on ubuntu server | It runs fine on my computer, but it doesn't run on my Ubuntu server. I installed the package and then ran
`python3 -m playwright install`, which installed everything, but I still get the error on my Ubuntu server. Any ideas on how to fix this? Here is the error that I get:
- ubuntu: 20.04.2
- TikTokApi Version 3.9.5
- Everything installed was on 5/10/2021
Traceback (most recent call last):
File "tvs_ubuntu.py", line 118, in <module>
tiktok_vid, video_id = AutomatedTVS()
File "tvs_ubuntu.py", line 10, in AutomatedTVS
api = TikTokApi.get_instance(custom_verifyFp="keeping it a secret")
File "/home/ubuntu/.local/lib/python3.8/site-packages/TikTokApi/tiktok.py", line 148, in get_instance
TikTokApi(**kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/TikTokApi/tiktok.py", line 58, in __init__
self.browser = browser(**kwargs)
File "/home/ubuntu/.local/lib/python3.8/site-packages/TikTokApi/browser.py", line 82, in __init__
raise e
File "/home/ubuntu/.local/lib/python3.8/site-packages/TikTokApi/browser.py", line 78, in __init__
self.browser = get_playwright().webkit.launch(
File "/home/ubuntu/.local/lib/python3.8/site-packages/playwright/sync_api/_generated.py", line 8941, in launch
self._sync(
File "/home/ubuntu/.local/lib/python3.8/site-packages/playwright/_impl/_sync_base.py", line 103, in _sync
return task.result()
File "/home/ubuntu/.local/lib/python3.8/site-packages/playwright/_impl/_browser_type.py", line 79, in launch
raise e
File "/home/ubuntu/.local/lib/python3.8/site-packages/playwright/_impl/_browser_type.py", line 75, in launch
return from_channel(await self._channel.send("launch", params))
File "/home/ubuntu/.local/lib/python3.8/site-packages/playwright/_impl/_connection.py", line 36, in send
return await self.inner_send(method, params, False)
File "/home/ubuntu/.local/lib/python3.8/site-packages/playwright/_impl/_connection.py", line 47, in inner_send
result = await callback.future
playwright._impl._api_types.Error: Protocol error (Playwright.enable): Browser closed.
==================== Browser output: ====================
<launching> /home/ubuntu/.cache/ms-playwright/webkit-1446/pw_run.sh --inspector-pipe --headless --no-startup-window
<launched> pid=22644
[pid=22644][err] /home/ubuntu/.cache/ms-playwright/webkit-1446/minibrowser-wpe/bin/MiniBrowser: error while loading shared libraries: libatk-1.0.so.0: cannot open shared object file: No such file or directory
=========================== logs ===========================
<launching> /home/ubuntu/.cache/ms-playwright/webkit-1446/pw_run.sh --inspector-pipe --headless --no-startup-window
<launched> pid=22644
[pid=22644][err] /home/ubuntu/.cache/ms-playwright/webkit-1446/minibrowser-wpe/bin/MiniBrowser: error while loading shared libraries: libatk-1.0.so.0: cannot open shared object file: No such file or directory
============================================================
Note: use DEBUG=pw:api environment variable to capture Playwright logs.
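The log points at a missing system library (`libatk-1.0.so.0`) rather than a Playwright bug; on Ubuntu, `sudo python3 -m playwright install-deps` is the usual way to install WebKit's system dependencies. As a small diagnostic sketch (library names taken from the error log, not an official check), you can verify whether the library is resolvable:

```python
# Diagnostic sketch: check whether shared libraries WebKit needs are
# resolvable on this machine. "atk-1.0" is the library the log reports
# as missing; "c" (libc) is a sanity check that find_library works here.
from ctypes.util import find_library

for name in ("atk-1.0", "c"):
    path = find_library(name)
    print(f"{name}: {'found -> ' + str(path) if path else 'MISSING'}")
```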
| closed | 2021-05-10T22:52:41Z | 2021-05-14T17:05:51Z | https://github.com/davidteather/TikTok-Api/issues/585 | [
"bug"
] | DevJChen | 3 |
mljar/mercury | data-visualization | 1 | Clear all tasks on refresh in watch mode | closed | 2022-01-05T11:20:50Z | 2022-01-07T15:32:19Z | https://github.com/mljar/mercury/issues/1 | [] | pplonski | 0 | |
PrefectHQ/prefect | automation | 17,281 | Duplicate flow runs scheduled after upgrading server to 3.2.2 | ### Bug summary
After upgrading our self-hosted prefect server to 3.2.2, we found that all already-scheduled (cron) flow runs had been duplicated. As many of our flows require resource locks, we noticed this bug when flows started failing to acquire those locks: the duplicated flows created a race condition.
It appears that the cause of this issue was an update to a flow run's idempotency key, [ref](https://github.com/PrefectHQ/prefect/pull/17123/files#diff-b800cb4cdcfc999e05a4124de591bcc4de6967795b707592853d67fa3bfbf06eR737), which (I believe) caused the system to not detect already scheduled flow runs, resulting in the duplicates, see example below of a scheduled flow run with the same start time and different idempotency keys.

### Steps to Reproduce
1. With a prefect server running version < 3.2.2 create
1. a process work pool
```
prefect worker start -p my-pool
```
2. a source deployment with
```
# flow.py
from prefect import flow
from pathlib import Path


@flow(log_prints=True)
def my_flow(name: str = "World"):
    print(f"Hello {name}!")
    print(str(Path(__file__).parent))  # dynamic path


if __name__ == "__main__":
    my_flow.from_source(
        source=str(Path(__file__).parent),  # code stored in local directory
        entrypoint="flow.py:my_flow",
    ).deploy(
        name="test dep",
        work_pool_name="my-pool",
        cron="0 * * * *",
    )
```
```
python flow.py
```
2. Check to see that there's a few scheduled runs for the next few hours
3. Stop server and work pool
4. Upgrade prefect, `pip install prefect==3.2.2`
5. Start server and work pool again
6. Check to see duplicate scheduled runs for the next few hours
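The suspected mechanism can be sketched in miniature (this is an illustration of insert-if-absent keyed on an idempotency key, not Prefect's actual scheduler code): if the key format changes between versions, an already-scheduled slot looks "new" again and a second run is created for the same start time.

```python
# Minimal sketch of how a changed idempotency-key format duplicates
# already-scheduled runs: the scheduler only inserts a run if its key
# is unseen, so the same slot under a new key format passes the check.
from datetime import datetime

def schedule(runs: dict, key: str, start: datetime) -> None:
    # Insert-if-absent, keyed on the idempotency key.
    runs.setdefault(key, start)

runs: dict = {}
slot = datetime(2025, 2, 26, 10, 0)

# Pre-upgrade server schedules the 10:00 slot under the old key format.
schedule(runs, f"old-deployment-1-{slot.isoformat()}", slot)
# Post-upgrade, the same slot is keyed differently, so it is inserted again.
schedule(runs, f"new-deployment-1-{slot.isoformat()}", slot)

print(len(runs))  # 2 runs now target the same start time
```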
### Version info
```Text
Version: 3.2.0
API version: 0.8.4
Python version: 3.11.11
Git commit: c8986ede
Built: Fri, Feb 7, 2025 6:02 PM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: ephemeral
Pydantic version: 2.10.6
Server:
Database: postgresql
Integrations:
prefect-gcp: 0.6.2
```
### Additional context
Not understanding the root cause when we first saw the bug, we downgraded our prefect server, so we are looking for guidance on how to perform the upgrade without running into this issue again. | closed | 2025-02-25T20:55:15Z | 2025-02-26T16:04:04Z | https://github.com/PrefectHQ/prefect/issues/17281 | [
"bug"
] | Ultramann | 2 |
huggingface/diffusers | pytorch | 10,412 | SD3.5-Large DreamBooth Training - Over 80GB VRAM Usage | ### Describe the bug
⚠️ We are running out of memory on step 0
❕It does work without '--train_text_encoder'. It seems that there might be a memory leak or issue with training the text encoder with the current script / model.
❓Does it make sense that the model uses over 80GB of VRAM?
❓Do you have any recommendations on decreasing VRAM usage
Other than:
- 8-bit Adam
- fp16 mixed precision
- xformers (which doesn't work with SD3.5)
💡Idea:
After successfully training with the _Kohya-ss_ scripts ([relevant repo](https://github.com/kohya-ss/sd-scripts/tree/sd3)),
I have deduced that the issue might be with the _DreamBooth_ scripts here not applying 8-bit Adam properly: either the flag is ignored or there is a bug in the implementation itself. This is because the only single parameter change that had a massive effect on VRAM, and whose removal caused the surge, was the AdamW8bit optimizer; otherwise the Kohya-ss run used seemingly identical parameters.
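A back-of-the-envelope estimate supports this theory. Assuming roughly 8B trainable parameters for SD3.5-Large (an assumption, not a measured count), plain Adam keeps two fp32 moment tensors per parameter, while 8-bit Adam keeps two uint8 tensors:

```python
# Rough sketch (assumed sizes, not measured) of optimizer-state memory,
# showing why a broken 8-bit Adam path alone could push usage past 80 GB.
params = 8.0e9  # approximate trainable parameter count (assumption)

adamw_fp32_bytes_per_param = 8  # two fp32 moment tensors (m and v)
adamw_8bit_bytes_per_param = 2  # two uint8 moment tensors

print(f"AdamW fp32 states: {params * adamw_fp32_bytes_per_param / 2**30:.1f} GiB")
print(f"AdamW 8-bit states: {params * adamw_8bit_bytes_per_param / 2**30:.1f} GiB")
```

On top of weights, gradients, and activations, the ~45 GiB difference between the two optimizer-state footprints is consistent with 8-bit Adam being the one switch that decides whether the run fits in 80 GB.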
### Kohya-ss Parameters for reference 📝
```
# Models
pretrained_model_name_or_path = "/kohya_ss/models/sd3.5_large.safetensors"
# Captioning
cache_latents = true
caption_dropout_every_n_epochs = 0
caption_dropout_rate = 0
caption_extension = ".txt"
clip_skip = 1
keep_tokens = 0
# Text Encoder Training
use_t5xxl_cache_only = true
t5xxl_dtype = "fp16"
train_text_encoder = true
# Learning Rates
learning_rate = 5e-6
learning_rate_te1 = 1e-5
learning_rate_te2 = 1e-5
loss_type = "l2"
lr_scheduler = "cosine"
lr_scheduler_args = []
lr_scheduler_num_cycles = 1
lr_scheduler_power = 0.5
lr_warmup_steps = 0
optimizer_type = "AdamW8bit"
# Batch Sizes
text_encoder_batch_size = 1
train_batch_size = 1
epoch = 1
persistent_data_loader_workers = 0
max_data_loader_n_workers = 0
# Buckets, Noise & SNR
max_bucket_reso = 2048
min_bucket_reso = 256
bucket_no_upscale = true
bucket_reso_steps = 64
huber_c = 0.1
huber_schedule = "snr"
min_snr_gamma = 5
prior_loss_weight = 1
max_timestep = 1000
multires_noise_discount = 0.3
multires_noise_iterations = 0
noise_offset = 0
noise_offset_type = "Original"
adaptive_noise_scale = 0
# SD3 Logits
mode_scale = 1.29
weighting_scheme = "logit_normal"
logit_mean = 0
logit_std = 1
# VRAM Optimization
resolution = "512,512"
max_token_length = 75
max_train_steps = 800
mem_eff_attn = true
mixed_precision = "fp16"
full_fp16 = true
gradient_accumulation_steps = 1
gradient_checkpointing = true
xformers = true
dynamo_backend = "no"
# Sampling
sample_every_n_epochs = 50
sample_sampler = "euler"
# Model Saving
save_every_n_steps = 200
save_model_as = "diffusers"
save_precision = "fp16"
# General
output_name = "last"
log_with = "tensorboard"
```
### Reproduction
We are running the following command in _Jupyter Notebook_:
```
!accelerate launch train_dreambooth_sd3.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-3.5-large" \
  --output_dir="sd_outputs" \
  --instance_data_dir="ogo" \
  --instance_prompt="the face of ogo person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2 \
  --gradient_checkpointing \
  --checkpointing_steps=200 \
  --learning_rate=2e-6 \
  --text_encoder_lr=1e-6 \
  --train_text_encoder \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=800 \
  --seed="0" \
  --use_8bit_adam \
  --mixed_precision="fp16"
```
### Logs
```shell
2024-12-02 12:36:35.615846: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733142995.629356 226993 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733142995.633681 226993 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
12/02/2024 12:36:39 - INFO - __main__ - Distributed environment: DistributedType.NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: no
You set add_prefix_space. The tokenizer needs to be converted from the slow tokenizers
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'base_shift', 'max_image_seq_len', 'max_shift', 'base_image_seq_len', 'invert_sigmas', 'use_dynamic_shifting'} was not found in config. Values will be initialized to default values.
Downloading shards: 100%|███████████████████████| 2/2 [00:00<00:00, 3450.68it/s]
Loading checkpoint shards: 100%|██████████████████| 2/2 [00:03<00:00, 1.73s/it]
Fetching 2 files: 100%|█████████████████████████| 2/2 [00:00<00:00, 7476.48it/s]
{'dual_attention_layers'} was not found in config. Values will be initialized to default values.
12/02/2024 12:37:04 - INFO - __main__ - ***** Running training *****
12/02/2024 12:37:04 - INFO - __main__ - Num examples = 1
12/02/2024 12:37:04 - INFO - __main__ - Num batches each epoch = 1
12/02/2024 12:37:04 - INFO - __main__ - Num Epochs = 800
12/02/2024 12:37:04 - INFO - __main__ - Instantaneous batch size per device = 1
12/02/2024 12:37:04 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2
12/02/2024 12:37:04 - INFO - __main__ - Gradient Accumulation steps = 2
12/02/2024 12:37:04 - INFO - __main__ - Total optimization steps = 800
Steps: 0%| | 0/800 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/azureuser/Picturethis/Dima/train_dreambooth_sd3.py", line 1811, in <module>
main(args)
File "/home/azureuser/Picturethis/Dima/train_dreambooth_sd3.py", line 1666, in main
optimizer.step()
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/accelerate/optimizer.py", line 171, in step
self.optimizer.step(closure)
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/torch/optim/lr_scheduler.py", line 137, in wrapper
return func.__get__(opt, opt.__class__)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/torch/optim/optimizer.py", line 487, in wrapper
out = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/bitsandbytes/optim/optimizer.py", line 288, in step
self.init_state(group, p, gindex, pindex)
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/bitsandbytes/optim/optimizer.py", line 474, in init_state
state["state2"] = self.get_state_buffer(p, dtype=torch.uint8)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/bitsandbytes/optim/optimizer.py", line 328, in get_state_buffer
return torch.zeros_like(p, dtype=dtype, device=p.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 79.15 GiB of which 10.62 MiB is free. Process 68964 has 530.00 MiB memory in use. Including non-PyTorch memory, this process has 78.45 GiB memory in use. Of the allocated memory 75.60 GiB is allocated by PyTorch, and 2.35 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Steps: 0%| | 0/800 [00:02<?, ?it/s]
Traceback (most recent call last):
File "/home/azureuser/mambaforge/envs/picturevenv/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/accelerate/commands/launch.py", line 1168, in launch_command
simple_launcher(args)
File "/home/azureuser/mambaforge/envs/picturevenv/lib/python3.11/site-packages/accelerate/commands/launch.py", line 763, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/azureuser/mambaforge/envs/picturevenv/bin/python3.11', 'train_dreambooth_sd3.py', '--pretrained_model_name_or_path=stabilityai/stable-diffusion-3.5-large', '--output_dir=sd_outputs', '--instance_data_dir=ogo', '--instance_prompt=the face of ogo person', '--resolution=512', '--train_batch_size=1', '--gradient_accumulation_steps=2', '--gradient_checkpointing', '--checkpointing_steps=200', '--learning_rate=2e-6', '--text_encoder_lr=1e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--max_train_steps=800', '--seed=0', '--use_8bit_adam']' returned non-zero exit status 1.
```
### System Info
### System 🖥️
A100 Azure Remote Server.
Running the code from _Jupyter Notebook_.
### Libraries 📚
```
torch==2.5.1+cu124
torchvision==0.20.0+cu124
xformers==0.0.28.post2
bitsandbytes==0.44.0
tensorboard==2.15.2
tensorflow==2.15.0.post1
onnxruntime-gpu==1.19.2
accelerate==0.33.0
aiofiles==23.2.1
altair==4.2.2
dadaptation==3.2
diffusers[torch]==0.25.0
easygui==0.98.3
einops==0.7.0
fairscale==0.4.13
ftfy==6.1.1
gradio==5.4.0
huggingface-hub==0.25.2
imagesize==1.4.1
invisible-watermark==0.2.0
lion-pytorch==0.0.6
lycoris_lora==3.1.0
omegaconf==2.3.0
onnx==1.16.1
prodigyopt==1.0
protobuf==3.20.3
open-clip-torch==2.20.0
opencv-python==4.10.0.84
prodigyopt==1.0
pytorch-lightning==1.9.0
rich>=13.7.1
safetensors==0.4.4
schedulefree==1.2.7
scipy==1.11.4
# for T5XXL tokenizer (SD3/FLUX)
sentencepiece==0.2.0
timm==0.6.12
tk==0.1.0
toml==0.10.2
transformers==4.44.2
voluptuous==0.13.1
wandb==0.18.0
```
### Who can help?
_No response_ | open | 2024-12-30T15:01:12Z | 2025-01-29T15:02:52Z | https://github.com/huggingface/diffusers/issues/10412 | [
"bug",
"stale"
] | deman311 | 2 |
gradio-app/gradio | deep-learning | 10,795 | [NPM PACKAGE] unable to import Client. ERR_PACKAGE_PATH_NOT_EXPORTED | ### Describe the bug
Using NestJS and wanting to call the Gradio client, I get the following error:
```bash
[7:20:35 PM] File change detected. Starting incremental compilation...
[7:20:35 PM] Found 0 errors. Watching for file changes.
node:internal/modules/cjs/loader:553
throw e;
^
Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: Package subpath './dist/index.js' is not defined by "exports" in /Users/himanshu/codes/aurax/aurax_monorepo/apps/backend/node_modules/@gradio/client/package.json
at __node_internal_captureLargerStackTrace (node:internal/errors:497:5)
at new NodeError (node:internal/errors:406:5)
at exportsNotFound (node:internal/modules/esm/resolve:268:10)
at packageExportsResolve (node:internal/modules/esm/resolve:598:9)
at resolveExports (node:internal/modules/cjs/loader:547:36)
at Module._findPath (node:internal/modules/cjs/loader:621:31)
at Module._resolveFilename (node:internal/modules/cjs/loader:1034:27)
at Module._load (node:internal/modules/cjs/loader:901:27)
at Module.require (node:internal/modules/cjs/loader:1115:19)
at require (node:internal/modules/helpers:130:18)
at Object.<anonymous> (/Users/himanshu/codes/aurax/aurax_monorepo/apps/backend/src/ai_service/ai_service.service.ts:10:1)
at Module._compile (node:internal/modules/cjs/loader:1241:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1295:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module._load (node:internal/modules/cjs/loader:938:12)
at Module.require (node:internal/modules/cjs/loader:1115:19)
at require (node:internal/modules/helpers:130:18)
at Object.<anonymous> (/Users/himanshu/codes/aurax/aurax_monorepo/apps/backend/src/ai_service/ai_service.controller.ts:17:1)
at Module._compile (node:internal/modules/cjs/loader:1241:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1295:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module._load (node:internal/modules/cjs/loader:938:12)
at Module.require (node:internal/modules/cjs/loader:1115:19)
at require (node:internal/modules/helpers:130:18)
at Object.<anonymous> (/Users/himanshu/codes/aurax/aurax_monorepo/apps/backend/src/ai_service/ai_service.module.ts:2:1)
at Module._compile (node:internal/modules/cjs/loader:1241:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1295:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module._load (node:internal/modules/cjs/loader:938:12)
at Module.require (node:internal/modules/cjs/loader:1115:19)
at require (node:internal/modules/helpers:130:18)
at Object.<anonymous> (/Users/himanshu/codes/aurax/aurax_monorepo/apps/backend/src/app.module.ts:14:1)
at Module._compile (node:internal/modules/cjs/loader:1241:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1295:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module._load (node:internal/modules/cjs/loader:938:12)
at Module.require (node:internal/modules/cjs/loader:1115:19)
at require (node:internal/modules/helpers:130:18)
at Object.<anonymous> (/Users/himanshu/codes/aurax/aurax_monorepo/apps/backend/src/main.ts:2:1)
at Module._compile (node:internal/modules/cjs/loader:1241:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1295:10)
at Module.load (node:internal/modules/cjs/loader:1091:32)
at Module._load (node:internal/modules/cjs/loader:938:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:83:12)
at node:internal/main/run_main_module:23:47 {
code: 'ERR_PACKAGE_PATH_NOT_EXPORTED'
}
Node.js v20.9.0
```
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
> [!IMPORTANT]
> Need to modify client as required
```typescript
import { Client } from '@gradio/client/dist/index.js';
// OR
import { Client } from '@gradio/client';
const client = new Client(INPAINTING_GRADIO_URL, {
auth: [INPAINTING_GRADIO_USERNAME, INPAINTING_GRADIO_PASSWORD],
});
const result = await client.predict('/infer', [
edit_images, // Input parameter 0: edit_images
prompt, // Input parameter 1: prompt
width, // Input parameter 2: width
height, // Input parameter 3: height
lora_model, // Input parameter 4: lora_model
strength, // Input parameter 5: strength
seed, // Input parameter 6: seed
randomize_seed, // Input parameter 7: randomize_seed
guidance_scale, // Input parameter 8: guidance_scale
inference_steps, // Input parameter 9: inference_steps
]);
```
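A note on the likely cause (inferred from the error message, not verified against this exact setup): recent `@gradio/client` releases ship as ESM-only with an `exports` map that does not expose the `./dist/index.js` subpath, while a default NestJS build compiles `import` statements to CommonJS `require()` calls, which Node then rejects. Two commonly suggested workarounds are loading the client via a dynamic `import('@gradio/client')` inside an async method, or switching the TypeScript build to `NodeNext` so the package's `exports` map is honored:

```json
{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext"
  }
}
```

This is a minimal `tsconfig.json` fragment; the exact options depend on the Nest project's existing configuration. In either case, import the bare specifier `@gradio/client`, never a `dist/...` path.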
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
No response
```
### Severity
Blocking usage of gradio | open | 2025-03-12T13:57:54Z | 2025-03-12T19:56:36Z | https://github.com/gradio-app/gradio/issues/10795 | [
"bug",
"svelte",
"API"
] | Himasnhu-AT | 0 |
coqui-ai/TTS | pytorch | 3,735 | [Bug] Error during installation on Mac | ### Describe the bug
Installation fails with the following error:
```
Error compiling Cython file:
------------------------------------------------------------
...
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
blas_functions.axpy = _axpy[double]
blas_functions.scal = _scal[double]
blas_functions.nrm2 = _nrm2[double]
^
------------------------------------------------------------
sklearn/svm/_liblinear.pyx:58:31: Cannot assign type 'double (int, double *, int) except * nogil' to 'nrm2_func' (alias of 'double (*)(int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Traceback (most recent call last):
File "/private/var/folders/ff/xbjbc9jn70l2s3sql3fc5pdw0000gq/T/pip-build-env-9pyvr3k2/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1345, in cythonize_one_helper
return cythonize_one(*m)
File "/private/var/folders/ff/xbjbc9jn70l2s3sql3fc5pdw0000gq/T/pip-build-env-9pyvr3k2/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: sklearn/svm/_liblinear.pyx
....
```
### To Reproduce
- Run `pip install TTS` command
### Expected behavior
- The library installs successfully
### Logs
```shell
pip logs
Collecting TTS
Downloading TTS-0.14.3.tar.gz (1.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 21.3 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting cython==0.29.28 (from TTS)
Using cached Cython-0.29.28-py2.py3-none-any.whl.metadata (2.8 kB)
Requirement already satisfied: scipy>=1.4.0 in ./.conda/lib/python3.8/site-packages (from TTS) (1.10.1)
Collecting torch>=1.7 (from TTS)
Downloading torch-2.3.0-cp38-none-macosx_11_0_arm64.whl.metadata (26 kB)
Collecting torchaudio (from TTS)
Downloading torchaudio-2.3.0-cp38-cp38-macosx_11_0_arm64.whl.metadata (6.4 kB)
Collecting soundfile (from TTS)
Downloading soundfile-0.12.1-py2.py3-none-macosx_11_0_arm64.whl.metadata (14 kB)
Collecting librosa==0.10.0.* (from TTS)
Downloading librosa-0.10.0.post2-py3-none-any.whl.metadata (8.3 kB)
Collecting inflect==5.6.0 (from TTS)
Downloading inflect-5.6.0-py3-none-any.whl.metadata (21 kB)
Collecting tqdm (from TTS)
Downloading tqdm-4.66.4-py3-none-any.whl.metadata (57 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.6/57.6 kB 6.8 MB/s eta 0:00:00
Collecting anyascii (from TTS)
Downloading anyascii-0.3.2-py3-none-any.whl.metadata (1.5 kB)
Collecting pyyaml (from TTS)
Downloading PyYAML-6.0.1.tar.gz (125 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.2/125.2 kB 13.0 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting fsspec>=2021.04.0 (from TTS)
Downloading fsspec-2024.3.1-py3-none-any.whl.metadata (6.8 kB)
Collecting aiohttp (from TTS)
Downloading aiohttp-3.9.5-cp38-cp38-macosx_11_0_arm64.whl.metadata (7.5 kB)
Collecting packaging (from TTS)
Using cached packaging-24.0-py3-none-any.whl.metadata (3.2 kB)
Collecting flask (from TTS)
Downloading flask-3.0.3-py3-none-any.whl.metadata (3.2 kB)
Collecting pysbd (from TTS)
Downloading pysbd-0.3.4-py3-none-any.whl.metadata (6.1 kB)
Collecting umap-learn==0.5.1 (from TTS)
Downloading umap-learn-0.5.1.tar.gz (80 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 80.9/80.9 kB 8.4 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting pandas (from TTS)
Downloading pandas-2.0.3-cp38-cp38-macosx_11_0_arm64.whl.metadata (18 kB)
Collecting matplotlib (from TTS)
Downloading matplotlib-3.7.5-cp38-cp38-macosx_11_0_arm64.whl.metadata (5.7 kB)
Collecting trainer==0.0.20 (from TTS)
Downloading trainer-0.0.20-py3-none-any.whl.metadata (5.6 kB)
Collecting coqpit>=0.0.16 (from TTS)
Downloading coqpit-0.0.17-py3-none-any.whl.metadata (11 kB)
Collecting jieba (from TTS)
Downloading jieba-0.42.1.tar.gz (19.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 19.2/19.2 MB 54.7 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting pypinyin (from TTS)
Downloading pypinyin-0.51.0-py2.py3-none-any.whl.metadata (12 kB)
Collecting mecab-python3==1.0.5 (from TTS)
Downloading mecab-python3-1.0.5.tar.gz (77 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.6/77.6 kB 10.1 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting unidic-lite==1.0.8 (from TTS)
Downloading unidic-lite-1.0.8.tar.gz (47.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 47.4/47.4 MB 48.0 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting gruut==2.2.3 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut-2.2.3.tar.gz (73 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 73.5/73.5 kB 9.5 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting jamo (from TTS)
Downloading jamo-0.4.1-py3-none-any.whl.metadata (2.3 kB)
Collecting nltk (from TTS)
Downloading nltk-3.8.1-py3-none-any.whl.metadata (2.8 kB)
Collecting g2pkk>=0.1.1 (from TTS)
Downloading g2pkk-0.1.2-py3-none-any.whl.metadata (2.0 kB)
Collecting bangla==0.0.2 (from TTS)
Downloading bangla-0.0.2-py2.py3-none-any.whl.metadata (4.5 kB)
Collecting bnnumerizer (from TTS)
Downloading bnnumerizer-0.0.2.tar.gz (4.7 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting bnunicodenormalizer==0.1.1 (from TTS)
Downloading bnunicodenormalizer-0.1.1.tar.gz (38 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting k-diffusion (from TTS)
Downloading k_diffusion-0.1.1.post1-py3-none-any.whl.metadata (3.9 kB)
Collecting einops (from TTS)
Downloading einops-0.8.0-py3-none-any.whl.metadata (12 kB)
Collecting transformers (from TTS)
Downloading transformers-4.40.2-py3-none-any.whl.metadata (137 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 138.0/138.0 kB 15.4 MB/s eta 0:00:00
Collecting numpy==1.21.6 (from TTS)
Using cached numpy-1.21.6-cp38-cp38-macosx_11_0_arm64.whl.metadata (2.1 kB)
Collecting numba==0.55.1 (from TTS)
Downloading numba-0.55.1.tar.gz (2.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 54.1 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting Babel<3.0.0,>=2.8.0 (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading Babel-2.15.0-py3-none-any.whl.metadata (1.5 kB)
Collecting dateparser~=1.1.0 (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading dateparser-1.1.8-py2.py3-none-any.whl.metadata (27 kB)
Collecting gruut-ipa<1.0,>=0.12.0 (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut-ipa-0.13.0.tar.gz (101 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 101.6/101.6 kB 13.1 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting gruut_lang_en~=2.0.0 (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut_lang_en-2.0.0.tar.gz (15.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.2/15.2 MB 57.5 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting jsonlines~=1.2.0 (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading jsonlines-1.2.0-py2.py3-none-any.whl.metadata (1.3 kB)
Collecting networkx<3.0.0,>=2.5.0 (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading networkx-2.8.8-py3-none-any.whl.metadata (5.1 kB)
Collecting num2words<1.0.0,>=0.5.10 (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading num2words-0.5.13-py3-none-any.whl.metadata (12 kB)
Collecting python-crfsuite~=0.9.7 (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading python-crfsuite-0.9.10.tar.gz (478 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 478.3/478.3 kB 34.8 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting importlib_resources (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading importlib_resources-6.4.0-py3-none-any.whl.metadata (3.9 kB)
Collecting gruut_lang_es~=2.0.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut_lang_es-2.0.0.tar.gz (31.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 31.4/31.4 MB 51.3 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting gruut_lang_de~=2.0.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut_lang_de-2.0.0.tar.gz (18.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.1/18.1 MB 61.6 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting gruut_lang_fr~=2.0.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut_lang_fr-2.0.2.tar.gz (10.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.9/10.9 MB 40.0 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting audioread>=2.1.9 (from librosa==0.10.0.*->TTS)
Downloading audioread-3.0.1-py3-none-any.whl.metadata (8.4 kB)
Requirement already satisfied: scikit-learn>=0.20.0 in ./.conda/lib/python3.8/site-packages (from librosa==0.10.0.*->TTS) (1.3.2)
Requirement already satisfied: joblib>=0.14 in ./.conda/lib/python3.8/site-packages (from librosa==0.10.0.*->TTS) (1.4.2)
Collecting decorator>=4.3.0 (from librosa==0.10.0.*->TTS)
Downloading decorator-5.1.1-py3-none-any.whl.metadata (4.0 kB)
Collecting pooch<1.7,>=1.0 (from librosa==0.10.0.*->TTS)
Downloading pooch-1.6.0-py3-none-any.whl.metadata (10 kB)
Collecting soxr>=0.3.2 (from librosa==0.10.0.*->TTS)
Downloading soxr-0.3.7-cp38-cp38-macosx_11_0_arm64.whl.metadata (5.5 kB)
Collecting typing-extensions>=4.1.1 (from librosa==0.10.0.*->TTS)
Using cached typing_extensions-4.11.0-py3-none-any.whl.metadata (3.0 kB)
Collecting lazy-loader>=0.1 (from librosa==0.10.0.*->TTS)
Downloading lazy_loader-0.4-py3-none-any.whl.metadata (7.6 kB)
Collecting msgpack>=1.0 (from librosa==0.10.0.*->TTS)
Downloading msgpack-1.0.8-cp38-cp38-macosx_11_0_arm64.whl.metadata (9.1 kB)
Collecting llvmlite<0.39,>=0.38.0rc1 (from numba==0.55.1->TTS)
Downloading llvmlite-0.38.1-cp38-cp38-macosx_11_0_arm64.whl.metadata (4.7 kB)
Requirement already satisfied: setuptools in ./.conda/lib/python3.8/site-packages (from numba==0.55.1->TTS) (69.5.1)
Collecting psutil (from trainer==0.0.20->TTS)
Downloading psutil-5.9.8-cp38-abi3-macosx_11_0_arm64.whl.metadata (21 kB)
Collecting tensorboardX (from trainer==0.0.20->TTS)
Downloading tensorboardX-2.6.2.2-py2.py3-none-any.whl.metadata (5.8 kB)
Collecting protobuf<3.20,>=3.9.2 (from trainer==0.0.20->TTS)
Downloading protobuf-3.19.6-py2.py3-none-any.whl.metadata (828 bytes)
Collecting pynndescent>=0.5 (from umap-learn==0.5.1->TTS)
Downloading pynndescent-0.5.12-py3-none-any.whl.metadata (6.8 kB)
Collecting cffi>=1.0 (from soundfile->TTS)
Downloading cffi-1.16.0.tar.gz (512 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 512.9/512.9 kB 35.5 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting filelock (from torch>=1.7->TTS)
Downloading filelock-3.14.0-py3-none-any.whl.metadata (2.8 kB)
Collecting sympy (from torch>=1.7->TTS)
Downloading sympy-1.12-py3-none-any.whl.metadata (12 kB)
Collecting jinja2 (from torch>=1.7->TTS)
Downloading jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting aiosignal>=1.1.2 (from aiohttp->TTS)
Downloading aiosignal-1.3.1-py3-none-any.whl.metadata (4.0 kB)
Collecting attrs>=17.3.0 (from aiohttp->TTS)
Downloading attrs-23.2.0-py3-none-any.whl.metadata (9.5 kB)
Collecting frozenlist>=1.1.1 (from aiohttp->TTS)
Downloading frozenlist-1.4.1-cp38-cp38-macosx_11_0_arm64.whl.metadata (12 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp->TTS)
Downloading multidict-6.0.5-cp38-cp38-macosx_11_0_arm64.whl.metadata (4.2 kB)
Collecting yarl<2.0,>=1.0 (from aiohttp->TTS)
Downloading yarl-1.9.4-cp38-cp38-macosx_11_0_arm64.whl.metadata (31 kB)
Collecting async-timeout<5.0,>=4.0 (from aiohttp->TTS)
Downloading async_timeout-4.0.3-py3-none-any.whl.metadata (4.2 kB)
Collecting Werkzeug>=3.0.0 (from flask->TTS)
Downloading werkzeug-3.0.3-py3-none-any.whl.metadata (3.7 kB)
Collecting itsdangerous>=2.1.2 (from flask->TTS)
Downloading itsdangerous-2.2.0-py3-none-any.whl.metadata (1.9 kB)
Collecting click>=8.1.3 (from flask->TTS)
Downloading click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting blinker>=1.6.2 (from flask->TTS)
Downloading blinker-1.8.2-py3-none-any.whl.metadata (1.6 kB)
Collecting importlib-metadata>=3.6.0 (from flask->TTS)
Downloading importlib_metadata-7.1.0-py3-none-any.whl.metadata (4.7 kB)
Collecting accelerate (from k-diffusion->TTS)
Downloading accelerate-0.30.1-py3-none-any.whl.metadata (18 kB)
Collecting clean-fid (from k-diffusion->TTS)
Downloading clean_fid-0.1.35-py3-none-any.whl.metadata (36 kB)
Collecting clip-anytorch (from k-diffusion->TTS)
Downloading clip_anytorch-2.6.0-py3-none-any.whl.metadata (8.4 kB)
Collecting dctorch (from k-diffusion->TTS)
Downloading dctorch-0.1.2-py3-none-any.whl.metadata (607 bytes)
Collecting jsonmerge (from k-diffusion->TTS)
Downloading jsonmerge-1.9.2-py3-none-any.whl.metadata (21 kB)
Collecting kornia (from k-diffusion->TTS)
Downloading kornia-0.7.2-py2.py3-none-any.whl.metadata (12 kB)
Collecting Pillow (from k-diffusion->TTS)
Downloading pillow-10.3.0-cp38-cp38-macosx_11_0_arm64.whl.metadata (9.2 kB)
Collecting safetensors (from k-diffusion->TTS)
Downloading safetensors-0.4.3-cp38-cp38-macosx_11_0_arm64.whl.metadata (3.8 kB)
Collecting scikit-image (from k-diffusion->TTS)
Downloading scikit_image-0.21.0-cp38-cp38-macosx_12_0_arm64.whl.metadata (14 kB)
Collecting torchdiffeq (from k-diffusion->TTS)
Downloading torchdiffeq-0.2.3-py3-none-any.whl.metadata (488 bytes)
Collecting torchsde (from k-diffusion->TTS)
Downloading torchsde-0.2.6-py3-none-any.whl.metadata (5.3 kB)
Collecting torchvision (from k-diffusion->TTS)
Downloading torchvision-0.18.0-cp38-cp38-macosx_11_0_arm64.whl.metadata (6.6 kB)
Collecting wandb (from k-diffusion->TTS)
Downloading wandb-0.17.0-py3-none-macosx_11_0_arm64.whl.metadata (10 kB)
Collecting contourpy>=1.0.1 (from matplotlib->TTS)
Downloading contourpy-1.1.1-cp38-cp38-macosx_11_0_arm64.whl.metadata (5.9 kB)
Collecting cycler>=0.10 (from matplotlib->TTS)
Downloading cycler-0.12.1-py3-none-any.whl.metadata (3.8 kB)
Collecting fonttools>=4.22.0 (from matplotlib->TTS)
Downloading fonttools-4.51.0-cp38-cp38-macosx_10_9_universal2.whl.metadata (159 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 159.5/159.5 kB 11.7 MB/s eta 0:00:00
Collecting kiwisolver>=1.0.1 (from matplotlib->TTS)
Downloading kiwisolver-1.4.5-cp38-cp38-macosx_11_0_arm64.whl.metadata (6.4 kB)
Collecting pyparsing>=2.3.1 (from matplotlib->TTS)
Downloading pyparsing-3.1.2-py3-none-any.whl.metadata (5.1 kB)
Requirement already satisfied: python-dateutil>=2.7 in ./.conda/lib/python3.8/site-packages (from matplotlib->TTS) (2.9.0.post0)
Collecting regex>=2021.8.3 (from nltk->TTS)
Downloading regex-2024.5.10-cp38-cp38-macosx_11_0_arm64.whl.metadata (40 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.9/40.9 kB 3.8 MB/s eta 0:00:00
Collecting pytz>=2020.1 (from pandas->TTS)
Downloading pytz-2024.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.1 (from pandas->TTS)
Downloading tzdata-2024.1-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting huggingface-hub<1.0,>=0.19.3 (from transformers->TTS)
Downloading huggingface_hub-0.23.0-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: requests in ./.conda/lib/python3.8/site-packages (from transformers->TTS) (2.31.0)
Collecting tokenizers<0.20,>=0.19 (from transformers->TTS)
Downloading tokenizers-0.19.1-cp38-cp38-macosx_11_0_arm64.whl.metadata (6.7 kB)
Collecting pycparser (from cffi>=1.0->soundfile->TTS)
Downloading pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Collecting tzlocal (from dateparser~=1.1.0->gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading tzlocal-5.2-py3-none-any.whl.metadata (7.8 kB)
Collecting zipp>=0.5 (from importlib-metadata>=3.6.0->flask->TTS)
Downloading zipp-3.18.1-py3-none-any.whl.metadata (3.5 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch>=1.7->TTS)
Downloading MarkupSafe-2.1.5-cp38-cp38-macosx_10_9_universal2.whl.metadata (3.0 kB)
Requirement already satisfied: six in ./.conda/lib/python3.8/site-packages (from jsonlines~=1.2.0->gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (1.16.0)
Collecting docopt>=0.6.2 (from num2words<1.0.0,>=0.5.10->gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS)
Downloading docopt-0.6.2.tar.gz (25 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting appdirs>=1.3.0 (from pooch<1.7,>=1.0->librosa==0.10.0.*->TTS)
Downloading appdirs-1.4.4-py2.py3-none-any.whl.metadata (9.0 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in ./.conda/lib/python3.8/site-packages (from requests->transformers->TTS) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in ./.conda/lib/python3.8/site-packages (from requests->transformers->TTS) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./.conda/lib/python3.8/site-packages (from requests->transformers->TTS) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in ./.conda/lib/python3.8/site-packages (from requests->transformers->TTS) (2024.2.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in ./.conda/lib/python3.8/site-packages (from scikit-learn>=0.20.0->librosa==0.10.0.*->TTS) (3.5.0)
Collecting ftfy (from clip-anytorch->k-diffusion->TTS)
Downloading ftfy-6.2.0-py3-none-any.whl.metadata (7.3 kB)
INFO: pip is looking at multiple versions of dctorch to determine which version is compatible with other requirements. This could take a while.
Collecting dctorch (from k-diffusion->TTS)
Downloading dctorch-0.1.1-py3-none-any.whl.metadata (607 bytes)
Downloading dctorch-0.1.0-py3-none-any.whl.metadata (558 bytes)
Collecting clean-fid (from k-diffusion->TTS)
Downloading clean_fid-0.1.34-py3-none-any.whl.metadata (36 kB)
Collecting requests (from transformers->TTS)
Downloading requests-2.25.1-py2.py3-none-any.whl.metadata (4.2 kB)
Collecting chardet<5,>=3.0.2 (from requests->transformers->TTS)
Downloading chardet-4.0.0-py2.py3-none-any.whl.metadata (3.5 kB)
Collecting idna>=2.0 (from yarl<2.0,>=1.0->aiohttp->TTS)
Downloading idna-2.10-py2.py3-none-any.whl.metadata (9.1 kB)
Collecting urllib3<1.27,>=1.21.1 (from requests->transformers->TTS)
Downloading urllib3-1.26.18-py2.py3-none-any.whl.metadata (48 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.9/48.9 kB 5.4 MB/s eta 0:00:00
Collecting clean-fid (from k-diffusion->TTS)
Downloading clean_fid-0.1.33-py3-none-any.whl.metadata (36 kB)
INFO: pip is still looking at multiple versions of dctorch to determine which version is compatible with other requirements. This could take a while.
Downloading clean_fid-0.1.32-py3-none-any.whl.metadata (36 kB)
Downloading clean_fid-0.1.31-py3-none-any.whl.metadata (36 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Downloading clean_fid-0.1.30-py3-none-any.whl.metadata (36 kB)
Downloading clean_fid-0.1.29-py3-none-any.whl.metadata (36 kB)
Downloading clean_fid-0.1.28-py3-none-any.whl.metadata (35 kB)
Downloading clean_fid-0.1.26-py3-none-any.whl.metadata (35 kB)
Downloading clean_fid-0.1.25-py3-none-any.whl.metadata (35 kB)
Downloading clean_fid-0.1.24-py3-none-any.whl.metadata (35 kB)
Downloading clean_fid-0.1.23-py3-none-any.whl.metadata (35 kB)
Downloading clean_fid-0.1.22-py3-none-any.whl.metadata (35 kB)
Downloading clean_fid-0.1.21-py3-none-any.whl.metadata (36 kB)
Downloading clean_fid-0.1.19-py3-none-any.whl.metadata (36 kB)
Downloading clean_fid-0.1.18-py3-none-any.whl.metadata (36 kB)
Downloading clean_fid-0.1.17-py3-none-any.whl.metadata (36 kB)
Downloading clean_fid-0.1.16-py3-none-any.whl.metadata (36 kB)
Downloading clean_fid-0.1.15-py3-none-any.whl.metadata (35 kB)
Downloading clean_fid-0.1.14-py3-none-any.whl.metadata (35 kB)
Downloading clean_fid-0.1.13-py3-none-any.whl.metadata (27 kB)
Downloading clean_fid-0.1.12-py3-none-any.whl.metadata (22 kB)
Downloading clean_fid-0.1.11-py3-none-any.whl.metadata (22 kB)
Downloading clean_fid-0.1.10-py3-none-any.whl.metadata (10 kB)
Downloading clean_fid-0.1.9-py3-none-any.whl.metadata (9.5 kB)
Downloading clean_fid-0.1.8-py3-none-any.whl.metadata (9.5 kB)
Downloading clean_fid-0.1.6-py3-none-any.whl.metadata (8.5 kB)
Collecting accelerate (from k-diffusion->TTS)
Downloading accelerate-0.30.0-py3-none-any.whl.metadata (19 kB)
Downloading accelerate-0.29.3-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.29.2-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.29.1-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.29.0-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.28.0-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.27.2-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.27.1-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.27.0-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.26.1-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.26.0-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.25.0-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.24.1-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.24.0-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.23.0-py3-none-any.whl.metadata (18 kB)
Downloading accelerate-0.22.0-py3-none-any.whl.metadata (17 kB)
Downloading accelerate-0.21.0-py3-none-any.whl.metadata (17 kB)
Downloading accelerate-0.20.3-py3-none-any.whl.metadata (17 kB)
Downloading accelerate-0.20.2-py3-none-any.whl.metadata (17 kB)
Downloading accelerate-0.20.1-py3-none-any.whl.metadata (17 kB)
Downloading accelerate-0.20.0-py3-none-any.whl.metadata (17 kB)
Downloading accelerate-0.19.0-py3-none-any.whl.metadata (16 kB)
Downloading accelerate-0.18.0-py3-none-any.whl.metadata (16 kB)
Downloading accelerate-0.17.1-py3-none-any.whl.metadata (16 kB)
Downloading accelerate-0.17.0-py3-none-any.whl.metadata (16 kB)
Downloading accelerate-0.16.0-py3-none-any.whl.metadata (15 kB)
Downloading accelerate-0.15.0-py3-none-any.whl.metadata (15 kB)
Downloading accelerate-0.14.0-py3-none-any.whl.metadata (15 kB)
Downloading accelerate-0.13.2-py3-none-any.whl.metadata (15 kB)
Downloading accelerate-0.13.1-py3-none-any.whl.metadata (15 kB)
Downloading accelerate-0.13.0-py3-none-any.whl.metadata (15 kB)
Downloading accelerate-0.12.0-py3-none-any.whl.metadata (15 kB)
Downloading accelerate-0.11.0-py3-none-any.whl.metadata (14 kB)
Downloading accelerate-0.10.0-py3-none-any.whl.metadata (14 kB)
Downloading accelerate-0.9.0-py3-none-any.whl.metadata (13 kB)
Downloading accelerate-0.8.0-py3-none-any.whl.metadata (13 kB)
Downloading accelerate-0.7.1-py3-none-any.whl.metadata (13 kB)
Downloading accelerate-0.7.0-py3-none-any.whl.metadata (13 kB)
Downloading accelerate-0.6.2-py3-none-any.whl.metadata (13 kB)
Downloading accelerate-0.6.1-py3-none-any.whl.metadata (13 kB)
Downloading accelerate-0.6.0-py3-none-any.whl.metadata (13 kB)
Downloading accelerate-0.5.1-py3-none-any.whl.metadata (11 kB)
Downloading accelerate-0.5.0-py3-none-any.whl.metadata (11 kB)
Downloading accelerate-0.4.0-py3-none-any.whl.metadata (11 kB)
Collecting soxr>=0.3.2 (from librosa==0.10.0.*->TTS)
Downloading soxr-0.3.6-cp38-cp38-macosx_11_0_arm64.whl.metadata (5.4 kB)
Downloading soxr-0.3.5-cp38-cp38-macosx_11_0_arm64.whl.metadata (5.4 kB)
Downloading soxr-0.3.4-cp38-cp38-macosx_11_0_arm64.whl.metadata (5.4 kB)
Downloading soxr-0.3.3-cp38-cp38-macosx_11_0_arm64.whl.metadata (5.0 kB)
Downloading soxr-0.3.2-cp38-cp38-macosx_11_0_arm64.whl.metadata (5.0 kB)
Collecting scikit-learn>=0.20.0 (from librosa==0.10.0.*->TTS)
Downloading scikit_learn-1.3.2-cp38-cp38-macosx_12_0_arm64.whl.metadata (11 kB)
Downloading scikit_learn-1.3.1-cp38-cp38-macosx_12_0_arm64.whl.metadata (11 kB)
Downloading scikit_learn-1.3.0-cp38-cp38-macosx_12_0_arm64.whl.metadata (11 kB)
Downloading scikit_learn-1.2.2-cp38-cp38-macosx_12_0_arm64.whl.metadata (11 kB)
Downloading scikit_learn-1.2.1-cp38-cp38-macosx_12_0_arm64.whl.metadata (11 kB)
Downloading scikit_learn-1.2.0-cp38-cp38-macosx_12_0_arm64.whl.metadata (11 kB)
Downloading scikit_learn-1.1.3-cp38-cp38-macosx_12_0_arm64.whl.metadata (10 kB)
Downloading scikit_learn-1.1.2-cp38-cp38-macosx_12_0_arm64.whl.metadata (10 kB)
Downloading scikit_learn-1.1.1-cp38-cp38-macosx_12_0_arm64.whl.metadata (10 kB)
Downloading scikit_learn-1.1.0-cp38-cp38-macosx_12_0_arm64.whl.metadata (10 kB)
Downloading scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl.metadata (10 kB)
Downloading scikit-learn-1.0.1.tar.gz (6.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.6/6.6 MB 60.3 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
```
Error log
```
Error compiling Cython file:
------------------------------------------------------------
...
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
blas_functions.axpy = _axpy[double]
blas_functions.scal = _scal[double]
blas_functions.nrm2 = _nrm2[double]
^
------------------------------------------------------------
sklearn/svm/_liblinear.pyx:58:31: Cannot assign type 'double (int, double *, int) except * nogil' to 'nrm2_func' (alias of 'double (*)(int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Traceback (most recent call last):
File "/private/var/folders/ff/xbjbc9jn70l2s3sql3fc5pdw0000gq/T/pip-build-env-9pyvr3k2/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1345, in cythonize_one_helper
return cythonize_one(*m)
File "/private/var/folders/ff/xbjbc9jn70l2s3sql3fc5pdw0000gq/T/pip-build-env-9pyvr3k2/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: sklearn/svm/_liblinear.pyx
Error compiling Cython file:
------------------------------------------------------------
...
if error_msg:
# for SVR: epsilon is called p in libsvm
error_repl = error_msg.decode('utf-8').replace("p < 0", "epsilon < 0")
raise ValueError(error_repl)
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm.pyx:194:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Error compiling Cython file:
------------------------------------------------------------
...
class_weight_label.data, class_weight.data)
model = set_model(¶m, <int> nSV.shape[0], SV.data, SV.shape,
support.data, support.shape, sv_coef.strides,
sv_coef.data, intercept.data, nSV.data, probA.data, probB.data)
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm.pyx:358:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Error compiling Cython file:
------------------------------------------------------------
...
sv_coef.data, intercept.data, nSV.data,
probA.data, probB.data)
cdef np.npy_intp n_class = get_nr(model)
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm.pyx:464:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Error compiling Cython file:
------------------------------------------------------------
...
n_class = 1
else:
n_class = get_nr(model)
n_class = n_class * (n_class - 1) // 2
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm.pyx:570:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Error compiling Cython file:
------------------------------------------------------------
...
if error_msg:
raise ValueError(error_msg)
cdef np.ndarray[np.float64_t, ndim=1, mode='c'] target
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm.pyx:714:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Traceback (most recent call last):
File "/private/var/folders/ff/xbjbc9jn70l2s3sql3fc5pdw0000gq/T/pip-build-env-9pyvr3k2/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1345, in cythonize_one_helper
return cythonize_one(*m)
File "/private/var/folders/ff/xbjbc9jn70l2s3sql3fc5pdw0000gq/T/pip-build-env-9pyvr3k2/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: sklearn/svm/_libsvm.pyx
Error compiling Cython file:
------------------------------------------------------------
...
if error_msg:
free_problem(problem)
free_param(param)
raise ValueError(error_msg)
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm_sparse.pyx:153:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Error compiling Cython file:
------------------------------------------------------------
...
sv_coef.data, intercept.data,
nSV.data, probA.data, probB.data)
#TODO: use check_model
dec_values = np.empty(T_indptr.shape[0]-1)
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm_sparse.pyx:284:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Error compiling Cython file:
------------------------------------------------------------
...
#TODO: use check_model
cdef np.npy_intp n_class = get_nr(model)
cdef int rv
dec_values = np.empty((T_indptr.shape[0]-1, n_class), dtype=np.float64)
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm_sparse.pyx:343:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Error compiling Cython file:
------------------------------------------------------------
...
n_class = get_nr(model)
n_class = n_class * (n_class - 1) // 2
dec_values = np.empty((T_indptr.shape[0] - 1, n_class), dtype=np.float64)
cdef BlasFunctions blas_functions
blas_functions.dot = _dot[double]
^
------------------------------------------------------------
sklearn/svm/_libsvm_sparse.pyx:412:29: Cannot assign type 'double (int, double *, int, double *, int) except * nogil' to 'dot_func' (alias of 'double (*)(int, double *, int, double *, int) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of the value being assigned.
Traceback (most recent call last):
File "/private/var/folders/ff/xbjbc9jn70l2s3sql3fc5pdw0000gq/T/pip-build-env-9pyvr3k2/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1345, in cythonize_one_helper
return cythonize_one(*m)
File "/private/var/folders/ff/xbjbc9jn70l2s3sql3fc5pdw0000gq/T/pip-build-env-9pyvr3k2/overlay/lib/python3.8/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: sklearn/svm/_libsvm_sparse.pyx
warning: sklearn/tree/_criterion.pxd:57:45: The keyword 'nogil' should appear at the end of the function signature line. Placing it before 'except' or 'noexcept' will be disallowed in a future version of Cython.
....
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### Environment
```shell
- 🐸TTS Version: None
- PyTorch Version: 2.3.0
- Python version: 3.8.19
- OS: Darwin 64bit arm
- CUDA/cuDNN version: None
- GPU models and configuration: None
- How you installed PyTorch (`conda`, `pip`, source): pip
```
### Additional context
_No response_ | closed | 2024-05-12T23:06:03Z | 2024-05-17T16:04:34Z | https://github.com/coqui-ai/TTS/issues/3735 | [
"bug"
] | mirodil-ml | 1 |
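Most of the log in the issue above is pip's dependency resolver backtracking (it tried dozens of `accelerate` and `clean-fid` releases looking for a compatible set before the scikit-learn build finally failed). One quick way to see how much backtracking happened is to count distinct versions per package in the `Downloading …` lines; a small stand-alone sketch, with sample lines excerpted from the log above:

```python
import re
from collections import defaultdict

# Count how many distinct versions pip tried per package, based on
# "Downloading <name>-<version>-..." lines from a pip install log.
def count_backtracking(log_lines):
    pattern = re.compile(r"Downloading ([A-Za-z0-9_.\-]+?)-(\d+(?:\.\d+)*)")
    versions = defaultdict(set)
    for line in log_lines:
        m = pattern.search(line)
        if m:
            versions[m.group(1)].add(m.group(2))
    return {name: len(v) for name, v in versions.items()}

sample = [
    "Downloading accelerate-0.30.0-py3-none-any.whl.metadata (19 kB)",
    "Downloading accelerate-0.29.3-py3-none-any.whl.metadata (18 kB)",
    "Downloading clean_fid-0.1.35-py3-none-any.whl.metadata (36 kB)",
]
counts = count_backtracking(sample)
print(counts)  # {'accelerate': 2, 'clean_fid': 1}
```

Run over the full log, a tally like this makes it obvious which packages (here `accelerate` and `clean-fid`) are driving the resolver's backtracking, which is where pinning constraints helps most.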
d2l-ai/d2l-en | data-science | 2,478 | Chapter 15.4. Pretraining word2vec: AttributeError: Can't pickle local object 'load_data_ptb.<locals>.PTBDataset' | AttributeError: Can't pickle local object 'load_data_ptb.<locals>.PTBDataset'

Can anyone help with this error? | open | 2023-04-30T20:01:53Z | 2023-07-12T03:00:55Z | https://github.com/d2l-ai/d2l-en/issues/2478 | [] | keyuchen21 | 2 |
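The d2l error above is Python's pickle refusing to serialize objects whose class is defined inside a function — exactly what happens when a DataLoader with `num_workers > 0` tries to ship such a dataset to worker processes. The usual fixes are moving the class to module level or setting `num_workers=0`. The failure itself reproduces with nothing but the standard library:

```python
import pickle

class TopLevelDataset:
    """Defined at module level: pickle can find it by name."""
    pass

def make_local_dataset():
    # Defined inside a function, like load_data_ptb.<locals>.PTBDataset.
    class LocalDataset:
        pass
    return LocalDataset()

# A module-level class round-trips fine.
roundtrip_ok = isinstance(pickle.loads(pickle.dumps(TopLevelDataset())), TopLevelDataset)

# A function-local class cannot be pickled: pickle stores classes by
# importable name, and '<locals>' names are not importable.
try:
    pickle.dumps(make_local_dataset())
    local_pickle_failed = False
except (AttributeError, pickle.PicklingError):
    local_pickle_failed = True

print(roundtrip_ok, local_pickle_failed)
```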
streamlit/streamlit | python | 10,383 | Make st.toast appear/bring it to the front (stack order) when used in st.dialog | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Not sure whether to file this as a feature request or a bug, but it seems that when st.toast is used inside st.dialog, the toast is rendered behind the dialog.
### Reproducible Code Example
```Python
import streamlit as st
@st.dialog(title="Streamlit Toast Notification")
def toast_notification():
activate_toast = st.button(label="send toast")
if activate_toast:
st.toast("Hi, I am in the background!")
toast_notification()
```
### Steps To Reproduce
1. Create dialog
2. Click button to show toast
### Expected Behavior
st.toast should be stacked in front of the dialog.
### Current Behavior
Stacks behind st.dialog.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.0
- Python version: 3.10
- Operating System: Windows
- Browser: Chrome
### Additional Information
_No response_ | open | 2025-02-12T20:19:16Z | 2025-02-13T12:10:54Z | https://github.com/streamlit/streamlit/issues/10383 | [
"type:enhancement",
"feature:st.toast",
"feature:st.dialog"
] | Socvest | 4 |
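One workaround people reach for while the stacking order is as reported above is to not fire the toast from inside the dialog at all: record a flag in `st.session_state` when the button is clicked, close the dialog (via `st.rerun()`), and emit the toast from the main script on the next run. The bookkeeping is sketched below with a plain dict standing in for `st.session_state` so it runs without Streamlit; treat the pattern as a hypothetical workaround, not verified Streamlit behavior.

```python
# Deferred-toast pattern: the dialog only records that a toast is wanted;
# the main script emits it after the dialog has closed.
session_state = {}   # stand-in for st.session_state
emitted = []         # stand-in for st.toast's visible output

def toast(message):
    emitted.append(message)

def dialog_body(button_clicked):
    """Runs 'inside the dialog': defer instead of toasting directly."""
    if button_clicked:
        session_state["pending_toast"] = "Hi, I am in the foreground now!"
        # in Streamlit you would call st.rerun() here to close the dialog

def main_script():
    """Runs at top level on every rerun, outside any dialog."""
    message = session_state.pop("pending_toast", None)
    if message is not None:
        toast(message)

dialog_body(button_clicked=True)   # rerun 1: inside the dialog, flag is set
main_script()                      # rerun 2: toast fires outside the dialog
print(emitted)  # ['Hi, I am in the foreground now!']
```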
pytest-dev/pytest-html | pytest | 466 | new feature request: adding a textfile as a clickable url to report | I'm trying to add a file via .extras as a clickable URL.
extras.html adds HTML and extras.text adds text directly into the report.
The log collected along with the test is far too big to show in the report, but I want it to be clickable, the way extras.image is.
Does anyone have an example or an idea of how to do this?
| open | 2021-08-20T14:23:53Z | 2022-01-14T19:01:17Z | https://github.com/pytest-dev/pytest-html/issues/466 | [] | fenchu | 1 |
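A common approach for the request above is to write the big log to its own file next to the report and attach only a small HTML anchor (for example via pytest-html's `extras.html`, or `extras.url` with the file path), so the report itself stays light. The file-writing and anchor-building part is plain Python and can be sketched independently of pytest-html — the conftest hookup into pytest-html is left as an assumption:

```python
import html
import pathlib
import tempfile

def save_log_and_make_link(log_text, out_dir, name="test.log", label="full log"):
    """Write a large log to its own file and return a small <a> tag for the report."""
    out_dir = pathlib.Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / name).write_text(log_text, encoding="utf-8")
    # A relative href keeps the link working when the report directory moves.
    return '<a href="{}" target="_blank">{}</a>'.format(html.escape(name), html.escape(label))

with tempfile.TemporaryDirectory() as tmp:
    link = save_log_and_make_link("line1\nline2\n" * 10000, tmp)
    print(link)  # <a href="test.log" target="_blank">full log</a>
```

The returned string is what you would hand to `extras.html(...)` in a pytest-html hook, assuming `out_dir` is the directory the HTML report is written to.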
littlecodersh/ItChat | api | 750 | Is there a WeChat or QQ group for this project? | As the title says.
Hoping there is a group where we can chat and share what we learn together. | open | 2018-10-24T11:32:12Z | 2018-11-21T09:57:14Z | https://github.com/littlecodersh/ItChat/issues/750 | [] | kollyQAQ | 3 |
jupyter/nbgrader | jupyter | 1,297 | nbgrader issue with unicode | ### Operating system
Linux RedHat 7.4
### `nbgrader --version`
0.6.1
### `jupyterhub --version` (if used with JupyterHub)
1.0.0
### `jupyter notebook --version`
5.5.0
A prof ran into this problem while running nbgrader:
UnicodeEncodeError: 'charmap' codec can't encode character '\u2080' in position 54: character maps to <undefined>
His Comment:
That is the code for superscript zero. Eliminating that in the notebook,
brought up other messages: superscripts don’t work, → does not, ≤ does
not, but greek letters and ⇒ do work. The same characters in the course
notes (00 …, 01…) work, those notebooks can be generated and released, but
the Lab01 notebook not.
More of his comments:
John, I narrowed down the issue: the error appears only in cells that are marked
as “Read-only” under View → Cell toolbar → Create assignment. Since I don’t
bother with that for the course notes, but do mark cells with the questions are
read-only in assignments and exams, so students always see the original question,
that didn’t appear for the course notes. A temporary solution is not to use
read-only cells.
Any help on this would be appreciated. | open | 2020-01-08T17:28:00Z | 2020-01-08T17:28:00Z | https://github.com/jupyter/nbgrader/issues/1297 | [] | jnak12 | 0 |
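The exception reported above is the 'charmap' (Windows cp1252) codec simply having no mapping for characters such as U+2080 SUBSCRIPT ZERO or U+2264 ≤ — so any file write that falls back to a legacy platform default encoding will hit it, and forcing UTF-8 avoids it. The failure and the fix reproduce in plain Python:

```python
text = "x\u2080 \u2264 y"  # 'x₀ ≤ y' — subscript zero and less-than-or-equal

try:
    # cp1252 is the legacy default for text files on many Windows setups.
    text.encode("cp1252")
    charmap_failed = False
except UnicodeEncodeError:
    # UnicodeEncodeError: 'charmap' codec can't encode character '\u2080' ...
    charmap_failed = True

# The fix: always pass encoding="utf-8" when writing notebook/report files.
utf8_bytes = text.encode("utf-8")
print(charmap_failed, len(utf8_bytes))
```

This matches the prof's observation that only some symbols break: characters present in cp1252 encode fine, while subscripts, →, and ≤ do not.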
scrapy/scrapy | python | 6,307 | Scrapy and Great Expectations: Error - __provides__ | ### Description
I am trying to use Scrapy and Great Expectations in the same virtual environment but there is an issue depending on the order I import the packages in.
I created an issue for Great Expectations with additional [details](https://github.com/great-expectations/great_expectations/issues/9698).
They were mentioning it might be something with abc being monkey-patched.
### Steps to Reproduce
**This does work:**
```
import great_expectations
import scrapy
```
**This does not work:**
```
import scrapy
import great_expectations
```
**Error:**
```
Traceback (most recent call last):
File
"/Users/grant/vs_code_projects/grants_projects/test_environment.py", line 2, in <module>
import great_expectations
File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/__init__.py", line 32, in <module>
register_core_expectations()
File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/registry.py", line 187, in register_core_expectations
from great_expectations.expectations import core # noqa: F401
File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/core/__init__.py", line 1, in <module>
from .expect_column_distinct_values_to_be_in_set import (
File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/core/expect_column_distinct_values_to_be_in_set.py", line 12, in <module>
from great_expectations.expectations.expectation import (
File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/expectation.py", line 2350, in <module>
class BatchExpectation(Expectation, ABC):
File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/expectation.py", line 287, in __new__
newclass._register_renderer_functions()
File "/Users/grant/Envs/test_env/lib/python3.8/site-packages/great_expectations/expectations/expectation.py", line 369, in _register_renderer_functions
attr_obj: Callable = getattr(cls, candidate_renderer_fn_name)
AttributeError: __provides__
```
**Expected behavior:** Be able to use the packages together in the same virtual environment
**Actual behavior:** Cannot import the packages together
**Reproduces how often:** 100%
### Versions
Scrapy 2.11.1
great-expectations 0.18.12
### Additional context
Looking for a possible solution on what could be done. Thank you!
| closed | 2024-04-05T15:00:38Z | 2024-06-22T12:05:35Z | https://github.com/scrapy/scrapy/issues/6307 | [
"bug"
] | culpgrant | 12 |
labmlai/annotated_deep_learning_paper_implementations | deep-learning | 1 | Save generator and load it only for prediction | Hello,
Thank you for your implementation of CycleGAN, it is very clear. I would like to ask if there is a way to save the generators every 500 iterations (exactly when they predict the test images) so I can load them at a later time and only perform prediction on a specific test set with the loaded model (in new code, independent of cycle_gan.py).
Thank you,
Agelos | closed | 2020-10-15T04:12:30Z | 2020-10-27T12:04:35Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/1 | [
"question"
] | agelosk | 2 |
aleju/imgaug | machine-learning | 687 | Grayscale uses too much memory | When applying Grayscale with alpha=(0, 1), it uses more than 20 times the memory of the original image.
When alpha=1, it uses only about 5 times more.
Is there a way to achieve a similar effect while using less memory? | open | 2020-06-09T08:40:49Z | 2020-06-09T08:40:49Z | https://github.com/aleju/imgaug/issues/687 | [] | zmfkzj | 0 |
xlwings/xlwings | automation | 2,059 | Reader: add DateTime support for xls and xlsb | closed | 2022-10-17T13:55:17Z | 2023-05-25T09:12:47Z | https://github.com/xlwings/xlwings/issues/2059 | [
"engine: reader [calamine]"
] | fzumstein | 0 | |
tensorpack/tensorpack | tensorflow | 1,398 | Question about save and load ckpt | I read the doc 'Save and Load models' and used `load_ckpt_vars` to get a variables dict from a ckpt file. I found that it contains all tensors except their slot variables, such as `BatchNorm/beta/Momentum`. When I made some changes, saved to a ckpt file, and then restored it to finetune, I got the warning `BatchNorm/beta/Momentum is not available in checkpoint`. Although fine-tuning still works without these training-related `*/Momentum` variables, I want to ask whether the randomly initialized slot variables would affect the fine-tuning, or whether it is normal for those slot variables to be missing when fine-tuning? | closed | 2020-02-20T08:36:18Z | 2020-02-21T07:15:02Z | https://github.com/tensorpack/tensorpack/issues/1398 | [] | hunterkun | 2 |
sktime/sktime | data-science | 7,804 | [BUG] Segmentation fault in CI | In some recent PR workflow runs, the `test-full` `macos` `python3.x` test settings seem to be running into segmentation faults.
The runners do not stop after these faults and keep on running, holding up the runners and creating a "queue" of PRs waiting for workflow runs.
segmentation fault traceback:
```
Fatal Python error: Segmentation fault
..[gw17] node down: Not properly terminated
Thread 0x000070000526c000 (most recent call first):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 534 in read
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 567Fatal Python error: Segmentation fault
F
Thread 0x000070000a71c000 (most recent call first):
replacing crashed worker gw17
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 534 in read
[gw16] node down: Not properly terminated
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 567 in from_io
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 1160 in _thread_receiver
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 341 in run
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 411 in _perform_spawn
Thread 0x00007ff84ff9f9c0 (most recent call first):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1166 in read
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1314 in recv_into
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/socket.py", line 706 in readinto
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 286 in _read_status
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 325 in begin
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1395 in getresponse
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 516 in getresponse
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 534 in _make_request
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 787 in urlopen
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 667 in send
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 93 in send
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 703 in send
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 589 in request
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 301 in _request_wrapper
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 278 in _request_wrapper
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1294 in get_hf_file_metadata
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114 in _inner_fn
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1374 in _get_metadata_or_catch_error
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 923 in _hf_hub_download_to_cache_dir
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 860 in hf_hub_download
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114 in _inner_fn
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/utils/hub.py", line 398 in cached_file
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/configuration_utils.py", line 686 in _get_config_dict
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/configuration_utils.py", line 631 in get_config_dict
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/configuration_utils.py", line 602 in from_pretrained
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3122 in from_pretrained
File "/Users/runner/work/sktime/sktime/sktime/libs/momentfm/models/moment.py", line 239 in _get_transformer_backbone
File "/Users/runner/work/sktime/sktime/sktime/libs/momentfm/models/moment.py", line 144 in __init__
File "/Users/runner/work/sktime/sktime/sktime/libs/momentfm/models/moment.py", line 636 in __init__
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/hub_mixin.py", line 774 in _from_pretrained
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/hub_mixin.py", line 553 in from_pretrained
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114 in _inner_fn
File "/Users/runner/work/sktime/sktime/sktime/forecasting/hf_momentfm_forecaster.py", line 283 in _fit
File "/Users/runner/work/sktime/sktime/sktime/forecasting/base/_base.py", line 395 in fit
File "/Users/runner/work/sktime/sktime/sktime/forecasting/tests/test_all_forecasters.py", line 367 in test_predict_time_index
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/python.py", line 159 in pytest_pyfunc_call
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/python.py", line 1627 in runtest
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/runner.py", line 174 in pytest_runtest_call
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/runner.py", line 242 in <lambda>
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/runner.py", line 341 in from_call
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/runner.py", line 241 in call_and_report
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/runner.py", line 132 in runtestprotocol
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/runner.py", line 113 in pytest_runtest_protocol
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/xdist/remote.py", line 195 in run_one_test
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/xdist/remote.py", line 174 in pytest_runtestloop
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/main.py", line 337 in _main
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/main.py", line 283 in wrap_session
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/_pytest/main.py", line 330 in pytest_cmdline_main
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_callers.py", line 103 in _multicall
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_manager.py", line 120 in _hookexec
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pluggy/_hooks.py", line 513 in __call__
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/xdist/remote.py", line 393 in <module>
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 1291 in executetask
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 341 in run
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 411 in _perform_spawn
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 389 in integrate_as_primary_thread
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 1273 in serve
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 1806 in serve
File "<string>", line 8 in <module>
File "<string>", line 1 in <module>
Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pyarrow.lib, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._comput
Thread 0x000070000b011000 (most recent call first):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 534 in read
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py", line 567 in from_io
F
maximum crashed workers reached: 16
[gw18] node down: Not properly terminated
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/execnet/gateway_base.py"2025-02-10 10:29:53.615580: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
```
Some problematic workflow links:
https://github.com/sktime/sktime/actions/runs/13232519194/job/36932621125?pr=7648
https://github.com/sktime/sktime/actions/runs/13232519194/job/36932622639?pr=7648
https://github.com/sktime/sktime/actions/runs/13228932889/job/36923410103?pr=6570 (operation cancelled after 6hrs)
https://github.com/sktime/sktime/actions/runs/13228932889/job/36923410599?pr=6570 (operation cancelled after 6hrs)
| open | 2025-02-10T15:41:22Z | 2025-02-10T21:06:27Z | https://github.com/sktime/sktime/issues/7804 | [
"bug",
"maintenance"
] | phoeenniixx | 1 |
yeongpin/cursor-free-vip | automation | 210 | [Discussion]: The download is a plain document and cannot be opened | ### Issue Checklist
- [x] I understand that Issues are for feedback and problem solving, not a comment section for venting, and I will provide as much information as possible to help resolve the problem.
- [x] I confirm that what I need is to raise and discuss a question, not to file a bug report or feature request.
- [x] I have read the [Github Issues](https://github.com/yeongpin/cursor-free-vip/issues) and searched the existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and did not find a similar problem.
### Platform
macOS Intel
### Version
Latest
### Your Question
The downloaded application is a plain document; normally it should be a dmg, right?
### Additional Information
```shell
```
### Priority
High (blocks work) | closed | 2025-03-12T13:07:49Z | 2025-03-13T03:50:12Z | https://github.com/yeongpin/cursor-free-vip/issues/210 | [
"question"
] | adjoiningWang | 1 |
strawberry-graphql/strawberry-django | graphql | 488 | get_queryset() is not called | I'm trying to add a `.get_queryset()` class method to one of my types, but it is never called when querying. I just put a print at the top of the method and it never fires. Fields on the type behave normally, so I know the type is in use.
I can't even get the simple example from the docs to work:
```python
@strawberry_django.type(models.Fruit)
class Berry:
@classmethod
def get_queryset(cls, queryset, info, **kwargs):
return queryset.filter(name__contains="berry")
```
What could be going on here?
I'm on version 0.28.2. | closed | 2024-02-23T15:56:36Z | 2025-03-20T15:57:27Z | https://github.com/strawberry-graphql/strawberry-django/issues/488 | [
"bug"
] | alimony | 8 |
NullArray/AutoSploit | automation | 950 | Divided by zero exception326 | Error: Attempted to divide by zero.326 | closed | 2019-04-19T16:03:42Z | 2019-04-19T16:35:37Z | https://github.com/NullArray/AutoSploit/issues/950 | [] | AutosploitReporter | 0 |
comfyanonymous/ComfyUI | pytorch | 7,083 | Load workflows | ### Your question
Hi,
I've recently updated ComfyUI to v0.3.18 and am having trouble loading my saved workflows. I have multiple workflows stored in my pysssss-workflows folder (storage/pysssss-workflows on mimicpc), but I can't seem to find an option to open them in the new interface.
Previously, there was a Load button, but I no longer see it. I’m sure it’s something simple, but I’d really appreciate any help.
Thanks in advance!
### Logs
```powershell
```
### Other
_No response_ | open | 2025-03-05T09:40:01Z | 2025-03-05T10:24:20Z | https://github.com/comfyanonymous/ComfyUI/issues/7083 | [
"User Support"
] | PaulEvans78 | 3 |
healthchecks/healthchecks | django | 440 | API: allow specifying enabled integrations by name | Suggested in [#376](https://github.com/healthchecks/healthchecks/issues/376#issuecomment-689814284):
> Another way of having less API calls is if you added a parameter to let you pass in the channels by name rather than by id (it would assume channel names are unique) - that way I could skip the extra API call to get the channel id from the name :)
A few decisions:
* what to do when API client uses a channel name that does not exist?
* what to do when API client uses a non-unique channel name?
Could be strict or lenient here. I'm leaning towards strict to avoid subtle configuration mistakes: whenever a channel name in client's payload does not have precisely one match in the database, return HTTP 400 and a descriptive error message.
| closed | 2020-10-07T10:13:28Z | 2020-10-14T12:37:29Z | https://github.com/healthchecks/healthchecks/issues/440 | [] | cuu508 | 0 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 681 | Bug in inference code |
In `synthesizer/inference.py`, `len(inputs)` returns 1 (when `demo_cli.py` is used), but `inputs` is a list that can potentially have any size, so `batched_inputs` is not "batched" in any meaningful sense.
```python
# Batch inputs
batched_inputs = [inputs[i:i + hparams.synthesis_batch_size]
                  for i in range(0, len(inputs), hparams.synthesis_batch_size)]
batched_embeds = [embeddings[i:i + hparams.synthesis_batch_size]
                  for i in range(0, len(embeddings), hparams.synthesis_batch_size)]
```
| closed | 2021-02-25T10:20:18Z | 2021-02-28T06:45:25Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/681 | [] | MaratZakirov | 4 |
Skyvern-AI/skyvern | api | 1,586 | How to fix these many errors? | How to fix these errors:
"
Alembic mode: online
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 146, in __init__
self._dbapi_connection = engine.raw_connection()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3302, in raw_connection
return self.pool.connect()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 449, in connect
return _ConnectionFairy._checkout(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 1263, in _checkout
fairy = _ConnectionRecord.checkout(pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 712, in checkout
rec = pool._do_get()
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/impl.py", line 308, in _do_get
return self._create_connection()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 390, in _create_connection
return _ConnectionRecord(self)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 674, in __init__
self.__connect()
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 900, in __connect
with util.safe_reraise():
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 896, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/create.py", line 643, in connect
return dialect.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 621, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psycopg/connection.py", line 748, in connect
raise last_ex.with_traceback(None)
psycopg.OperationalError: connection failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/alembic", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/site-packages/alembic/config.py", line 636, in main
CommandLine(prog=prog).main(argv=argv)
File "/usr/local/lib/python3.11/site-packages/alembic/config.py", line 626, in main
self.run_cmd(cfg, options)
File "/usr/local/lib/python3.11/site-packages/alembic/config.py", line 603, in run_cmd
fn(
File "/usr/local/lib/python3.11/site-packages/alembic/command.py", line 406, in upgrade
script.run_env()
File "/usr/local/lib/python3.11/site-packages/alembic/script/base.py", line 586, in run_env
util.load_python_file(self.dir, "env.py")
File "/usr/local/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 95, in load_python_file
module = load_module_py(module_id, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 113, in load_module_py
spec.loader.exec_module(module) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/app/alembic/env.py", line 81, in <module>
run_migrations_online()
File "/app/alembic/env.py", line 70, in run_migrations_online
with connectable.connect() as connection:
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3278, in connect
return self._connection_cls(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 148, in __init__
Connection._handle_dbapi_exception_noconnection(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2442, in _handle_dbapi_exception_noconnection
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 146, in __init__
self._dbapi_connection = engine.raw_connection()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3302, in raw_connection
return self.pool.connect()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 449, in connect
return _ConnectionFairy._checkout(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 1263, in _checkout
fairy = _ConnectionRecord.checkout(pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 712, in checkout
rec = pool._do_get()
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/impl.py", line 308, in _do_get
return self._create_connection()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 390, in _create_connection
return _ConnectionRecord(self)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 674, in __init__
self.__connect()
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 900, in __connect
with util.safe_reraise():
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 896, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/create.py", line 643, in connect
return dialect.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 621, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psycopg/connection.py", line 748, in connect
raise last_ex.with_traceback(None)
sqlalchemy.exc.OperationalError: (psycopg.OperationalError) connection failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
(Background on this error at: https://sqlalche.me/e/20/e3q8)"
when using docker run to run a local image?
Thanks a lot. | open | 2025-01-17T03:57:07Z | 2025-01-31T20:11:16Z | https://github.com/Skyvern-AI/skyvern/issues/1586 | [
"answered"
] | computer2s | 4 |
takapy0210/nlplot | plotly | 30 | Add GitHub Pages | open | 2021-07-11T09:52:34Z | 2021-07-11T09:52:34Z | https://github.com/takapy0210/nlplot/issues/30 | [] | takapy0210 | 0 |
aeon-toolkit/aeon | scikit-learn | 2,411 | [ENH] AutoETS implementation | ### Describe the feature or idea you want to propose
Our new experimental forecasting module starts with a really fast ETS implementation. The next stage is AutoETS. But how should we search the parameter space? There are many alternatives: grid search, Nelder-Mead, stochastic gradient descent, etc. It would be really good to implement this in a configurable way. Feel free to come up with alternative heuristic search algorithms on this thread.
### Describe your proposed solution
we need to think about how to design this efficiently, perhaps taking inspiration from scikit-learn?
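As a rough sketch of what "configurable" could mean (illustrative only, not aeon's actual API): decouple the search strategy from the model-fitting scorer, so that grid search is just one pluggable strategy alongside Nelder-Mead, random search, etc.

```python
from itertools import product


def grid_search(fit_score, param_grid):
    """Exhaustive search: return (best_params, best_score); lower score wins.

    fit_score: callable mapping a parameter dict to an error score,
               e.g. the AIC of an ETS fit (illustrative).
    param_grid: dict of parameter name -> list of candidate values.
    """
    names = list(param_grid)
    best_params, best_score = None, float("inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = fit_score(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

A Nelder-Mead or gradient-based strategy would then be a drop-in replacement exposing the same `(fit_score, param_grid) -> (params, score)` contract.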
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | open | 2024-11-28T09:08:53Z | 2025-01-28T16:50:11Z | https://github.com/aeon-toolkit/aeon/issues/2411 | [
"enhancement",
"forecasting"
] | TonyBagnall | 4 |
ranaroussi/yfinance | pandas | 1,925 | Throw specific errors instead of generic 'Exception' would make for safer and nicer client implementations | ## Summary
Not throwing specific exceptions forces clients' error handling to be coupled to weak interfaces in the library. Throwing specific errors means exposing a more explicit and stable interface for errors.
## Example
In my implementation I had to look for a sub-string of the specific error message in the exception to be able to tell which type of error it is. In other words, an error from the network, a proxy, or a non-listed ticker symbol all look the same from outside the library.
Currently my implementation is something like:
``` python
try:
    hist = ticker.history(period='3mo', interval='1d', raise_errors=True)
except Exception as ex:
    if 'No data found, symbol may be delisted' in str(ex):
        flash('The provided ticker symbol was not found, perhaps you misspelled it?', 'error')
        return redirect('/')
```
Where I'd like to go like:
``` python
try:
    hist = ticker.history(period='3mo', interval='1d', raise_errors=True)
except YFinanceNotListedError as ex:
    flash('The provided ticker symbol was not found, perhaps you misspelled it?', 'error')
    return redirect('/')
```
The way it is today, if anyone unwittingly changes the error message, that would break all implementations relying on it.
I think this person had the same issue I had https://github.com/ranaroussi/yfinance/pull/1918
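For illustration, one way a library could expose such a stable error surface is a small exception hierarchy (the class names extend the hypothetical `YFinanceNotListedError` from the example above; `fetch_history` is a toy stand-in, not the real `ticker.history()`):

```python
class YFinanceError(Exception):
    """Base class for all library errors."""


class YFinanceNotListedError(YFinanceError):
    """The symbol was not found; it may be delisted."""


class YFinanceNetworkError(YFinanceError):
    """Transport-level failure (network, proxy, ...)."""


def fetch_history(symbol, listed_symbols):
    # Toy stand-in: raises a typed error instead of a bare Exception.
    if symbol not in listed_symbols:
        raise YFinanceNotListedError(
            f"No data found for {symbol}, symbol may be delisted"
        )
    return [100.0, 101.5]
```

Clients can then catch `YFinanceNotListedError` directly, and the human-readable message becomes free to change without breaking anyone.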
(Thanks for the lib, BTW) | closed | 2024-05-07T10:59:54Z | 2024-05-13T08:41:22Z | https://github.com/ranaroussi/yfinance/issues/1925 | [] | marcofognog | 3 |
lux-org/lux | jupyter | 366 | [Feature Request] Show distributions before Correlations | **Is your feature request related to a problem? Please describe.**
Distributions show single dimensions. There are fewer distribution plots than correlations so they can be explored more quickly. Also one has to understand the distributions before they can understand correlations. Therefore, I wonder whether it makes sense to show distributions in the first tab instead of correlations.
**Describe the solution you'd like**
Swap the first two tabs.
**Describe alternatives you've considered**
Do not change.
**Additional context**
From the Voyager work and teaching vis, we learned that people tend to dive into multivariate charts before even understanding the basic distributions. This behavior leads to "premature fixation". In EDA, it's therefore usually better to do a breadth-first rather than a depth-first exploration. | open | 2021-04-19T15:38:45Z | 2021-04-21T22:39:00Z | https://github.com/lux-org/lux/issues/366 | [] | domoritz | 0 |
OFA-Sys/Chinese-CLIP | computer-vision | 87 | How to deploy the model to mobile devices | The repository already provides ONNX and TensorRT models; how can they be deployed to Android mobile devices? | open | 2023-04-16T08:19:28Z | 2023-04-25T02:36:31Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/87 | [] | SZN712 | 1 |
donnemartin/system-design-primer | python | 148 | Where is 'State' defined in social_graph_snippets.py? | __State__ is currently an _undefined name_ in the context of social_graph_snippets.py so the code would raise a NameError at runtime.
Is __State__ merely an Enum with two items (__visited__ and __unvisited__) or is it more complex than that? If it is just the simple Enum then perhaps it would be cleaner to rename the field to be __source.state_visited__ and use the values __True__ and __False__.
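If it is just the simple two-item enum, a minimal definition along these lines (a sketch, not the primer's actual code) would be enough to make the snippet runnable:

```python
from enum import Enum


class State(Enum):
    unvisited = 0
    visited = 1


class Node:
    def __init__(self):
        self.visit_state = State.unvisited  # default before traversal


node = Node()
node.visit_state = State.visited  # mirrors `source.visit_state = State.visited`
```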
flake8 testing of https://github.com/donnemartin/system-design-primer on Python 3.6.3
$ __flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics__
```
./solutions/system_design/social_graph/social_graph_snippets.py:10:30: F821 undefined name 'State'
source.visit_state = State.visited
^
./solutions/system_design/social_graph/social_graph_snippets.py:17:49: F821 undefined name 'State'
if adjacent_node.visit_state == State.unvisited:
^
./solutions/system_design/social_graph/social_graph_snippets.py:19:49: F821 undefined name 'State'
adjacent_node.visit_state = State.visited
^
3 F821 undefined name 'State'
```
Discovered via #93 | closed | 2018-03-14T11:46:56Z | 2018-07-15T00:01:58Z | https://github.com/donnemartin/system-design-primer/issues/148 | [
"bug"
] | cclauss | 2 |
databricks/koalas | pandas | 2,114 | Pandas 1.2.x support? | What are the plans for support of Pandas 1.2.x in a release? I saw that the CI system has moved there for testing already. Is this imminent or some time away?
**Background:** Pandas 1.2.x has support for `fsspec` to the extend that we need it. Would be nice to also use Koalas in the same session. | open | 2021-03-23T03:12:13Z | 2021-03-24T02:41:35Z | https://github.com/databricks/koalas/issues/2114 | [
"enhancement"
] | markusweimer | 2 |
explosion/spaCy | data-science | 13,528 | Numpy v2.0.0 breaks the ability to download models using spaCy | ## How to reproduce the behaviour
In my dockerfile, I run these commands:
```Dockerfile
FROM --platform=linux/amd64 python:3.12.4
RUN pip install --upgrade pip
RUN pip install torch --index-url https://download.pytorch.org/whl/cpu
RUN pip install spacy
RUN python -m spacy download en_core_web_lg
```
It returns the following error (and stacktrace):
```
2.519 Traceback (most recent call last):
2.519 File "<frozen runpy>", line 189, in _run_module_as_main
2.519 File "<frozen runpy>", line 148, in _get_module_details
2.519 File "<frozen runpy>", line 112, in _get_module_details
2.519 File "/usr/local/lib/python3.12/site-packages/spacy/__init__.py", line 6, in <module>
2.521 from .errors import setup_default_warnings
2.522 File "/usr/local/lib/python3.12/site-packages/spacy/errors.py", line 3, in <module>
2.522 from .compat import Literal
2.522 File "/usr/local/lib/python3.12/site-packages/spacy/compat.py", line 39, in <module>
2.522 from thinc.api import Optimizer # noqa: F401
2.522 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2.522 File "/usr/local/lib/python3.12/site-packages/thinc/api.py", line 1, in <module>
2.522 from .backends import (
2.522 File "/usr/local/lib/python3.12/site-packages/thinc/backends/__init__.py", line 17, in <module>
2.522 from .cupy_ops import CupyOps
2.522 File "/usr/local/lib/python3.12/site-packages/thinc/backends/cupy_ops.py", line 16, in <module>
2.522 from .numpy_ops import NumpyOps
2.522 File "thinc/backends/numpy_ops.pyx", line 1, in init thinc.backends.numpy_ops
2.524 ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
```
Locking to the previous version of numpy will resolve this issue:
```Dockerfile
FROM --platform=linux/amd64 python:3.12.4
RUN pip install --upgrade pip
RUN pip install torch --index-url https://download.pytorch.org/whl/cpu
RUN pip install numpy==1.26.4 spacy
RUN python -m spacy download en_core_web_lg
```
| open | 2024-06-16T15:42:21Z | 2024-12-29T11:20:23Z | https://github.com/explosion/spaCy/issues/13528 | [
"bug"
] | afogel | 16 |
mkhorasani/Streamlit-Authenticator | streamlit | 243 | Can't Login in any way | Hello,
I'm just following the official docs, but it doesn't work: I can't log in with either a hashed or a non-hashed password...
```python
with open('./config.yaml') as file:
config = yaml.load(file, Loader=SafeLoader)
authenticator = stauth.Authenticate(
config['credentials'],
config['cookie']['name'],
config['cookie']['key'],
config['cookie']['expiry_days'],
auto_hash=False # I've tried with True and stauth.Hasher.hash_passwords(config['credentials'])
)
# [...]
def main():
# Authentication
authenticator.login(location='main')
if st.session_state["authentication_status"]:
authenticator.logout('Logout', 'sidebar')
st.sidebar.write(f'Welcome *{st.session_state["name"]}*')
elif st.session_state["authentication_status"] == False:
st.error('Username/password is incorrect') #<-- always
``` | open | 2024-11-25T11:51:02Z | 2025-03-04T18:07:38Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/243 | [
"help wanted"
] | diramazioni | 3 |
pywinauto/pywinauto | automation | 657 | Pywinauto type_keys() omits "%" in string | When attempting to input the string 'customer asked for 30% discount' into a form using type_keys() in Pywinauto 0.6.5, the output it sends is 'customer asked for 30 discount', omitting the `%`.
Tried escape character:
```
control.type_keys('customer asked for 30%% discount',with_spaces=True)
control.type_keys('customer asked for 30\% discount',with_spaces=True)
control.type_keys('customer asked for 30\x25 discount',with_spaces=True)
```
But it still omits the '%'
When printing data in console string outputs correctly. So it is not a Python 3.7 issue.
Directly typing '%' into from works as expected. | closed | 2019-01-17T04:11:46Z | 2019-01-18T11:26:15Z | https://github.com/pywinauto/pywinauto/issues/657 | [
"question"
] | medert | 1 |
dmlc/gluon-nlp | numpy | 1,420 | Beam search scorer question | ## Description
Hello , I have a question about the beam search scorer function
scores = (log_probs + scores) / length_penalty
length_penalty = ((K + length) / (K + 1))^alpha
https://github.com/dmlc/gluon-nlp/blob/0484e6494edf0a40c7bac220b5a10d8245324750/src/gluonnlp/sequence_sampler.py#L74
if K = 5, alpha = 2

if K = 5, alpha =1

if K =5, alpha = 0.5

In these three cases, the length_penalty function is decreasing when length > 0
The log_probs are all negative numbers, so the scores are negative as well.
A negative number divided by a decreasing function...
the output will always prefer shorter sequences as results...
| closed | 2020-11-02T03:13:37Z | 2020-11-02T03:51:50Z | https://github.com/dmlc/gluon-nlp/issues/1420 | [
"bug"
] | carter54 | 1 |
pywinauto/pywinauto | automation | 892 | Click button with pywinauto implies crash of a simple widget QT 5.2 application | ## Expected Behavior
Click on PushButton works
## Actual Behavior
The QT 5.2 application crashes
## Steps to Reproduce the Problem
1. With QT 5.2.1 create a simple widget application with a button
2. Run this application
3. Click on the button with pywinauto (last version) (same behaviour => crash if use of invoke on the button on inspect.exe)
Important: I have tried at home with Windows 8 and it works, but at work on Windows 7/10 (there is no Windows 8 available there) I get a crash. Moreover, at home, to set a simple string on a TextBox with QT 5.2.1 I had to update the QT source code per https://bugreports.qt.io/browse/QTBUG-55546,
and after that it works.
So I have two ideas/questions, please:
1) Maybe it is a problem with pywinauto's requirements. On https://pywinauto.readthedocs.io/en/latest/getting_started.html it is said that the framework works with QT5, but in my first tests QT 5.9 seems more stable for pywinauto (indeed, QTBUG-55546 mentioned above is solved in QT 5.9). Is there a recommended release of QT5 for pywinauto? Any help to make pywinauto work with QT 5.2? (For my target, a more complex QT 5.2 application on Win10, the print_control_identifiers method doesn't even return the different controls, although the controls appear in inspect.exe.)
2) There is apparently an OS problem, but could it be due to the different releases of the Automation DLLs (https://docs.microsoft.com/fr-fr/dotnet/framework/ui-automation/ui-automation-overview) between Windows 8 (it works at home) and Windows 10 (it doesn't work at work)? I doubt it, but why not? Or maybe something in my Windows 10, for example a setting, could block it?
## Short Example of Code to Demonstrate the Problem
app = Application(backend="uia").start('./debug/sans_titre1.exe')
app.MainWindow.print_control_identifiers()
app.MainWindow.Custom.PushButton.click()
## Specifications
- Pywinauto version: last version
- Python version and bitness: 2.7.17 (I have tried with Python 3.X too)
- Platform and OS: Windows 10
| closed | 2020-02-18T23:06:13Z | 2020-02-26T17:33:19Z | https://github.com/pywinauto/pywinauto/issues/892 | [
"duplicate",
"enhancement",
"question",
"3rd-party issue"
] | diblud13 | 3 |
psf/black | python | 4,231 | Remove parentheses around simple top-level expressions | ```
% cat parens.py
(x)
(1)
(yield 42)
([])
({})
(a + b)
% black --unstable --diff parens.py
All done! ✨ 🍰 ✨
1 file would be left unchanged.
```
I think all of these parentheses should be removed.
We should keep parentheses around a top-level expression (ast.Expr) only if:
- There is a comment associated with the parentheses
- It's a ternary split into multiple lines
- Possibly other cases I haven't thought of
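For concreteness, the parentheses in the cases above leave no trace in the AST, which is why they are pure formatting decisions (a quick stdlib illustration, not Black internals):

```python
import ast

# Parentheses around a bare expression statement are not recorded in the AST,
# so "(a + b)" and "a + b" produce identical trees, each a top-level ast.Expr.
with_parens = ast.parse("(a + b)")
without_parens = ast.parse("a + b")

assert ast.dump(with_parens) == ast.dump(without_parens)
assert all(isinstance(stmt, ast.Expr) for stmt in with_parens.body)
```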
| open | 2024-02-13T18:27:29Z | 2024-02-13T18:27:29Z | https://github.com/psf/black/issues/4231 | [
"T: enhancement",
"F: parentheses"
] | JelleZijlstra | 0 |
google-research/bert | nlp | 906 | Processing book corpus | Hi team et al,
I'd like to know how to process bookcorpus for pre-training.
I am confused about how to process this data.
Should I treat 1 book as a document including all sentences or 1 chapter as a document?
Thanks. | open | 2019-11-10T05:44:07Z | 2021-12-23T03:12:30Z | https://github.com/google-research/bert/issues/906 | [] | ngoanpv | 1 |
feature-engine/feature_engine | scikit-learn | 631 | remove the boston dataset from the user guides | For example for the arbitrary discretizer | closed | 2023-03-09T16:26:00Z | 2023-03-14T11:12:53Z | https://github.com/feature-engine/feature_engine/issues/631 | [] | solegalli | 2 |
idealo/imagededup | computer-vision | 229 | GPU usage suboptimal | So it works on GPU but only uses 6-10% of it. I assume you have not implemented batching yet, still a lot of room for improvement here. | open | 2025-01-20T11:29:10Z | 2025-01-20T11:29:10Z | https://github.com/idealo/imagededup/issues/229 | [] | asusdisciple | 0 |
plotly/dash-html-components | dash | 196 | Release v1.1.4 for Julia | closed | 2021-07-13T15:07:22Z | 2021-07-13T15:21:19Z | https://github.com/plotly/dash-html-components/issues/196 | [] | alexcjohnson | 2 | |
pallets/flask | python | 4,507 | Flask 2.1.0 can't handle request method properly when sending POST repeatedly with an empty body | With the following example:
```python
from flask import Flask
app = Flask(__name__)
@app.route('/', methods=['POST'])
def hello_world():
return 'Hello World!'
if __name__ == '__main__':
app.run()
```
When you set the request body to `{}` with Postman or any HTTP clients, the first request will return 200, while the second request will return a 405 error response. The log shows the request method is `{}POST`:
```
"{}POST / HTTP/1.1" 405
```
Notice that the request body became part of the request method.
| closed | 2022-03-30T08:14:13Z | 2022-04-28T16:58:23Z | https://github.com/pallets/flask/issues/4507 | [] | eleven-f | 6 |
computationalmodelling/nbval | pytest | 147 | Bad magic unexpectedly passes | If I create a new notebook `nb.ipynb` with two cells:
In[1]:
```python
%notmagic
x = 0
raise TypeError
```
In[2]:
```python
x = 1
```
I would expect `python -m pytest -v --nbval-lax nb.ipynb` to fail, but it passes.
```
$ python -m pytest -v --nbval-lax nb.ipynb
=========================================================================== test session starts ===========================================================================
platform linux -- Python 3.7.6, pytest-5.4.1, py-1.8.1, pluggy-0.12.0 -- /home/sefkw/mc3/envs/celltestsui/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.7.6', 'Platform': 'Linux-5.4.0-7634-generic-x86_64-with-debian-bullseye-sid', 'Packages': {'pytest': '5.4.1', 'py': '1.8.1', 'pluggy': '0.12.0'}, 'Plugins': {'nbval': '0.9.5', 'xdist': '1.32.0', 'html': '2.1.1', 'metadata': '1.8.0', 'cov': '2.8.1', 'forked': '1.1.2'}}
rootdir: /home/sefkw/code/external/nbval
plugins: nbval-0.9.5, xdist-1.32.0, html-2.1.1, metadata-1.8.0, cov-2.8.1, forked-1.1.2
collected 2 items
nb::ipynb::Cell 0 PASSED [ 50%]
nb::ipynb::Cell 1 PASSED [100%]
============================================================================ warnings summary =============================================================================
nbval/plugin.py:115
/home/sefkw/code/external/nbval/nbval/plugin.py:115: PytestDeprecationWarning: direct construction of IPyNbFile has been deprecated, please use IPyNbFile.from_parent
return IPyNbFile(path, parent)
nbval/plugin.py:312
nbval/plugin.py:312
/home/sefkw/code/external/nbval/nbval/plugin.py:312: PytestDeprecationWarning: direct construction of IPyNbCell has been deprecated, please use IPyNbCell.from_parent
cell, options)
nb.ipynb::Cell 0
/home/sefkw/mc3/envs/celltestsui/lib/python3.7/site-packages/jupyter_client/manager.py:63: DeprecationWarning: KernelManager._kernel_spec_manager_changed is deprecated in traitlets 4.1: use @observe and @unobserve instead.
def _kernel_spec_manager_changed(self):
-- Docs: https://docs.pytest.org/en/latest/warnings.html
====================================================================== 2 passed, 4 warnings in 0.55s ======================================================================
```
In jupyter lab, "run all" stops execution at the first cell and reports `UsageError: Line magic function '%notmagic' not found`

I'm replacing an existing "notebook checking tool" with nbval; that tool (based on nbconvert) also correctly reports the same failure.
| open | 2020-06-11T11:25:28Z | 2020-06-12T10:55:58Z | https://github.com/computationalmodelling/nbval/issues/147 | [] | ceball | 2 |
ading2210/poe-api | graphql | 51 | editbot bypassing method valid | Editing the bot to "beaver" or "a2_2" gets a "server error" response. | closed | 2023-04-18T13:45:12Z | 2023-04-18T18:34:57Z | https://github.com/ading2210/poe-api/issues/51 | [
"wontfix"
] | wingeva1986 | 2 |
fastapi/sqlmodel | sqlalchemy | 418 | Conda Forge | Hello! I have successfully put this package onto Conda Forge, and I am extending the invitation for the owners/maintainers of this package to be maintainers on Conda Forge as well. Let me know if you are interested! Thanks.
https://github.com/conda-forge/sqlmodel-feedstock | closed | 2022-08-28T19:00:47Z | 2022-08-30T17:49:57Z | https://github.com/fastapi/sqlmodel/issues/418 | [
"question",
"answered"
] | thewchan | 3 |
pallets/flask | python | 5,266 | Flask subdomain parameter doesn't work at version 2.3.3 | Bug:
Since Flask version 2.3.3, the `subdomain` parameter when defining a new route no longer works; it only works with blueprints.
Include a minimal reproducible example that demonstrates the bug:
An example route which will return 404 Not Found even it should:
```
from flask import Flask
app = Flask(__name__, template_folder='templates', static_folder='static')
@app.route('/logout', subdomain='panel', methods=['POST', 'GET'])
def logout():
return "ok"
if __name__ == '__main__':
app.run(host="0.0.0.0", port=80, threaded=True, debug=True)
```
Describe the expected behavior that should have happened but didn't:
Normally, when using the `subdomain` parameter while defining a new route, the route would work on that specified subdomain.
With version 2.3.3 this only works with Flask Blueprints; the downside of this is that I cannot define multiple subdomains.
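For reference, a sketch of the blueprint-based pattern that does still work (`example.com` is a placeholder for the real `SERVER_NAME`, which subdomain matching requires):

```python
from flask import Blueprint, Flask

app = Flask(__name__)
# Subdomain matching needs SERVER_NAME to be configured.
app.config["SERVER_NAME"] = "example.com"

# Attaching the subdomain to a blueprint instead of the route still works.
panel = Blueprint("panel", __name__, subdomain="panel")

@panel.route("/logout", methods=["POST", "GET"])
def logout():
    return "ok"

app.register_blueprint(panel)
```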
Environment:
- Python version: 3.10.12
- Flask version: 2.3.3
| closed | 2023-09-25T13:50:36Z | 2023-10-11T00:05:25Z | https://github.com/pallets/flask/issues/5266 | [] | GoekhanDev | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 804 | Dataset for Colorization | Which dataset did you use for colorization? Can you share that? | open | 2019-10-18T08:00:21Z | 2019-11-19T20:40:37Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/804 | [] | rabiaali95 | 3 |
davidteather/TikTok-Api | api | 678 | by_trending returns wrong values | Calling api.by_trending returns only 10 official TikTok videos.
For example, this code:
```python
from TikTokApi import TikTokApi
# tested with and without custom_verifyFp
api = TikTokApi.get_instance(custom_verifyFp="" )
results = api.by_trending(count=20)
print(len(results))
for res in results:
print(res['author']['nickname'])
```
returns:
```
10
TikTok
TikTok
TikTok
TikTok
TikTok
TikTok
TikTok
TikTok
TikTok
TikTok
```
which isn't what is expected. | closed | 2021-08-28T17:31:20Z | 2021-08-28T17:34:11Z | https://github.com/davidteather/TikTok-Api/issues/678 | [] | teo-goulois | 1 |
fastapi-users/fastapi-users | fastapi | 1,301 | Support for Python 3.12 | ## Describe the bug
Importing `fastapi_users` fails with Python 3.12.
## To Reproduce
Steps to reproduce the behavior:
1. Install `fastapi-users` v12.1.2 and run Python 3.12.
2. Execute `import fastapi_users`.
3. See error `ModuleNotFoundError: No module named 'pkg_resources'`.
## Expected behavior
Fastapi-users should work on Python 3.12.
## Configuration
- Python version : 3.12
- FastAPI version : 0.103.2
- FastAPI Users version : 12.1.2
## Additional context
It seems that `passlib` still uses `pkg_resources`, which is deprecated. On Python <3.10, `importlib_metadata` should be used, and `importlib.metadata` should be used on Python >=3.10. | closed | 2023-10-10T08:49:59Z | 2024-03-11T13:31:11Z | https://github.com/fastapi-users/fastapi-users/issues/1301 | [
"bug"
] | davidbrochart | 11 |
ivy-llc/ivy | numpy | 28,352 | fix `complex` dtype support at `paddle backend` in `ivy.maximum` | closed | 2024-02-20T15:18:59Z | 2024-02-20T17:59:05Z | https://github.com/ivy-llc/ivy/issues/28352 | [
"Sub Task"
] | samthakur587 | 0 | |
cobrateam/splinter | automation | 602 | Headless mode in remote webdriver | Not sure if its possible, but it would be great if we can look into getting headless mode working via remote webdriver.
Referencing discussion on https://github.com/cobrateam/splinter/issues/597#issuecomment-377111031 | closed | 2018-04-16T01:41:58Z | 2023-01-25T02:39:14Z | https://github.com/cobrateam/splinter/issues/602 | [] | j7an | 1 |
pyg-team/pytorch_geometric | pytorch | 8,817 | in utils.subgraph.py RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) | ### 🐛 Describe the bug
in utils.subgraph.py
edge_mask = node_mask[edge_index[0]] & node_mask[edge_index[1]]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
because `edge_index` is on 'cuda:0' while `node_mask` is on 'cpu'.
It can be solved with: `node_mask = node_mask.to(device=device)`
### Versions
last version | closed | 2024-01-24T16:07:02Z | 2024-01-29T13:01:03Z | https://github.com/pyg-team/pytorch_geometric/issues/8817 | [
"bug"
] | allierc | 4 |
sktime/pytorch-forecasting | pandas | 1,316 | Predicted output length for new test data is not correct (How do you define newly test data to the model) | - PyTorch-Forecasting version:2.0
- PyTorch version:2.0
- Python version 3.10
- Operating System:
Hi everyone,
I have trained an NBeats model from PyTorch Forecasting using the [TimeSeriesDataSet](https://pytorch-forecasting.readthedocs.io/en/stable/api/pytorch_forecasting.data.timeseries.TimeSeriesDataSet.html) method.
Here is the configuration of the model and data:
```
training = TimeSeriesDataSet(
train_data,
time_idx="time",
target="target",
group_ids=["group"],
time_varying_unknown_reals=["target"],
max_encoder_length=100,
max_prediction_length=100
)
testing= TimeSeriesDataSet(
test_data,
time_idx="time",
target="target",
group_ids=["group"],
time_varying_unknown_reals=["target"],
max_encoder_length=100,
max_prediction_length=100
)
train_load= training.to_dataloader(train=True, batch_size=128)
test_load= testing.to_dataloader(train=False, batch_size=128)
```
My goal is to look back at the past data points and forecast the next 100 points in the future, which is why I set `max_prediction_length = 100`.
The test data has 234 samples.
I tried to look at the predicted values, but the size of the prediction is 35 and not 100, which is strange since I want to forecast the next 100 points (the prediction horizon).
The prediction is:
`Pred = mymodel.predict(dataloaders=test_load)`
```
len(Pred)
35
```
Also if you look at the
```
print(test_load.dataset)
TimeSeriesDataSet[length=35](...)
```
You see the output also shows the length is 35 and not 100.
Can someone please explain why I get 35 as the length and not 100?
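For what it's worth, `len()` only reports the first dimension, so 35 prediction windows of 100 steps each would still show a length of 35. A generic illustration with plain lists (not the actual model output):

```python
# 35 prediction windows, each with a 100-step horizon.
pred = [[0.0] * 100 for _ in range(35)]

# len() sees only the outer dimension...
assert len(pred) == 35
# ...while each individual forecast still has the full 100-step horizon.
assert all(len(window) == 100 for window in pred)
```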
I have a feeling that I am not defining **my test data** correctly, but I would appreciate it if someone can help me with what exactly is the issue? | open | 2023-05-27T08:11:04Z | 2023-06-03T17:45:01Z | https://github.com/sktime/pytorch-forecasting/issues/1316 | [] | manitadayon | 5 |
ageitgey/face_recognition | machine-learning | 698 | How can i do ? | Hello,
How can I insert this array into a database, and how can I convert it to JSON?
```
unknown_face_encodings = face_recognition.face_encodings(img)
# exported array
[-0.09634063, 0.12095481, -0.00436332, -0.07643753, 0.0080383,
0.01902981, -0.07184699, -0.09383309, 0.18518871, -0.09588896,
0.23951106, 0.0986533 , -0.22114635, -0.1363683 , 0.04405268,
0.11574756, -0.19899382, -0.09597053, -0.11969153, -0.12277931,
0.03416885, -0.00267565, 0.09203379, 0.04713435, -0.12731361,
-0.35371891, -0.0503444 , -0.17841317, -0.00310897, -0.09844551,
-0.06910533, -0.00503746, -0.18466514, -0.09851682, 0.02903969,
-0.02174894, 0.02261871, 0.0032102 , 0.20312519, 0.02999607,
-0.11646006, 0.09432904, 0.02774341, 0.22102901, 0.26725179,
0.06896867, -0.00490024, -0.09441824, 0.11115381, -0.22592428,
0.06230862, 0.16559327, 0.06232892, 0.03458837, 0.09459756,
-0.18777156, 0.00654241, 0.08582542, -0.13578284, 0.0150229 ,
0.00670836, -0.08195844, -0.04346499, 0.03347827, 0.20310158,
0.09987706, -0.12370517, -0.06683611, 0.12704916, -0.02160804,
0.00984683, 0.00766284, -0.18980607, -0.19641446, -0.22800779,
0.09010898, 0.39178532, 0.18818057, -0.20875394, 0.03097027,
-0.21300618, 0.02532415, 0.07938635, 0.01000703, -0.07719778,
-0.12651891, -0.04318593, 0.06219772, 0.09163868, 0.05039065,
-0.04922386, 0.21839413, -0.02394437, 0.06173781, 0.0292527 ,
0.06160797, -0.15553983, -0.02440624, -0.17509389, -0.0630486 ,
0.01428208, -0.03637431, 0.03971229, 0.13983178, -0.23006812,
0.04999552, 0.0108454 , -0.03970895, 0.02501768, 0.08157793,
-0.03224047, -0.04502571, 0.0556995 , -0.24374914, 0.25514284,
0.24795187, 0.04060191, 0.17597422, 0.07966681, 0.01920104,
-0.01194376, -0.02300822, -0.17204897, -0.0596558 , 0.05307484,
0.07417042, 0.07126575, 0.00209804]
```
I want to save this in a SQL database and keep the array format/typing.
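A minimal sketch of one way to do it (stdlib only; the table and column names here are made up, and if the encoding is a NumPy array you would call `encoding.tolist()` first):

```python
import json
import sqlite3

# Stand-in for a 128-dimension face encoding; a real one would come from
# face_recognition.face_encodings(img)[0].tolist().
encoding = [round(i * 0.01, 2) for i in range(128)]

payload = json.dumps(encoding)  # list of floats -> JSON text for a TEXT column

conn = sqlite3.connect(":memory:")  # in-memory DB just for the demo
conn.execute("CREATE TABLE faces (id INTEGER PRIMARY KEY, encoding TEXT)")
conn.execute("INSERT INTO faces (encoding) VALUES (?)", (payload,))

row = conn.execute("SELECT encoding FROM faces WHERE id = 1").fetchone()
restored = json.loads(row[0])  # JSON text -> list of floats again
assert restored == encoding
```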
| closed | 2018-12-07T10:23:23Z | 2018-12-09T11:44:54Z | https://github.com/ageitgey/face_recognition/issues/698 | [] | wwwakcan | 1 |
rthalley/dnspython | asyncio | 169 | Bad code type for URI RR conforming to RFC 7553 | Hi
Code type for URI RR IS set to 253 in CERT.py.
But the RFC tell us to use the type number 256.
Otherwise update request doesn't work with DNS server (i got FORMERR rcode)
Best regards
| closed | 2016-05-31T14:27:36Z | 2016-06-01T15:34:04Z | https://github.com/rthalley/dnspython/issues/169 | [] | Axili39 | 4 |
aio-libs/aiohttp | asyncio | 9,882 | Catching internal exceptions | ### Describe the bug
How can I catch errors like this while using `aiohttp.web.Application()`:
```
Error handling request
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/aiohttp/web_protocol.py", line 332, in data_received
messages, upgraded, tail = self._request_parser.feed_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "aiohttp/_http_parser.pyx", line 557, in aiohttp._http_parser.HttpParser.feed_data
aiohttp.http_exceptions.BadHttpMessage: 400, message:
Invalid header value char:
[...]
```
The client side receives HTTP 400 with the exception text, which I want to avoid (I also want to log the exception using my logger). It does not seem like I can catch it using middleware or override it in any way, which I would call a bug (unless I'm missing something obvious).
It seems like this is where this unwanted response for the client is being created: https://github.com/aio-libs/aiohttp/blob/master/aiohttp/web_protocol.py#L411
### To Reproduce
1. Create an Application
2. Run the server
3. Send the request with invalid headers
### Expected behavior
A way to catch and handle the error in my app to log it and send back a generic error message
### Logs/tracebacks
```python-traceback
None
```
### Python Version
```console
$ python --version
3.11
```
### aiohttp Version
```console
$ python -m pip show aiohttp
3.8.5
```
### multidict Version
```console
$ python -m pip show multidict
6.0.4
```
### propcache Version
```console
$ python -m pip show propcache
Not installed
```
### yarl Version
```console
$ python -m pip show yarl
1.9.2
```
### OS
Ubuntu 22.04 LTS
### Related component
Server
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct | closed | 2024-11-14T19:05:06Z | 2024-11-15T01:17:31Z | https://github.com/aio-libs/aiohttp/issues/9882 | [
"bug"
] | daniel-kukiela | 1 |
litestar-org/litestar | api | 3,930 | Docs: Provide examples of how to create middleware | ### Summary
For those used to thinking of a [server as a function](https://monkey.org/~marius/funsrv.pdf), which Starlette's `BaseHTTPMiddleware` does (caveat: it [has its own problems](https://github.com/encode/starlette/issues/1678)), ASGI middleware can be quite confusing for two reasons:
1. The mapping of ASGI concepts to HTTP request/response cycle concepts, at a level that the average dev will be aware of, is not clear (what are `send` and `receive`, and how do they relate to requests and responses?)
2. Many operations are made much easier by not inspecting the raw scope `dict`, but rather performing framework-provided transformations (e.g. `litestar.datastructures.Headers.from_scope` or `litestar.datastructures.URL.from_scope`) on `scope`. These transformations are not inherently discoverable.
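To make the confusion concrete, the pure-ASGI shape that such docs would need to explain looks roughly like this (a minimal sketch, not Litestar-specific API):

```python
import asyncio

class AddHeaderMiddleware:
    """Pure-ASGI middleware: wrap `send` to modify the response-start message."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            # Pass non-HTTP traffic (lifespan, websocket) straight through.
            await self.app(scope, receive, send)
            return

        async def send_wrapper(message):
            # "http.response.start" carries the status code and headers,
            # i.e. the response-side half of the request/response cycle.
            if message["type"] == "http.response.start":
                headers = list(message.get("headers", []))
                headers.append((b"x-demo", b"1"))
                message = {**message, "headers": headers}
            await send(message)

        await self.app(scope, receive, send_wrapper)

async def plain_app(scope, receive, send):
    # A tiny ASGI app: the "response" is just two messages pushed via `send`.
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"ok"})

async def demo():
    sent = []

    async def collect(message):
        sent.append(message)

    app = AddHeaderMiddleware(plain_app)
    await app({"type": "http"}, None, collect)
    return sent

messages = asyncio.run(demo())
assert (b"x-demo", b"1") in messages[0]["headers"]
```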
My suggestion: add multiple examples to the [creating Middleware page](https://docs.litestar.dev/2/usage/middleware/creating-middleware.html), like Starlette [already does](https://www.starlette.io/middleware/#pure-asgi-middleware). | open | 2025-01-07T00:08:01Z | 2025-01-07T00:08:23Z | https://github.com/litestar-org/litestar/issues/3930 | [
"Documentation :books:"
] | marcuslimdw | 0 |
yuka-friends/Windrecorder | streamlit | 262 | Search Annotation Idea | Hey man,
Do you think you could add an annotation feature to search results, with a save option (e.g. next to the flag mark) for notes, etc.?
We could highlight the video frame, draw, annotate, etc. anywhere over the browser UI, and have it saved as a library of notes, specifically for whatever we try to find in search results.
Similar to Rewind AI's OCR highlighting when indexing search results; you can also use Paddle OCR for reference.
Thanks!
| closed | 2025-01-27T09:23:31Z | 2025-02-08T01:45:18Z | https://github.com/yuka-friends/Windrecorder/issues/262 | [
"enhancement"
] | morningstar41131411811717116112213 | 8 |
mkhorasani/Streamlit-Authenticator | streamlit | 239 | using exact documentation, i can login rbriggs, but i cant login jsmith | Hi, I tried your code following your exact documentation, but why can I log in as "rbriggs" with his password "def", while I can't log in as "jsmith" with password "abc"? Also, you could update the installation section of your documentation to add: from streamlit_authenticator.utilities import (CredentialsError, ForgotError, Hasher, LoginError, RegisterError, ResetError, UpdateError) | closed | 2024-10-29T11:50:28Z | 2024-10-29T13:43:35Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/239 | [
"help wanted"
] | rhavif-budiman | 12 |
coqui-ai/TTS | deep-learning | 2,800 | [Bug] FastSpeech2 Expanding Tensor RunTimeError | ### Describe the bug
I'm trying to train with Fastspeech2 on a custom dataset with the LJSpeech format and am running into this error (please note that I have done a clean install as directed by the repo):
The error message I got:
```
> EPOCH: 0/10000
--> /workspace/fastspeech2_ljspeech-July-25-2023_02+13PM-0000000
[*] Pre-computing energys...
76%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 1412/1849 [00:25<00:04, 105.44it/s]/opt/conda/lib/python3.10/site-packages/librosa/core/spectrum.py:256: UserWarning: n_fft=1024 is too large for input signal of length=2
warnings.warn(
/opt/conda/lib/python3.10/site-packages/librosa/core/spectrum.py:256: UserWarning: n_fft=1024 is too large for input signal of length=2
warnings.warn(
/opt/conda/lib/python3.10/site-packages/librosa/core/spectrum.py:256: UserWarning: n_fft=1024 is too large for input signal of length=2
warnings.warn(
76%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 1412/1849 [00:36<00:11, 38.71it/s]
! Run is removed from /workspace/fastspeech2_ljspeech-July-25-2023_02+13PM-0000000
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/trainer/trainer.py", line 1805, in fit
self._fit()
File "/opt/conda/lib/python3.10/site-packages/trainer/trainer.py", line 1757, in _fit
self.train_epoch()
File "/opt/conda/lib/python3.10/site-packages/trainer/trainer.py", line 1467, in train_epoch
self.train_loader = self.get_train_dataloader(
File "/opt/conda/lib/python3.10/site-packages/trainer/trainer.py", line 931, in get_train_dataloader
return self._get_loader(
File "/opt/conda/lib/python3.10/site-packages/trainer/trainer.py", line 895, in _get_loader
loader = model.get_data_loader(
File "/workspace/TTS/TTS/tts/models/base_tts.py", line 315, in get_data_loader
dataset = TTSDataset(
File "/workspace/TTS/TTS/tts/datasets/dataset.py", line 168, in __init__
self.energy_dataset = EnergyDataset(
File "/workspace/TTS/TTS/tts/datasets/dataset.py", line 855, in __init__
self.precompute(precompute_num_workers)
File "/workspace/TTS/TTS/tts/datasets/dataset.py", line 881, in precompute
for batch in dataloder:
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/opt/conda/lib/python3.10/site-packages/torch/_utils.py", line 644, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
File "/workspace/TTS/TTS/tts/datasets/dataset.py", line 954, in collate_fn
energys_torch[i, :energy_len] = torch.LongTensor(energys[i])
RuntimeError: expand(torch.LongTensor{[513, 1]}, size=[513]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2)
```
### To Reproduce
I found a place that mentioned changing `r=1` to `r=7`, but this error persisted with both settings.
This is the script I ran for training the model:
```python
import os

from trainer import Trainer, TrainerArgs

from TTS.config.shared_configs import BaseAudioConfig, BaseDatasetConfig
from TTS.tts.configs.fastspeech2_config import Fastspeech2Config
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.forward_tts import ForwardTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor
from TTS.utils.manage import ModelManager
from TTS.tts.configs.shared_configs import CharactersConfig

output_path = os.path.dirname(os.path.abspath(__file__))

dataset = "./dataset"
assert os.path.exists(dataset), "Dataset path specified does not exist!"

# init configs
dataset_config = BaseDatasetConfig(
    formatter="ljspeech",
    meta_file_train="metadata.csv",
    path=os.path.join(output_path, dataset)
)

audio_config = BaseAudioConfig(
    sample_rate=44100,
    do_trim_silence=True,
    trim_db=60.0,
    signal_norm=False,
    mel_fmin=0.0,
    mel_fmax=8000,
    spec_gain=1.0,
    log_func="np.log",
    ref_level_db=20,
    preemphasis=0.0,
)

config = Fastspeech2Config(
    run_name="fastspeech2_ljspeech",
    audio=audio_config,
    batch_size=4,
    eval_batch_size=16,
    num_loader_workers=8,
    num_eval_loader_workers=4,
    compute_input_seq_cache=True,
    compute_f0=False,
    f0_cache_path=os.path.join(output_path, "f0_cache"),
    compute_energy=True,
    energy_cache_path=os.path.join(output_path, "energy_cache"),
    run_eval=False,
    test_delay_epochs=-1,
    epochs=10000,
    text_cleaner="english_cleaners",
    use_phonemes=False,
    phoneme_language="en-us",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    precompute_num_workers=4,
    print_step=50,
    print_eval=False,
    mixed_precision=False,
    max_seq_len=500000,
    output_path=output_path,
    datasets=[dataset_config],
    r=1,
    characters=CharactersConfig(
        pad="<PAD>",
        eos="<EOS>",
        bos="<BOS>",
        blank="<BLNK>",
        characters="0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ۵ٹثۃگںف۲۱ظچئژذ۳لیڈن۸ۓزم۹بھۂےطغ۰ہأؤسواتصعدء۴۶شخضڑپآحک۷رجق",
        punctuations=" ,.'٫؟،۔؛٪",
        is_unique=False,
        is_sorted=True
    ),
    test_sentences=[
    ]
)

# compute alignments
if not config.model_args.use_aligner:
    manager = ModelManager()
    model_path, config_path, _ = manager.download_model("tts_models/en/ljspeech/tacotron2-DCA")
    # TODO: make compute_attention python callable
    os.system(
        f"python TTS/bin/compute_attention_masks.py --model_path {model_path} --config_path {config_path} --dataset ljspeech --dataset_metafile metadata.csv --data_path ./recipes/ljspeech/LJSpeech-1.1/ --use_cuda true"
    )

# INITIALIZE THE AUDIO PROCESSOR
# Audio processor is used for feature extraction and audio I/O.
# It mainly serves to the dataloader and the training loggers.
ap = AudioProcessor.init_from_config(config)

# INITIALIZE THE TOKENIZER
# Tokenizer is used to convert text to sequences of token IDs.
# If characters are not defined in the config, default characters are passed to the config
tokenizer, config = TTSTokenizer.init_from_config(config)

# LOAD DATA SAMPLES
# Each sample is a list of ```[text, audio_file_path, speaker_name]```
# You can define your custom sample loader returning the list of samples.
# Or define your custom formatter and pass it to the `load_tts_samples`.
# Check `TTS.tts.datasets.load_tts_samples` for more details.
train_samples, eval_samples = load_tts_samples(
    dataset_config,
    eval_split=False,
    eval_split_max_size=config.eval_split_max_size,
    eval_split_size=config.eval_split_size,
)

# init the model
model = ForwardTTS(config, ap, tokenizer, speaker_manager=None)

# init the trainer and 🚀
trainer = Trainer(
    TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
)
trainer.fit()
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
- TTS: 0.15.6 (from `conda list` since the `collect_env_info.py` script gave the error `TTS has no attribute '__version__'`)
- PyTorch: 2.0.1
- OS: Ubuntu 16.04
```
### Additional context
_No response_ | closed | 2023-07-25T14:27:36Z | 2023-07-31T13:20:03Z | https://github.com/coqui-ai/TTS/issues/2800 | [
"bug"
] | zohaib-khan5040 | 2 |
plotly/dash | data-visualization | 3,106 | Search param removes values after ampersand, introduced in 2.18.2 | I pass to one of my pages a search string, like:
?param1=something&param2=something2
Accessing it using:
def layout(**kwargs):
In 2.18.1, this works for values that include an ampersand. For example:
?param1=something&bla&param2=something2
would result in
kwargs[param1]='something&bla'
With 2.18.2 I get just:
kwargs[param1]='something'
with anything after the ampersand removed.
I would guess this is related to #2991 .
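For reference (a generic sketch of standard query-string parsing, not Dash's internal handling, which evidently changed between 2.18.1 and 2.18.2): an unencoded `&` is normally treated as a parameter separator, which is why a literal ampersand inside a value is percent-encoded as `%26`. Python's stdlib illustrates the difference:

```python
from urllib.parse import parse_qs

# An unencoded "&" starts a new parameter, so the value gets split.
print(parse_qs("param1=something&bla&param2=something2"))
# {'param1': ['something'], 'param2': ['something2']}

# Percent-encoding the ampersand as %26 keeps it inside the value.
print(parse_qs("param1=something%26bla&param2=something2"))
# {'param1': ['something&bla'], 'param2': ['something2']}
```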
To be clear, I specifically downgraded dash to 2.18.1 and the issue went away. | open | 2024-12-12T11:43:21Z | 2025-01-17T14:05:16Z | https://github.com/plotly/dash/issues/3106 | [
"regression",
"bug",
"P2"
] | ezwc | 6 |
PokeAPI/pokeapi | api | 187 | super-contest-effect skips id | Hey there, thanks for this amazing resource!
I was following your recommendation to persist the dataset to a personal database when I hit a snag. For super-contest-effects, I was looping over the results and would get a 404 on id 3. Sure enough, id 3 doesn't exist! I've handled this in my code, but I am unsure of how it can be fixed on your end. I will update this issue with any other skipped values that I may come across. Thanks guys/gals!
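The workaround described above (tolerating gaps in the id sequence) can be sketched like this; the fetch function is a stub standing in for a real HTTP GET against `/api/v2/super-contest-effect/{id}/`, and the ids/payloads are illustrative:

```python
def collect_resources(fetch, ids):
    """Collect resources by id, skipping ids the API reports as missing."""
    found, missing = {}, []
    for resource_id in ids:
        payload = fetch(resource_id)
        if payload is None:  # stand-in for an HTTP 404 response
            missing.append(resource_id)
        else:
            found[resource_id] = payload
    return found, missing


# Stub simulating the gap at id 3 described above; payloads are made up.
fake_api = {1: {"appeal": 2}, 2: {"appeal": 2}, 4: {"appeal": 1}}
found, missing = collect_resources(fake_api.get, [1, 2, 3, 4])
print(missing)  # [3]
```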
| closed | 2016-05-12T12:31:37Z | 2016-05-13T03:37:03Z | https://github.com/PokeAPI/pokeapi/issues/187 | [] | zberk | 2 |
sinaptik-ai/pandas-ai | pandas | 1,263 | Update DuckDB version to the new major release 1.0 | ### 🚀 The feature
It would be a good time to upgrade the duckdb package version to the new major release 1.0, since it's the latest stable version from where most of the APIs should be final.
https://github.com/Sinaptik-AI/pandas-ai/blob/f895e5feb3a4a657ca866905748d323fe03d1adb/pyproject.toml#L19
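For illustration, the change being requested would amount to widening the pin along these lines (a hypothetical `pyproject.toml` fragment; the exact bounds are for the maintainers to choose):

```toml
[tool.poetry.dependencies]
duckdb = ">=1.0, <2.0"
```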
### Motivation, pitch
Having it locked to `<1` can be a blocker for anyone who wants to use the new release.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-07-02T11:31:40Z | 2024-10-08T16:08:22Z | https://github.com/sinaptik-ai/pandas-ai/issues/1263 | [
"enhancement"
] | pedrosalgadowork | 0 |
ultralytics/ultralytics | computer-vision | 19,239 | How to test the latency speed? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Thank you for your work. I would like to ask how you measured the speed of YOLOv11-N on the T4 to be 1.5ms. Is the following code correct? If it is correct, how many images did you test on?
```python
model = YOLO('yolo11n.yaml')
path = model.export(format="engine", half=True)
model = YOLO('yolo11n.engine')
model.predict()
```
Is this code correct? This code will read images from ultralytics/assets directory. Thanks!
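For reference, a latency figure like this is normally a mean over many warmed-up runs rather than a single call. A generic, model-agnostic timing harness might look like this (the `infer` callable is a placeholder for something like `lambda: model.predict(img)`; this is a sketch, not Ultralytics' official benchmark code):

```python
import time


def mean_latency_ms(infer, n_warmup=10, n_runs=100):
    """Average wall-clock latency of `infer()` in milliseconds, after warm-up."""
    for _ in range(n_warmup):  # warm-up runs: exclude one-off setup costs
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return (time.perf_counter() - start) * 1000 / n_runs


# Placeholder workload standing in for something like `lambda: model.predict(img)`.
print(mean_latency_ms(lambda: sum(range(1000))))
```

On a GPU, the device also has to be synchronized (e.g. `torch.cuda.synchronize()`) before the clock is stopped, and published tables are typically measured with TensorRT at batch size 1, averaged over the validation images.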
### Additional
_No response_ | closed | 2025-02-14T03:04:48Z | 2025-02-15T03:17:23Z | https://github.com/ultralytics/ultralytics/issues/19239 | [
"question",
"detect",
"exports"
] | sunsmarterjie | 8 |
proplot-dev/proplot | matplotlib | 343 | Artefacts in hist2d and hexbin output | When I'm making hist2d or hexbin plots in `proplot`, I have an issue where the hist2d rectangles aren't correctly shaped (it seems to affect the lowest pixel row), and when I use hexbin, some of the hexes appear to be smaller than, or sit below, other hexes.
Proplot code:
```python
import proplot as pplt
import numpy as np
import matplotlib.pyplot as plt
# Sample data
N = 500
state = np.random.RandomState(51423)
x = state.normal(size=(N,))
y = state.normal(size=(N,))
bins = pplt.arange(-3, 3, 0.25)
fig, axs = pplt.subplots(refaspect=1, width='50mm')
axs[0].hist2d(
    x, y, bins, vmin=0, vmax=10,
    cmap='Blues',
)
```
Proplot result:

Zoom of the rectangles:

Matplotlib code:
```python
import numpy as np
import matplotlib.pyplot as plt
# Sample data
N = 500
state = np.random.RandomState(51423)
x = state.normal(size=(N,))
y = state.normal(size=(N,))
bins = np.arange(-3, 3.2, 0.25)
fig, ax = plt.subplots()
ax.set_aspect(1)
ax.hist2d(x, y, bins, cmap='Blues', vmin=0, vmax=10)
```
Matplotlib result

For the hist2d it isn't too bad, especially at high resolutions (since it appears to affect only the lowest pixel line). PDF output shows the same effect.
The hexbin looks like this:
Proplot:
```python
...
axs[0].hexbin(
    x, y, gridsize=20, vmin=0, vmax=10,
    cmap='Blues',
)
```

Matplotlib:
```python
ax.hexbin(x, y, cmap='Blues', gridsize=20, vmin=0, vmax=10)
```

Version info:
`proplot==0.9.5`
`matplotlib==3.4.3`
Using py39 on windows 11
| open | 2022-02-16T11:59:56Z | 2022-02-17T10:09:32Z | https://github.com/proplot-dev/proplot/issues/343 | [
"enhancement"
] | Jhsmit | 3 |
fastapi/sqlmodel | sqlalchemy | 308 | failed to sync data from sqlite to mysql using sqlmodel | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
def create_db_and_tables():
    SQLModel.metadata.create_all(engine)


def sync_sqlite_to_mysql():
    # sqlite_file_name = 'db.db'
    sqlite = f"sqlite:////Users/m0nst3r/dev/py/tespla/db.db"
    connect_args = {"check_same_thread": False}
    engine1 = create_engine(sqlite, echo=True, connect_args=connect_args)
    print("engine 1 for sqlite:", engine1)

    mysql = "mysql+pymysql://root:root@127.0.0.1:3306/tespla"
    engine2 = create_engine(mysql, echo=True)
    print("engine 2 for mysql:", engine2)

    from sqlmodel import Session, select

    with Session(engine1) as s1:
        clients = s1.exec(select(Client)).all()
        tasks = s1.exec(select(Task)).all()
        vulns = s1.exec(select(Vuln)).all()
        users = s1.exec(select(User)).all()
        templates = s1.exec(select(Template)).all()

    with Session(engine2) as s2:
        for c in clients:
            print("adding client", c)
            s2.add(c)
            s2.commit()
        for t in tasks:
            print("adding task", t)
            s2.add(t)
            s2.commit()
        for v in vulns:
            print("adding vuln:", v)
            s2.add(v)
            s2.commit()
        for u in users:
            print("adding user:", u)
            s2.add(u)
            s2.commit()
        for t in templates:
            print("adding template:", t)
            s2.add(t)
            s2.commit()
        s2.commit()
    print("Done")


if __name__ == "__main__":
    sync_sqlite_to_mysql()
```
### Description
I searched for sqlite-to-MySQL converters, but with no luck.
So I figured I could initialize two sessions: one for the SQLite database, which has the data, and another for the empty MySQL database.
The console doesn't show errors, but the MySQL database is still empty after running the script.
Any idea what the problem is?
Thanks.
<img width="498" alt="image" src="https://user-images.githubusercontent.com/38524240/164196375-e1b544f3-fe41-4d0f-8b00-218072c6ca7d.png">
### Operating System
macOS
### Operating System Details
m1, newest
### SQLModel Version
0.0.6
### Python Version
Python 3.9.9
### Additional Context
_No response_ | closed | 2022-04-20T09:26:05Z | 2022-04-26T02:58:49Z | https://github.com/fastapi/sqlmodel/issues/308 | [
"question"
] | mr-m0nst3r | 3 |
deezer/spleeter | deep-learning | 162 | [Bug] Resource exhausted | ## Resource exhausted: OOM when allocating tensor with shape[2,21803,2049]
I'm running `spleeter-gpu` and I get the following error amongst many more:
```
Resource exhausted: OOM when allocating tensor with shape[2,21803,2049] and type complex64 on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node stft/rfft}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
```
## Steps to reproduce
1. Get a low-end GPU such as GTX 760
2. Run as `spleeter-gpu`
## Output
```bash
INFO:spleeter:Audio data loaded successfully
Traceback (most recent call last):
File "c:\anaconda\envs\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
return fn(*args)
File "c:\anaconda\envs\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "c:\anaconda\envs\spleeter-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[2,21803,2049] and type complex64 on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node stft/rfft}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[strided_slice_48/_2243]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[2,21803,2049] and type complex64 on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node stft/rfft}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
```
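For a sense of scale: the single tensor named in the OOM message alone needs roughly 680 MB (complex64 is 8 bytes per element), a large share of the 2 GB of VRAM a GTX 760 typically has:

```python
shape = (2, 21803, 2049)  # tensor shape from the OOM message
bytes_per_elem = 8        # complex64 = two float32s

n_elems = 1
for dim in shape:
    n_elems *= dim

size_mb = n_elems * bytes_per_elem / 1024 / 1024
print(round(size_mb))  # 682
```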
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 |
| Installation type | Conda |
| RAM available | 8 GB |
| Hardware spec | i7-960, GTX 760 |
## Additional context
Is there a fix to this?
Maybe some TF reconfiguration?
[I read here that](https://github.com/tensorflow/models/issues/1993) lowering the batch size should fix the issue.
How is this done for spleeter? Which file should I edit? | closed | 2019-12-05T13:41:22Z | 2020-05-23T02:25:53Z | https://github.com/deezer/spleeter/issues/162 | [
"bug",
"GPU",
"Tensorflow"
] | aidv | 8 |
encode/databases | sqlalchemy | 580 | create table does not create unique keys and indexes | By defining the index on the columns and as a unique key, the table is created with the CreateTable function, but the keys are not.
```py
users = Table(
    'users',
    metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(length=255)),
    Column('email', String(length=255), unique=True, index=True)
)
```
```py
for table in metadata.tables.values():
    schema = sqlalchemy.schema.CreateTable(table, if_not_exists=True)
    query = schema.compile(dialect=postgresql.dialect())
    await database.execute(query=str(query))
``` | open | 2024-01-20T15:39:45Z | 2024-01-20T15:39:45Z | https://github.com/encode/databases/issues/580 | [] | YuriFontella | 0 |
pandas-dev/pandas | pandas | 61,135 | Issue Title Here | Description of the issue goes here. | closed | 2025-03-17T07:41:22Z | 2025-03-17T07:42:01Z | https://github.com/pandas-dev/pandas/issues/61135 | [] | saurav-chakravorty | 0 |
deeppavlov/DeepPavlov | tensorflow | 1,467 | Windows support for DeepPavlov v0.14.1 and v0.15.0 | **DeepPavlov version** : 0.14.1 and 0.15.0
**Python version**: 3.7
**Operating system** (ubuntu linux, windows, ...): Windows 10
**Issue**:
Attempting to upgrade to v0.14.1 or v0.15.0 fails on Windows, as uvloop is not supported on Windows. Is this a known issue, or are there any workarounds?
See installation error traceback (file paths removed):
**Error (including full traceback)**:
```
(venv) C:\...>pip install --upgrade deeppavlov==0.14.1
Collecting deeppavlov==0.14.1
Using cached deeppavlov-0.14.1-py3-none-any.whl (988 kB)
Requirement already satisfied: numpy==1.18.0 in c:\... (from deeppavlov==0.14.1) (1.18.0)
Requirement already satisfied: overrides==2.7.0 in c:\... (from deeppavlov==0.14.1) (2.7.0)
Requirement already satisfied: h5py==2.10.0 in c:\... (from deeppavlov==0.14.1) (2.10.0)
Requirement already satisfied: click==7.1.2 in c:\... (from deeppavlov==0.14.1) (7.1.2)
Requirement already satisfied: rusenttokenize==0.0.5 in c:\... (from deeppavlov==0.14.1) (0.0.5)
Requirement already satisfied: pytelegrambotapi==3.6.7 in c:\... (from deeppavlov==0.14.1) (3.6.7)
Requirement already satisfied: pandas==0.25.3 in c:\... (from deeppavlov==0.14.1) (0.25.3)
Requirement already satisfied: scikit-learn==0.21.2 in c:\... (from deeppavlov==0.14.1) (0.21.2)
Requirement already satisfied: aio-pika==6.4.1 in c:\... (from deeppavlov==0.14.1) (6.4.1)
Requirement already satisfied: pytz==2019.1 in c:\... (from deeppavlov==0.14.1) (2019.1)
Requirement already satisfied: Cython==0.29.14 in c:\... (from deeppavlov==0.14.1) (0.29.14)
Requirement already satisfied: scipy==1.4.1 in c:\... (from deeppavlov==0.14.1) (1.4.1)
Requirement already satisfied: pymorphy2==0.8 in c:\... (from deeppavlov==0.14.1) (0.8)
Requirement already satisfied: sacremoses==0.0.35 in c:\... (from deeppavlov==0.14.1) (0.0.35)
Requirement already satisfied: pyopenssl==19.1.0 in c:\... (from deeppavlov==0.14.1) (19.1.0)
Requirement already satisfied: pymorphy2-dicts-ru in c:\... (from deeppavlov==0.14.1) (2.4.417127.4579844)
Requirement already satisfied: prometheus-client==0.7.1 in c:\... (fromdeeppavlov==0.14.1) (0.7.1)
Requirement already satisfied: ruamel.yaml==0.15.100 in c:\... (from deeppavlov==0.14.1) (0.15.100)
Requirement already satisfied: filelock==3.0.12 in c:\... (from deeppavlov==0.14.1) (3.0.12)
Requirement already satisfied: fastapi==0.47.1 in c:\... (from deeppavlov==0.14.1) (0.47.1)
Requirement already satisfied: tqdm==4.41.1 in c:\... (from deeppavlov==0.14.1) (4.41.1)
Requirement already satisfied: requests==2.22.0 in c:\... (from deeppavlov==0.14.1) (2.22.0)
Requirement already satisfied: pydantic==1.3 in c:\... (from deeppavlov==0.14.1) (1.3)
Collecting uvloop==0.14.0
Using cached uvloop-0.14.0.tar.gz (2.0 MB)
ERROR: Command errored out with exit status 1:
command: 'c:\...\venv\scripts\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\...\\AppData\\Local\\Temp\\pip-install-_6m26rhl\\uvloop_f3b1771d349c4f5facada899f0c22cec\\setup.py'"'"'; __file__='"'"'C:\\...\\AppData\\Local\\Temp\\pip-install-_6m26rhl\\uvloop_f3b1771d349c4f5facada899f0c22cec\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.re
ad().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\...\AppData\Local\Temp\pip-pip-egg-info-x2q39lc9'
cwd: C:\...\AppData\Local\Temp\pip-install-_6m26rhl\uvloop_f3b1771d349c4f5facada899f0c22cec\
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\...\AppData\Local\Temp\pip-install-_6m26rhl\uvloop_f3b1771d349c4f5facada899f0c22cec\setup.py", line 15, in <module>
raise RuntimeError('uvloop does not support Windows at the moment')
RuntimeError: uvloop does not support Windows at the moment
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/84/2e/462e7a25b787d2b40cf6c9864a9e702f358349fc9cfb77e83c38acb73048/uvloop-0.14.0.ta
r.gz#sha256=123ac9c0c7dd71464f58f1b4ee0bbd81285d96cdda8bc3519281b8973e3a461e (from https://pypi.org/simple/uvloop/). Command errored out with e
xit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement uvloop==0.14.0 (from deeppavlov) (from versions: 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4
.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.4.10, 0.4.11, 0.4.12, 0.4.13, 0.4.14, 0.4.15, 0.4.16, 0.4.17, 0.4.18, 0.4.19, 0.4.20, 0.4.21, 0.4.22, 0.4.23,
0.4.24, 0.4.25, 0.4.26, 0.4.27, 0.4.28, 0.4.29, 0.4.30, 0.4.31, 0.4.32, 0.4.33, 0.4.34, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.6.0, 0.6.5
, 0.6.6, 0.6.7, 0.6.8, 0.7.0, 0.7.1, 0.7.2, 0.8.0, 0.8.1, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.11.0, 0.11.1, 0.11.2, 0.11.3, 0.12.0r
c1, 0.12.0, 0.12.1, 0.12.2, 0.13.0rc1, 0.13.0, 0.14.0rc1, 0.14.0rc2, 0.14.0, 0.15.0, 0.15.1, 0.15.2, 0.15.3)
ERROR: No matching distribution found for uvloop==0.14.0
```
| closed | 2021-07-21T19:24:20Z | 2022-11-09T00:15:14Z | https://github.com/deeppavlov/DeepPavlov/issues/1467 | [
"bug"
] | priyankshah7 | 6 |
jonaswinkler/paperless-ng | django | 794 | [BUG] Error storing parsed document to postgres | **Describe the bug**
This is against paperless-ng 1.3.2
Several (non-scanned) PDFs had the same error while being consumed:
`A string literal cannot contain NUL (0x00) characters`
I believe that's postgres complaining - the top search results give a very relevant [Django+Postgres issue](https://stackoverflow.com/questions/57371164/django-postgres-a-string-literal-cannot-contain-nul-0x00-characters)
Happy to track this down and commit a patch if there's a consensus on a preferred way to remove/replace NULLs.
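For reference, the usual remedy discussed in that thread is to strip NUL bytes from the parsed text before it is handed to the database, since PostgreSQL text values cannot contain `0x00` (a minimal sketch, not the actual paperless-ng code path):

```python
def sanitize_for_postgres(text: str) -> str:
    """Remove NUL (0x00) characters, which PostgreSQL text columns reject."""
    return text.replace("\x00", "")


ocr_output = "CARFAX\x00 Vehicle History\x00"
print(sanitize_for_postgres(ocr_output))  # CARFAX Vehicle History
```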
**To Reproduce**
Steps to reproduce the behavior:
1. _Consume_ the attached file:
[Free Sample CARFAX Vehicle History Report.pdf](https://github.com/jonaswinkler/paperless-ng/files/6186943/Free.Sample.CARFAX.Vehicle.History.Report.pdf)
(_edit: swapped original for file with same issue_)
2. Error reported above appears in web interface and logs.
**Expected behavior**
Full import succeeds without error.
**Webserver logs**
First run is below:
```
[2021-03-18 21:57:29,437] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/src/../consume/CARFAX Vehicle History Report for this Free Sample CARFAX Vehicle History Report.pdf to the task queue.
[2021-03-18 21:57:29,514] [INFO] [paperless.consumer] Consuming CARFAX Vehicle History Report for this Free Sample CARFAX Vehicle History Report.pdf
[2021-03-18 21:57:29,583] [DEBUG] [paperless.consumer] Parsing CARFAX Vehicle History Report for this Free Sample CARFAX Vehicle History Report.pdf...
[2021-03-18 21:57:33,892] [DEBUG] [paperless.parsing.tesseract] Extracted text from PDF file /usr/src/paperless/src/../consume/CARFAX Vehicle History Report for this Free Sample CARFAX Vehicle History Report.pdf
[2021-03-18 21:57:33,979] [DEBUG] [paperless.parsing.tesseract] Calling OCRmyPDF with args: {'input_file': '/usr/src/paperless/src/../consume/CARFAX Vehicle History Report for this Free Sample CARFAX Vehicle History Report.pdf', 'output_file': '/tmp/paperless/paperless-xx1t5ohq/archive.pdf', 'use_threads': True, 'jobs': 4, 'language': 'eng', 'output_type': 'pdfa', 'progress_bar': False, 'skip_text': True, 'clean': True, 'deskew': True, 'rotate_pages': True, 'rotate_pages_threshold': 12.0, 'sidecar': '/tmp/paperless/paperless-xx1t5ohq/sidecar.txt'}
[2021-03-18 21:57:47,134] [DEBUG] [paperless.consumer] Generating thumbnail for CARFAX Vehicle History Report for this Free Sample CARFAX Vehicle History Report.pdf...
[2021-03-18 21:57:55,535] [ERROR] [paperless.consumer] The following error occured while consuming CARFAX Vehicle History Report for this Free Sample CARFAX Vehicle History Report.pdf: A string literal cannot contain NUL (0x00) characters.
```
**Relevant information**
- Ubuntu 20.04.2 LTS
- Google Chrome 89.0.4389.90
- Installation method: docker
- NO configuration changes made in `docker-compose.yml`, `docker-compose.env` or `paperless.conf`.
| closed | 2021-03-19T18:31:29Z | 2021-04-06T19:55:21Z | https://github.com/jonaswinkler/paperless-ng/issues/794 | [
"bug",
"fixed in next release"
] | jessedp | 8 |
joouha/euporie | jupyter | 94 | Installation issues | Hi,
I think this could help my dev workflow a great deal, if only I manage to get it to work. So far, nothing works as expected. Here are some insights:

When opening euporie-notebook, the top bar just disappears. And hitting enter does not work to edit cells. Plus, keybindings using Ctrl or Shift do not work at all.
When using euporie-console the main problem begins

Nothing works, and these escape characters start rendering for any key that is not a letter. I'm using Kitty, and I added the remaps specified in the docs.
The weirdest thing is that I can run the Console and Notebook when using Floaterm within Neovim

But I still have some rendering issues.
I'm using Kitty as my terminal and I installed euporie in my conda environment since I want to make use of my packages. Is there any part of the setup that I'm missing? Any guidelines would be deeply appreciated.
Thanks,
Alfonso
| closed | 2023-08-19T23:26:08Z | 2024-01-18T13:21:52Z | https://github.com/joouha/euporie/issues/94 | [] | datacubeR | 3 |
dmlc/gluon-cv | computer-vision | 1,079 | [Semantic Segmentation]`iters_per_epoch` computed incorrectly in training script? | Looking at the input to LRScheduler from Semantic segmentation [training script](https://github.com/dmlc/gluon-cv/blob/master/scripts/segmentation/train.py#L151), shouldn't it be `len(self.train_data) // self.args.batch_size` which is the number of batches? | closed | 2019-12-04T22:08:38Z | 2019-12-05T21:18:48Z | https://github.com/dmlc/gluon-cv/issues/1079 | [] | chenliu0831 | 4 |
microsoft/qlib | deep-learning | 1884 | Which two time series are used in CORR5? | ## ❓ Questions and Help
I was unable to find how the technical indicator CORR5 is calculated. Please advise which two time series were used.

Thank you,
Vishal | closed | 2025-01-14T11:56:49Z | 2025-01-19T11:34:18Z | https://github.com/microsoft/qlib/issues/1884 | [
"question"
] | vishalkhialani | 2 |
streamlit/streamlit | data-visualization | 10,722 | Add a bottom layout container | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Add a `st.bottom` root container. This container behaves similarly to st.sidebar but is always pinned to the bottom of the main section. It will always be visible even when the user is scrolling.
### Why?
Allow pinning a container with elements to the bottom of the main section. These elements will always be visible. This also allows to add elements next to the bottom pinned st.chat_input element as requested here: https://github.com/streamlit/streamlit/issues/8185
### How?
```python
with st.bottom:
    st.button("Button pinned to bottom")
    st.chat_input("Chat input pinned to bottom")

st.bottom.write("Text pinned to bottom")
```
### Additional Context
- [Bottom container in streamlit-extras](https://arnaudmiribel.github.io/streamlit-extras/extras/bottom_container/) (No 2. most used extra)
Related issues:
- https://github.com/streamlit/streamlit/issues/8185
- https://github.com/streamlit/streamlit/issues/9545
- https://github.com/streamlit/streamlit/issues/7311
- https://github.com/streamlit/streamlit/issues/8564
| open | 2025-03-11T18:37:29Z | 2025-03-11T19:22:06Z | https://github.com/streamlit/streamlit/issues/10722 | [
"type:enhancement",
"area:layout"
] | lukasmasuch | 1 |
sktime/sktime | scikit-learn | 7,325 | [MNT] CI timeouts on macos-13, test-full | The macos-13 CI times out in the `test_full` workflow - even on pull requests with minimal changes, where no suspect code should be run.
We ought to investigate this.
Example: https://github.com/sktime/sktime/pull/7288
Potential solution: https://github.com/sktime/sktime/pull/7275 | open | 2024-10-23T08:46:58Z | 2024-10-23T08:47:34Z | https://github.com/sktime/sktime/issues/7325 | [
"maintenance"
] | fkiraly | 0 |
flairNLP/flair | nlp | 3024 | Wrong context preparation in SequenceTagger.predict() | Hi everyone,
I finetuned a TransformerWordEmbeddings based on 'microsoft/mdeberta-v3-base' for a NER task and set use_context=True. I noticed some discrepancy between the test that happens automatically after the training (implemented by flair) and the tests that I do with SequenceTagger manually afterwards. After debugging I noticed that the sentences in a batch are being reordered in SequenceTagger.predict():
https://github.com/flairNLP/flair/blob/5a13598ef5c20fdb79e441720b82250c0881094e/flair/models/sequence_tagger_model.py#L465
So the context that is being expanded from this batch is wrong, while the context prepared in the automatic test after training is correct (no sentence reordering happens there, and SequenceTagger._prepare_tensors(batch) is called directly).
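To make the suspected effect concrete, here is a toy sketch (not flair's actual code) of how sorting a batch, e.g. by length for efficient padding, changes which sentence ends up as each sentence's neighbour, and therefore what expanded context it would receive:

```python
batch = ["a short one", "this sentence is quite a bit longer", "medium length here"]


def left_neighbours(sents):
    """Map each sentence to the sentence immediately before it in the batch."""
    return {s: (sents[i - 1] if i > 0 else None) for i, s in enumerate(sents)}


original = left_neighbours(batch)
reordered = left_neighbours(sorted(batch, key=lambda s: len(s.split()), reverse=True))
print(original == reordered)  # False: sorting by length changes each sentence's context
```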
Can you let me know if my observation is correct or if I missed something? | closed | 2022-12-13T13:28:39Z | 2023-02-20T22:57:09Z | https://github.com/flairNLP/flair/issues/3024 | [
"fix-for-release-0.12"
] | mohammad-al-zoubi | 4 |
saulpw/visidata | pandas | 1,568 | [sqlite/addcol-incr] addcol-incr column shows only null values in dup-selected sheet | **Small description**
In a SQLite sheet, addcol-incr `i`, then select some rows, then `"` dup-select. New sheet has the column but only with null values.
**Expected result**
The incr column would have the values from the original sheet.
**Actual result with screenshot**
https://asciinema.org/a/xIcI3lNiGpYtQGdlM765gOKFU
**Steps to reproduce with sample data and a .vd**
```
vd sample_data/employees.sqlite +:emp::
```
Then `i` `,` `"`
**Additional context**
Please include the version of VisiData. Latest develop version.
| closed | 2022-10-15T18:53:58Z | 2023-01-06T01:07:42Z | https://github.com/saulpw/visidata/issues/1568 | [
"bug",
"fixed"
] | frosencrantz | 1 |
chatopera/Synonyms | nlp | 77 | absl-py version conflict | # description
synonyms depends on absl-py==0.1.10, but tensorflow 1.13.1 requires absl-py>=0.4, so the two cannot coexist.
## current
## expected
Upgrade absl-py to version 0.4 or above.
# solution
# environment
ubuntu 16.04, linux 4.4.0, python 2.7
* version:
The commit hash (`git rev-parse HEAD`)
| closed | 2019-03-11T02:51:31Z | 2019-04-21T01:20:22Z | https://github.com/chatopera/Synonyms/issues/77 | [] | andyxzq | 1 |
explosion/spaCy | data-science | 13,751 | Apple and Docker support | I'm using Spacy 3.8.4 on a MacMin with M4PRO chip and this works great.
When trying to build a Docker container, it fails because of blis and thinc compatibility.
So I can run it from my virtual environment, which works, but on our development platform we use a MacBook Pro with an M1 Max chip, and there `pip install spacy` and `pip install -U "spacy[apple]"` do NOT work.
Questions:
1. Is there a wheel to download for the M1 Max?
2. Can you make it usable within Docker?
| open | 2025-02-12T13:22:27Z | 2025-03-20T11:39:33Z | https://github.com/explosion/spaCy/issues/13751 | [] | arikivandeput | 2 |
pytest-dev/pytest-html | pytest | 438 | Output the stdout in the use case to the console and html report at the same time | I'm using pytest-html and find the plugin generally fantastic.
I have only been using pytest-html for a short time, though. During use, I found that adding the `-s` parameter to the pytest command displays the standard output of the test case in the console, but then the standard output cannot be captured for the HTML report. With the `--capture=sys` parameter, the HTML report captures it, but the standard output in the console disappears. Is there any way to display it in the console and the HTML report at the same time?
gunthercox/ChatterBot | machine-learning | 2,330 | An error in installing chatterbot | I have used the command pip install git+git://github.com/gunthercox/ChatterBot.git@master.
But i received an error:subprocess-exited-with-error
Exit code:128
Note:This error originates from a subprocess and is likely not a problem with pip.
Kindly help me to fix this issue......
| closed | 2023-10-24T11:10:56Z | 2025-02-17T19:23:15Z | https://github.com/gunthercox/ChatterBot/issues/2330 | [] | AmalDeepthi | 2 |
healthchecks/healthchecks | django | 92 | expose publicly accessible OK/NOT_OK curlable URL per check | This idea is somewhat similar to the publicly accessible badges per tag, but the feature request here is a text string per check (rather then svg image per tag).
It would be cool if a keyword is exposed at a per check unique URL so that I can poll whether a healthcheck is still in good shape or not.
health check in good shape:
```
$ curl https://hchk.io/txt/0b5eb61f-58af-4c8a-b6ce-2732892abe
OK
$
```
healthcheck in bad shape
```
$ curl https://hchk.io/txt/0b5eb61f-58af-4c8a-b6ce-2732892abe
NOT_OK
$
```
| closed | 2016-10-08T09:55:39Z | 2017-06-29T13:12:59Z | https://github.com/healthchecks/healthchecks/issues/92 | [] | job | 8 |
SYSTRAN/faster-whisper | deep-learning | 584 | CUDA 12.1 and CUBLAS and CUDNN without having to compile from source | Is CUDA 12.1 support coming or in the works? Just curious, since faster-whisper keeps looking for cublas11.dll... and although I don't use cuDNN, I'm assuming that would be another aspect to consider? Thanks. | closed | 2023-11-26T16:40:45Z | 2024-11-26T12:28:21Z | https://github.com/SYSTRAN/faster-whisper/issues/584 | [] | BBC-Esq | 33 |
vitalik/django-ninja | rest-api | 1,159 | [BUG] CustomParser request header issue when sending json and file | **Bug description**
The problem arises in a `CustomParser` when `parse_body` is invoked for requests that may include both file uploads and a JSON payload. The request is processed using an `_HttpRequest` instance; however, that class does not provide access to the request headers.
In summary, the primary concern is that headers cannot be extracted from the `_HttpRequest` instance during parsing. Either a mechanism to retrieve headers should be introduced, or the current workflow adjusted to accommodate this constraint.
If this is considered a limitation, it would be my delight to address it in a pull request.
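To make the point concrete, here is a stdlib-only stand-in for the behaviour I'm describing (the class and function names are illustrative, not django-ninja's real internals):

```python
# Illustrative stand-in only; not django-ninja's actual classes.
class HttpRequestStandIn:
    """Mimics the object handed to parse_body: it carries a body but no headers."""
    def __init__(self, body):
        self.body = body  # note: no .headers attribute is available

def content_type_of(request):
    """A parser that needs the Content-Type header has nothing to read from."""
    headers = getattr(request, "headers", None)
    return headers.get("Content-Type") if headers else None
```

In the real code path this means a custom parser cannot branch on the Content-Type to decide between the JSON payload and the multipart file upload.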
**Versions:**
- Python version: 3.11.7
- Django version: 4.2.5
- Django-Ninja version: 1.1.0
| open | 2024-05-09T14:07:16Z | 2024-05-09T14:09:50Z | https://github.com/vitalik/django-ninja/issues/1159 | [] | BQBB | 0 |
scikit-tda/kepler-mapper | data-visualization | 160 | What is the meaning of colors? | Hello,
What is the meaning of the colors in the visualization? Could you please update the documentation to explain it?
Thanks! | closed | 2019-04-03T13:22:24Z | 2019-04-12T11:02:26Z | https://github.com/scikit-tda/kepler-mapper/issues/160 | [] | larionov-s | 4 |
axnsan12/drf-yasg | rest-api | 383 | Try-it-out presenting wrong input type in swagger's UI | Howdy - a deep dive into the documentation isn't getting me anywhere so hopefully someone can help here.
I've got a model with a lookup field generated using `hashid_field.HashidField()` as provided by the [django-hashid-field](https://github.com/nshafer/django-hashid-field) library.
I've got my serializers all hooked up so this is used as the lookup field when passing data around. So far so good, from a DRF point of view everything is working as expected.
The problem is the Swagger UI. The reference field is a string; however, the Try-it-out feature of Swagger presents an integer input box and rejects any string placed in there.
Here is my model, url view and serializer. I've stripped out a lot of code but the core here still generates the same incorrect Try-it-out panel:
```
## model.py
class Schedule(models.Model):
    reference = HashidField()
    modified = models.DateTimeField(auto_now=True)

## view.py
class ScheduleAPIView(viewsets.ModelViewSet):
    queryset = models.Schedule.objects.all()
    serializer_class = serializers.ScheduleSerializer
    permission_classes = [IsAuthenticated]
    lookup_field = 'reference'
    lookup_value_regex = '[0-z]+'

## serializer.py
class ScheduleSerializer(serializers.ModelSerializer):
    reference = HashidSerializerCharField(
        source_field='schedule.Schedule.reference',
        read_only=True,
    )
    url = serializers.HyperlinkedIdentityField(
        view_name='schedule:api:ScheduleObject-detail',
        lookup_field='reference'
    )

    class Meta:
        model = schedule_models.Schedule
        fields = (
            'reference',
            'url',
            'modified'
        )

## urls.py
router = routers.SimpleRouter()
router.register(
    r'schedule',
    views.ScheduleAPIView,
    base_name='ScheduleObject',
)

api_info = openapi.Info(
    title="ExpressWay Global",
    default_version='v0.1',
    description="ExpressWay Global",
)

schema_view = get_schema_view(
    api_info,
    public=True,
    permission_classes=(permissions.AllowAny,),
)

urlpatterns = [
    # API Documentation
    url(
        r'^swagger/$',
        schema_view.with_ui(
            'swagger',
            cache_timeout=0
        ),
        name='schema-swagger-ui'
    ),
]
```
As you can see the `reference` field in Try-it-out only accepts integers:

If anyone has any clues about getting the correct field in the form there that'd be awesome.
Cheers!
| closed | 2019-06-14T15:23:14Z | 2019-06-14T15:44:42Z | https://github.com/axnsan12/drf-yasg/issues/383 | [] | ColinWaddell | 3 |
pyro-ppl/numpyro | numpy | 1,921 | Raise a warning to disable progress_bar when chain_method is a callable | When `chain_method` is a callable, we can't control how the progress-bar logic interacts with those callables, so we should raise a warning informing users to set `progress_bar=False` in the MCMC constructor.
Context: https://github.com/pyro-ppl/numpyro/issues/1725#issuecomment-2508621736
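A minimal sketch of the proposed check (the function name and message are placeholders, not NumPyro's actual code):

```python
import warnings

def warn_if_progress_bar_unsupported(chain_method, progress_bar):
    """Warn when a callable chain_method is combined with progress_bar=True."""
    if callable(chain_method) and progress_bar:
        warnings.warn(
            "progress_bar is not supported when chain_method is a callable; "
            "set progress_bar=False in the MCMC constructor."
        )
```

The warning would fire once at MCMC construction time, so users who pass a callable `chain_method` learn immediately that they should disable the progress bar.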
cc @mdmould | closed | 2024-12-01T12:54:14Z | 2024-12-04T15:53:58Z | https://github.com/pyro-ppl/numpyro/issues/1921 | [
"warnings & errors"
] | fehiepsi | 0 |
jwkvam/bowtie | plotly | 262 | Error building with webpack | ```python
yarn install v1.22.4
warning package.json: No license field
warning No license field
[1/4] Resolving packages...
success Already up-to-date.
Done in 0.45s.
'.' is not recognized as an internal or external command,
operable program or batch file.
Traceback (most recent call last):
File "example.py", line 126, in <module>
def main():
File "C:\Users\sabar\.conda\envs\Drug_Analysis\lib\site-packages\bowtie\_command.py", line 103, in command
sys.exit(cmd(arg))
File "C:\Users\sabar\.conda\envs\Drug_Analysis\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "C:\Users\sabar\.conda\envs\Drug_Analysis\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "C:\Users\sabar\.conda\envs\Drug_Analysis\lib\site-packages\click\core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\sabar\.conda\envs\Drug_Analysis\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\sabar\.conda\envs\Drug_Analysis\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "C:\Users\sabar\.conda\envs\Drug_Analysis\lib\site-packages\bowtie\_command.py", line 61, in build
app._build()
File "C:\Users\sabar\.conda\envs\Drug_Analysis\lib\site-packages\bowtie\_app.py", line 949, in _build
raise WebpackError('Error building with webpack')
bowtie.exceptions.WebpackError: Error building with webpack
``` | open | 2020-04-07T18:06:54Z | 2020-04-07T18:06:54Z | https://github.com/jwkvam/bowtie/issues/262 | [] | SabarishVT | 0 |
streamlit/streamlit | python | 10,424 | URL: http://localhost:8080 | ### Message
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false.
You can now view your Streamlit app in your browser.
URL: http://localhost:8080 | closed | 2025-02-18T07:39:53Z | 2025-02-18T09:58:29Z | https://github.com/streamlit/streamlit/issues/10424 | [
"type:kudos"
] | gn9209366 | 0 |
aminalaee/sqladmin | sqlalchemy | 474 | Detail and Edit view queries make joins for all related models even if those relationships are not specified as fields | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
My Stack:
- FastAPI
- SqlModel
- SqlAdmin
- PostgreSQL 12
If I have a SqlAdmin `ModelView` for a SqlModel model with many relationships defined, the query used to populate the details view and the edit view does a join on all relations even when the related fields are not included in the `ModelView`'s `column_details_list` or `form_columns`.
This is an issue for me because one of my models has a many-to-many relationship with another table that can include tens of thousands of records in the relationship. The model has other relations as well, but those have fewer records. When attempting to load the detail or edit view for this model, the request never completes because the join is so large, and I need to restart the server.
I suspect this issue may be related to either SqlModel or SqlAlchemy internal implementation details/behavior. i.e. they are joining all relationships by default. However, I did see there was a fix released for SqlAdmin that fixed a similar issue for the list view here: https://github.com/aminalaee/sqladmin/pull/409
### Steps to reproduce the bug
Please note that this minimal example will likely not reproduce the issue, because it only occurs due to the high number of related records in my database. `Brand` is the model I'm trying to load the detail and edit views for. `Asset` is the related model with tens of thousands of related records.
```python
# models.py
class BrandAssetLink(SQLModel, table=True):
    __tablename__ = 'asset_asset_brands'
    id: Optional[int] = Field(default=None, primary_key=True)
    asset_id: str = Field(max_length=512)
    brand_id: int

class Brand(SQLModel, table=True):
    __tablename__ = 'brands'
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str = Field(max_length=255)
    assets: List['Asset'] = Relationship(
        back_populates='brands',
        link_model=BrandAssetLink,
        sa_relationship_kwargs=dict(
            primaryjoin='Brand.id==BrandAssetLink.brand_id',
            secondaryjoin='Asset.page_url_id==BrandAssetLink.asset_id'
        )
    )

class Asset(SQLModel, table=True):
    __tablename__ = 'asset_asset'
    page_url_id: str = Field(max_length=512, primary_key=True)
    title: str = Field(max_length=1024)
    url: str = Field(max_length=4096)
    brands: List['Brand'] = Relationship(
        back_populates='assets',
        link_model=BrandAssetLink,
        sa_relationship_kwargs=dict(
            primaryjoin='Asset.page_url_id==BrandAssetLink.asset_id',
            secondaryjoin='Brand.id==BrandAssetLink.brand_id'
        )
    )
```
```python
# admin.py
class BrandAdmin(ModelView, model=Brand):
    column_list = [
        Brand.id, Brand.name
    ]
    column_searchable_list = [Brand.name]
    column_sortable_list = [
        Brand.id, Brand.name
    ]
    column_default_sort = (Brand.id, False)
    column_details_list = [
        Brand.id,
        Brand.name
    ]
    form_columns = [
        Brand.name
    ]
```
### Expected behavior
I would expect the detail and edit views to only perform joins for the relationships defined in the `column_details_list` and `form_columns` attributes.
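As a purely illustrative sketch (loosely mirroring the list-view fix in #409; the function and names are made up, not sqladmin's internals), the selection logic I would expect is:

```python
def relationships_to_join(all_relationship_names, configured_columns):
    """Only join relationships that actually appear in the configured columns.

    configured_columns may mix plain strings and column/relationship attributes;
    real code would use the attribute's .key rather than str().
    """
    wanted = {
        col if isinstance(col, str) else getattr(col, "key", str(col))
        for col in configured_columns
    }
    return [name for name in all_relationship_names if name in wanted]
```

With `column_details_list = [Brand.id, Brand.name]`, this would return no relationships at all, and the detail query would stay join-free.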
### Actual behavior
Detail and edit views perform joins for ALL related fields even when not specified in `column_details_list` and `form_columns`.
### Debugging material
_No response_
### Environment
- Mac
- application is running in a docker container: `python:3.8-slim`
- Connected to external Postgres Db
### Additional context
I also tried putting the related fields in the `ModelView`'s `form_ajax_refs` attribute to see if that would defer the joins, but that didn't seem to change anything. | closed | 2023-04-19T17:07:26Z | 2023-05-10T23:34:18Z | https://github.com/aminalaee/sqladmin/issues/474 | [] | FFX01 | 2 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 254 | Results on VOC07 are about 7% mAP[0.5] lower than the original paper | Training on the VOC2007 trainval images and testing on the VOC2007 test images gives 63% mAP[0.5], but the Faster R-CNN paper reports results close to 70% (although the paper uses 300 proposals). Could this be because some settings differ? For example, some backbone layers are frozen, but if the VOC data is insufficient, shouldn't freezing give better results?

| closed | 2021-05-10T20:39:29Z | 2021-05-22T02:19:35Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/254 | [] | LinfengYuan1997 | 1 |
axnsan12/drf-yasg | django | 140 | Allow use of custom OpenAPISchemaGenerator in generate_swagger command | The `generate_swagger` always uses `OpenAPISchemaGenerator`, it would be great to use custom schema generator defined via `SWAGGER_SETTINGS`. | closed | 2018-06-08T18:58:27Z | 2018-06-16T13:31:33Z | https://github.com/axnsan12/drf-yasg/issues/140 | [] | intellisense | 0 |
matplotlib/mplfinance | matplotlib | 51 | err = f'NOT is_color_like() for {key}[\'{updown}\'] = {colors[updown]}' | Hi,
Getting this error:
Debian, Python 3.5
import mplfinance as mpf
File "/usr/local/lib/python3.5/dist-packages/mplfinance/__init__.py", line 1, in <module>
from mplfinance.plotting import plot, make_addplot
File "/usr/local/lib/python3.5/dist-packages/mplfinance/plotting.py", line 15, in <module>
from mplfinance._utils import _construct_ohlc_collections
File "/usr/local/lib/python3.5/dist-packages/mplfinance/_utils.py", line 15, in <module>
from mplfinance._styles import _get_mpfstyle
File "/usr/local/lib/python3.5/dist-packages/mplfinance/_styles.py", line 215
err = f'NOT is_color_like() for {key}[\'{updown}\'] = {colors[updown]}'
------------------
mc = mpf.make_marketcolors(ohlc='black')
s = mpf.make_mpf_style(marketcolors=mc)
mpf.plot(item['Close'],type='bars',style=s)
-------------------
This code works perfectly within the Anaconda Env.
I cannot run in on my VPS...
Thanks in advance...
| closed | 2020-03-12T19:53:51Z | 2020-03-15T12:15:12Z | https://github.com/matplotlib/mplfinance/issues/51 | [
"question"
] | thePragmaticOwl | 8 |
ageitgey/face_recognition | machine-learning | 1,093 | face_distance function | * face_recognition version: 1.2.3
* Python version: 3.5.3
* Operating System: Debian Linux 9.9
Hi,
Just wanted to understand the `face_distance` function. If the distance is low, does that mean the faces are more similar? Furthermore, is there a range for the Euclidean distance, for example between 0 and 1, where 0 means the images are identical and 1 means they can't be more different?
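For context, my current understanding (happy to be corrected) is that the distance is just the Euclidean norm between the two 128-dimensional face encodings, so 0 does mean identical encodings, but there is no hard upper bound of 1:

```python
import math

def face_distance(known_encoding, candidate_encoding):
    """Euclidean distance between two encodings; lower means more similar."""
    return math.sqrt(
        sum((a - b) ** 2 for a, b in zip(known_encoding, candidate_encoding))
    )

identical = [0.1] * 128
assert face_distance(identical, identical) == 0.0
```

I believe `compare_faces` uses a tolerance of 0.6 by default, which suggests typical same-person distances fall below roughly that value rather than the range being strictly [0, 1].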
Thanks for your help.
Thanks & Best Regards
Michael | open | 2020-03-24T11:31:41Z | 2020-03-24T15:01:52Z | https://github.com/ageitgey/face_recognition/issues/1093 | [] | MichaelSchroter | 1 |
QuivrHQ/quivr | api | 3,344 | Add ColPali on Diff Assistant | Investigate and implement the usage of ColPali for the Diff Assistant: <br>[https://danielvanstrien.xyz/posts/post-with-code/colpali-qdrant/2024-10-02_using_colpali_with_qdrant.html](https://danielvanstrien.xyz/posts/post-with-code/colpali-qdrant/2024-10-02_using_colpali_with_qdrant.html) | closed | 2024-10-08T08:48:10Z | 2024-12-23T10:27:50Z | https://github.com/QuivrHQ/quivr/issues/3344 | [] | chloedia | 1 |
pmaji/crypto-whale-watching-app | dash | 99 | Time Stamping - Alt Coin Buzz | You guys have a great tool here! I haven't seen any work lately. I'd like to promote you on my segments that I do on Alt Coin Buzz. It's a TA segment along with the news. Are you still working on this? If so can I promote, or would it be better to wait?
Also wondering if you guys are working on a time stamp for the trades to be included in the roll-over. Certainly would be helpful to see if the larger orders are set up to affect the price.
Thank you in advance! | closed | 2018-04-14T19:37:16Z | 2018-04-14T20:17:42Z | https://github.com/pmaji/crypto-whale-watching-app/issues/99 | [] | MarkyVee | 2 |
TencentARC/GFPGAN | deep-learning | 610 | Broken face ( GFPGAN - Reactor - SD ) | I don't know why, but it's been happening for a while: faces come out slightly broken when I try to replace a face using the GFPGAN model in ReActor. Is there a solution? | open | 2025-03-19T06:31:39Z | 2025-03-19T06:31:39Z | https://github.com/TencentARC/GFPGAN/issues/610 | [] | Ryan-infitech | 0 |